Legume genome evolution viewed through the Medicago truncatula and Lotus japonicus genomes
Cannon, Steven B.; Sterck, Lieven; Rombauts, Stephane; Sato, Shusei; Cheung, Foo; Gouzy, Jérôme; Wang, Xiaohong; Mudge, Joann; Vasdewani, Jayprakash; Schiex, Thomas; Spannagl, Manuel; Monaghan, Erin; Nicholson, Christine; Humphray, Sean J.; Schoof, Heiko; Mayer, Klaus F. X.; Rogers, Jane; Quétier, Francis; Oldroyd, Giles E.; Debellé, Frédéric; Cook, Douglas R.; Retzel, Ernest F.; Roe, Bruce A.; Town, Christopher D.; Tabata, Satoshi; Van de Peer, Yves; Young, Nevin D.
2006-01-01
Genome sequencing of the model legumes, Medicago truncatula and Lotus japonicus, provides an opportunity for large-scale sequence-based comparison of two genomes in the same plant family. Here we report synteny comparisons between these species, including details about chromosome relationships, large-scale synteny blocks, microsynteny within blocks, and genome regions lacking clear correspondence. The Lotus and Medicago genomes share a minimum of 10 large-scale synteny blocks, each with substantial collinearity and frequently extending the length of whole chromosome arms. The proportion of genes syntenic and collinear within each synteny block is relatively homogeneous. Medicago–Lotus comparisons also indicate similar and largely homogeneous gene densities, although gene-containing regions in Mt occupy 20–30% more space than Lj counterparts, primarily because of larger numbers of Mt retrotransposons. Because the interpretation of genome comparisons is complicated by large-scale genome duplications, we describe synteny, synonymous substitutions and phylogenetic analyses to identify and date a probable whole-genome duplication event. There is no direct evidence for any recent large-scale genome duplication in either Medicago or Lotus but instead a duplication predating speciation. Phylogenetic comparisons place this duplication within the Rosid I clade, clearly after the split between legumes and Salicaceae (poplar). PMID:17003129
Towards Productive Critique of Large-Scale Comparisons in Education
ERIC Educational Resources Information Center
Gorur, Radhika
2017-01-01
International large-scale assessments and comparisons (ILSAs) in education have become significant policy phenomena. How a country fares in these assessments has come to signify not only how a nation's education system is performing, but also its future prospects in a global economic "race". These assessments provoke passionate arguments…
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably in all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m2/MWh, which is more than two times larger than that of small-scale hydropower; the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies.
ERIC Educational Resources Information Center
Flanagan, Helen E.; Perry, Adrienne; Freeman, Nancy L.
2012-01-01
File review data were used to explore the impact of a large-scale publicly funded Intensive Behavioral Intervention (IBI) program for young children with autism. Outcomes were compared for 61 children who received IBI and 61 individually matched children from a waitlist comparison group. In addition, predictors of better cognitive outcomes were…
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Wilcox, J. M.; Svalgaard, L.; Scherrer, P. H.; Mcintosh, P. S.
1977-01-01
Two methods of observing the neutral line of the large-scale photospheric magnetic field are compared: neutral line positions inferred from H-alpha photographs (McIntosh and Nolte, 1975) and observations of the photospheric magnetic field made with low spatial resolution (three minutes) and high sensitivity using the Stanford magnetograph. The comparison is found to be very favorable.
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models related to weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing performance in terms of spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
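As a concrete illustration of the kind of point-scale verification described above, the following minimal Python sketch computes bias, RMSE, and correlation between a gauge series and the co-located cells of a gridded product; the function and variable names are illustrative assumptions, not code from the study.

import numpy as np

def verification_stats(gauge, gridded):
    """Return bias, RMSE, and Pearson correlation for paired precipitation series."""
    gauge = np.asarray(gauge, dtype=float)
    gridded = np.asarray(gridded, dtype=float)
    mask = ~np.isnan(gauge) & ~np.isnan(gridded)   # ignore missing records
    g, p = gauge[mask], gridded[mask]
    bias = np.mean(p - g)
    rmse = np.sqrt(np.mean((p - g) ** 2))
    corr = np.corrcoef(g, p)[0, 1]
    return bias, rmse, corr

# Example: monthly totals at one gauge vs. the co-located grid cell (toy numbers)
print(verification_stats([30.5, 12.0, 80.2, np.nan], [28.0, 15.5, 70.1, 40.0]))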
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
ERIC Educational Resources Information Center
Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo
2017-01-01
This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models with measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can give continuing estimates of the small-scale features by correcting a simple 0th-order estimate from each small-scale model with each large-scale measurement, using a straightforward method based on Kalman filtering.
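To make the Kalman-filtering idea above concrete, here is a minimal, self-contained sketch (assumed variances, values, and variable names, not the presenters' code) of a single vector Kalman update in which one coarse measurement observes the mean of several sub-grid states:

import numpy as np

x_prior = np.array([0.18, 0.22, 0.25, 0.30])      # small-scale prior estimates
P = 0.004 * np.eye(4)                              # prior error covariance (assumed)
H = np.full((1, 4), 0.25)                          # coarse pixel = area mean of cells
R = np.array([[0.001]])                            # measurement error variance (assumed)
y = np.array([0.21])                               # large-scale microwave retrieval

S = H @ P @ H.T + R                                # innovation covariance
K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
x_post = x_prior + K @ (y - H @ x_prior)           # corrected small-scale estimates
print(x_post, x_post.mean())                       # the mean is pulled toward y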
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
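A minimal sketch of this type of Monte Carlo experiment is shown below; it assumes Thurstone Case V scaling of simulated paired-comparison proportions with binomial choice error, which matches the spirit but not necessarily the exact procedure of the paper.

import numpy as np
from scipy.stats import norm

def simulate_scaling_error(true_scale, n_obs, n_trials=500, rng=None):
    """Average standard deviation of recovered scale values across trials."""
    rng = np.random.default_rng(rng)
    true_scale = np.asarray(true_scale, dtype=float)
    n = len(true_scale)
    estimates = np.empty((n_trials, n))
    for t in range(n_trials):
        # true choice probabilities under a Thurstone Case V model
        p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
        # observed proportions from n_obs binomial judgments per pair
        p_obs = rng.binomial(n_obs, p_true) / n_obs
        p_obs = np.clip(p_obs, 1.0 / (2 * n_obs), 1 - 1.0 / (2 * n_obs))
        z = norm.ppf(p_obs)
        estimates[t] = z.mean(axis=1) - z.mean()   # row means recover centered scale values
    return estimates.std(axis=0).mean()

# six equally spaced stimuli, 20 judgments per pair
print(simulate_scaling_error(np.linspace(0, 2, 6), n_obs=20))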
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
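For readers unfamiliar with the CEM, the following toy Python sketch shows the underlying idea of expanding a matrix function (here a Fermi-Dirac function of a small dense Hamiltonian) in Chebyshev polynomials via the three-term recurrence. Production linear-scaling codes instead exploit sparsity so each term costs O(N); all matrices and parameter values here are illustrative assumptions, not the paper's implementation.

import numpy as np

def chebyshev_matrix_function(H, f, order=60):
    """Approximate f(H) by a Chebyshev series after rescaling the spectrum to [-1, 1]."""
    emin, emax = np.linalg.eigvalsh(H)[[0, -1]]          # fine for a toy dense example
    a, b = (emax - emin) / 2, (emax + emin) / 2
    Hs = (H - b * np.eye(len(H))) / a                    # rescaled Hamiltonian
    # Chebyshev coefficients of f on [-1, 1] via Gauss-Chebyshev quadrature
    k = np.arange(order)
    theta = np.pi * (k + 0.5) / order
    fk = f(a * np.cos(theta) + b)
    c = 2.0 / order * np.cos(np.outer(k, theta)) @ fk
    c[0] *= 0.5
    # three-term recurrence T_{k+1} = 2 Hs T_k - T_{k-1}
    Tkm1, Tk = np.eye(len(H)), Hs.copy()
    result = c[0] * Tkm1 + c[1] * Tk
    for ck in c[2:]:
        Tkm1, Tk = Tk, 2 * Hs @ Tk - Tkm1
        result += ck * Tk
    return result

fermi = lambda e, mu=0.0, beta=20.0: 1.0 / (1.0 + np.exp(beta * (e - mu)))
H = np.diag(np.linspace(-2, 2, 8)) + 0.1 * np.eye(8, k=1) + 0.1 * np.eye(8, k=-1)
rho = chebyshev_matrix_function(H, fermi)
print(np.trace(rho))   # approximate electron count for this toy Hamiltonian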
A geographic comparison of selected large-scale planetary surface features
NASA Technical Reports Server (NTRS)
Meszaros, S. P.
1984-01-01
Photographic and cartographic comparisons of geographic features on Mercury, the Moon, Earth, Mars, Ganymede, Callisto, Mimas, and Tethys are presented. Planetary structures caused by impacts, volcanism, tectonics, and other natural forces are included. Each feature is discussed individually and then those of similar origin are compared at the same scale.
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Operating Reserves and Wind Power Integration: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, M.; Donohoo, P.; Lew, D.
2010-10-01
This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on an informal International Energy Agency Task 25: Large-scale Wind Integration.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
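The comparison-function machinery referred to above can be summarized, in its simplest linear differential-inequality form, by the following hedged sketch (generic notation, not quoted from the paper):

\[
\dot V_i(x) \;\le\; -\alpha_i V_i(x) + \sum_{j \ne i} w_{ij}\, V_j(x), \qquad i = 1,\dots,N,
\]

so the vector Lyapunov function \(V = (V_1,\dots,V_N)^{\mathsf T}\) satisfies \(\dot V \le W V\) elementwise, with \(W = (w_{ij})\) and \(w_{ii} = -\alpha_i\). If the linear comparison system \(\dot u = W u\) is asymptotically stable (for example, when \(W\) has a quasi-dominant negative diagonal), asymptotic stability of the interconnected large-scale system follows from the comparison principle.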
Stability of large-scale systems.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1972-01-01
The purpose of this paper is to present the results obtained in stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.
Blazing Signature Filter: a library for fast pairwise similarity comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan
Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter out unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
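The following rough Python sketch illustrates the general bit-signature idea described above; the packing scheme, threshold, and similarity measure are assumptions for illustration, not the published BSF implementation.

def to_signature(values, threshold=0.0):
    """Pack a vector into a bit signature: bit i is set if values[i] > threshold."""
    sig = 0
    for i, v in enumerate(values):
        if v > threshold:
            sig |= 1 << i
    return sig

def signature_similarity(a, b):
    """Jaccard-like similarity on bit signatures (int.bit_count needs Python 3.10+);
    a cheap pre-filter before running an expensive full metric on promising pairs."""
    inter = (a & b).bit_count()
    union = (a | b).bit_count()
    return inter / union if union else 0.0

x = to_signature([1.2, -0.3, 0.8, 0.0, 2.1])
y = to_signature([0.9, -1.0, 0.4, 0.7, 1.5])
if signature_similarity(x, y) > 0.5:       # skip pairs that cannot share a signature
    print("candidate pair; run the detailed comparison")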
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by the coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. These examples indicate that centennial climate variations can offer a reasonable target for assessing the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Unsteady loads due to propulsive lift configurations. Part A: Investigation of scaling laws
NASA Technical Reports Server (NTRS)
Morton, J. B.; Haviland, J. K.
1978-01-01
This study covered scaling laws and pressure measurements made to determine details of the large-scale jet structure and to verify scaling laws by direct comparison. The basis of comparison was a test facility at NASA Langley in which a JT-15D exhausted over a boilerplate airfoil surface to reproduce upper-surface-blowing conditions. A quarter-scale model of this facility was built, using cold jets. A comparison between full-scale and model pressure coefficient spectra, presented as functions of Strouhal number, showed fair agreement; however, a shift of spectral peaks was noted. This was not believed to be due to Mach number or Reynolds number effects, but did appear to be traceable to discrepancies in jet temperatures. A correction for jet temperature was then tried, similar to one used for far-field noise prediction. This was found to correct the spectral peak discrepancy.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.
1998-01-01
Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.
Kuhlmann, Tim; Dantlgraber, Michael; Reips, Ulf-Dietrich
2017-12-01
Visual analogue scales (VASs) have shown superior measurement qualities in comparison to traditional Likert-type response scales in previous studies. The present study expands the comparison of response scales to properties of Internet-based personality scales in a within-subjects design. A sample of 879 participants filled out an online questionnaire measuring Conscientiousness, Excitement Seeking, and Narcissism. The questionnaire contained all instruments in both answer scale versions in a counterbalanced design. Results show comparable reliabilities, means, and SDs for the VAS versions of the original scales, in comparison to Likert-type scales. To assess the validity of the measurements, age and gender were used as criteria, because all three constructs have shown non-zero correlations with age and gender in previous research. Both response scales showed a high overlap and the proposed relationships with age and gender. The associations were largely identical, with the exception of an increase in explained variance when predicting age from the VAS version of Excitement Seeking (B10 = 1318.95, ΔR² = .025). VASs showed similar properties to Likert-type response scales in most cases.
Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data
The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates both small scale individual biofiltration device monitoring, along with large scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limitation of the spatial resolution in numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved field on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, which is a numerical dynamo model using spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We then perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparison among these simulations and the role of small-scale fields in the large-scale fields through the SGS terms in the LES.
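For orientation, the generic scale-similarity closure underlying such models can be written schematically as follows (generic notation, shown for the SGS momentum flux only; this form is an illustrative assumption, not quoted from the abstract):

\[
\tau_{ij} \;\equiv\; \overline{u_i u_j} - \bar u_i \bar u_j \;\approx\; C_{\mathrm{sim}}\left( \overline{\bar u_i \bar u_j} - \bar{\bar u}_i\,\bar{\bar u}_j \right),
\]

where the overbar denotes the (here Gaussian) filter and the coefficient \(C_{\mathrm{sim}}\) is determined dynamically; analogous similarity terms are introduced for the heat flux, the Lorentz force and the magnetic induction.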
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison and their implications: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
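In the minimalist two-scale mean-field picture referred to above, the large-scale growth rate is set by the residual helicity; schematically, in generic textbook notation (an illustrative assumption rather than the paper's own equations),

\[
\alpha \;\simeq\; -\frac{\tau}{3}\left(\langle \boldsymbol{\omega}\cdot\boldsymbol{u}\rangle - \frac{\langle \boldsymbol{j}\cdot\boldsymbol{b}\rangle}{\rho}\right),
\qquad
\gamma(k) \;\simeq\; |\alpha|\,k - (\eta + \eta_t)\,k^2,
\]

so the growth rate of a large-scale mode of wavenumber \(k\) depends on the difference between kinetic and current helicity but not on the nonhelical small-scale magnetic energy, which is the prediction the simulations test.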
ERIC Educational Resources Information Center
van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf
2009-01-01
This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
On the large scale structure of X-ray background sources
NASA Technical Reports Server (NTRS)
Bi, H. G.; Meszaros, A.; Meszaros, P.
1991-01-01
The large-scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large-scale structures at redshifts between 0.5 and 4.0.
Relationships among measures of managerial personality traits.
Miner, J B
1976-08-01
Comparisons were made to determine the degree of convergence among three measures associated with leadership success in large, hierarchic organizations in the business sector: the Miner Sentence Completion Scale, the Ghiselli Self-Description Inventory, and the F-Scale. Correlational analyses and comparisons between means were made using college student and business manager samples. The results indicated considerable convergence for the first two measures, but not for the F-Scale. The F-Scale was related to the Miner Sentence Completion Scale in the student group, but relationships were nonexistent among the managers. Analyses of the individual F-Scale items which produced the relationship among the students suggested that early family-related experiences and attitudes may contribute to the development of motivation to manage, but lose their relevance for it later, under the onslaught of actual managerial experience.
Connecting the large- and the small-scale magnetic fields of solar-like stars
NASA Astrophysics Data System (ADS)
Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.
2018-05-01
A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g., the average latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters (flux emergence rate, differential rotation and meridional flow) affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
The FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large-scale datasets. While several researchers have designed distributed-memory FP-Growth algorithms, it is pivotal to consider fault-tolerant FP-Growth, which can address the increasing fault rates in large-scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, incurring no memory overhead for checkpointing. We evaluate our fault-tolerant algorithm on a large-scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed a 20x average speed-up in comparison to Spark, establishing that a well-designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
NASA Astrophysics Data System (ADS)
Kröger, Knut; Creutzburg, Reiner
2013-05-01
The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance Encase Forensic 7 regarding its performance, functionality, usability and capability. We will show how these software tools work with large forensic images and how capable they are in examining complex and big data scenarios.
Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A
2017-12-21
Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3 year mortality in patients with COPD are ADO and eBODE.
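A minimal sketch of the first step described above is given below: computing a cohort-level discrimination statistic (here the c-statistic, or AUC) for each prognostic score against observed 3-year mortality and expressing it as a contrast against a common comparator. The toy data, score names, and helper function are illustrative assumptions, not the collaboration's code; scikit-learn is assumed to be available.

import numpy as np
from sklearn.metrics import roc_auc_score

def cohort_auc_contrasts(mortality, score_values, comparator="ADO"):
    """score_values: dict mapping score name -> predicted risk per patient."""
    aucs = {name: roc_auc_score(mortality, vals) for name, vals in score_values.items()}
    contrasts = {name: auc - aucs[comparator] for name, auc in aucs.items()}
    return contrasts, aucs

# Toy cohort: 1 = died within 3 years, 0 = survived (illustrative data only)
death = np.array([1, 0, 0, 1, 0, 1, 0, 0])
scores = {
    "ADO":  np.array([0.8, 0.2, 0.3, 0.7, 0.1, 0.9, 0.4, 0.2]),
    "BODE": np.array([0.6, 0.3, 0.2, 0.5, 0.2, 0.7, 0.5, 0.3]),
}
print(cohort_auc_contrasts(death, scores))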
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
2002-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two- and three-dimensional eddy-resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large-scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall, the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation at the SHEBA ice camp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.; Stone, C.M.; Krieg, R.D.
Several large-scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large-scale tests, The Overtest for Defense High Level Waste, are described; and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.
Groups of galaxies in the Center for Astrophysics redshift survey
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1989-01-01
By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
Use of Second Generation Coated Conductors for Efficient Shielding of dc Magnetic Fields (Postprint)
2010-07-15
... layer of superconducting film, can attenuate an external magnetic field of up to 5 mT by more than an order of magnitude. The approach appears to be especially promising for the realization of large-scale high-Tc superconducting screens. Subject terms: magnetic screens, current ... © 2010 American Institute of Physics. doi:10.1063/1.3459895
Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert Moulton
2001-01-01
The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...
Measuring the Large-scale Solar Magnetic Field
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.
2017-12-01
The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were poorly simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow at scales greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This revealed that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and forced large-scale fields should be considered, and spectral nudging is a desirable method for such downscaling.
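A schematic form of spectral nudging, written in generic notation as an assumed illustration rather than the exact formulation used in the study: each prognostic field \(\psi\) receives an extra relaxation term applied only to the large-scale part of its spectrum,

\[
\frac{\partial \psi}{\partial t} \;=\; \cdots \;-\; \frac{1}{\tau}\sum_{|k|\le k_c,\ |l|\le l_c}\left(\hat\psi_{k,l} - \hat\psi^{\mathrm{drv}}_{k,l}\right)e^{i(kx+ly)},
\]

where \(\hat\psi^{\mathrm{drv}}_{k,l}\) are the spectral coefficients of the driving large-scale field, \(\tau\) is the relaxation time scale, and the cutoffs \(k_c, l_c\) retain only scales of roughly 1000 km and larger, so that the regional model remains free to develop its own smaller-scale features.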
NASA Technical Reports Server (NTRS)
Pyle, K. R.; Simpson, J. A.
1985-01-01
Near solar maximum, a series of large radial solar wind shocks in June and July 1982 provided a unique opportunity to study the solar modulation of galactic cosmic rays with an array of spacecraft widely separated both in heliocentric radius and longitude. By eliminating hysteresis effects it is possible to begin to separate radial and azimuthal effects in the outer heliosphere. On the large scale, changes in modulation (both the increasing and recovery phases) propagate outward at close to the solar wind velocity, except for the near-term effects of solar wind shocks, which may propagate at a significantly higher velocity. In the outer heliosphere, azimuthal effects are small in comparison with radial effects for large-scale modulation at solar maximum.
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, which is also adversely affected by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses that are appropriate for a climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation with scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used NCEP/DOE R2 global reanalysis as the initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution, every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of a precipitation assimilation scheme on cumulus-convection parameterization, and improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
Mapping the universe in three dimensions
Haynes, Martha P.
1996-01-01
The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714
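As a worked illustration of the first-order application of Hubble's law mentioned above (with an assumed value \(H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\)):

\[
d \;\approx\; \frac{cz}{H_0}, \qquad z = 0.023 \;\Rightarrow\; cz \approx 6.9\times 10^{3}\ \mathrm{km\,s^{-1}}, \qquad d \approx 99\ \mathrm{Mpc},
\]

valid only for \(z \ll 1\) and before correcting for peculiar velocities; it is precisely the departures from this first-order relation, revealed by independent distance estimates, that allow mapping of the gravitational field and the underlying density distribution.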
Bathymetric comparisons adjacent to the Louisiana barrier islands: Processes of large-scale change
List, J.H.; Jaffe, B.E.; Sallenger, A.H.; Hansen, M.E.
1997-01-01
This paper summarizes the results of a comparative bathymetric study encompassing 150 km of the Louisiana barrier-island coast. Bathymetric data surrounding the islands and extending to 12 m water depth were processed from three survey periods: the 1880s, the 1930s, and the 1980s. Digital comparisons between surveys show large-scale, coherent patterns of sea-floor erosion and accretion related to the rapid erosion and disintegration of the islands. Analysis of the sea-floor data reveals two primary processes driving this change: massive longshore transport, in the littoral zone and at shoreface depths; and increased sediment storage in ebb-tidal deltas. Relative sea-level rise, although extraordinarily high in the study area, is shown to be an indirect factor in causing the area's rapid shoreline retreat rates.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric cond...
NASA Astrophysics Data System (ADS)
Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen
2010-12-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
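The method above builds on correlation as the underlying similarity measure between feature points. The sketch below shows only plain normalized cross-correlation between two equally sized NumPy patches; it is not an implementation of MOCC itself, since the multiscale, oriented-corner machinery is omitted:

import numpy as np

def normalized_cross_correlation(patch_a, patch_b):
    # Returns a similarity score in [-1, 1]; 1 means identical up to gain and offset.
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0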
Culture rather than genes provides greater scope for the evolution of large-scale human prosociality
Bell, Adrian V.; Richerson, Peter J.; McElreath, Richard
2009-01-01
Whether competition among large groups played an important role in human social evolution depends on how variation, whether cultural or genetic, is maintained between groups. Comparisons of genetic and cultural differentiation between neighboring groups show that natural selection on large groups is more plausible when it acts on cultural rather than genetic variation. PMID:19822753
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three precipitation radars (ground-based, ship-borne, and spaceborne) and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validation. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites as the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also reveal common deficiencies in the CRM simulations, which underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validation with multiple radars and models also enables quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of temporal variations of radar echo-top height, many model sensitivity tests correlate better with one another than the radar/model comparisons do, indicating robust model performance in this respect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
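Convective strength in this comparison is summarized by 0-dBZ echo-top heights. A small sketch of that diagnostic for a single reflectivity profile, assuming a simple column of reflectivity values on a height grid (the threshold default and the toy profile are illustrative, not DYNAMO data):

import numpy as np

def echo_top_height(reflectivity_dbz, heights_km, threshold_dbz=0.0):
    # Highest level at which reflectivity still meets or exceeds the threshold.
    above = np.where(reflectivity_dbz >= threshold_dbz)[0]
    return heights_km[above.max()] if above.size else np.nan

heights = np.arange(0.5, 18.0, 0.5)       # km
profile = 40.0 - 3.0 * heights            # toy profile: echo weakening with height
print(echo_top_height(profile, heights))  # 13.0 km for this synthetic column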
ERIC Educational Resources Information Center
Akabayashi, Hideo; Nakamura, Ryosuke; Naoi, Michio; Shikishima, Chizuru
2016-01-01
In the past decades, income inequality has risen in most developed countries. There is growing interest among economists in international comparisons of economic and educational mobility. This is aided by the availability of internationally comparable, large-scale data. The present paper aims to make three contributions. First, we introduce the…
Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.
Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong
2017-10-11
The development of silicon (Si) materials during the past decades has driven the prosperity of the modern semiconductor industry. In comparison with bulk Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention for solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of high-efficiency energy conversion devices are prerequisites. This review focuses on recent progress in the large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photoelectrochemical cells. Finally, the outlook and challenges in this emerging field are presented.
Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem
Schneider, David C.; Piatt, John F.
1986-01-01
The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
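Patchiness above is diagnosed from variance-to-mean ratios computed at increasing measurement distances. A hedged sketch of that index, obtained by pooling counts from the base 0.25 km bins into progressively larger blocks (the Poisson toy counts are illustrative only; real transect counts of a patchy population would show the ratio rising with scale):

import numpy as np

def variance_to_mean(counts, block):
    # Pool consecutive bins into blocks of the given size, then compute var/mean.
    n = (len(counts) // block) * block
    pooled = counts[:n].reshape(-1, block).sum(axis=1)
    return pooled.var(ddof=1) / pooled.mean()

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=400)          # toy counts per 0.25 km bin
for block in (1, 4, 16, 64):                 # 0.25, 1, 4, 16 km measurement scales
    print(0.25 * block, "km:", round(variance_to_mean(counts, block), 2))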
A comparative analysis of rawinsonde and NIMBUS 6 and TIROS N satellite profile data
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Carle, W. E.; Knight, K.; Moyer, V.; Cheng, N. M.
1981-01-01
Comparisons are made between rawinsonde and satellite profiles in seven areas for a wide range of surface and weather conditions. Variables considered include temperature, dewpoint temperature, thickness, precipitable water, lapse rate of temperature, stability, geopotential height, mixing ratio, wind direction, wind speed, and kinematic parameters, including vorticity and the advection of vorticity and temperature. In addition, comparisons are made in the form of cross sections and synoptic fields for selected variables. Sounding data from the NIMBUS 6 and TIROS N satellites were used. Geostrophic wind computed from smoothed geopotential heights provided large scale flow patterns that agreed well with the rawinsonde wind fields. Surface wind patterns as well as magnitudes computed by use of the log law to extrapolate wind to a height of 10 m agreed with observations. Results of this study demonstrate rather conclusively that satellite profile data can be used to determine characteristics of large scale systems but that small scale features, such as frontal zones, cannot yet be resolved.
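The surface-wind check above extrapolates wind to a height of 10 m with the log law. A minimal sketch of the neutral logarithmic profile used for such an extrapolation (the roughness length and reference height below are placeholders, not values from the study):

import math

def log_law_wind(u_ref, z_ref, z_target=10.0, z0=0.1):
    # Neutral log profile: u(z) is proportional to ln(z / z0), so
    # u(z_target) = u_ref * ln(z_target / z0) / ln(z_ref / z0).
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

print(round(log_law_wind(12.0, 50.0), 1))   # 12 m/s at 50 m maps to ~8.9 m/s at 10 m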
Quantum probability, choice in large worlds, and the statistical structure of reality.
Ross, Don; Ladyman, James
2013-06-01
Classical probability models of incentive response are inadequate in "large worlds," where the dimensions of relative risk and the dimensions of similarity in outcome comparisons typically differ. Quantum probability models for choice in large worlds may be motivated pragmatically - there is no third theory - or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.
Dispersion in Fractures with Ramified Dissolution Patterns
NASA Astrophysics Data System (ADS)
Xu, Le; Marks, Benjy; Toussaint, Renaud; Flekkøy, Eirik G.; Måløy, Knut J.
2018-04-01
The injection of a reactive fluid into an open fracture may modify the fracture surface locally and create a ramified structure around the injection point. This structure will have a significant impact on the dispersion of the injected fluid due to increased permeability, which will introduce large velocity fluctuations into the fluid. Here, we have injected a fluorescent tracer fluid into a transparent artificial fracture with such a ramified structure. The transparency of the model makes it possible to follow the detailed dispersion of the tracer concentration. The experiments have been compared to two dimensional (2D) computer simulations which include both convective motion and molecular diffusion. A comparison was also performed between the dispersion from an initially ramified dissolution structure and the dispersion from an initially circular region. A significant difference was seen both at small and large length scales. At large length scales, the persistence of the anisotropy of the concentration distribution far from the ramified structure is discussed with reference to some theoretical considerations and comparison with simulations.
Chemical Processing of Electrons and Holes.
ERIC Educational Resources Information Center
Anderson, Timothy J.
1990-01-01
Presents a synopsis of four lectures given in an elective senior-level electronic material processing course to introduce solid state electronics. Provides comparisons of a large scale chemical processing plant and an integrated circuit. (YP)
A comparison of three methods for measuring local urban tree canopy cover
Kristen L. King; Dexter H. Locke
2013-01-01
Measurements of urban tree canopy cover are crucial for managing urban forests and required for the quantification of the benefits provided by trees. These types of data are increasingly used to secure funding and justify large-scale planting programs in urban areas. Comparisons of tree canopy measurement methods have been conducted before, but a rapidly evolving set...
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
This study presents a method for obtaining the true rms peculiar flow in the universe on scales up to 100-120/h Mpc using APM data as an input assuming only that peculiar motions are caused by peculiar gravity. The comparison to the local (Great Attractor) flow is expected to give clear information on the density parameter, Omega, and the local bias parameter, b. The observed peculiar flows in the Great Attractor region are found to be in better agreement with the open (Omega = 0.1) universe in which light traces mass (b = 1) than with a flat (Omega = 1) universe unless the bias parameter is unrealistically large (b is not less than 4). Constraints on Omega from a comparison of the APM and PV samples are discussed.
Stability of large-scale systems with stable and unstable subsystems.
NASA Technical Reports Server (NTRS)
Grujic, Lj. T.; Siljak, D. D.
1972-01-01
The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
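The abstract does not reproduce its "simple algebraic criterion", so the sketch below is only a loose, hypothetical illustration of testing an aggregate comparison matrix for a decomposed system: negative diagonal entries stand in for subsystem decay rates, off-diagonal entries bound interconnection strengths, and stability of the aggregate is checked by requiring all eigenvalues to have negative real parts. The matrix values are invented.

import numpy as np

# Hypothetical 3-subsystem aggregate comparison matrix (illustrative numbers only).
W = np.array([[-2.0,  0.3,  0.4],
              [ 0.5, -1.5,  0.2],
              [ 0.1,  0.6, -1.0]])

eigenvalues = np.linalg.eigvals(W)
print(eigenvalues.real.max() < 0)   # True -> the aggregate stability test passes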
The Use of Weighted Graphs for Large-Scale Genome Analysis
Zhou, Fang; Toivonen, Hannu; King, Ross D.
2014-01-01
There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
Similarity spectra analysis of high-performance jet aircraft noise.
Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M
2013-04-01
Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian
2016-01-01
A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, namely the exponential of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions having the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5-degree spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
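A hedged sketch of the core construction described above: generate a one-dimensional lognormal field as the exponential of a correlated Gaussian process and estimate its structure function. The grid spacing, correlation length, log-standard deviation, and Gaussian-shaped spectral filter are illustrative assumptions, not the parameterization used in the paper.

import numpy as np

rng = np.random.default_rng(1)
n, dx = 2048, 10.0                 # grid points and spacing in km (assumed)
corr_len, sigma = 200.0, 0.5       # correlation length (km) and log-std (assumed)

# Correlated Gaussian process built by spectrally filtering white noise.
k = np.fft.rfftfreq(n, d=dx)
filt = np.exp(-(k * corr_len) ** 2)
g = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * filt, n)
g = sigma * (g - g.mean()) / g.std()

aot = np.exp(g)                    # lognormal AOT realization

def structure_function(field, lag):
    # Mean squared increment D(r) = <(f(x + r) - f(x))^2>.
    return np.mean((field[lag:] - field[:-lag]) ** 2)

for lag in (1, 5, 20, 100):
    print(lag * dx, "km:", round(structure_function(aot, lag), 4))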
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
2014-10-10
The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α² dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.
NASA Astrophysics Data System (ADS)
Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.
2017-12-01
The RENU2 sounding rocket (launched from Andoya rocket range on December 13th, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine scale structures within aurorae. During RENU2, small (sub-km) scale drivers persist for long (10s of minutes) time scales and result in large scale ionospheric (thermal electron) and thermospheric response (neutral upwelling). ISINGLASS observations show small scale drivers, but with short (minute) time scales, with ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric response to small scale drivers over different integration times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
Large-scale environments of narrow-line Seyfert 1 galaxies
NASA Astrophysics Data System (ADS)
Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.
2017-09-01
Studying large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources is clearly different compared to BLS1 galaxies; thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.
Large Scale Winter Time Disturbances in Meteor Winds over Central and Eastern Europe
NASA Technical Reports Server (NTRS)
Greisiger, K. M.; Portnyagin, Y. I.; Lysenko, I. A.
1984-01-01
Daily zonal wind data of the four pre-MAP-winters 1978/79 to 1981/82 obtained over Central Europe and Eastern Europe by the radar meteor method were studied. Available temperature and satellite radiance data of the middle and upper stratosphere were used for comparison, as well as wind data from Canada. The existence or nonexistence of coupling between the observed large scale zonal wind disturbances in the upper mesopause region (90 to 100 km) and corresponding events in the stratosphere are discussed.
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng
1998-03-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of sea level's temporal variability from periods of days to a year is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and high latitudes are mainly wind driven. Wind-driven variability exhibits a strong latitudinal dependence in itself. Wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variabilities associated with equatorial waves are smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies with as much as half the total variance in periods shorter than 20 days, reflecting the frequency spectra of wind stress curl. Implications of the findings with regard to analyzing observations and data assimilation are discussed.
John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan
2010-01-01
Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...
Comparison of WAIS-III Short Forms for Measuring Index and Full-Scale Scores
ERIC Educational Resources Information Center
Girard, Todd A.; Axelrod, Bradley N.; Wilkins, Leanne K.
2010-01-01
This investigation assessed the ability of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) short forms to estimate both index and IQ scores in a large, mixed clinical sample (N = 809). More specifically, a commonly used modification of Ward's seven-subtest short form (SF7-A), a recently proposed index-based SF7-C and eight-subtest…
The earth's foreshock, bow shock, and magnetosheath
NASA Technical Reports Server (NTRS)
Onsager, T. G.; Thomsen, M. F.
1991-01-01
Studies directly pertaining to the earth's foreshock, bow shock, and magnetosheath are reviewed, and some comparisons are made with data on other planets. Topics considered in detail include the electron foreshock, the ion foreshock, the quasi-parallel shock, the quasi-perpendicular shock, and the magnetosheath. Information discussed spans a broad range of disciplines, from large-scale macroscopic plasma phenomena to small-scale microphysical interactions.
Comparison of Observations of Sporadic-E Layers in the Nighttime and Daytime Mid-Latitude Ionosphere
NASA Technical Reports Server (NTRS)
Pfaff, R.; Freudenreich, H.; Rowland, D.; Klenzing, J.; Clemmons, J.; Larsen, M.; Kudeki, E.; Franke, S.; Urbina, J.; Bullett, T.
2012-01-01
A comparison of numerous rocket experiments to investigate mid-latitude sporadic-E layers is presented. Electric field and plasma density data gathered on sounding rockets launched in the presence of sporadic-E layers and QP radar echoes reveal a complex electrodynamics including both DC parameters and plasma waves detected over a large range of scales. We show both DC and wave electric fields and discuss their relationship to intense sporadic-E layers in both nighttime and daytime conditions. Where available, neutral wind observations provide the complete electrodynamic picture revealing an essential source of free energy that both sets up the layers and drives them unstable. Electric field data from the nighttime experiments reveal the presence of km-scale waves as well as well-defined packets of broadband (10's of meters to meters) irregularities. What is surprising is that in both the nighttime and daytime experiments, neither the large scale nor short scale waves appear to be distinctly organized by the sporadic-E density layer itself. The observations are discussed in the context of current theories regarding sporadic-E layer generation and quasi-periodic echoes.
Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V
2008-10-01
International Consensus Criteria (ICC) consider right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). Spencer Logarithmic Scale (SLS) offers more grades of RLS with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic strokes or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for detection of large RLS. ICC and SLS criteria yielded a false positive rate of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using the Spencer Scale grade III or higher can decrease by one-half the number of false positive TCD diagnoses to predict large RLS on TEE.
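The diagnostic-accuracy figures quoted above follow directly from the reported true/false positive and negative counts. A small sketch that reproduces them from standard definitions (the counts are taken from the abstract; the helper function itself is ordinary confusion-matrix arithmetic):

def diagnostic_metrics(tp, tn, fp, fn):
    # Standard confusion-matrix summary statistics.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# 24 true positives, 48 true negatives, 4 false positives, 2 false negatives
for name, value in diagnostic_metrics(24, 48, 4, 2).items():
    print(name, round(100 * value, 1))   # 92.3, 92.3, 85.7, 96.0, 92.3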
Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane
NASA Technical Reports Server (NTRS)
Syberg, J.; Koncsek, J. L.
1977-01-01
Test data were obtained with a 1/4 scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. A fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation.
Supersonic jet noise generated by large scale instabilities
NASA Technical Reports Server (NTRS)
Seiner, J. M.; Mclaughlin, D. K.; Liu, C. H.
1982-01-01
The role of large scale wavelike structures as the major mechanism for supersonic jet noise emission is examined. Using aerodynamic and acoustic data for low Reynolds number supersonic jets (Reynolds numbers at and below 70 thousand), comparisons are made with flow fluctuation and acoustic measurements in high Reynolds number supersonic jets. These comparisons show that a similar physical mechanism governs the generation of sound emitted in the principal noise direction. These experimental data are further compared with a linear instability theory whose prediction for the axial location of peak wave amplitude agrees satisfactorily with measured phase-averaged flow fluctuation data in the low Reynolds number jets. In the high Reynolds number flow, theory and experiment differ as to the axial location of peak flow fluctuations, and the theory predicts an apparent origin for sound emission far upstream of that indicated by the measured acoustic data.
NASA Astrophysics Data System (ADS)
Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert
2013-10-01
The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson’s disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of increased iron concentration even without any real change in the tissue iron concentration. Although this effect is relatively small, results of large-scale group comparisons may be driven by anatomical changes rather than by changes in the iron concentration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boore, Jeffrey L.
2004-11-27
Although the phylogenetic relationships of many organisms have been convincingly resolved by the comparisons of nucleotide or amino acid sequences, others have remained equivocal despite great effort. Now that large-scale genome sequencing projects are sampling many lineages, it is becoming feasible to compare large data sets of genome-level features and to develop this as a tool for phylogenetic reconstruction that has advantages over conventional sequence comparisons. Although it is unlikely that these will address a large number of evolutionary branch points across the broad tree of life due to the infeasibility of such sampling, they have great potential for convincingly resolving many critical, contested relationships for which no other data seems promising. However, it is important that we recognize potential pitfalls, establish reasonable standards for acceptance, and employ rigorous methodology to guard against a return to earlier days of scenario-driven evolutionary reconstructions.
Comparison of concentric needle versus hooked-wire electrodes in the canine larynx.
Jaffe, D M; Solomon, N P; Robinson, R A; Hoffman, H T; Luschei, E S
1998-05-01
The use of a specific electrode type in laryngeal electromyography has not been standardized. Laryngeal electromyography is usually performed with hooked-wire electrodes or concentric needle electrodes. Hooked-wire electrodes have the advantage of allowing laryngeal movement with ease and comfort, whereas the concentric needle electrodes have benefits from a technical aspect and may be advanced, withdrawn, or redirected during attempts to appropriately place the electrode. This study examines whether hooked-wire electrodes permit more stable recordings than standard concentric needle electrodes at rest and after large-scale movements of the larynx and surrounding structures. A histologic comparison of tissue injury resulting from placement and removal of the two electrode types is also made by evaluation of the vocal folds. Electrodes were percutaneously placed into the thyroarytenoid muscles of 10 adult canines. Amplitude of electromyographic activity was measured and compared during vagal stimulation before and after large-scale laryngeal movements. Signal consistency over time was examined. Animals were killed and vocal fold injury was graded and compared histologically. Waveform morphology did not consistently differ between electrode types. The variability of electromyographic amplitude was greater for the hooked-wire electrode (p < 0.05), whereas the mean amplitude measures before and after large-scale laryngeal movements did not differ (p > 0.05). Inflammatory responses and hematoma formation were also similar. Waveform morphology of electromyographic signals registered from both electrode types shows similar complex action potentials. There is no difference between the hooked-wire electrode and the concentric needle electrode in terms of electrode stability or vocal fold injury in the thyroarytenoid muscle after large-scale laryngeal movements.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
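The spectral evaluation of spatial derivatives described above can be illustrated in one dimension: transform, multiply by ik, and transform back. This is a generic sketch of that single step only, not the authors' three-dimensional, staggered-grid, perfectly-matched-layer implementation:

import numpy as np

def spectral_derivative(f, dx):
    # d/dx of a periodic, uniformly sampled field computed via the FFT.
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
f = np.sin(3.0 * x)
error = np.max(np.abs(spectral_derivative(f, x[1] - x[0]) - 3.0 * np.cos(3.0 * x)))
print(error)   # ~1e-13: spectrally accurate for smooth periodic fields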
Accelerating large-scale protein structure alignments with graphics processing units
2012-01-01
Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132
Shape Memory Alloys for Vibration Isolation and Damping of Large-Scale Space Structures
2010-08-04
Only fragments of this report's text were recovered: a comparison of a martensitic SMA with steel in a sine upsweep (Figure 24), a dwell-test comparison with sine-sweep results, and the observation that a unique jump in amplitude occurs during a sine sweep if sufficient pre-stretch is applied.
Data for Room Fire Model Comparisons
Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis
1991-01-01
With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form) should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one room tests with individual furniture items to a series of tests conducted in a multiple story hotel equipped with a zoned smoke control system. PMID:28184121
Large eddy simulations of compressible magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Grete, Philipp
2017-02-01
Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number M_s ≈ 0.2) to the highly supersonic (M_s ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M_s ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
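As a generic illustration of the filtering idea behind LES closures discussed above, the sketch below box-filters a one-dimensional toy velocity field and evaluates a scale-similarity-type subgrid stress, tau = bar(u*u) - bar(u)*bar(u). This is the classical scale-similarity form used here only for illustration; it is not the nonlinear closure derived in the thesis, and the field and filter width are made up.

import numpy as np
from scipy.ndimage import uniform_filter1d

def box_filter(field, width):
    # Top-hat (box) filter with periodic boundaries.
    return uniform_filter1d(field, size=width, mode="wrap")

rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(1024))   # toy 1-D "velocity" signal
u -= u.mean()

width = 8
tau = box_filter(u * u, width) - box_filter(u, width) ** 2   # scale-similarity-type stress
print(float(tau.mean()))   # a proxy for the mean subgrid kinetic energy (non-negative)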
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. The National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed performance equal to that of other published scales predicting ELVO, despite being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
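A minimal sketch of how the PASS cut point described above might be applied in practice, assuming each of the three items has simply been judged normal or abnormal (this is an illustration of the scoring arithmetic only, not a validated clinical implementation):

def pass_score(loc_month_age_abnormal, gaze_palsy, arm_weakness):
    # Prehospital Acute Stroke Severity (PASS): one point per abnormal item.
    return int(loc_month_age_abnormal) + int(gaze_palsy) + int(arm_weakness)

def suspect_elvo(score, cut_point=2):
    # Flag suspected emergent large vessel occlusion at PASS >= cut point.
    return score >= cut_point

score = pass_score(loc_month_age_abnormal=True, gaze_palsy=True, arm_weakness=False)
print(score, suspect_elvo(score))   # 2 True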
Development of analog watch with minute repeater
NASA Astrophysics Data System (ADS)
Okigami, Tomio; Aoyama, Shigeru; Osa, Takashi; Igarashi, Kiyotaka; Ikegami, Tomomi
A complementary metal oxide semiconductor large-scale integration chip was developed for an electronic minute repeater. It is equipped with a synthetic struck-sound circuit that generates the natural struck sound required for the minute repeater. This circuit consists of an envelope curve drawing circuit, a frequency mixer, a polyphonic mixer, and a booster circuit, built using analog circuit technology. The chip is a single-chip microcomputer with motor drivers and input ports in addition to the synthetic struck-sound circuit, and it makes it possible to build an electronic minute repeater system at very low cost in comparison with the conventional type.
NASA Technical Reports Server (NTRS)
Dittmer, P. H.; Scherrer, P. H.; Wilcox, J. M.
1978-01-01
The large-scale solar velocity field has been measured over an aperture of radius 0.8 solar radii on 121 days between April and September, 1976. Measurements are made in the line Fe I 5123.730 A, employing a velocity subtraction technique similar to that of Severny et al. (1976). Comparisons of the amplitude and frequency of the five-minute resonant oscillation with the geomagnetic C9 index and magnetic sector boundaries show no evidence of any relationship between the oscillations and coronal holes or sector structure.
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model within a data assimilation system instead confronts it directly with observations, as it produces its own analyses, which may reveal systematic errors. In this study, regional analyses over North America are produced with the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter through large values of the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into much better agreement with observations.
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Giusarma, Elena
2017-10-01
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on a comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R
2008-01-01
A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area would be needed and 600,000 m(3) of storage volume. Secondly, membrane purification of the digestate was investigated, consisting of a decanter, microfiltration, and reverse osmosis. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.
Single-trabecula building block for large-scale finite element models of cancellous bone.
Dagan, D; Be'ery, M; Gefen, A
2004-07-01
Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.
Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations
NASA Technical Reports Server (NTRS)
Kaiser, N.; Davis, M.
1985-01-01
It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to determine whether the density fluctuations derived from the Corona cloud conflict with independent constraints. This estimate is also compared with the density fluctuations predicted to arise in a universe dominated by cold dark matter.
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians’ ratings, many large-scale studies have been conducted to quantify agreement between multiple experts’ ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]; these are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients’ test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
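For background, the sketch below computes the kind of kappa-type agreement statistic that the abstract contrasts with its model-based measure: unweighted and quadratically weighted Cohen's kappa for two raters on an ordinal scale. The ratings are hypothetical and the snippet is illustrative only; it is not the authors' proposed ordinal mixed-model approach.

```python
# Illustrative only: kappa-style agreement for two raters on an ordinal scale,
# using hypothetical ratings. This is the classical approach the abstract
# contrasts with its ordinal mixed-model measure, not the proposed method.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal ratings (e.g., severity categories 1-5) from two raters.
rater_a = [1, 2, 3, 3, 4, 5, 2, 3, 4, 4]
rater_b = [1, 2, 2, 3, 4, 4, 2, 3, 5, 4]

kappa_unweighted = cohen_kappa_score(rater_a, rater_b)
kappa_weighted = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

print(f"unweighted kappa: {kappa_unweighted:.3f}")
print(f"quadratically weighted kappa: {kappa_weighted:.3f}")
```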
Klein, Brennan J.; Li, Zhi; Durgin, Frank H.
2015-01-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides in order to dissociate egocentric from allocentric reference frames. In Experiment 1 it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. PMID:26594884
Deployment dynamics and control of large-scale flexible solar array system with deployable mast
NASA Astrophysics Data System (ADS)
Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping
2016-10-01
In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast and the solar arrays with their several mechanisms. The dynamic equation of the solar array system is then established by the Jourdain velocity variation principle, and a method for dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with the ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model is effective in describing the deployment dynamics of the large-scale flexible solar arrays and that the proposed controller is practical for eliminating the drift of the spacecraft main body.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, C.E.; Bass, B.R.; Keeney, J.A.
This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.
NASA Astrophysics Data System (ADS)
Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua
2015-12-01
Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky wave ground backscatter radar are both capable of measuring the large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and the consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly shows that both methods provide accurate information on ionospheric electron density and thereby offer reliable approaches to ionospheric sounding. Our study can improve the current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.
Large-scale Graph Computation on Just a PC
2014-05-01
(Fragmented excerpt) ... edges for several vertices simultaneously). We compared the performance of GraphChi-DB to Neo4j using their Java API (we discuss the MySQL comparison in the ...). ... The C++ implementation of GraphChi has circa 8,000 lines of code. We have also developed a Java version of GraphChi, but it does not ...
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
Three-Year Evaluation of a Large Scale Early Grade French Immersion Program: The Ottawa Study
ERIC Educational Resources Information Center
Barik, Henri; Swain, Merrill
1975-01-01
The school performance of pupils in grades K-2 of the French immersion program in operation in Ottawa public schools is evaluated in comparison with that of pupils in the regular English program. (Author/RM)
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
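For orientation, the continuous-adjoint gradient that underlies adjoint sensitivity analysis can be sketched as follows. This is the textbook formulation for an integral objective, shown as background; the paper itself treats time-discrete measurements, whose exact handling is not reproduced here.

```latex
% Standard continuous-adjoint sensitivity sketch (background only; the paper
% treats time-discrete measurements). For an ODE model
%   dx/dt = f(x, theta, t),  x(0) = x_0(theta),
% and an objective J(theta) = \int_0^T g(x, theta, t) dt, the adjoint state
% lambda(t) is obtained from a single backward solve,
\[
\dot{\lambda} = -\left(\frac{\partial f}{\partial x}\right)^{\top}\lambda
                -\left(\frac{\partial g}{\partial x}\right)^{\top},
\qquad \lambda(T) = 0,
\]
% after which the full gradient follows at a cost that is essentially
% independent of the number of parameters:
\[
\frac{dJ}{d\theta} = \int_0^T \left(\frac{\partial g}{\partial \theta}
   + \lambda^{\top}\frac{\partial f}{\partial \theta}\right) dt
   + \lambda(0)^{\top}\,\frac{\partial x_0}{\partial \theta}.
\]
```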
Tao, Tao; Wyer, Robert S; Zheng, Yuhuang
2017-03-01
We propose a two-process conceptualization of numerical information processing to describe how people form impressions of a score that is described along a bounded scale. According to the model, people spontaneously categorize a score as high or low. Furthermore, they compare the numerical discrepancy between the score and the endpoint of the scale to which it is closer, if they are not confident of their categorization, and use implications of this comparison as a basis for judgment. As a result, their evaluation of the score is less extreme when the range of numbers along the scale is large (e.g., from 0 to 100) than when it is small (from 0 to 10). Six experiments support this two-process model and demonstrate its generalizability. Specifically, the magnitude of numbers composing the scale has less impact on judgments (a) when the score being evaluated is extreme, (b) when individuals are unmotivated to engage in endpoint comparison processes (i.e., they are low in need for cognition), and (c) when they are unable to do so (i.e., they are under cognitive load). Moreover, the endpoint to which individuals compare the score can depend on their regulatory focus. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Vullo, Carlos M; Romero, Magdalena; Catelli, Laura; Šakić, Mustafa; Saragoni, Victor G; Jimenez Pleguezuelos, María Jose; Romanini, Carola; Anjos Porto, Maria João; Puente Prieto, Jorge; Bofarull Castro, Alicia; Hernandez, Alexis; Farfán, María José; Prieto, Victoria; Alvarez, David; Penacino, Gustavo; Zabalza, Santiago; Hernández Bolaños, Alejandro; Miguel Manterola, Irati; Prieto, Lourdes; Parsons, Thomas
2016-03-01
The GHEP-ISFG Working Group has recognized the importance of assisting DNA laboratories to gain expertise in handling DVI or missing persons identification (MPI) projects which involve the need for large-scale genetic profile comparisons. Eleven laboratories participated in a DNA matching exercise to identify victims from a hypothetical conflict with 193 missing persons. The post mortem database comprised 87 skeletal remains profiles from a secondary mass grave displaying a minimal number of 58 individuals with evidence of commingling. The reference database was represented by 286 family reference profiles with diverse pedigrees. The goal of the exercise was to correctly discover re-associations and family matches. The results of direct matching for commingled remains re-associations were correct and fully concordant among all laboratories. However, the kinship analysis for missing persons identifications showed variable results among the participants. There was a group of laboratories with correct, concordant results, but nearly half of the others showed discrepant results, exhibiting likelihood ratio differences of several orders of magnitude in some cases. Three main errors were detected: (a) some laboratories did not use the complete reference family genetic data to report the match with the remains, (b) the identity and/or non-identity hypotheses were sometimes wrongly expressed in the likelihood ratio calculations, and (c) many laboratories did not properly evaluate the prior odds for the event. The results suggest that large-scale profile comparison for DVI or MPI is a challenge for forensic genetics laboratories and that the statistical treatment of DNA matching and the Bayesian framework should be better standardized among laboratories. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
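To make error (c) concrete, the Bayesian bookkeeping referred to in the abstract can be written as below. The prior-odds choice of roughly one out of N missing persons is only a common convention for such projects, not a value taken from the exercise.

```latex
% Likelihood-ratio framework referred to in the abstract (illustrative; the
% prior odds of about 1/N for N missing persons is one common convention).
\[
LR = \frac{P(E \mid H_1:\ \text{remains belong to the missing relative})}
          {P(E \mid H_2:\ \text{remains belong to an unrelated individual})},
\qquad
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; LR \times \frac{P(H_1)}{P(H_2)} .
\]
```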
Ice Shape Scaling for Aircraft in SLD Conditions
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2008-01-01
This paper has summarized recent NASA research into scaling of SLD conditions with data from both SLD and Appendix C tests. Scaling results obtained by applying existing scaling methods for size and test-condition scaling will be reviewed. Large feather growth issues, including scaling approaches, will be discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling also can be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented have been obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit and test conditions are described.
Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R
2012-01-01
In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under the curve (AUC) and functional data analysis (FDA), in conjunction with permutation testing, are employed for testing the differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors; an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
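As an illustration of the kind of small-world comparison described above, the sketch below thresholds a hypothetical gray-matter correlation matrix into a binary graph and computes a small-worldness index with networkx. It is a minimal stand-in for this class of analysis, not GAT itself; GAT's GUI, AUC/FDA statistics and permutation tests are not reproduced.

```python
# Minimal, illustrative small-world computation on a thresholded correlation
# network built from hypothetical data; not GAT itself.
import numpy as np
import networkx as nx

def avg_path_length(g):
    # Use the largest connected component so the path length is defined.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return nx.average_shortest_path_length(giant)

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 90))       # 200 hypothetical subjects x 90 regions
corr = np.corrcoef(data, rowvar=False)

# Threshold correlations into a binary, undirected graph (no self-loops).
adj = (np.abs(corr) > 0.10) & ~np.eye(90, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

# Random benchmark with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = avg_path_length(G), avg_path_length(R)
sigma = (C / C_rand) / (L / L_rand)     # small-worldness index
print(f"C={C:.3f}, L={L:.3f}, small-worldness sigma={sigma:.3f}")
```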
Slater, L.E.; Burford, R.O.
1979-01-01
A comparison of creepmeter records from nine sites along a 12-km segment of the Calaveras fault near Hollister, California and long-baseline strain changes for nine lines in the Hollister multiwavelength distance-measuring (MWDM) array has established that episodes of large-scale deformation both preceded and accompanied periods of creep activity monitored along the fault trace during 1976. A concept of episodic, deep-seated aseismic slip that contributes to loading and subsequent aseismic failure of shallow parts of the fault plane seems attractive, implying that the character of aseismic slip sensed along the surface trace may be restricted to a relatively shallow (~ 1-km) region on the fault plane. Preliminary results from simple dislocation models designed to test the concept demonstrate that extending the time-histories and amplitudes of creep events sensed along the fault trace to depths of up to 10 km on the fault plane cannot simulate adequately the character and amplitudes of large-scale episodic movements observed at points more than 1 km from the fault. Properties of a 2-3-km-thick layer of unconsolidated sediments present in Hollister Valley, combined with an essentially rigid-block behavior in buried basement blocks, might be employed in the formulation of more appropriate models that could predict patterns of shallow fault creep and large-scale displacements much more like those actually observed. © 1979.
NASA Astrophysics Data System (ADS)
McFarland, Jacob A.; Reilly, David; Black, Wolfgang; Greenough, Jeffrey A.; Ranjan, Devesh
2015-07-01
The interaction of a small-wavelength multimodal perturbation with a large-wavelength inclined interface perturbation is investigated for the reshocked Richtmyer-Meshkov instability using three-dimensional simulations. The ares code, developed at Lawrence Livermore National Laboratory, was used for these simulations, and a detailed comparison of simulation results and experiments performed at the Georgia Tech Shock Tube facility is presented first for code validation. Simulation results are presented for four cases that vary in large-wavelength perturbation amplitude and the presence of secondary small-wavelength multimode perturbations. Previously developed measures of mixing and turbulence quantities are presented that highlight the large variation in perturbation length scales created by the inclined interface and the multimode complex perturbation. Measures are developed for entrainment and turbulence anisotropy that help to identify the effects of, and competition between, each perturbation type. It is shown through multiple measures that before reshock the flow possesses a distinct memory of the initial conditions that is present in both large-scale-driven entrainment measures and small-scale-driven mixing measures. After reshock the flow develops to a turbulent-like state that retains a memory of high-amplitude but not low-amplitude large-wavelength perturbations. It is also shown that the high-amplitude large-wavelength perturbation is capable of producing small-scale mixing and turbulent features similar to the small-wavelength multimode perturbations.
Comparison of large-scale structures and velocities in the local universe
NASA Technical Reports Server (NTRS)
Yahil, Amos
1994-01-01
Comparison of the large-scale density and velocity fields in the local universe shows detailed agreement, strengthening the standard paradigm of the gravitational origin of these structures. Quantitative analysis can determine the cosmological density parameter, Omega, and the biasing factor, b; there is virtually no sensitivity in any local analyses to the cosmological constant, lambda. Comparison of the dipole anisotropy of the cosmic microwave background with the acceleration due to the Infrared Astronomy Satellite (IRAS) galaxies puts the linear growth factor beta = Omega^0.6/b in the range 0.6 (+0.7/-0.3) (95% confidence). A direct comparison of the density and velocity fields of nearby galaxies gives beta = 1.3 (+0.7/-0.6), and nonlinear analysis gives the weaker limit Omega > 0.45 for b > 0.5 (again 95% confidence). A tighter limit, Omega > 0.3 (4-6 sigma), is obtained by a reconstruction of the probability distribution function of the initial fluctuations from which the structures observed today arose. The last two methods depend critically on the smooth velocity field determined from the observed velocities of nearby galaxies by the POTENT method. A new analysis of these velocities, with more than three times the data used to obtain the above quoted results, is now underway and promises to tighten the uncertainties considerably, as well as reduce systematic bias.
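For background, the linear-theory relation that underlies such density-velocity comparisons, and the definition of the quoted growth factor, can be written as follows; this is the standard gravitational-instability expression, not a formula quoted from the paper.

```latex
% Linear gravitational-instability relation underlying density-velocity
% comparisons (standard form, shown only as background).
\[
\beta \equiv \frac{\Omega^{0.6}}{b},
\qquad
\mathbf{v}(\mathbf{r}) = \frac{H_0\,\beta}{4\pi}
  \int d^3r'\; \delta_g(\mathbf{r}')\,
  \frac{\mathbf{r}'-\mathbf{r}}{\left|\mathbf{r}'-\mathbf{r}\right|^{3}} .
\]
```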
Fabio, Anthony; Geller, Ruth; Bazaco, Michael; Bear, Todd M; Foulds, Abigail L; Duell, Jessica; Sharma, Ravi
2015-01-01
Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime decreased in both neighborhoods. Large-scale economic developments have a direct influence on the perception of violence, despite actual violence rates.
The Effect of a State Department of Education Teacher Mentor Initiative on Science Achievement
NASA Astrophysics Data System (ADS)
Pruitt, Stephen L.; Wallace, Carolyn S.
2012-06-01
This study investigated the effectiveness of a southern state's department of education program to improve science achievement through embedded professional development of science teachers in the lowest performing schools. The Science Mentor Program provided content and inquiry-based coaching by teacher leaders to science teachers in their own classrooms. The study analyzed the mean scale scores for the science portion of the state's high school graduation test for the years 2004 through 2007 to determine whether schools receiving the intervention scored significantly higher than comparison schools receiving no intervention. The results showed that all schools achieved significant improvement of scale scores between 2004 and 2007, but there were no significant performance differences between intervention and comparison schools, nor were there any significant differences between various subgroups in intervention and comparison schools. However, one subgroup, economically disadvantaged (ED) students, from high-level intervention schools closed the achievement gap with ED students from no-intervention schools across the period of the study. The study provides important information to guide future research on and design of large-scale professional development programs to foster inquiry-based science.
A high-resolution European dataset for hydrologic modeling
NASA Astrophysics Data System (ADS)
Ntegeka, Victor; Salamon, Peter; Gomes, Goncalo; Sint, Hadewij; Lorini, Valerio; Thielen, Jutta
2013-04-01
There is an increasing demand for large-scale hydrological models not only in the field of modeling the impact of climate change on water resources but also for disaster risk assessments and flood or drought early warning systems. These large-scale models need to be calibrated and verified against large amounts of observations in order to judge their capabilities to predict the future. However, the creation of large-scale datasets is challenging, for it requires collection, harmonization, and quality checking of large amounts of observations. For this reason, only a limited number of such datasets exist. In this work, we present a pan-European, high-resolution gridded dataset of meteorological observations (EFAS-Meteo) which was designed with the aim to drive a large-scale hydrological model. Similar European and global gridded datasets already exist, such as the HadGHCND (Caesar et al., 2006), the JRC MARS-STAT database (van der Goot and Orlandi, 2003) and the E-OBS gridded dataset (Haylock et al., 2008). However, none of those provide similarly high spatial resolution and/or a complete set of variables to force a hydrologic model. EFAS-Meteo contains daily maps of precipitation, surface temperature (mean, minimum and maximum), wind speed and vapour pressure at a spatial grid resolution of 5 x 5 km for the time period 1 January 1990 - 31 December 2011. It furthermore contains radiation, calculated using a staggered approach depending on the availability of sunshine duration, cloud cover and minimum and maximum temperature, as well as evapotranspiration (potential evapotranspiration, bare soil and open water evapotranspiration). The potential evapotranspiration was calculated using the Penman-Monteith equation with the above-mentioned meteorological variables. The dataset was created as part of the development of the European Flood Awareness System (EFAS) and has been continuously updated throughout the last years. The dataset variables are used as inputs to the hydrological calibration and validation of EFAS as well as for establishing long-term discharge "proxy" climatologies which can then in turn be used for statistical analysis to derive return periods or other time series derivatives. In addition, this dataset will be used to assess climatological trends in Europe. Unfortunately, to date no baseline dataset at the European scale exists to test the quality of the data presented here. A comparison against other existing datasets can therefore only give an indication of data quality. Due to availability, a comparison was made for precipitation and temperature only, arguably the most important meteorological drivers for hydrologic models. A variety of analyses was undertaken at country scale against data reported to EUROSTAT and E-OBS datasets. The comparison revealed that while the datasets showed overall similar temporal and spatial patterns, there were some differences in magnitudes, especially for precipitation. It is not straightforward to define the specific cause for these differences. However, in most cases the comparatively low observation station density appears to be the principal reason for the differences in magnitude.
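For reference, the FAO-56 form of the Penman-Monteith reference evapotranspiration equation is sketched below. The abstract does not state which parameterization EFAS-Meteo uses, so this standard form is shown only as an illustration.

```latex
% FAO-56 Penman-Monteith reference evapotranspiration (standard form, shown
% as an illustration; the exact EFAS-Meteo parameterization is not stated in
% the abstract).
\[
ET_0 = \frac{0.408\,\Delta\,(R_n - G)
       + \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}
      {\Delta + \gamma\,(1 + 0.34\,u_2)}
\]
```

Here Delta is the slope of the saturation vapour-pressure curve, R_n the net radiation, G the soil heat flux, gamma the psychrometric constant, T the mean air temperature, u_2 the wind speed at 2 m height, and e_s - e_a the vapour-pressure deficit.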
A Comparison of Obsessive-Compulsive Personality Disorder Scales
Samuel, Douglas B.; Widiger, Thomas A.
2010-01-01
The current study utilized a large undergraduate sample (n = 536), oversampled for DSM-IV-TR obsessive-compulsive personality disorder (OCPD) pathology, to compare eight self-report measures of OCPD. No prior study has compared more than three measures and the results indicated that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to two external reference points: Their relationships with a well established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison the results suggested important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, while some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder. PMID:20408023
Comfort, Alison B.; van Dijk, Janneke H.; Mharakurwa, Sungano; Stillman, Kathryn; Gabert, Rose; Korde, Sonali; Nachbar, Nancy; Derriennic, Yann; Musau, Stephen; Hamazakaza, Petan; Zyambo, Khozya D.; Zyongwe, Nancy M.; Hamainza, Busiku; Thuma, Philip E.
2014-01-01
There is little evidence on the impact of malaria control on the health system, particularly at the facility level. Using retrospective, longitudinal facility-level and patient record data from two hospitals in Zambia, we report a pre-post comparison of hospital admissions and outpatient visits for malaria and estimated costs incurred for malaria admissions before and after malaria control scale-up. The results show a substantial reduction in inpatient admissions and outpatient visits for malaria at both hospitals after the scale-up, and malaria cases accounted for a smaller proportion of total hospital visits over time. Hospital spending on malaria admissions also decreased. In one hospital, malaria accounted for 11% of total hospital spending before large-scale malaria control compared with < 1% after malaria control. The findings demonstrate that facility-level resources are freed up as malaria is controlled, potentially making these resources available for other diseases and conditions. PMID:24218409
A Large-Scale Evaluation of an Intelligent Discovery World: Smithtown.
ERIC Educational Resources Information Center
Shute, Valerie J.; Glaser, Robert
1990-01-01
Presents an evaluation of "Smithtown," an intelligent tutoring system designed to teach inductive inquiry skills and principles of basic microeconomics. Two studies of individual differences in learning are described, including a comparison of knowledge acquisition with traditional instruction; hypotheses tested are discussed; and the…
Testing the DQP: What Was Learned about Learning Outcomes?
ERIC Educational Resources Information Center
Ickes, Jessica L.; Flowers, Daniel R.
2015-01-01
Through a campuswide project using the Degree Qualifications Profile (DQP) as a comparison tool that engaged students and faculty, the authors share findings and implications about learning outcomes for IR professionals and DQP authors while considering the role of IR in large-scale, campuswide projects.
COMPARISON OF THE SINK CHARACTERISTICS OF THREE FULL-SCALE ENVIRONMENTAL CHAMBERS
The paper gives results of an investigation of the interaction of vapor-phase organic compounds with the interior surfaces of three large dynamic test chambers. A pattern of adsorption and reemission of the test compounds was observed in all three chambers. Quantitative compari...
NASA Astrophysics Data System (ADS)
Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.
2018-04-01
Recent analyses of Solar-Terrestrial Relations Observatory (STEREO) imaging observations have described the early stages of the development of turbulence in the young solar wind in solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and the first plasma-beta-equals-unity surfaces. The compatibility of the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R⊙. We affirm that the isotropization occurs in the vicinity of the first beta-unity surface. The interpretation based on early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.
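For readers unfamiliar with the diagnostic, the second-order structure function used in such anisotropy comparisons is, schematically, the following; this is the generic definition, not a formula quoted from the paper.

```latex
% Second-order structure function of the magnetic field (schematic).
% Isotropization means S_2 depends on the lag magnitude alone rather than
% on its orientation.
\[
S_2(\boldsymbol{\ell}) =
  \left\langle \left|\mathbf{B}(\mathbf{x}+\boldsymbol{\ell})
  - \mathbf{B}(\mathbf{x})\right|^{2} \right\rangle
\]
```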
Power-law versus log-law in wall-bounded turbulence: A large-eddy simulation perspective
NASA Astrophysics Data System (ADS)
Cheng, W.; Samtaney, R.
2014-01-01
The debate over whether the mean streamwise velocity in wall-bounded turbulent flows obeys a log-law or a power-law scaling originated over two decades ago, and continues to ferment in recent years. As experiments and direct numerical simulation cannot provide sufficient clues, in this study we present an insight into this debate from a large-eddy simulation (LES) viewpoint. The LES organically combines state-of-the-art models (the stretched-vortex model and inflow rescaling method) with a virtual-wall model derived under different scaling-law assumptions (the log law or the power law of George and Castillo ["Zero-pressure-gradient turbulent boundary layer," Appl. Mech. Rev. 50, 689 (1997)]). Comparisons of LES results for Reθ ranging from 10^5 to 10^11 for zero-pressure-gradient turbulent boundary layer flows are carried out for the mean streamwise velocity, its gradient and its scaled gradient. Our results provide strong evidence that for both sets of modeling assumptions (log law or power law), the turbulence gravitates naturally towards the log-law scaling at extremely large Reynolds numbers.
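For reference, the two competing scalings in this debate take the following schematic forms in inner units; the Reynolds-number dependence of the power-law coefficient and exponent in the George-Castillo theory is not reproduced here.

```latex
% Schematic forms of the two competing mean-velocity scalings in inner units
% (kappa ~ 0.41, B ~ 5 for the classical log law; in the George-Castillo
% theory the power-law coefficient and exponent vary slowly with Reynolds
% number).
\[
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B
\qquad \text{versus} \qquad
u^{+} = C_i\,(y^{+})^{\gamma}
\]
```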
An evaluation of proposed acoustic treatments for the NASA LaRC 4 x 7 meter wind tunnel
NASA Technical Reports Server (NTRS)
Abrahamson, A. L.
1985-01-01
The NASA LaRC 4 x 7 Meter Wind Tunnel is an existing facility specially designed for powered low-speed (V/STOL) testing of large-scale fixed-wing and rotorcraft models. The enhancement of the facility for scale-model acoustic testing is examined. The results are critically reviewed, and comparisons are drawn with a similar wind tunnel (the DNW facility in the Netherlands). Discrepancies observed in the comparison stimulated a theoretical investigation, using the acoustic finite element ADAM system, of the ways in which noise propagating around the tunnel circuit radiates into the open test section. This clarifies the reasons for the discrepancies noted above and assists in the selection of acoustic treatment options for the facility.
NASA Technical Reports Server (NTRS)
Dodge, R. N.; Clark, S. K.
1981-01-01
The properties were measured during static, slow-rolling, and high-speed tests, and comparisons were made between data acquired on indoor drum dynamometers and on an outdoor test track. In addition, mechanical properties were also obtained from scale-model tires and compared with corresponding properties from full-size tires. While the tests covered a wide range of tire properties, the results seem to indicate that speed effects are not large, that scale models may be used for obtaining some but not all tire properties, and that the predictive equations developed in NASA TR R-64 are still useful in estimating most mechanical properties.
Mandák, Bohumil; Hadincová, Věroslava; Mahelka, Václav; Wildová, Radka
2013-01-01
Background: North American Pinus strobus is a highly invasive tree species in Central Europe. Using ten polymorphic microsatellite loci we compared various aspects of the large-scale genetic diversity of individuals from 30 sites in the native distribution range with those from 30 sites in the European adventive distribution range. To investigate the ascertained pattern of genetic diversity of this intercontinental comparison further, we surveyed fine-scale genetic diversity patterns and changes over time within four highly invasive populations in the adventive range. Results: Our data show that at the large scale the genetic diversity found within the relatively small adventive range in Central Europe, surprisingly, equals the diversity found within the sampled area in the native range, which is about thirty times larger. Bayesian assignment grouped individuals into two genetic clusters separating North American native populations from the European, non-native populations, without any strong genetic structure shown over either range. At the fine scale, our comparison of genetic diversity parameters among the localities and age classes yielded no evidence of an increase in genetic diversity over time. We found that spatial genetic structure (SGS) differed across age classes within the populations under study. Old trees in general completely lacked any SGS, which increased over time and reached its maximum in the sapling stage. Conclusions: Based on (1) the absence of difference in genetic diversity between the native and adventive ranges, together with the lack of structure in the native range, and (2) the lack of any evidence of any temporal increase in genetic diversity at four highly invasive populations in the adventive range, we conclude that population amalgamation probably first happened in the native range, prior to introduction. In that case, there would have been no need for multiple introductions from previously isolated populations, but only several introductions from genetically diverse populations. PMID:23874648
When Transparency Obscures: The Political Spectacle of Accountability
ERIC Educational Resources Information Center
Koyama, Jill; Kania, Brian
2014-01-01
In the United States (US), an increase in standardization, quantification, competition, and large-scale comparison--cornerstones of neoliberal accountability--have been accompanied by devices of transparency, through which various forms of school data are made available to the public. Such public reporting, we are told by politicians and education…
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Patterns and processes of Mycobacterium bovis evolution revealed by phylogenomic analyses
USDA-ARS?s Scientific Manuscript database
Mycobacterium bovis is an important animal pathogen worldwide that parasitizes wild and domesticated vertebrate livestock as well as humans. A comparison of the five M. bovis complete genomes from UK, South Korea, Brazil and USA revealed four novel large-scale structural variations of at least 2,000...
ERIC Educational Resources Information Center
Shi, Qingmin; Zhang, Shaoan; Lin, Emily
2014-01-01
Drawing on large-scale international teachers' data from Hungary, Korea, Norway, and Turkey in the Teaching and Learning International Survey in 2008 assessment, this study examined the relationships between new teachers' beliefs about instruction (direct transmission and constructivist beliefs) and teaching practices (structured, student…
Limited Aspects of Reality: Frames of Reference in Language Assessment
ERIC Educational Resources Information Center
Fulcher, Glenn; Svalberg, Agneta
2013-01-01
Language testers operate within two frames of reference: norm-referenced (NRT) and criterion-referenced testing (CRT). The former underpins the world of large-scale standardized testing that prioritizes variability and comparison. The latter supports substantive score meaning in formative and domain specific assessment. Some claim that the…
What Makes Professional Development Effective? Results from a National Sample of Teachers.
ERIC Educational Resources Information Center
Garet, Michael S.; Porter, Andrew C.; Desimone, Laura; Birman, Beatrice F.; Yoon, Kwang Suk
2001-01-01
Used a national probability sample of 1,027 mathematics and science teachers to provide a large-scale empirical comparison of effects of different characteristics of professional development on teachers' learning. Results identify three core features of professional development that have significant positive effects on teachers' self-reported…
Forest-water reuse (FWR) systems treat municipal, industrial, and agricultural wastewaters via land application to forest soils. Previous studies have shown that both large-scale conventional wastewater treatment plants (WWTPs) and FWR systems do not completely remove many contam...
Behavioral self-organization underlies the resilience of a coastal ecosystem.
de Paoli, Hélène; van der Heide, Tjisse; van den Berg, Aniek; Silliman, Brian R; Herman, Peter M J; van de Koppel, Johan
2017-07-25
Self-organized spatial patterns occur in many terrestrial, aquatic, and marine ecosystems. Theoretical models and observational studies suggest self-organization, the formation of patterns due to ecological interactions, is critical for enhanced ecosystem resilience. However, experimental tests of this cross-ecosystem theory are lacking. In this study, we experimentally test the hypothesis that self-organized pattern formation improves the persistence of mussel beds (Mytilus edulis) on intertidal flats. In natural beds, mussels generate self-organized patterns at two different spatial scales: regularly spaced clusters of mussels at centimeter scale driven by behavioral aggregation and large-scale, regularly spaced bands at meter scale driven by ecological feedback mechanisms. To test for the relative importance of these two spatial scales of self-organization on mussel bed persistence, we conducted field manipulations in which we factorially constructed small-scale and/or large-scale patterns. Our results revealed that both forms of self-organization enhanced the persistence of the constructed mussel beds in comparison to nonorganized beds. Small-scale, behaviorally driven cluster patterns were found to be crucial for persistence, and thus resistance to wave disturbance, whereas large-scale, self-organized patterns facilitated reformation of small-scale patterns if mussels were dislodged. This study provides experimental evidence that self-organization can be paramount to enhancing ecosystem persistence. We conclude that ecosystems with self-organized spatial patterns are likely to benefit greatly from conservation and restoration actions that use the emergent effects of self-organization to increase ecosystem resistance to disturbance.
Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F
2012-04-01
Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.
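Scaling relationships of this kind are conventionally summarized by fitting a power law, rate = a * mass^b, in log-log space. The snippet below illustrates such a fit on made-up numbers and adds a quadratic term as a simple curvature check, mirroring the reported non-linear (plateauing) relationship; none of the values are from the study.

```python
# Illustrative allometric (power-law) fit, rate = a * mass**b, on hypothetical
# numbers; the curvature check mirrors the abstract's point that the scaling
# is non-linear, with a decreasing slope at large body mass.
import numpy as np

mass_g = np.array([0.3, 0.8, 1.5, 2.5, 3.5, 5.0, 7.0])             # hypothetical
vo2_ul_per_h = np.array([45., 95., 150., 210., 250., 280., 300.])  # hypothetical

log_m, log_v = np.log10(mass_g), np.log10(vo2_ul_per_h)

# Simple power law: the slope b is the scaling exponent.
b, log_a = np.polyfit(log_m, log_v, 1)
print(f"power-law exponent b = {b:.2f}, coefficient a = {10**log_a:.1f}")

# Quadratic term in log-log space; a negative coefficient indicates a
# decreasing slope (plateauing) at larger masses.
c2, c1, c0 = np.polyfit(log_m, log_v, 2)
print(f"quadratic (curvature) coefficient = {c2:.2f}")
```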
Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches
Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand
2018-01-01
Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
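As one concrete example of the difference-in-differences approach named above, the sketch below estimates a policy effect as the coefficient on the interaction of a treated-group indicator and a post-policy indicator; the data, effect size and column names are hypothetical.

```python
# Minimal difference-in-differences sketch on hypothetical panel data:
# the coefficient on treated:post is the DiD estimate of the policy effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # lives in a policy-affected area
    "post": rng.integers(0, 2, n),      # observed after the policy change
})
# Simulated outcome with a true policy effect of -2.0 on the treated-post cell.
df["outcome"] = (10 + 1.5 * df["treated"] + 0.5 * df["post"]
                 - 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

model = smf.ols("outcome ~ treated * post", data=df).fit(cov_type="HC1")
print(model.params["treated:post"], model.bse["treated:post"])
```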
Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate
Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.
2015-01-01
Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access (CSMA)) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
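For context, the classical throughput formulas for the random-access protocols compared above are given below, with G the offered load; these are textbook results, not values from the paper.

```latex
% Classical multiple-access throughput results (textbook forms, for context):
% pure (unslotted) ALOHA peaks at 1/(2e) ~ 0.18, slotted ALOHA at 1/e ~ 0.37.
\[
S_{\text{pure}} = G\,e^{-2G},
\qquad
S_{\text{slotted}} = G\,e^{-G}
\]
```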
Method for revealing biases in precision mass measurements
NASA Astrophysics Data System (ADS)
Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.
2013-02-01
A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.
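A toy version of the described two-stage procedure, with made-up numbers, is sketched below: the calibrated partial loads give a known difference between two combined loads, and the bias of the large-scale comparator is its indicated difference minus that known value.

```python
# Toy illustration (hypothetical numbers) of the bias-revealing method:
# calibrated partial loads give a known difference between two combined loads;
# the large-scale comparator's bias is its reading minus that known value.
import numpy as np

rng = np.random.default_rng(1)

# Deviations of 16 nominally 1 kg partial loads from 1 kg, in mg, as
# determined on the high-resolution (reference) comparator.
deviations_mg = rng.normal(0.0, 0.2, 16)

# Two combined loads assembled from disjoint groups of 8 weights each.
group_a, group_b = np.arange(0, 8), np.arange(8, 16)
known_diff_mg = deviations_mg[group_a].sum() - deviations_mg[group_b].sum()

# Reading of the large-scale comparator for the same pair of combined loads
# (hypothetical), including an unknown bias to be revealed.
indicated_diff_mg = known_diff_mg + 0.35 + rng.normal(0.0, 0.05)

bias_mg = indicated_diff_mg - known_diff_mg
print(f"revealed bias of the large-scale comparator: {bias_mg:+.2f} mg")
```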
Large-scale Rectangular Ruler Automated Verification Device
NASA Astrophysics Data System (ADS)
Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie
2018-03-01
This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive unit, the fixture, and the wheels. The control system design covers hardware and software: the hardware is built mainly around a single-chip microcontroller, and the software controls the photoelectric autocollimator and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison, and the results meet the requirements of the right-angle test procedure.
A conceptual design of shock-eliminating clover combustor for large scale scramjet engine
NASA Astrophysics Data System (ADS)
Sun, Ming-bo; Zhao, Yu-xin; Zhao, Guo-yan; Liu, Yuan
2017-01-01
A new concept of a shock-eliminating clover combustor is proposed for large-scale scramjet engines to fulfill the requirements of fuel penetration, total pressure recovery, and cooling. To generate the circular-to-clover transition shape of the combustor, a streamline tracing technique is applied to an axisymmetric expansion parent flowfield calculated using the method of characteristics. The combustor is examined using inviscid and viscous numerical simulations, and a purely circular shape is calculated for comparison. The results show that the combustor avoids shock wave generation and produces low total pressure losses over a wide range of flight conditions at various Mach numbers. The flameholding device for this combustor is briefly discussed.
Grid sensitivity capability for large scale structures
NASA Technical Reports Server (NTRS)
Nagendra, Gopal K.; Wallerstein, David V.
1989-01-01
The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
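Validation against an overall finite-difference method, as described above, amounts to perturbing a design variable and comparing the change in response with the analytic sensitivity. The toy example below does this for a single axial bar whose length plays the role of a grid variable; it is a generic gradient check, not the MSC/NASTRAN implementation.

```python
def tip_displacement(L, F=1000.0, E=70e9, A=1e-4):
    """Axial tip displacement u = F*L/(E*A) of a bar of length L (metres)."""
    return F * L / (E * A)

def analytic_sensitivity(L, F=1000.0, E=70e9, A=1e-4):
    """Analytic design sensitivity du/dL for the same bar."""
    return F / (E * A)

L0, h = 2.0, 1e-6
fd = (tip_displacement(L0 + h) - tip_displacement(L0 - h)) / (2.0 * h)  # central difference
print("analytic:", analytic_sensitivity(L0), " finite difference:", fd)
```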
NASA Astrophysics Data System (ADS)
Hixson, J.; Ward, A. S.; Schmadel, N.
2015-12-01
The exchange of water and solutes across the stream-hyporheic-riparian-hillslope continuum is controlled by the interaction of dynamic hydrological processes with the underlying geological setting. Our current understanding of exchange processes is primarily based on field observations collected during baseflow conditions, with few studies considering time-variable stream-aquifer interactions during storm events. We completed ten sets of four in-stream tracer slug injections during and after a large storm event in a headwater catchment at the H.J. Andrews Experimental Forest, Oregon. The injections were performed in three adjacent 50-meter study reaches, enabling comparison of spatial heterogeneity in transport processes. Reach-scale data demonstrate apparent trends with discharge in both transient storage and long-term storage (commonly "channel water balance"). Comparison of flowpath-scale observations from a network of monitoring wells to reach-scale observations showed that the advective timescale changed with discharge making it difficult to infer process from simple, reach-scale tracer studies. Overall, our results highlight the opportunities and challenges for interpretation of multi-scale solute tracer data along the stream-hyporheic-riparian-hillslope continuum.
Brankaer, Carmen; Ghesquière, Pol; De Smedt, Bert
2017-08-01
The ability to compare symbolic numerical magnitudes correlates with children's concurrent and future mathematics achievement. We developed and evaluated a quick timed paper-and-pencil measure that can easily be used, for example in large-scale research, in which children have to cross out the numerically larger of two Arabic one- and two-digit numbers (SYMP Test). We investigated performance on this test in 1,588 primary school children (Grades 1-6) and examined in each grade its associations with mathematics achievement. The SYMP Test had satisfactory test-retest reliability. The SYMP Test showed significant and stable correlations with mathematics achievement for both one-digit and two-digit comparison, across all grades. This replicates the previously observed association between symbolic numerical magnitude processing and mathematics achievement, but extends it by showing that the association is observed in all grades in primary education and occurs for single- as well as multi-digit processing. Children with mathematical learning difficulties performed significantly lower on one-digit comparison and two-digit comparison in all grades. This all suggests satisfactory construct and criterion-related validity of the SYMP Test, which can be used in research, when performing large-scale (intervention) studies, and by practitioners, as screening measure to identify children at risk for mathematical difficulties or dyscalculia.
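The key statistics reported for the SYMP Test (test-retest reliability and per-grade correlations with mathematics achievement) are plain Pearson correlations. A minimal sketch with invented scores is shown below; the variable names and data are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Invented data: SYMP scores at two sessions and a maths achievement score per child.
n = 200
symp_t1 = rng.normal(30, 6, n)
symp_t2 = symp_t1 + rng.normal(0, 3, n)          # retest scores
maths = 0.5 * symp_t1 + rng.normal(0, 5, n)      # achievement loosely tied to SYMP

r_retest, _ = pearsonr(symp_t1, symp_t2)         # test-retest reliability
r_valid, p = pearsonr(symp_t1, maths)            # criterion-related validity
print(f"test-retest r = {r_retest:.2f}, SYMP-maths r = {r_valid:.2f} (p = {p:.3g})")
```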
Size and structure of Chlorella zofingiensis/FeCl3 flocs in a shear flow: Algae Floc Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyatt, Nicholas B.; O'Hern, Timothy J.; Shelden, Bion
Flocculation is a promising method to overcome the economic hurdle of separating algae from its growth medium in large-scale operations. However, understanding of the floc structure and the effects of shear on the floc structure is crucial to the large-scale implementation of this technique. The floc structure is important because it determines, in large part, the density and settling behavior of the algae. Freshwater algae floc size distributions and fractal dimensions are presented as a function of applied shear rate in a Couette cell using ferric chloride as a flocculant. Comparisons are made with measurements made here for a polystyrene microparticle model system as well as with reported literature results. The algae floc size distributions are found to be self-preserving with respect to shear rate, consistent with literature data for polystyrene. Moreover, three fractal dimensions are calculated which quantitatively characterize the complexity of the floc structure. Low shear rates result in large, relatively densely packed flocs which elongate and fracture as the shear rate is increased. The results presented here provide crucial information for economically implementing flocculation as a large-scale algae harvesting strategy.
Scale-dependent temporal variations in stream water geochemistry.
Nagorski, Sonia A; Moore, Johnnie N; McKinnon, Temple E; Smith, David B
2003-03-01
A year-long study of four western Montana streams (two impacted by mining and two "pristine") evaluated surface water geochemical dynamics on various time scales (monthly, daily, and bi-hourly). Monthly changes were dominated by snowmelt and precipitation dynamics. On the daily scale, post-rain surges in some solute and particulate concentrations were similar to those of early spring runoff flushing characteristics on the monthly scale. On the bi-hourly scale, we observed diel (diurnal-nocturnal) cycling for pH, dissolved oxygen, water temperature, dissolved inorganic carbon, total suspended sediment, and some total recoverable metals at some or all sites. A comparison of the cumulative geochemical variability within each of the temporal groups reveals that for many water quality parameters there were large overlaps of concentration ranges among groups. We found that short-term (daily and bi-hourly) variations of some geochemical parameters covered large proportions of the variations found on a much longer term (monthly) time scale. These results show the importance of nesting short-term studies within long-term geochemical study designs to separate signals of environmental change from natural variability.
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, John K.
1991-01-01
Direct numerical simulation is used to study dispersion in decaying isotropic turbulence and homogeneous shear flow. Both Lagrangian and Eulerian data are presented allowing direct comparison, but at fairly low Reynolds number. The quantities presented include properties of the dispersion tensor, isoprobability contours of particle displacement, Lagrangian and Eulerian velocity autocorrelations and time scale ratios, and the eddy diffusivity tensor. The Lagrangian time microscale is found to be consistently larger than the Eulerian microscale, presumably due to the advection of the small scales by the large scales in the Eulerian reference frame.
CoCoNUT: an efficient system for the comparison and analysis of genomes
2008-01-01
Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and heavily relies on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system, CoCoNUT (Computational Comparative geNomics Utility Toolkit), that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools with respect to the quality of the results. The use of state-of-the-art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With the improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large-scale studies in comparative genomics. PMID:19014477
A survey on routing protocols for large-scale wireless sensor networks.
Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong
2011-01-01
With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a large number of nodes have become a hot topic. "Large-scale" mainly means a large network area or a high node density. Accordingly, routing protocols must scale well as the network extent and node density increase. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to address the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols have proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are classified based on criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn.
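As a concrete instance of the hierarchical idea discussed above, the sketch below implements the probabilistic cluster-head election threshold used by LEACH, one widely cited hierarchical protocol of the kind such surveys cover. The node count and desired cluster-head fraction are arbitrary, and this is only a schematic of the election step, not of any complete protocol.

```python
import random

def leach_threshold(p, r):
    """LEACH election threshold T(n) for round r, with desired cluster-head fraction p."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elect_cluster_heads(node_ids, p, r, been_head_recently):
    """Nodes that have not recently served as head draw a random number and
    become cluster heads for this round if it falls below T(n)."""
    heads = []
    for n in node_ids:
        if not been_head_recently[n] and random.random() < leach_threshold(p, r):
            heads.append(n)
    return heads

random.seed(3)
nodes = list(range(100))
recent = {n: False for n in nodes}
print("round-0 cluster heads:", elect_cluster_heads(nodes, p=0.05, r=0, been_head_recently=recent))
```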
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
2016-01-01
Recently a very large (739 runs) collection of high-fidelity RANS CFD solutions was obtained for Space Launch System ascent aerodynamics for the vehicle to be used for the first exploratory (unmanned) mission (EM-1). The extensive computations, at full-scale conditions, were originally developed to obtain detailed line and protuberance loads and surface pressures for venting analyses. The line loads were eventually integrated for comparison of the resulting forces and moments to the database that was derived from wind tunnel tests conducted at sub-scale conditions. The comparisons presented herein cover the ranges 0.5 ≤ M∞ ≤ 5, −6° ≤ α ≤ 6°, and −6° ≤ β ≤ 6°. For detailed comparisons, slender-body-theory-based component build-up aero models from missile aerodynamics are used. The differences in the model fit coefficients are shown to be relatively small except for the low supersonic Mach number range, 1.1 ≤ M∞ ≤ 2.0. The analysis is intended to support process improvement and development of uncertainty models.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
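The classical (Cramér-Lundberg) risk model used as the benchmark here admits a closed-form ruin probability when claims are exponentially distributed, which makes a convenient reference for any approximation. The snippet below compares that closed form with a crude Monte Carlo estimate; it is a textbook illustration under assumed parameters, not the authors' scaled Laplace inversion method.

```python
import numpy as np

def ruin_prob_exact(u, lam=1.0, mu=1.0, c=1.5):
    """Infinite-horizon ruin probability for the classical model with Exp(mean mu) claims."""
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)

def ruin_prob_mc(u, lam=1.0, mu=1.0, c=1.5, horizon=200.0, n_paths=4000, seed=4):
    """Crude finite-horizon Monte Carlo check (slightly underestimates the exact value)."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next claim arrival time
            if t > horizon:
                break
            claims += rng.exponential(mu)        # claim size drawn from Exp(mean mu)
            if u + c * t - claims < 0.0:         # ruin can only occur at a claim epoch
                ruined += 1
                break
    return ruined / n_paths

u0 = 5.0
print("exact      :", ruin_prob_exact(u0))
print("monte carlo:", ruin_prob_mc(u0))
```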
Massive Boson Production at Small qT in Soft-Collinear Effective Theory
NASA Astrophysics Data System (ADS)
Becher, Thomas; Neubert, Matthias; Wilhelm, Daniel
2013-01-01
We study the differential cross sections for electroweak gauge-boson and Higgs production at small and very small transverse-momentum qT. Large logarithms are resummed using soft-collinear effective theory. The collinear anomaly generates a non-perturbative scale q*, which protects the processes from receiving large long-distance hadronic contributions. A numerical comparison of our predictions with data on the transverse-momentum distribution in Z-boson production at the Tevatron and LHC is given.
NASA Astrophysics Data System (ADS)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
NASA Technical Reports Server (NTRS)
Hussain, A. K. M. F.
1980-01-01
Comparisons of the distributions of large-scale structures in turbulent flow with distributions based on time-dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Reynolds number of 32,000, in which coherent structures were induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that the use of the local time-average velocity or streamwise velocity produces large distortions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Khee-Gan; Hennawi, Joseph F.; Eilers, Anna-Christina
2014-11-01
We present the first observations of foreground Lyα forest absorption from high-redshift galaxies, targeting 24 star-forming galaxies (SFGs) with z ∼ 2.3-2.8 within a 5' × 14' region of the COSMOS field. The transverse sightline separation is ∼2 h⁻¹ Mpc comoving, allowing us to create a tomographic reconstruction of the three-dimensional (3D) Lyα forest absorption field over the redshift range 2.20 ≤ z ≤ 2.45. The resulting map covers 6 h⁻¹ Mpc × 14 h⁻¹ Mpc in the transverse plane and 230 h⁻¹ Mpc along the line of sight with a spatial resolution of ≈3.5 h⁻¹ Mpc, and is the first high-fidelity map of a large-scale structure on ∼Mpc scales at z > 2. Our map reveals significant structures with ≳10 h⁻¹ Mpc extent, including several spanning the entire transverse breadth, providing qualitative evidence for the filamentary structures predicted to exist in the high-redshift cosmic web. Simulated reconstructions with the same sightline sampling, spectral resolution, and signal-to-noise ratio recover the salient structures present in the underlying 3D absorption fields. Using data from other surveys, we identified 18 galaxies with known redshifts coeval with our map volume, enabling a direct comparison with our tomographic map. This shows that galaxies preferentially occupy high-density regions, in qualitative agreement with the same comparison applied to simulations. Our results establish the feasibility of the CLAMATO survey, which aims to obtain Lyα forest spectra for ∼1000 SFGs over ∼1 deg² of the COSMOS field, in order to map out the intergalactic medium large-scale structure at ⟨z⟩ ∼ 2.3 over a large volume (100 h⁻¹ Mpc)³.
Geller, Ruth; Bear, Todd M.; Foulds, Abigail L.; Duell, Jessica; Sharma, Ravi
2015-01-01
Background. Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. Objective. To assess the effect of community economic development efforts on neighborhood residents' perceptions on violence, safety, and economic benefits. Methods. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Results. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime decreased in both neighborhoods. Conclusions. Large-scale economic developments have a direct influence on the perception of violence, despite actual violence rates. PMID:26273310
The Politics of Education Revisited: Anthony Crosland and Michael Gove in Historical Perspective
ERIC Educational Resources Information Center
Finn, Mike
2015-01-01
This article traces continuity and change in the governance of British education through the comparison of two ministers, Anthony Crosland and Michael Gove. Taking Maurice Kogan's seminal "The Politics of Education" as the point of departure, the article highlights the role of political ideology in large-scale educational change, taking…
Uncovering the Hidden Meaning of Cross-Curriculum Comparison Results on the Force Concept Inventory
ERIC Educational Resources Information Center
Ding, Lin; Caballero, Marcos D.
2014-01-01
In a recent study, Caballero and colleagues conducted a large-scale evaluation using the Force Concept Inventory (FCI) to compare student learning outcomes between two introductory physics curricula: the Matter and Interactions (M&I) mechanics course and a pedagogically-reformed-traditional-content (PRTC) mechanics course. Using a conventional…
Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories
ERIC Educational Resources Information Center
Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher
2009-01-01
Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…
Fluid Dynamic Aspects of Wind Energy Conversion
1979-07-01
ERIC Educational Resources Information Center
Tuohilampi, Laura; Hannula, Markku S.; Varas, Leonor; Giaconi, Valentina; Laine, Anu; Näveri, Liisa; i Nevado, Laia Saló
2015-01-01
Large-scale studies measure mathematics-related affect using questionnaires developed by researchers in primarily English-based countries and according to Western-based theories. Influential comparative conclusions about different cultures and countries are drawn based on such measurements. However, there are certain premises involved in these…
COMPARISON OF STABLE-NITROGEN (15N/14N) ISOTOPE RATIOS IN LARGE MOUTH BASS SCALES AND MUSCLE TISSUE
Stable-nitrogen (15N/14N) isotope ratios of fish tissue are currently used to determine trophic structure, contaminant bioaccumulation, and the level of anthropogenic nitrogen enrichment in aquatic systems. The most common tissue used for these measurements is fileted dorsal musc...
Cognitive Ability and Everyday Functioning in Women with Turner Syndrome.
ERIC Educational Resources Information Center
Downey, Jennifer; And Others
1991-01-01
Comparison of 23 Turner syndrome (TUS) women with 23 women with constitutional short stature (CSS) found significant group differences for Performance and Full Scale IQ, largely due to TUS women's deficits in spatial and mathematical ability. TUS individuals had significantly lower educational and occupational attainment than CSS controls but did…
Restoration of bottomland hardwood forest across a treatment intensity gradient.
J.A Stanturf; E.S Gardiner; J.P Shepard; C.J Schweitzer; C.J Portwood; L.C Dorros
2009-01-01
Large-scale restoration of bottomland hardwood forests in the lower Mississippi Alluvial Valley (USA) under federal incentive programs, begun in the 1990s, initially achieved mixed results. We report here on a comparison of four restoration techniques in terms of survival, accretion of vertical structure, and woody species diversity. The...
Selective logging in the Brazilian Amazon.
G. P. Asner; D. E. Knapp; E. N. Broadbent; P. J. C. Oliveira; M Keller; J. N. Silva
2005-01-01
Amazon deforestation has been measured by remote sensing for three decades. In comparison, selective logging has been mostly invisible to satellites. We developed a large-scale, high-resolution, automated remote-sensing analysis of selective logging in the top five timber-producing states of the Brazilian Amazon. Logged areas ranged from 12,075 to 19,823 square...
An Empirical Comparison of Variable Standardization Methods in Cluster Analysis.
ERIC Educational Resources Information Center
Schaffer, Catherine M.; Green, Paul E.
1996-01-01
The common marketing research practice of standardizing the columns of a persons-by-variables data matrix prior to clustering the entities corresponding to the rows was evaluated with 10 large-scale data sets. Results indicate that the column standardization practice may be problematic for some kinds of data that marketing researchers used for…
Mass Media and Development in Indonesia.
ERIC Educational Resources Information Center
Alfian; And Others
A large-scale pre-television benchmark survey was undertaken in five Indonesian provinces in 1976, prior to the launching of Indonesia's telecommunications satellite, to provide data for comparison with the results of a survey of the same villages to be carried out in 1982, 5 years after the introduction of television, to assess its long-term…
ERIC Educational Resources Information Center
Taylor, Catherine G.; Meyer, Elizabeth J.; Peter, Tracey; Ristock, Janice; Short, Donn; Campbell, Christopher
2016-01-01
The Every Teacher Project involved large-scale survey research conducted to identify the beliefs, perspectives, and practices of Kindergarten to Grade 12 educators in Canadian public schools regarding lesbian, gay, bisexual, transgender, and queer (LGBTQ)-inclusive education. Comparisons are made between LGBTQ and cisgender heterosexual…
Chalise, D. R.; Haj, Adel E.; Fontaine, T.A.
2018-01-01
The Hydrological Simulation Program Fortran (HSPF) [Hydrological Simulation Program Fortran version 12.2 (Computer software). USEPA, Washington, DC] and the Precipitation Runoff Modeling System (PRMS) [Precipitation Runoff Modeling System version 4.0 (Computer software). USGS, Reston, VA] models are semidistributed, deterministic hydrological tools for simulating the impacts of precipitation, land use, and climate on basin hydrology and streamflow. Both models have been applied independently to many watersheds across the United States. This paper reports the statistical results assessing various temporal (daily, monthly, and annual) and spatial (small versus large watershed) scale biases in HSPF and PRMS simulations using two watersheds in the Black Hills, South Dakota. The Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (r), and coefficient of determination (R²) statistics for the daily, monthly, and annual flows were used to evaluate the models' performance. Results from the HSPF models showed that the HSPF consistently simulated the annual flows for both large and small basins better than the monthly and daily flows, and the simulated flows for the small watershed better than flows for the large watershed. In comparison, the PRMS model results show that the PRMS simulated the monthly flows for both the large and small watersheds better than the daily and annual flows, and the range of statistical error in the PRMS models was greater than that in the HSPF models. Moreover, it can be concluded that the statistical error in the HSPF and the PRMS daily, monthly, and annual flow estimates for watersheds in the Black Hills was influenced by both temporal and spatial scale variability.
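The goodness-of-fit statistics named above have standard definitions. A small helper that computes NSE, Pearson r, and R² (here taken as the square of r, as is common in hydrological model evaluation) for paired observed and simulated flows is sketched below; the data are synthetic, not the Black Hills series.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    """Pearson correlation coefficient between observed and simulated flows."""
    return np.corrcoef(obs, sim)[0, 1]

rng = np.random.default_rng(5)
observed = rng.gamma(2.0, 3.0, 365)                 # synthetic daily flows
simulated = observed * 0.9 + rng.normal(0, 1, 365)  # imperfect model output

r = pearson_r(observed, simulated)
print(f"NSE = {nse(observed, simulated):.3f}, r = {r:.3f}, R^2 = {r**2:.3f}")
```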
Improved Blood Pressure Control Associated With a Large-Scale Hypertension Program
Jaffe, Marc G.; Lee, Grace A.; Young, Joseph D.; Sidney, Stephen; Go, Alan S.
2014-01-01
Importance Hypertension control for large populations remains a major challenge. Objective To describe a large-scale hypertension program in northern California and to compare rates of hypertension control of the program to statewide and national estimates. Design, Setting, and Patients The Kaiser Permanente Northern California (KPNC) Hypertension program included a multi-faceted approach to blood pressure control. Patients identified with hypertension within an integrated health care delivery system in northern California from 2001–2009 were included. The comparison group included insured patients in California between 2006–2009 who were included in the Healthcare Effectiveness Data and Information Set (HEDIS) commercial measurement by California health insurance plans participating in the National Committee for Quality Assurance (NCQA) quality measure reporting process. A secondary comparison group was the reported national mean NCQA HEDIS commercial rates of hypertension control from 2001–2009 from health plans that participated in the NCQA HEDIS quality measure reporting process. Main Outcome Measure Hypertension control as defined by NCQA HEDIS. Results The KPNC hypertension registry established in 2001 included 349,937 patients and grew to 652,763 by 2009. The NCQA HEDIS commercial measurement for hypertension control increased from 44% to 80% during the study period. In contrast, the national mean NCQA HEDIS commercial measurement increased modestly from 55.4% to 64.1%. California mean NCQA HEDIS commercial rates of hypertension control were similar to those reported nationally from 2006–2009 (63.4% to 69.4%). Conclusion and Relevance Among adults diagnosed with hypertension, implementation of a large-scale hypertension program was associated with a significant increase in hypertension control compared with state and national control rates. PMID:23989679
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Large deviations in the presence of cooperativity and slow dynamics
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-06-01
We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
A comparison of obsessive-compulsive personality disorder scales.
Samuel, Douglas B; Widiger, Thomas A
2010-05-01
In this study, we utilized a large undergraduate sample (N = 536), oversampled for the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision [DSM-IV-TR]; American Psychiatric Association, 2000) obsessive-compulsive personality disorder (OCPD) pathology, to compare 8 self-report measures of OCPD. No prior study has compared more than 3 measures, and the results indicate that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to 2 external reference points: their relationships with a well-established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison, the results suggest important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, whereas some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder.
Poverty-alleviation program participation and salivary cortisol in very low-income children.
Fernald, Lia C H; Gunnar, Megan R
2009-06-01
Correlational studies have shown associations between social class and salivary cortisol suggestive of a causal link between childhood poverty and activity of the stress-sensitive hypothalamic-pituitary-adrenocortical (HPA) system. Using a quasi-experimental design, we evaluated the associations between a family's participation in a large-scale, conditional cash transfer program in Mexico (Oportunidades, formerly Progresa) during the child's early years of life and children's salivary cortisol (baseline and responsivity). We also examined whether maternal depressive symptoms moderated the effect of program participation. Low-income households (income <20th percentile nationally) from rural Mexico were enrolled in a large-scale poverty-alleviation program between 1998 and 1999. A comparison group of households from demographically similar communities was recruited in 2003. Following 3.5 years of participation in the Oportunidades program, three saliva samples were obtained from children aged 2-6 years from intervention and comparison households (n=1197). Maternal depressive symptoms were obtained using the Center for Epidemiologic Studies-Depression Scale (CES-D). Results were that children who had been in the Oportunidades program had lower salivary cortisol levels when compared with those who had not participated in the program, while controlling for a wide range of individual-, household- and community-level variables. Reactivity patterns of salivary cortisol did not differ between intervention and comparison children. Maternal depression moderated the association between Oportunidades program participation and baseline salivary cortisol in children. Specifically, there was a large and significant Oportunidades program effect of lowering cortisol in children of mothers with high depressive symptoms but not in children of mothers with low depressive symptomatology. These findings provide the strongest evidence to date that the economic circumstances of a family can influence a child's developing stress system and provide a mechanism through which poverty early in life could alter life-course risk for physical and mental health disorders.
Comparison of aquifer characteristics derived from local and regional aquifer tests.
Randolph, R.B.; Krause, R.E.; Maslia, M.L.
1985-01-01
A comparison of the aquifer parameter values obtained through the analysis of a local and a regional aquifer test involving the same area in southeast Georgia is made in order to evaluate the validity of extrapolating local aquifer-test results for use in large-scale flow simulations. Time-drawdown and time-recovery data were analyzed by using both graphical and least-squares fitting of the data to the Theis curve. Additionally, directional transmissivity, transmissivity tensor, and angle of anisotropy were computed for both tests. -from Authors
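Least-squares fitting of drawdown data to the Theis curve, as mentioned above, can be reproduced with the exponential-integral well function. The sketch below fits transmissivity T and storativity S to synthetic drawdown data under assumed pumping conditions; the numbers are illustrative and are not the Georgia data set.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q, r = 0.01, 50.0   # pumping rate (m^3/s) and observation-well distance (m), illustrative

def theis_drawdown(t, T, S):
    """Theis solution: s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t) and W = exp1."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "observed" drawdowns generated with T = 5e-3 m^2/s, S = 2e-4 plus noise.
t_obs = np.logspace(2, 5, 25)                       # 100 s to roughly 1 day
rng = np.random.default_rng(6)
s_obs = theis_drawdown(t_obs, 5e-3, 2e-4) * (1 + rng.normal(0, 0.02, t_obs.size))

(T_fit, S_fit), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                              p0=(1e-3, 1e-4), bounds=([1e-6, 1e-8], [1e-1, 1e-2]))
print(f"fitted T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```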
A comparison of contour maps derived from independent methods of measuring lunar magnetic fields
NASA Technical Reports Server (NTRS)
Lichtenstein, B. R.; Coleman, P. J., Jr.; Russell, C. T.
1978-01-01
Computer-generated contour maps of strong lunar remanent magnetic fields are presented and discussed. The maps, obtained by previously described (Eliason and Soderblom, 1977) techniques, are derived from a variety of direct and indirect measurements from Apollo 15 and 16 and Explorer 35 magnetometer and electron reflection data. A common display format is used to facilitate comparison of the maps over regions of overlapping coverage. Most large scale features of either weak or strong magnetic field regions are found to correlate fairly well on all the maps considered.
Comparisons for ESTA-Task3: ASTEC, CESAM and CLÉS
NASA Astrophysics Data System (ADS)
Christensen-Dalsgaard, J.
The ESTA activity under the CoRoT project aims at testing the tools for computing stellar models and oscillation frequencies that will be used in the analysis of asteroseismic data from CoRoT and other large-scale upcoming asteroseismic projects. Here I report results of comparisons between calculations using the Aarhus code (ASTEC) and two other codes, for models that include diffusion and settling. It is found that there are likely deficiencies, requiring further study, in the ASTEC computation of models including convective cores.
Ecological impacts of large-scale disposal of mining waste in the deep sea
Hughes, David J.; Shimmield, Tracy M.; Black, Kenneth D.; Howe, John A.
2015-01-01
Deep-Sea Tailings Placement (DSTP) from terrestrial mines is one of several large-scale industrial activities now taking place in the deep sea. The scale and persistence of its impacts on seabed biota are unknown. We sampled around the Lihir and Misima island mines in Papua New Guinea to measure the impacts of ongoing DSTP and assess the state of benthic infaunal communities after its conclusion. At Lihir, where DSTP has operated continuously since 1996, abundance of sediment infauna was substantially reduced across the sampled depth range (800–2020 m), accompanied by changes in higher-taxon community structure, in comparison with unimpacted reference stations. At Misima, where DSTP took place for 15 years, ending in 2004, effects on community composition persisted 3.5 years after its conclusion. Active tailings deposition has severe impacts on deep-sea infaunal communities and these impacts are detectable at a coarse level of taxonomic resolution. PMID:25939397
Statistical properties of edge plasma turbulence in the Large Helical Device
NASA Astrophysics Data System (ADS)
Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.
2008-09-01
Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
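The two analysis steps described above (removing a coherent mode with a bandstop filter, then computing moments of the signal increments at different temporal scales) can be sketched in a few lines. The signal, mode frequency, sampling rate, and scales below are invented and serve only to show the shape of such an analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import skew

fs = 1_000_000          # assumed 1 MHz sampling rate
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(7)
# Toy bursty signal plus a coherent 20 kHz mode.
isat = rng.gamma(2.0, 1.0, t.size) + 0.5 * np.sin(2 * np.pi * 20e3 * t)

# Bandstop filter to remove the coherent mode before moment analysis.
b, a = butter(4, [18e3, 22e3], btype='bandstop', fs=fs)
isat_filtered = filtfilt(b, a, isat)

def absolute_moment(x, lag, order):
    """Absolute moment of the signal increments at temporal scale `lag` samples."""
    dx = x[lag:] - x[:-lag]
    return np.mean(np.abs(dx) ** order)

for tau_us in (10, 40, 160):
    lag = int(tau_us * 1e-6 * fs)
    m2 = absolute_moment(isat_filtered, lag, 2)
    sk = skew(isat_filtered[lag:] - isat_filtered[:-lag])
    print(f"tau = {tau_us} us: second moment = {m2:.3f}, skewness = {sk:.3f}")
```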
A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean
NASA Technical Reports Server (NTRS)
Ponte, Rui M.
1998-01-01
Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
NASA Technical Reports Server (NTRS)
Falarski, M. D.; Koenig, D. G.
1972-01-01
The investigation of the in-ground-effect longitudinal aerodynamic characteristics of a large-scale swept augmentor wing model in the 40- by 80-foot wind tunnel is presented. The investigation was conducted at three ground heights: h/c = 2.01, 1.61, and 1.34. The induced effect of underwing nacelles was studied with two powered nacelle configurations. One configuration used four JT-15D turbofans while the other used two J-85 turbojet engines. Two conical nozzles on each J-85 were used to deflect the thrust at angles from 0 to 120 deg. Tests were also performed without nacelles to allow comparison with previous ground-effect data.
Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.
2013-01-01
In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator were tested in the National Full-Scale Aerodynamic Complex at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of Fluid Structure Interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harringa, J.L.; Cook, B.A.
1996-06-01
Improvements to state-of-the-art Si₈₀Ge₂₀ thermoelectric alloys have been observed in laboratory-scale samples prepared by the powder metallurgy techniques of mechanical alloying and hot pressing. Incorporating these improvements in large-scale compacts for the production of thermoelectric generator elements is the next step in achieving higher efficiency RTGs. This paper discusses consolidation of large quantities of mechanically alloyed powders into production-size compacts. Differences in thermoelectric properties are noted between the compacts prepared by the standard technique of hot uniaxial pressing and hot isostatic pressing. Most significant is the difference in carrier concentration between the alloys prepared by the two consolidation techniques.
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
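The two routes to Kirkwood-Buff integrals compared in this abstract have compact standard forms: a running integral over the radial distribution function, G = 4π ∫ (g(r) − 1) r² dr, and a particle-number fluctuation formula evaluated in small open sub-volumes. Both are sketched below for synthetic inputs; the g(r) and particle-count arrays are made up and do not come from any simulation in the paper.

```python
import numpy as np

def kb_from_rdf(r, g, r_cut):
    """Running Kirkwood-Buff integral G = 4*pi * int_0^r_cut (g(r) - 1) r^2 dr (trapezoid rule)."""
    m = r <= r_cut
    y = (g[m] - 1.0) * r[m] ** 2
    return 4.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r[m]))

def kb_from_fluctuations(n_i, n_j, volume, same_species=False):
    """Fluctuation route for an open sub-volume of size `volume`:
    G_ij = V*(<NiNj> - <Ni><Nj>)/(<Ni><Nj>) - delta_ij * V/<Ni>."""
    n_i, n_j = np.asarray(n_i, float), np.asarray(n_j, float)
    g = volume * (np.mean(n_i * n_j) - n_i.mean() * n_j.mean()) / (n_i.mean() * n_j.mean())
    if same_species:
        g -= volume / n_i.mean()
    return g

# Synthetic inputs for illustration only.
r = np.linspace(0.01, 2.0, 400)                      # nm
g_r = 1.0 + 0.3 * np.exp(-(r - 0.3) ** 2 / 0.01)     # toy RDF with a single solvation peak
counts = np.random.default_rng(8).poisson(50, 5000)  # counts in a 1 nm^3 sub-volume (ideal-gas-like)
print("G (RDF route)         :", kb_from_rdf(r, g_r, r_cut=1.5), "nm^3")
print("G (fluctuation route)  :", kb_from_fluctuations(counts, counts, volume=1.0, same_species=True))
```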
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented and results of operational tests of the global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients, and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer.
Charting the Emergence of Corporate Procurement of Utility-Scale PV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny S.; Cook, Jeffrey J.; Bird, Lori A.
Through July 2017, corporate customers contracted for more than 2,300 MW of utility-scale solar. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing through four pathways: PPAs, retail choice, utility partnerships (green tariffs and bilateral contracts with utilities), and becoming a licensed wholesale seller of electricity. Each pathway differs based on where in the United States it is available, the value provided to a corporate off-taker, and the ease of implementation. The paper concludes with a forward-looking comparison of the pathways, noting that to deploy more corporate off-site solar, new procurement pathways are needed.
Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette
2013-06-01
High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using FWEM are presented for two propagation problems and comparisons to data computed with analytical and theoretical solutions and show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
3D reconstruction software comparison for short sequences
NASA Astrophysics Data System (ADS)
Strupczewski, Adam; Czupryński, Błażej
2014-11-01
Large-scale multiview reconstruction has recently become a very popular area of research. There are many open source tools that can be downloaded and run on a personal computer. However, there are few, if any, comparisons between all the available software packages in terms of accuracy on small datasets that a single user can create. The typical datasets for testing the software are archeological sites or cities, comprising thousands of images. This paper presents a comparison of currently available open source multiview reconstruction software for small datasets. It also compares the open source solutions with a simple structure-from-motion pipeline developed by the authors from scratch with the use of the OpenCV and Eigen libraries.
A novel heuristic algorithm for capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre
2017-09-01
The vehicle routing problem with capacity constraints is considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity of large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS) with several specifically designed operators and features to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm provides better performance on large-scale instances and gains an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current routing practice.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk
2016-07-18
Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly-dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utility efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2 % in large scale WSNs and in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.
Structure analysis for hole-nuclei close to 132Sn by a large-scale shell-model calculation
NASA Astrophysics Data System (ADS)
Wang, Han-Kui; Sun, Yang; Jin, Hua; Kaneko, Kazunari; Tazaki, Shigeru
2013-11-01
The structure of neutron-rich nuclei with a few holes with respect to the doubly magic nucleus 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including orbitals that allow both neutron and proton core excitations, an effective interaction for the extended pairing-plus-quadrupole model with monopole corrections is tested through detailed comparison between the calculation and experimental data. By using the experimental energy of the core-excited 21/2+ level in 131In as a benchmark, monopole corrections are determined that describe the size of the neutron N=82 shell gap. The level spectra, up to 5 MeV of excitation in 131In, 131Sn, 130In, 130Cd, and 130Sn, are well described and clearly explained by couplings of single-hole orbitals and by core excitations.
Skyscape Archaeology: an emerging interdiscipline for archaeoastronomers and archaeologists
NASA Astrophysics Data System (ADS)
Henty, Liz
2016-02-01
For historical reasons archaeoastronomy and archaeology differ in their approach to prehistoric monuments, and this has created a divide between the disciplines, which adopt seemingly incompatible methodologies. The reasons behind the impasse will be explored to show how these different approaches gave rise to their respective methods. Archaeological investigations tend to concentrate on single-site analysis, whereas archaeoastronomical surveys tend to be data driven, based on the examination of a large number of similar sites. A comparison will be made between traditional archaeoastronomical data gathering and an emerging methodology which looks at sites on a small scale and combines archaeology and astronomy. Silva's recent research in Portugal and this author's survey in Scotland have explored this methodology and termed it skyscape archaeology. This paper argues that this type of phenomenological skyscape archaeology offers an alternative to large-scale statistical studies which analyse astronomical data obtained from a large number of superficially similar archaeological sites.
Crump, R. Trafford; Llewellyn-Thomas, Hilary A.
2012-01-01
Objective The objective was to determine whether a paired-comparison/leaning scale method: a) could feasibly be used to elicit strength-of-preference scores for elective health care options in large community-based survey settings; and b) could reveal preferential sub-groups that would have been overlooked if only a categorical-response format had been used. Study Design Medicare beneficiaries in four different regions of the United States were interviewed in person. Participants considered 8 clinical scenarios, each with 2 to 3 different health care options. For each scenario, participants categorically selected their favored option, then indicated how strongly they favored that option relative to the alternative on a paired-comparison bi-directional Leaning Scale. Results Two hundred and two participants were interviewed. For 7 of the 8 scenarios, a clear majority (> 50%) indicated that, overall, they categorically favored one option over the alternative(s). However, the bi-directional strength-of-preference Leaning Scale scores revealed that, in 4 scenarios, for half of those participants, their preference for the favored option was actually “weak” or “neutral”. Conclusion Investigators aiming to assess population-wide preferential attitudes towards different elective health care scenarios should consider gathering ordinal-level strength-of-preference scores and could feasibly use the paired-comparison/bi-directional Leaning Scale to do so. PMID:22494579
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in the topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves the division of the large watershed into smaller watersheds, and applying the calibrated parameters of the multi-site calibration to the entire watershed. It is anticipated that this case study could provide experience with multi-site calibration in a large-scale basin, and provide a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
Galaxy clusters in local Universe simulations without density constraints: a long uphill struggle
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.
2018-06-01
Galaxy clusters are excellent cosmological probes provided that their formation and evolution within the large-scale environment are precisely understood. Therefore, studies with simulated galaxy clusters have flourished. However, detailed comparisons between simulated and observed clusters and their population - the galaxies - are complicated by the diversity of clusters and their surrounding environment. An original way, initiated by Bertschinger as early as 1987, to legitimize the one-to-one comparison exercise down to the details is to produce simulations constrained to resemble the cluster under study within its large-scale environment. Subsequently several methods have emerged to produce simulations that look like the local Universe. This paper highlights one of these methods and its essential steps to get simulations that not only resemble the local Large Scale Structure but also host the local clusters. It includes a new modeling of the radial peculiar velocity uncertainties to remove the observed correlation between the decrease of the simulated cluster masses and the decrease, with distance from us, of the amount of data used as constraints. This method has the particularity of using solely radial peculiar velocities as constraints: no additional density constraints are required to get local cluster simulacra. The new resulting simulations host dark matter halos that match the most prominent local clusters such as Coma. Zoom-in simulations of the latter and of a volume larger than the 30h-1 Mpc radius inner sphere now become possible to study local clusters and their effects. Mapping the local Sunyaev-Zel'dovich and Sachs-Wolfe effects can follow.
Kinetic Alfvén Wave Generation by Large-scale Phase Mixing
NASA Astrophysics Data System (ADS)
Vásconez, C. L.; Pucci, F.; Valentini, F.; Servidio, S.; Matthaeus, W. H.; Malara, F.
2015-12-01
One view of solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length dp may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to dp and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov–Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of the polarization of magnetic fluctuations and the wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton distribution function departs strongly from thermal equilibrium are located inside the shear layers, where the KAWs are excited, suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.
NASA Astrophysics Data System (ADS)
Luginbuhl, Molly; Rundle, John B.; Hawkins, Angela; Turcotte, Donald L.
2018-01-01
Nowcasting is a new method of statistically classifying seismicity and seismic risk (Rundle et al. 2016). In this paper, the method is applied to the induced seismicity at the Geysers geothermal region in California and to the induced seismicity due to fluid injection in Oklahoma. Nowcasting utilizes the catalogs of seismicity in these regions. Two earthquake magnitudes are selected, one large, say M_λ ≥ 4, and one small, say M_σ ≥ 2. The method utilizes the number of small earthquakes that occur between pairs of large earthquakes, and the cumulative probability distribution of these values is obtained. The earthquake potential score (EPS) is defined by the number of small earthquakes that have occurred since the last large earthquake: the point where this number falls on the cumulative probability distribution of interevent counts defines the EPS. A major advantage of nowcasting is that it utilizes "natural time", earthquake counts between events, rather than clock time. Thus, it is not necessary to decluster aftershocks, and the results remain applicable if the level of induced seismicity varies in time. The application of natural time to the accumulation of seismic hazard depends on the applicability of Gutenberg-Richter (GR) scaling. The increasing number of small earthquakes that occur after a large earthquake can be scaled to give the risk of a large earthquake occurring. To illustrate our approach, we utilize the number of M_σ ≥ 2.75 earthquakes in Oklahoma to nowcast the number of M_λ ≥ 4.0 earthquakes in Oklahoma. The applicability of the scaling is illustrated during the rapid build-up of injection-induced seismicity between 2012 and 2016, and the subsequent reduction in seismicity associated with a reduction in fluid injections. The same method is applied to the geothermal-induced seismicity at the Geysers, California, for comparison.
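The natural-time bookkeeping behind the earthquake potential score can be written in a few lines. The sketch below is a rough reading of the description above rather than the authors' code: it counts small events between successive large events, sorts those interevent counts, and returns the percentile of the current count; the magnitude thresholds and the synthetic catalog are placeholders.

```python
import numpy as np

# Hedged sketch of the nowcasting EPS: interevent small-earthquake counts define an
# empirical cumulative distribution, and the current count is read off against it.
def earthquake_potential_score(magnitudes, m_small=2.75, m_large=4.0):
    counts, n = [], 0
    for m in magnitudes:                 # catalog in chronological order
        if m >= m_large:
            counts.append(n)             # small-event count between large events
            n = 0
        elif m >= m_small:
            n += 1
    counts = np.sort(counts)
    current = n                          # small events since the last large event
    # EPS = fraction of historical interevent counts not exceeding the current count
    return np.searchsorted(counts, current, side="right") / len(counts)

rng = np.random.default_rng(0)
catalog = rng.uniform(2.0, 5.0, size=5000)   # synthetic magnitudes, not real data
print(earthquake_potential_score(catalog))
```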
DIF Analysis with Multilevel Data: A Simulation Study Using the Latent Variable Approach
ERIC Educational Resources Information Center
Jin, Ying; Eason, Hershel
2016-01-01
The effects of mean ability difference (MAD) and short tests on the performance of various DIF methods have been studied extensively in previous simulation studies. Their effects, however, have not been studied under multilevel data structure. MAD was frequently observed in large-scale cross-country comparison studies where the primary sampling…
A Cross-Cultural Comparison of Student Learning Patterns in Higher Education
ERIC Educational Resources Information Center
Marambe, Kosala N.; Vermunt, Jan D.; Boshuizen, Henny P. A.
2012-01-01
The aim of this study was to compare student learning patterns in higher education across different cultures. A meta-analysis was performed on three large-scale studies that had used the same research instrument: the Inventory of learning Styles (ILS). The studies were conducted in the two Asian countries Sri Lanka and Indonesia and the European…
ERIC Educational Resources Information Center
Huang, Xiaoting; Wilson, Mark; Wang, Lei
2016-01-01
In recent years, large-scale international assessments have been increasingly used to evaluate and compare the quality of education across regions and countries. However, measurement variance between different versions of these assessments often posts threats to the validity of such cross-cultural comparisons. In this study, we investigated the…
ERIC Educational Resources Information Center
Brunk-Chavez, Beth; Pigg, Stacey; Moore, Jessie; Rosinski, Paula; Grabill, Jeffrey T.
2018-01-01
To speak to diverse audiences about how people learn to write and how writing works inside and outside the academy, we must conduct research across geographical, institutional, and cultural contexts as well as research that enables comparison when appropriate. Large-scale empirical research is useful for both of these moves; however, we must…
Restoration of bottomland hardwood forests across a treatment intensity gradient
John A. Stanturf; Emile S. Gardiner; James P. Shepard; Callie J. Schweitzer; C. Jeffrey Portwood; Lamar C. Jr. Dorris
2009-01-01
Large-scale restoration of bottomland hardwood forests in the Lower Mississippi Alluvial Valley (USA) under federal incentive programs, begun in the 1990s, initially achieved mixed results. We report here on a comparison of four restoration techniques in terms of survival, accretion of vertical structure, and woody species diversity. The range of treatment intensity...
Restoring bottomland hardwood forests: A comparison of four techniques
John A. Stanturf; Emile S. Cardiner; James P. Shepard; Callie J. Schweitzer; C. Jeffrey Portwood; Lamar Dorris
2004-01-01
Large-scale afforestation of former agricultural lands in the Lower Mississippi Alluvial Valley (LMAV) is one of the largest forest restoration efforts in the world and continues to attract interest from landowners, policy makers, scientists, and managers. The decision by many landowners to afforest these lands has been aided in part by the increased availability of...
Callie Jo Schweitzer; John A. Stanturf
1999-01-01
Reforesting abandoned land in the lower Mississippi alluvial valley has attracted heightened attention. Currently, federal cost share programs, such as the Wetland Reserve Program and the Conservation Reserve Program, are enticing landowners to consider reforesting lands that are marginally productive for agriculture. This study examined four reforestation techniques...
Large-scale systems: Complexity, stability, reliability
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1975-01-01
After showing that a complex dynamic system with a competitive structure has highly reliable stability, a class of noncompetitive dynamic systems for which competitive models can be constructed is defined. It is shown that such a construction is possible in the context of the hierarchic stability analysis. The scheme is based on the comparison principle and vector Liapunov functions.
Multigrid Equation Solvers for Large Scale Nonlinear Finite Element Simulations
1999-01-01
[Fragmented extraction; recoverable content: the purpose of the second partitioning phase, on each SMP, is to minimize communication within the SMP, even for a multi-threaded matrix-vector product. Remaining fragments are list-of-figures entries on the comparison of the model with experimental data for the send phase of the matrix-vector product on a fine grid, and on matrix-vector product phase times.]
The Age Parameters of the Starting Demographic Events across Russian Generations
ERIC Educational Resources Information Center
Mitrofanova, E. S.
2016-01-01
This article presents comparisons of the ages and facts of starting demographic events in Russia based on the findings of three large-scale surveys: the European Social Survey, 2006; the Generations and Gender Survey, 2004, 2007, and 2011; and Person, Family, Society, 2013. This study focuses on the intergenerational and gender differences in the…
NASA Astrophysics Data System (ADS)
Wang, Ke; Testi, Leonardo; Burkert, Andreas; Walmsley, C. Malcolm; Beuther, Henrik; Henning, Thomas
2016-09-01
Large-scale gaseous filaments with lengths up to the order of 100 pc are on the upper end of the filamentary hierarchy of the Galactic interstellar medium (ISM). Their association with the Galactic structure and their role in Galactic star formation are of great interest from both an observational and a theoretical point of view. Previous “by-eye” searches, combined together, have started to uncover the Galactic distribution of large filaments, yet inherent bias and small sample size prevent conclusive statistical results from being drawn. Here, we present (1) a new, automated method for identifying large-scale velocity-coherent dense filaments, and (2) the first statistics and the Galactic distribution of these filaments. We use a customized minimum spanning tree algorithm to identify filaments by connecting voxels in the position-position-velocity space, using the Bolocam Galactic Plane Survey spectroscopic catalog. In the range 7.5° ≤ l ≤ 194°, we have identified 54 large-scale filaments and derived mass (~10^3-10^5 M_⊙), length (10-276 pc), linear mass density (54-8625 M_⊙ pc^-1), aspect ratio, linearity, velocity gradient, temperature, fragmentation, Galactic location, and orientation angle. The filaments concentrate along major spiral arms. They are widely distributed across the Galactic disk, with 50% located within ±20 pc of the Galactic mid-plane and 27% running in the centers of spiral arms. On the order of 1% of the molecular ISM is confined in large filaments. Massive star formation is more favorable in large filaments compared to elsewhere. This is the first comprehensive catalog of large filaments that can be useful for a quantitative comparison with spiral structures and numerical simulations.
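A minimal sketch of the voxel-linking step described above is given below, assuming a catalog of (l, b, v) entries: candidate edges are collected with a k-d tree, a minimum spanning tree is built with SciPy, long edges are pruned, and the surviving connected components are treated as filament candidates. The velocity weighting, edge cutoff and minimum membership are illustrative assumptions, not the calibrated values used with the Bolocam catalog.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

# Hedged sketch of MST-based grouping in position-position-velocity space.
def find_filaments(l_deg, b_deg, v_kms, v_scale=0.5, cutoff=0.5, min_members=5):
    pts = np.column_stack([l_deg, b_deg, v_kms * v_scale])   # weight the velocity axis
    tree = cKDTree(pts)
    pairs = np.array(list(tree.query_pairs(r=5 * cutoff)))   # candidate edges only
    w = np.linalg.norm(pts[pairs[:, 0]] - pts[pairs[:, 1]], axis=1)
    graph = coo_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(len(pts), len(pts)))
    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cutoff                                # prune long MST edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    n_comp, labels = connected_components(pruned, directed=False)
    sizes = np.bincount(labels)
    return [np.flatnonzero(labels == k) for k in range(n_comp) if sizes[k] >= min_members]
```

Each returned index array lists the catalog entries belonging to one velocity-coherent candidate; real applications would follow this with the paper's coherence and size criteria.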
Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba
2017-12-23
Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study in 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing-apart from age, which increases the decade distance effect-they generally influence performance on a two-digit number comparison task.
Comparison Analysis among Large Amount of SNS Sites
NASA Astrophysics Data System (ADS)
Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro
In recent years, applications of Social Networking Services (SNS) and blogs have been growing as new communication tools on the Internet. Several large-scale SNS sites are prospering; meanwhile, many sites of relatively small scale are offering services. Such small-scale SNSs realize a small-group, isolated type of communication that neither mixi nor MySpace can provide. However, studies on SNSs deal almost exclusively with particular large-scale SNSs and cannot determine whether their results reflect general features or characteristics specific to those SNSs. From the point of view of comparative analysis of SNSs, a comparison of just a few of them cannot reach a statistically significant level. We analyze many SNS sites with the aim of classifying them by using several approaches. Our paper classifies 50,000 small-scale SNS sites and gives their features in terms of network structure, patterns of communication, and growth rate of the SNS. The analysis of network structure shows that many SNS sites have the small-world attribute, with short path lengths and high cluster coefficients. The degree distribution of the SNS sites is close to a power law. This result indicates that small-scale SNS sites have a higher percentage of users with many friends than mixi. According to the analysis of their coefficients of assortativity, those SNS sites have negative assortativity values, which means that users with high degree tend to connect to users with small degree. Next, we analyze the patterns of user communication. A friend network of an SNS is explicit, while users' communication behaviors define an implicit network. What kind of relationship do these networks have? To address this question, we obtain some characteristics of users' communication structure and activation patterns of users on the SNS sites. By using new indexes, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate activate diary postings and their comments. Moreover, on sites with a high friend aggregation rate and a high friend coverage rate, activation occurs when hub users with high degree do not behave actively, whereas on sites with a low friend aggregation rate and a high friend coverage rate, activation emerges when hub users behave actively. Finally, we observe SNS sites whose numbers of users are increasing considerably, from the viewpoint of network structure, and extract characteristics of high-growth SNS sites. As a result of discrimination on the basis of decision tree analysis, we can recognize high-growth SNS sites with a high degree of accuracy. Moreover, this approach suggests that mixi and the other small-scale SNS sites have different character traits.
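The small-world and assortativity measurements mentioned above are standard graph statistics; a minimal networkx sketch for computing them on a single SNS friend network follows. The paper's custom indexes (friend aggregation rate and friend coverage rate) and its decision-tree discrimination are not reproduced here.

```python
import networkx as nx

# Hedged sketch: per-site structural metrics (small-world indicators and assortativity)
# for one friend network given as an edge list; the toy edges are illustrative only.
def sns_structure_metrics(edge_list):
    g = nx.Graph(edge_list)
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "nodes": g.number_of_nodes(),
        "mean_clustering": nx.average_clustering(g),
        "mean_path_length": nx.average_shortest_path_length(giant),
        "degree_assortativity": nx.degree_assortativity_coefficient(g),
    }

print(sns_structure_metrics([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]))
```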
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
NASA Astrophysics Data System (ADS)
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
NASA Astrophysics Data System (ADS)
Usman, Muhammad
2018-04-01
Bismide semiconductor materials and heterostructures are considered a promising candidate for the design and implementation of photonic, thermoelectric, photovoltaic, and spintronic devices. This work presents a detailed theoretical study of the electronic and optical properties of strongly coupled GaBixAs1-x/GaAs multiple quantum well (MQW) structures. Based on a systematic set of large-scale atomistic tight-binding calculations, our results reveal that the impact of atomic-scale fluctuations in alloy composition is stronger than the interwell coupling effect and plays an important role in the electronic and optical properties of the investigated MQW structures. Independent of QW geometry parameters, alloy disorder leads to a strong confinement of charge carriers, a large broadening of the hole energies, and a red-shift in the ground-state transition wavelength. Polarization-resolved optical transition strengths exhibit a striking effect of disorder, where the inhomogeneous broadening could exceed an order of magnitude for MQWs, in comparison to a factor of about 3 for single QWs. The strong influence of alloy disorder effects persists when small variations in the size and composition of MQWs typically expected in a realistic experimental environment are considered. The presented results highlight the limited scope of continuum methods and emphasize the need for large-scale atomistic approaches to design devices with tailored functionalities based on the novel properties of bismide materials.
Large-eddy simulation of nitrogen injection at trans- and supercritical conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Hagen; Pfitzner, Michael; Niedermeier, Christoph A.
2016-01-15
Large-eddy simulations (LESs) of cryogenic nitrogen injection into a warm environment at supercritical pressure are performed and real-gas thermodynamics models and subgrid-scale (SGS) turbulence models are evaluated. The comparison of different SGS models — the Smagorinsky model, the Vreman model, and the adaptive local deconvolution method — shows that the representation of turbulence on the resolved scales has a notable effect on the location of jet break-up, whereas the particular modeling of unresolved scales is less important for the overall mean flow field evolution. More important are the models for the fluid’s thermodynamic state. The injected fluid is either in a supercritical or in a transcritical state and undergoes a pseudo-boiling process during mixing. Such flows typically exhibit strong density gradients that delay the instability growth and can lead to a redistribution of turbulence kinetic energy from the radial to the axial flow direction. We evaluate novel volume-translation methods on the basis of the cubic Peng-Robinson equation of state in the framework of LES. At small extra computational cost, their application considerably improves the simulation results compared to the standard formulation. Furthermore, we found that the choice of inflow temperature is crucial for the reproduction of the experimental results and that heat addition within the injector can affect the mean flow field in comparison to results with an adiabatic injector.
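For readers unfamiliar with the volume-translation idea evaluated above, the sketch below shows a generic volume-translated Peng-Robinson pressure evaluation for nitrogen. The critical constants are approximate literature values and the constant shift c is a placeholder; the paper's specific translation methods are not reproduced.

```python
# Hedged sketch of a volume-translated Peng-Robinson equation of state: pressure from
# temperature and molar volume, with a constant Peneloux-style shift c applied to the
# molar volume. Nitrogen critical constants are approximate; c=0 recovers standard PR.
R = 8.314462618  # J/(mol K)

def pr_pressure(T, v, Tc=126.19, Pc=3.3958e6, omega=0.0372, c=0.0):
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - (T / Tc) ** 0.5)) ** 2
    vt = v + c  # volume translation shifts the molar volume by a constant
    return R * T / (vt - b) - a * alpha / (vt * (vt + b) + b * (vt - b))

# Supercritical nitrogen at 300 K and 0.6 L/mol (illustrative state point), in Pa
print(pr_pressure(300.0, 0.6e-3))
```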
UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.
Meinicke, Peter
2009-09-02
Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.
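The profiling step described above, per-protein category assignments aggregated into genome-level frequency vectors that can then be compared, can be sketched as follows. The cosine-style distance is an illustrative choice and is not claimed to be the UFO server's dissimilarity score.

```python
from collections import Counter
import math

# Hedged sketch: build a relative-frequency functional profile per genome from
# per-protein category labels (e.g., Pfam families) and score pairwise dissimilarity.
def functional_profile(assignments):
    counts = Counter(assignments)            # category -> number of proteins
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def profile_distance(p, q):
    cats = set(p) | set(q)
    dot = sum(p.get(c, 0.0) * q.get(c, 0.0) for c in cats)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return 1.0 - dot / norm                  # 0 = identical profiles, 1 = orthogonal

genome_a = functional_profile(["PF00005", "PF00005", "PF00072", "PF00486"])
genome_b = functional_profile(["PF00005", "PF00072", "PF00072", "PF07714"])
print(profile_distance(genome_a, genome_b))
```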
Hermann, Olena; Schmidt, Simone B; Boltzmann, Melanie; Rollnik, Jens D
2018-05-01
To calculate the scale performance of the newly developed Hessisch Oldendorf Fall Risk Scale (HOSS) for classifying fallers and non-fallers, in comparison with the Risk of Falling Scale by Huhn (FSH), a frequently used assessment tool. A prospective observational trial was conducted. The study was performed in a large specialized neurological rehabilitation facility. The study population (n = 690) included neurological and neurosurgery patients undergoing neurological rehabilitation with varying levels of disability. Around half of the study patients were independent in the activities of daily living (ADL) and half were dependent. The fall risk of each patient was assessed with the HOSS and the FSH within the first seven days after admission. Falls during rehabilitation were compared with HOSS and FSH scores as well as the corresponding fall risk. Scale performance, including sensitivity and specificity, was calculated for both scales. A total of 107 (15.5%) patients experienced at least one fall. In general, fallers were characterized by an older age, a prolonged length of stay, and a lower Barthel Index (higher dependence in the ADL) on admission than non-fallers. The verification of fall prediction showed a sensitivity of 83% and a specificity of 64% for the HOSS, and a sensitivity of 98% with a specificity of 12% for the FSH. The HOSS shows adequate sensitivity, higher specificity and therefore better scale performance than the FSH. Thus, the HOSS might be superior to existing assessments.
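As a reminder of how the reported scale performance is obtained, the sketch below derives sensitivity and specificity from per-patient risk classifications and observed falls; the toy vectors are illustrative, not study data.

```python
# Hedged sketch: sensitivity and specificity of a binary fall-risk classification
# against observed falls, computed from a confusion-matrix tally.
def sensitivity_specificity(predicted_at_risk, fell):
    tp = sum(p and f for p, f in zip(predicted_at_risk, fell))
    fn = sum((not p) and f for p, f in zip(predicted_at_risk, fell))
    tn = sum((not p) and (not f) for p, f in zip(predicted_at_risk, fell))
    fp = sum(p and (not f) for p, f in zip(predicted_at_risk, fell))
    return tp / (tp + fn), tn / (tn + fp)

pred = [True, True, False, True, False, False, True, False]   # scale says "at risk"
fell = [True, False, False, True, False, False, False, True]  # observed falls
print(sensitivity_specificity(pred, fell))                    # (sensitivity, specificity)
```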
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witzke, B.J.
1993-03-01
Four large-scale (2-8 Ma) T-R sedimentary sequences of M. Ord. age (late Chaz.-Sherm.) were delimited by Witzke and Kolata (1980) in the Iowa area, each bounded by local to regional unconformity/disconformity surfaces. These encompass both siliciclastic and carbonate intervals, in ascending order: (1) St. Peter-Glenwood fms., (2) Platteville Fm., (3) Decorah Fm., (4) Dunleith/upper Decorah fms. Finer-scale resolution of depth-related depositional features has led to regional recognition of smaller-scale shallowing-upward cyclicity contained within each large-scale sequence. Such smaller-scale cyclicity encompasses stratigraphic intervals of 1-10 m thickness, with estimated durations of 0.5-1.5 Ma. The St. Peter Sandst. has long been regarded as a classic transgressive sheet sand. However, four discrete shallowing-upward packages characterize the St. Peter-Glenwood interval regionally (IA, MN, NB, KS), including western facies displaying coarsening-upward sandstone packages with condensed conodont-rich brown shale and phosphatic sediments in their lower part (local oolitic ironstone), commonly above pyritic hardgrounds. Regional continuity of small-scale cyclic patterns in M. Ord. strata of the Iowa area may suggest eustatic controls; this can be tested through inter-regional comparisons.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
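The task-based parallelism that GLAD inherits from ALiCE can be mimicked on a single machine: split the sequence database into chunks, score each chunk against the query as an independent task, and merge the partial results. The sketch below uses Python multiprocessing and a deliberately crude 3-mer similarity as a stand-in for the real comparison algorithms; it illustrates the task decomposition only.

```python
from multiprocessing import Pool

# Hedged single-machine analogue of task-based distributed sequence comparison.
def score_chunk(args):
    query, chunk = args
    # placeholder similarity: count of shared 3-mers between the query and each sequence
    qmers = {query[i:i + 3] for i in range(len(query) - 2)}
    return [(name, sum(seq[i:i + 3] in qmers for i in range(len(seq) - 2)))
            for name, seq in chunk]

def distributed_comparison(query, database, n_tasks=4):
    chunks = [database[i::n_tasks] for i in range(n_tasks)]   # one task per chunk
    with Pool(n_tasks) as pool:
        parts = pool.map(score_chunk, [(query, c) for c in chunks])
    return sorted((hit for part in parts for hit in part), key=lambda x: -x[1])

if __name__ == "__main__":
    db = [("seq%d" % i, "ACGTACGTGA" * (i + 1)) for i in range(8)]
    print(distributed_comparison("ACGTGACGTA", db)[:3])
```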
NASA Astrophysics Data System (ADS)
Karavosov, R. K.; Prozorov, A. G.
2012-01-01
We have investigated the spectra of pressure pulsations in the near field of the open working section of the wind tunnel with a vortex flow behind the tunnel blower formed like the flow behind the hydroturbine of a hydraulic power plant. We have made a comparison between the measurement data for pressure pulsations and the air stream velocity in tunnels of the above type and in tunnels in which a large-scale vortex structure behind the blower is not formed. It has been established that the large-scale vortex formation in the incompressible medium behind the blade system in the wind tunnel is a source of narrow-band acoustic radiation capable of exciting resonance self-oscillations in the tunnel channel.
Big Data Analytics with Datalog Queries on Spark
Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo
2017-01-01
There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296
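The recursion support discussed above targets queries such as transitive closure. Written directly against the Spark DataFrame API, a semi-naive fixpoint looks like the sketch below, where each iteration joins only the newly derived paths with the base edge table; this illustrates the recursion pattern that BigDatalog compiles and optimizes, not its actual plans.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hedged sketch: semi-naive transitive closure over an edge table with plain DataFrames.
spark = SparkSession.builder.appName("tc-sketch").getOrCreate()
edges = spark.createDataFrame([(1, 2), (2, 3), (3, 4), (4, 2)], ["src", "dst"])

tc, delta = edges, edges
while True:
    new_paths = (delta.alias("d")
                 .join(edges.alias("e"), col("d.dst") == col("e.src"))
                 .select(col("d.src").alias("src"), col("e.dst").alias("dst"))
                 .distinct()
                 .subtract(tc))              # keep only facts not derived before
    if new_paths.count() == 0:
        break                                # fixpoint reached
    tc = tc.union(new_paths)
    delta = new_paths

print(tc.count())                            # number of reachable (src, dst) pairs
spark.stop()
```

Joining only the delta rather than the full closure each round is what keeps the iteration count and shuffle volume manageable; a Datalog system performs this rewriting automatically.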
NASA Astrophysics Data System (ADS)
Tiselj, Iztok
2014-12-01
Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ~2300 wall units long and ~750 wall units wide, a size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.
Intermittency measurement in two-dimensional bacterial turbulence
NASA Astrophysics Data System (ADS)
Qiu, Xiang; Ding, Long; Huang, Yongxiang; Chen, Ming; Lu, Zhiming; Liu, Yulu; Zhou, Quan
2016-06-01
In this paper, an experimental velocity database of a bacterial collective motion, e.g., Bacillus subtilis, in turbulent phase with volume filling fraction 84%, provided by Professor Goldstein at Cambridge University (UK), was analyzed to emphasize the scaling behavior of this active turbulence system. This was accomplished by performing a Hilbert-based methodology analysis to retrieve the scaling property without the β-limitation. A dual-power-law behavior separated by the viscosity scale ℓ_ν was observed for the qth-order Hilbert moment L_q(k). This dual power law belongs to an inverse cascade since the scaling range is above the injection scale R, e.g., the bacterial body length. The measured scaling exponents ζ(q) of both the small-scale (k > k_ν) and large-scale (k
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
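A single-thread miniature of the reconstruction loop described above is sketched below: a binary pore/solid image is evolved by interchanging a pore voxel with a solid voxel and accepting the swap under the Metropolis rule when it reduces the mismatch with a target two-point probability function. The paper combines four correlation functions, a sphere-packing initialization and a multi-thread interchange scheme; only a single directional two-point function and a serial loop are used here.

```python
import numpy as np

# Hedged, serial miniature of simulated-annealing reconstruction against one
# directional two-point probability function (pore phase = 1, solid phase = 0).
def two_point(img, max_r=16):
    # probability that two pixels separated by lag r along x are both pore
    return np.array([(img[:, :-r] * img[:, r:]).mean() for r in range(1, max_r)])

def anneal_reconstruct(target_img, shape=(64, 64), steps=5000, T0=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    target = two_point(target_img)
    img = (rng.random(shape) < target_img.mean()).astype(np.int8)  # match porosity
    err = np.sum((two_point(img) - target) ** 2)
    for step in range(steps):
        T = T0 * (1.0 - step / steps)                      # linear cooling schedule
        i = rng.choice(np.flatnonzero(img == 1))           # a pore voxel
        j = rng.choice(np.flatnonzero(img == 0))           # a solid voxel
        img.flat[i], img.flat[j] = 0, 1                    # trial interchange
        new_err = np.sum((two_point(img) - target) ** 2)
        if new_err <= err or rng.random() < np.exp((err - new_err) / max(T, 1e-12)):
            err = new_err                                  # accept the swap
        else:
            img.flat[i], img.flat[j] = 1, 0                # reject: undo the swap
    return img

target = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(np.int8)  # stand-in sample
reconstruction = anneal_reconstruct(target)
```

A parallel multi-thread variant, as in the paper, would pre-select independent (non-interacting) interchange pairs so that several trial swaps can be evaluated concurrently.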
Moving contact lines on vibrating surfaces
NASA Astrophysics Data System (ADS)
Solomenko, Zlatko; Spelt, Peter; Scott, Julian
2017-11-01
Large-scale simulations of flows with moving contact lines under realistic conditions generally require a subgrid-scale model (analyses based on matched asymptotics) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. The introduction of complexities would, however, require further investigation of the contact-line region. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting/dewetting. The challenge is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives will also be presented on the final objective of the work, which is to develop a subgrid-scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under the allocation A0012B06893 made by GENCI.
ERIC Educational Resources Information Center
Silva-Maceda, Gabriela; Arjona-Villicaña, P. David; Castillo-Barrera, F. Edgar
2016-01-01
Learning to program is a complex task, and the impact of different pedagogical approaches to teach this skill has been hard to measure. This study examined the performance data of seven cohorts of students (N = 1168) learning programming under three different pedagogical approaches. These pedagogical approaches varied either in the length of the…
A Human Systems Integration Approach to Energy Efficiency in Ground Transportation
2015-12-01
[Fragmented extraction; recoverable content: figure captions for the Granite Construction organizational structure and a comparison of USMC structure to Granite Construction; text discussing Caterpillar Corporation and the implementation and use of their telematics systems within a company called Granite Construction, which reports an annual profit of over 250 million dollars and, similar to the USMC, handles both large- and small-scale projects.]
Use of a PhET Interactive Simulation in General Chemistry Laboratory: Models of the Hydrogen Atom
ERIC Educational Resources Information Center
Clark, Ted M.; Chamberlain, Julia M.
2014-01-01
An activity supporting the PhET interactive simulation, Models of the Hydrogen Atom, has been designed and used in the laboratory portion of a general chemistry course. This article describes the framework used to successfully accomplish implementation on a large scale. The activity guides students through a comparison and analysis of the six…
LaWen T. Hollingsworth; Laurie L. Kurth; Bernard R. Parresol; Roger D. Ottmar; Susan J. Prichard
2012-01-01
Landscape-scale fire behavior analyses are important to inform decisions on resource management projects that meet land management objectives and protect values from adverse consequences of fire. Deterministic and probabilistic geospatial fire behavior analyses are conducted with various modeling systems including FARSITE, FlamMap, FSPro, and Large Fire Simulation...
ERIC Educational Resources Information Center
Longford, Nicholas T.
Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
Impact of Douglas-fir tussock moth... color aerial photography evaluates mortality
Steven L. Wert; Boyd E. Wickman
1970-01-01
Thorough evaluation of insect impact on forest stands is difficult and expensive on the ground. In a study of tree damage following Douglas-fir tussock moth defoliation in Modoc County, California, large-scale (1:1,584) 70-mm color aerial photography was an effective sampling tool and took less time and expense than ground methods. Comparison of the photo...
Callie J. Schweitzer; John A. Stanturf; James P. Shepard; Timothy M. Wilkins; C. Jeffery Portwood; Lamar C., Jr. Dorris
1997-01-01
In the Lower Mississippi Alluvial Valley (LMAV), restoring bottomland hardwood forests has attracted heightened interest. The impetus involves not only environmental and aesthetic benefits, but also sound economics. Financial incentives to restore forested wetlands in the LMAV can come from federal cost share programs such as the Conservation Reserve Program and the...
NASA Astrophysics Data System (ADS)
Heimann, M.; Prentice, I. C.; Foley, J.; Hickler, T.; Kicklighter, D. W.; McGuire, A. D.; Melillo, J. M.; Ramankutty, N.; Sitch, S.
2001-12-01
Models of biophysical and biogeochemical processes are being used, either offline or in coupled climate-carbon cycle (C4) models, to assess climate- and CO2-induced feedbacks on atmospheric CO2. Observations of atmospheric CO2 concentration, and supplementary tracers including O2 concentrations and isotopes, offer unique opportunities to evaluate the large-scale behaviour of models. Global patterns, temporal trends, and interannual variability of the atmospheric CO2 concentration and its seasonal cycle provide crucial benchmarks for simulations of regionally integrated net ecosystem exchange; flux measurements by eddy correlation allow a far more demanding model test at the ecosystem scale than conventional indicators, such as measurements of annual net primary production; and large-scale manipulations, such as the Duke Forest Free Air Carbon Enrichment (FACE) experiment, give a standard against which to evaluate modelled phenomena such as ecosystem-level CO2 fertilization. Model runs including historical changes of CO2, climate and land use allow comparison with regional-scale monthly CO2 balances as inferred from atmospheric measurements. Such comparisons are providing grounds for some confidence in current models, while pointing to processes that may still be inadequately treated. Current plans focus on (1) continued benchmarking of land process models against flux measurements across ecosystems and experimental findings on the ecosystem-level effects of enhanced CO2, reactive N inputs and temperature; (2) improved representation of land use, forest management and crop metabolism in models; and (3) a strategy for the evaluation of C4 models in a historical observational context.
Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
2017-11-21
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms the anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This constitutes an observation of particle velocity fluctuations in a large-scale system and a quantitative comparison with Maxwell-Boltzmann statistics.
NASA Technical Reports Server (NTRS)
Navon, I. M.
1984-01-01
A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
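A schematic sketch (not Navon's implementation) of the method-of-multipliers idea described above, using SciPy's conjugate-gradient minimizer for the inner unconstrained problem; the objective, constraint function, penalty parameter, and toy example are all placeholders.

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, mu=10.0, iters=5):
    """Enforce equality constraints g(x)=0 while minimizing f(x).

    Each outer iteration solves an unconstrained problem with the
    conjugate-gradient method, then updates the multiplier estimates
    (method of multipliers in the style of Bertsekas)."""
    lam = np.zeros(len(g(np.asarray(x0, dtype=float))))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        def L(z):
            c = g(z)
            return f(z) + lam @ c + 0.5 * mu * c @ c
        x = minimize(L, x, method="CG").x
        lam = lam + mu * g(x)          # first-order multiplier update
    return x

# toy example: minimize ||x||^2 subject to x0 + x1 - 1 = 0
f = lambda x: x @ x
g = lambda x: np.array([x[0] + x[1] - 1.0])
print(augmented_lagrangian(f, g, np.zeros(2)))   # approximately [0.5, 0.5]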
(RANS/LES) and Unsteady RANS Predictions of Separated Flow for a Variable-Speed Power-Turbine Blade Operating with Low Inlet Turbulence Levels
2017-10-01
The facility is a large-scale cascade that allows detailed flow field surveys and blade surface measurements. The facility has a continuous run ... structured grids at 2 flow conditions, cruise and takeoff, of the VSPT blade. Computations were run in parallel on a Department of Defense ...
1982-05-01
... field environment was initiated at Lake Conway near Orlando, Fla., to study the effectiveness of the fish as a biological macrophyte control agent. ... in May 1976, and, by July 1976, all sampling techniques were employed. In addition to routine displays of data analysis such as frequency tables and ... amphibian and reptile communities in large aquatic habitats in Florida, comparison with similar herpetofaunal assemblages or populations is not possible ...
NASA Technical Reports Server (NTRS)
Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.
1997-01-01
Cloud microphysical parameterizations have attracted a great deal of attention in recent years due to their effect on cloud radiative properties and cloud-related hydrological processes in large-scale models. The parameterization of cirrus particle size has been demonstrated to be an indispensable component of climate feedback analysis. Therefore, global-scale, long-term observations of cirrus particle sizes are required both as a basis for and as a validation of parameterizations for climate models. While there is a global-scale, long-term survey of water cloud droplet sizes (Han et al.), there is no comparable study for cirrus ice crystals. This study is an effort to supply such a data set.
Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.
Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'-45°7'N; 5°3'W-28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results ran counter to both the traditional view based on the hump-shaped theory of bathymetric patterns and the commonly accepted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
Additional Results of Glaze Icing Scaling in SLD Conditions
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching
2016-01-01
New guidance on acceptable means of compliance with super-cooled large drop (SLD) conditions was issued by the U.S. Department of Transportation's Federal Aviation Administration (FAA) in its Advisory Circular AC 25-28 in November 2014. Part 25, Appendix O was developed to define a representative icing environment for super-cooled large drops. Super-cooled large drops, which include freezing drizzle and freezing rain conditions, are not included in Appendix C. This paper reports results from recent glaze icing scaling tests conducted in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the scaling methods recommended for Appendix C conditions might apply to SLD conditions. The models were straight NACA 0012 wing sections. The reference model had a chord of 72 inches and the scale model had a chord of 21 inches. Reference tests were run with airspeeds of 100 and 130.3 knots and with MVDs of 85 and 170 microns. Two scaling methods were considered. One was based on the modified Ruff method with scale velocity found by matching the Weber number We_L. The other was proposed and developed by Feo specifically for strong glaze icing conditions, in which the scale liquid water content and velocity were found by matching reference and scale values of the non-dimensional water-film thickness expression and the film Weber number We_f. All tests were conducted at 0 degrees angle of attack. Results will be presented for stagnation freezing fractions of 0.2 and 0.3. For non-dimensional reference and scale ice shape comparison, a new post-scanning ice shape digitization procedure was developed for extracting 2-dimensional ice shape profiles at any selected span-wise location from the high fidelity 3-dimensional scanned ice shapes obtained in the IRT.
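For illustration only: if the Weber number in the velocity-matching step is assumed to take the generic form We_L = rho_w * V^2 * L / sigma_w with identical water properties in the reference and scale tests (an assumption; the exact length scale and properties used in the modified Ruff method are not spelled out here), matching We_L reduces to a square-root length ratio, as sketched below with placeholder numbers.

import math

def scale_velocity_matching_weber(v_ref, length_ref, length_scale):
    """Velocity for the scale test that keeps We_L = rho_w * V**2 * L / sigma_w
    equal to the reference value when water density and surface tension are
    unchanged, i.e. V_scale = V_ref * sqrt(L_ref / L_scale)."""
    return v_ref * math.sqrt(length_ref / length_scale)

# hypothetical numbers loosely echoing the 72 in. and 21 in. chords above
print(scale_velocity_matching_weber(v_ref=100.0, length_ref=72.0, length_scale=21.0))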
Assessing the importance of internal tide scattering in the deep ocean
NASA Astrophysics Data System (ADS)
Haji, Maha; Peacock, Thomas; Carter, Glenn; Johnston, T. M. Shaun
2014-11-01
Tides are one of the main sources of energy input to the deep ocean, and the pathways of energy transfer from barotropic tides to turbulent mixing scales via internal tides are not well understood. Large-scale (low-mode) internal tides account for the bulk of energy extracted from barotropic tides and have been observed to propagate over 1000 km from their generation sites. We seek to examine the fate of these large-scale internal tides and the processes by which their energy is transferred, or "scattered," to small-scale (high-mode) internal tides, which dissipate locally and are responsible for internal tide driven mixing. The EXperiment on Internal Tide Scattering (EXITS) field study conducted in 2010-2011 sought to examine the role of topographic scattering at the Line Islands Ridge. The scattering process was examined via data from three moorings equipped with moored profilers, spanning total depths of 3000-5000 m. The results of our field data analysis are rationalized via comparison to data from two- and three-dimensional numerical models and a two-dimensional analytical model based on Green function theory.
Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames
NASA Astrophysics Data System (ADS)
Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.
2017-11-01
Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed "flamelet" approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.
NASA Astrophysics Data System (ADS)
Pinto, R.; Brouwer, R.; Patrício, J.; Abreu, P.; Marta-Pedroso, C.; Baeta, A.; Franco, J. N.; Domingos, T.; Marques, J. C.
2016-02-01
A large-scale contingent valuation survey is conducted among residents in one of the largest river basins in Portugal to estimate the non-market benefits of the ecosystem services associated with implementation of the European Water Framework Directive (WFD). Statistical tests of the sensitivity of public willingness to pay to scope and scale are carried out. Decreasing marginal willingness to pay (WTP) is found when asking respondents to value two water quality improvement scenarios (within-sample comparison), from current moderate water quality conditions to good and subsequently excellent ecological status. However, insensitivity to scale is found when asking half of the respondents to value water quality improvements in the estuary only and the other half in the whole basin (between-sample comparison). Although respondents living outside the river basin value water quality improvements significantly less than respondents inside the basin, no spatial heterogeneity can be detected within the basin between upstream and downstream residents. This finding has important implications for spatial aggregation procedures across the population of beneficiaries living in the river basin to estimate its total economic value based on public WTP for the implementation of the WFD.
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
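An illustrative sketch, not the authors' code, of the model-fitting step described above: an exponential semivariogram model is fit to empirical lag/semivariance pairs to extract a correlation length. The model form, the lag units, and the data values are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def exponential_semivariogram(h, nugget, sill, corr_length):
    """gamma(h) = nugget + sill * (1 - exp(-h / corr_length))"""
    return nugget + sill * (1.0 - np.exp(-h / corr_length))

# placeholder empirical semivariogram (lag distance in voxels, semivariance)
lags = np.array([1, 2, 4, 8, 16, 32], dtype=float)
gamma = np.array([0.10, 0.18, 0.29, 0.38, 0.42, 0.44])

params, _ = curve_fit(exponential_semivariogram, lags, gamma, p0=[0.05, 0.4, 5.0])
nugget, sill, corr_length = params
print(f"correlation length ~ {corr_length:.1f} voxels")

Repeating such a fit at each image resolution is one way to expose the resolution dependence of the fitted correlation length discussed above.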
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
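A minimal sketch of the fusion idea described above, not the authors' MEMD algorithm: because MEMD itself is adaptive and not available in standard libraries, the decomposition below is a crude fixed-filter stand-in that merely produces scales aligned across channels, so the per-scale, pixel-level fusion rule can be illustrated end to end. All function names and filter widths are assumptions.

import numpy as np

def aligned_scales(rows, widths=(2, 4, 8, 16)):
    """Crude stand-in for MEMD: every channel is decomposed with the same
    moving-average smoothers, so the resulting scales are aligned across
    channels (MEMD achieves this alignment adaptively). Output shape is
    (n_scales + 1, n_channels, n_samples); components sum back to the input."""
    comps, residual = [], rows.astype(float)
    for w in widths:
        kernel = np.ones(w) / w
        smooth = np.stack([np.convolve(ch, kernel, mode="same") for ch in residual])
        comps.append(residual - smooth)   # detail at this scale
        residual = smooth
    comps.append(residual)                # coarse residual
    return np.stack(comps)

def fuse_rows(images, widths=(2, 4, 8, 16)):
    """Pixel-level fusion of co-registered grayscale images, row by row:
    at each scale the image with the larger local amplitude contributes the
    coefficient, and the fused row is rebuilt by summing over scales."""
    imgs = [np.asarray(im, dtype=float) for im in images]
    fused = np.zeros_like(imgs[0])
    for r in range(fused.shape[0]):
        rows = np.stack([im[r] for im in imgs])            # (n_channels, n_cols)
        comps = aligned_scales(rows, widths)               # aligned scales
        winner = np.argmax(np.abs(comps), axis=1)          # (n_scales+1, n_cols)
        picked = np.take_along_axis(comps, winner[:, None, :], axis=1)
        fused[r] = picked.sum(axis=(0, 1))
    return fused

# toy usage with two random "exposures"
a, b = np.random.rand(32, 64), np.random.rand(32, 64)
fused = fuse_rows([a, b])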
Propeller aircraft interior noise model utilization study and validation
NASA Technical Reports Server (NTRS)
Pope, L. D.
1984-01-01
Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.
NASA Astrophysics Data System (ADS)
Phillips, M.; Denning, A. S.; Randall, D. A.; Branson, M.
2016-12-01
Multi-scale models of the atmosphere provide an opportunity to investigate processes that are unresolved by traditional Global Climate Models while at the same time remaining viable in terms of computational resources for climate-length time scales. The MMF represents a shift away from large horizontal grid spacing in traditional GCMs that leads to overabundant light precipitation and lack of heavy events, toward a model where precipitation intensity is allowed to vary over a much wider range of values. Resolving atmospheric motions on the scale of 4 km makes it possible to recover features of precipitation, such as intense downpours, that were previously only obtained by computationally expensive regional simulations. These heavy precipitation events may have little impact on large-scale moisture and energy budgets, but are outstanding in terms of interaction with the land surface and potential impact on human life. Three versions of the Community Earth System Model were used in this study: the standard CESM, the multi-scale 'Super-Parameterized' CESM where large-scale parameterizations have been replaced with a 2D cloud-permitting model, and a multi-instance land version of the SP-CESM where each column of the 2D CRM is allowed to interact with an individual land unit. These simulations were carried out using prescribed Sea Surface Temperatures for the period from 1979-2006 with daily precipitation saved for all 28 years. Comparisons of the statistical properties of precipitation between model architectures and against observations from rain gauges were made, with specific focus on detection and evaluation of extreme precipitation events.
Rankings, Standards, and Competition: Task vs. Scale Comparisons
ERIC Educational Resources Information Center
Garcia, Stephen M.; Tor, Avishalom
2007-01-01
Research showing how upward social comparison breeds competitive behavior has so far conflated local comparisons in "task" performance (e.g. a test score) with comparisons on a more general "scale" (i.e. an underlying skill). Using a ranking methodology (Garcia, Tor, & Gonzalez, 2006) to separate task and scale comparisons, Studies 1-2 reveal that…
Microfluidic desalination techniques and their potential applications.
Roelofs, S H; van den Berg, A; Odijk, M
2015-09-07
In this review we discuss recent developments in the emerging research field of miniaturized desalination. Traditionally, desalination is performed to convert salt water into potable water, and research is focused on improving the performance of large-scale desalination plants. Microfluidic desalination offers several new opportunities in comparison to macro-scale desalination, such as providing a platform to increase fundamental knowledge of ion transport on the nano- and microfluidic scale and new microfluidic sample preparation methods. This approach has also led to the development of new desalination techniques, based on micro/nanofluidic ion-transport phenomena, which are potential candidates for up-scaling to (portable) drinking water devices. This review assesses microfluidic desalination techniques and their applications and is meant to contribute to further implementation of microfluidic desalination techniques in the lab-on-chip community.
Modulation of a methane Bunsen flame by upstream perturbations
NASA Astrophysics Data System (ADS)
de Souza, T. Cardoso; Bastiaans, R. J. M.; De Goey, L. P. H.; Geurts, B. J.
2017-04-01
In this paper the effects of an upstream spatially periodic modulation acting on a turbulent Bunsen flame are investigated using direct numerical simulations of the Navier-Stokes equations coupled with the flamelet generated manifold (FGM) method to parameterise the chemistry. The premixed Bunsen flame is spatially agitated with a set of coherent large-scale structures of specific wave-number, K. The response of the premixed flame to the external modulation is characterised in terms of time-averaged properties, e.g. the average flame height ⟨H⟩ and the flame surface wrinkling ⟨W⟩. Results show that the flame response is notably selective to the size of the length scales used for agitation. For example, both flame quantities ⟨H⟩ and ⟨W⟩ present an optimal response, in comparison with an unmodulated flame, when the modulation scale is set to relatively low wave-numbers, 4π/L ≲ K ≲ 6π/L, where L is a characteristic scale. At the agitation scales where the optimal response is observed, the average flame height, ⟨H⟩, takes a clearly defined minimal value while the surface wrinkling, ⟨W⟩, presents an increase by more than a factor of 2 in comparison with the unmodulated reference case. Combined, these two response quantities indicate that there is an optimal scale for flame agitation and intensification of combustion rates in turbulent Bunsen flames.
KINETIC ALFVÉN WAVE GENERATION BY LARGE-SCALE PHASE MIXING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vásconez, C. L.; Pucci, F.; Valentini, F.
One view of the solar wind turbulence is that the observed highly anisotropic fluctuations at spatial scales near the proton inertial length d_p may be considered as kinetic Alfvén waves (KAWs). In the present paper, we show how phase mixing of large-scale parallel-propagating Alfvén waves is an efficient mechanism for the production of KAWs at wavelengths close to d_p and at a large propagation angle with respect to the magnetic field. Magnetohydrodynamic (MHD), Hall magnetohydrodynamic (HMHD), and hybrid Vlasov-Maxwell (HVM) simulations modeling the propagation of Alfvén waves in inhomogeneous plasmas are performed. In the linear regime, the role of dispersive effects is singled out by comparing MHD and HMHD results. Fluctuations produced by phase mixing are identified as KAWs through a comparison of polarization of magnetic fluctuations and wave-group velocity with analytical linear predictions. In the nonlinear regime, a comparison of HMHD and HVM simulations allows us to point out the role of kinetic effects in shaping the proton-distribution function. We observe the generation of temperature anisotropy with respect to the local magnetic field and the production of field-aligned beams. The regions where the proton-distribution function highly departs from thermal equilibrium are located inside the shear layers, where the KAWs are excited, suggesting that the distortions of the proton distribution are driven by a resonant interaction of protons with KAW fluctuations. Our results are relevant in configurations where magnetic-field inhomogeneities are present, as, for example, in the solar corona, where the presence of Alfvén waves has been ascertained.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large numbers of genes and gene expression datasets, more accurate models are compute intensive, limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) a B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
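A small illustrative sketch of the general MI-network idea referenced above, not TINGe itself: pairwise mutual information is estimated with a simple histogram estimator and edges are kept only if they exceed a permutation-derived null threshold. TINGe's B-spline estimator and parallel permutation testing are not reproduced; all parameters here are placeholders.

import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI between two expression profiles."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def mi_network(expr, bins=8, n_perm=200, alpha=0.01, rng=None):
    """expr: (n_genes, n_samples). Returns a boolean adjacency matrix keeping
    edges whose MI exceeds a permutation-based null threshold."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_genes = expr.shape[0]
    # null distribution from MI of randomly permuted profile pairs
    null = [mutual_information(rng.permutation(expr[i]), expr[j], bins)
            for i, j in rng.integers(0, n_genes, size=(n_perm, 2))]
    threshold = np.quantile(null, 1.0 - alpha)
    adj = np.zeros((n_genes, n_genes), dtype=bool)
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            adj[i, j] = adj[j, i] = mutual_information(expr[i], expr[j], bins) > threshold
    return adj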
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
Nakamura, T K M; Hasegawa, H; Daughton, W; Eriksson, S; Li, W Y; Nakamura, R
2017-11-17
Magnetic reconnection is believed to be the main driver transporting solar wind into the Earth's magnetosphere when the magnetopause features a large magnetic shear. However, even when the magnetic shear is too small for spontaneous reconnection, the Kelvin-Helmholtz instability driven by a super-Alfvénic velocity shear is expected to facilitate the transport. Although previous kinetic simulations have demonstrated that the non-linear vortex flows from the Kelvin-Helmholtz instability give rise to vortex-induced reconnection and resulting plasma transport, the system sizes of these simulations were too small to allow the reconnection to evolve much beyond the electron scale as recently observed by the Magnetospheric Multiscale (MMS) spacecraft. Here, based on a large-scale kinetic simulation and its comparison with MMS observations, we show for the first time that ion-scale jets from vortex-induced reconnection rapidly decay through self-generated turbulence, leading to a mass transfer rate nearly one order of magnitude higher than previous expectations for the Kelvin-Helmholtz instability.
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1982-01-01
An examination of the possibilities of using Landsat data to simulate NOAA-6 Advanced Very High Resolution Radiometer (AVHRR) data on two channels, as well as using actual NOAA-6 imagery, for large-scale hydrological studies is presented. A running average of 18 consecutive pixels, approximating 1 km resolution, was obtained; the data taken by the Landsat scanners were scaled up to 8-bit data and investigated for different gray levels. AVHRR data comprising five channels of 10-bit, band-interleaved information covering 10 deg latitude were analyzed and a suitable pixel grid was chosen for comparison with the Landsat data in a supervised classification format, an unsupervised mode, and with ground truth. Landcover delineation was explored by removing snow, water, and cloud features from the cluster analysis, and resulted in less than 10% difference. Low resolution, large-scale data were determined to be useful for characterizing some landcover features if weekly and/or monthly updates are maintained.
A Study on Multi-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors exist at different scales, and there are interactions among them, in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. So, it is necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. Then the information of errors whose scales are larger and smaller than the given ones is introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. The comparisons of three background error covariance statistics influenced by information of errors at different scales reveal that the background error variances are enhanced, particularly at large scales and higher levels, when introducing the information of larger-scale errors through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when introducing the information of smaller-scale errors by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of 1 month are conducted. Through the comparison of both analyses and forecasts from this system, it is found that the trends in analysis increments with information of different scale errors introduced are consistent with the trends in the variances and correlations of background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts for winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity are significantly enhanced at large scales at lower levels, moistening the southern analyses. This humidification helps to correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales due to the amplification (diminution) of intensity and area in precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
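A compact sketch of the NMC idea referenced above (estimating a static background-error covariance from differences between forecasts of different lead times valid at the same time); it is not the authors' implementation, the tuning factor is a placeholder, and none of the nesting techniques discussed are reproduced.

import numpy as np

def nmc_covariance(fcst_long, fcst_short, scale=1.0):
    """Estimate a static background-error covariance matrix from paired
    forecasts valid at the same times (e.g., 48 h and 24 h lead times).

    fcst_long, fcst_short : arrays of shape (n_times, n_state)
    scale                 : empirical tuning factor applied to the covariance
    """
    diffs = fcst_long - fcst_short                  # forecast differences
    diffs = diffs - diffs.mean(axis=0, keepdims=True)
    b = diffs.T @ diffs / (diffs.shape[0] - 1)      # sample covariance
    return scale * b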
Scaling effects in the impact response of graphite-epoxy composite beams
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.
1989-01-01
In support of crashworthiness studies on composite airframes and substructure, an experimental and analytical study was conducted to characterize size effects in the large deflection response of scale model graphite-epoxy beams subjected to impact. Scale model beams of 1/2, 2/3, 3/4, 5/6, and full scale were constructed of four different laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic. The beam specimens were subjected to eccentric axial impact loads which were scaled to provide homologous beam responses. Comparisons of the load and strain time histories between the scale model beams and the prototype should verify the scale law and demonstrate the use of scale model testing for determining impact behavior of composite structures. The nonlinear structural analysis finite element program DYCAST (DYnamic Crash Analysis of STructures) was used to model the beam response. DYCAST analysis predictions of beam strain response are compared to experimental data and the results are presented.
Turbulent Channel Flow Measurements with a Nano-scale Thermal Anemometry Probe
NASA Astrophysics Data System (ADS)
Bailey, Sean; Witte, Brandon
2014-11-01
Using a Nano-scale Thermal Anemometry Probe (NSTAP), streamwise velocity was measured in a turbulent channel flow wind tunnel at Reynolds numbers ranging from Reτ = 500 to Reτ = 4000. Use of these probes results in a sensing-length-to-viscous-length-scale ratio of just 5 at the highest Reynolds number measured. Thus, the measured results can be considered free of spatial filtering effects. Point statistics are compared to recently published DNS and LDV data at similar Reynolds numbers and the results are found to be in good agreement. However, comparison of the measured spectra provides further evidence of aliasing at long wavelengths due to application of Taylor's frozen flow hypothesis, with increased aliasing evident with increasing Reynolds numbers. In addition to conventional point statistics, the dissipative scales of turbulence are investigated with focus on the wall-dependent scaling. Results support the existence of a universal pdf distribution of these scales once scaled to account for large-scale anisotropy. This research is supported by KSEF Award KSEF-2685-RDE-015.
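For reference, the standard Taylor frozen-flow conversion implied above turns a single-point frequency spectrum into a streamwise wavenumber spectrum via kx = 2*pi*f/U; the sketch below is generic and not tied to the NSTAP data.

import numpy as np

def frequency_to_wavenumber_spectrum(freqs, psd, u_mean):
    """Convert a temporal spectrum E(f) to a streamwise wavenumber spectrum
    E(kx) via Taylor's frozen-flow hypothesis, kx = 2*pi*f / u_mean.
    Energy is preserved: E(kx) dkx = E(f) df, so E(kx) = E(f) * u_mean / (2*pi)."""
    kx = 2.0 * np.pi * freqs / u_mean
    e_kx = psd * u_mean / (2.0 * np.pi)
    return kx, e_kx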
Multidisciplinary geoscientific experiments in central Europe
NASA Technical Reports Server (NTRS)
Bannert, D. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Studies were carried out in the fields of geology-pedology, coastal dynamics, geodesy-cartography, geography, and data processing. In geology-pedology, a comparison of ERTS image studies with extensive ground data led to a better understanding of the relationship between vegetation, soil, bedrock, and other geologic features. Findings in linear tectonics gave better insight in orogeny and ore deposit development for prospecting. Coastal studies proved the value of ERTS images for the updating of nautical charts, as well as small scale topographic maps. A plotter for large scale high speed image generation from CCT was developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Lin, Paul Tinphone
2009-01-01
This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.
Experimental and theoretical study of magnetohydrodynamic ship models.
Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe
2017-01-01
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of student practicals and exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small-scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results of the literature are adapted to these simple battery/magnet powered ships moving on salt water. Comparisons between theory and experiments are performed to validate each theoretical step, such as the Tafel and the Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is then deduced from the theory. Therefore this work provides a solid theoretical and experimental ground for small-scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large-scale ships or industrial flow measurement techniques.
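A back-of-the-envelope sketch of the force balance implied above: Lorentz thrust from current, magnetic field and electrode length against quadratic hull drag. The drag law, geometry, and all numbers below are assumptions for illustration, and the electrochemical corrections (Tafel, Kohlrausch) treated by the authors are omitted.

import math

def mhd_ship_speed(current, b_field, electrode_length, drag_coeff, wetted_area,
                   rho_water=1025.0):
    """Terminal speed where Lorentz thrust F = I * L * B balances
    quadratic drag 0.5 * rho * Cd * A * v**2 (all SI units)."""
    thrust = current * electrode_length * b_field
    return math.sqrt(2.0 * thrust / (rho_water * drag_coeff * wetted_area))

# hypothetical small battery/magnet boat: 1 A, 0.3 T, 5 cm electrodes
print(mhd_ship_speed(current=1.0, b_field=0.3, electrode_length=0.05,
                     drag_coeff=1.0, wetted_area=0.01))   # a few cm/s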
Romano Foti; Jorge A. Ramirez; Thomas C. Brown
2012-01-01
Comparison of projected future water demand and supply across the conterminous United States indicates that, due to improving efficiency in water use, expected increases in population and economic activity do not by themselves pose a serious threat of large-scale water shortages. However, climate change can increase water demand and decrease water supply to the extent...
USDA-ARS?s Scientific Manuscript database
There is a need for simple surrogate estimates of insulin sensitivity in epidemiological studies of obese youth because the hyperinsulinemic-euglycemic clamp is not feasible on a large scale. Objectives: (i) To examine the triglyceride glucose (TyG) index (Ln[fasting triglycerides (mg/dL) × fasting ...
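The record above is truncated, so the full definition is not shown here; assuming the conventional definition of the TyG index, Ln[fasting triglycerides (mg/dL) × fasting glucose (mg/dL) / 2], a minimal sketch is:

import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """Triglyceride-glucose (TyG) index, a surrogate for insulin resistance,
    assuming the conventional formula ln[TG (mg/dL) * glucose (mg/dL) / 2]."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

print(tyg_index(150.0, 100.0))   # roughly 8.9 for these placeholder values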
ERIC Educational Resources Information Center
Fernandes, Anthony; Kahn, Leslie H.; Civil, Marta
2017-01-01
In this article, we use multimodality to examine how bilingual students interact with an area task from the National Assessment of Educational Progress in task-based interviews. Using vignettes, we demonstrate how some of these students manipulate the concrete materials, and use gestures, as a primary form of structuring their explanations and…
Aziz Ebrahimi; Abdolkarim Zarei; Shaneka Lawson; Keith E. Woeste; M. J. M. Smulders
2016-01-01
Persian walnut (Juglans regia L.) is the world's most widely grown nut crop, but large-scale assessments and comparisons of the genetic diversity of the crop are notably lacking. To guide the conservation and utilization of Persian walnut genetic resources, genotypes (n = 189) from 25 different regions in 14 countries on...
ERIC Educational Resources Information Center
Masters, James S.
2010-01-01
With the need for larger and larger banks of items to support adaptive testing and to meet security concerns, large-scale item generation is a requirement for many certification and licensure programs. As part of the mass production of items, it is critical that the difficulty and the discrimination of the items be known without the need for…
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
NASA Astrophysics Data System (ADS)
Happel, T.; Navarro, A. Bañón; Conway, G. D.; Angioni, C.; Bernert, M.; Dunne, M.; Fable, E.; Geiger, B.; Görler, T.; Jenko, F.; McDermott, R. M.; Ryter, F.; Stroth, U.
2015-03-01
Additional electron cyclotron resonance heating (ECRH) is used in an ion-temperature-gradient instability dominated regime to increase R/L_Te in order to approach the trapped-electron-mode instability regime. The radial ECRH deposition location determines to a large degree the effect on R/L_Te. Accompanying scale-selective turbulence measurements at perpendicular wavenumbers between k⊥ = 4-18 cm^-1 (k⊥ρ_s = 0.7-4.2) show a pronounced increase of large-scale density fluctuations close to the ECRH radial deposition location at mid-radius, along with a reduction in phase velocity of large-scale density fluctuations. Measurements are compared with results from linear and non-linear flux-matched gyrokinetic (GK) simulations with the gyrokinetic code GENE. Linear GK simulations show a reduction of phase velocity, indicating a pronounced change in the character of the dominant instability. Comparing measurement and non-linear GK simulation, as a central result, agreement is obtained in the shape of radial turbulence level profiles. However, the turbulence intensity is increasing with additional heating in the experiment, while gyrokinetic simulations show a decrease.
Spatial Fluctuations in the Diffuse Cosmic X-Ray Background. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Shafer, R. A.
1983-01-01
The bright, essentially isotropic, X-ray sky flux above 2 keV yields information on the universe at large distances. However, a definitive understanding of the origin of the flux is lacking. Some fraction of the total flux is contributed by active galactic nuclei and clusters of galaxies, but less than one percent of the total is contributed by resolved sources in the ≳3 keV band, which is the band where the sky flux is directly observed. Parametric models of AGN (quasar) luminosity function evolution are examined. Most constraints are by the total sky flux. The acceptability of particular models hinges on assumptions currently not directly testable. The comparison with the Einstein Observatory 1 to keV low flux source counts is hampered by spectral uncertainties. A tentative measurement of a large scale dipole anisotropy is consistent with the velocity and direction derived from the dipole in the microwave background. The impact of the X-ray anisotropy limits for other scales on studies of large-scale structure in the universe is sketched. Models of the origins of the X-ray sky flux are reviewed, and future observational programs outlined.
Validation of the RAGE Hydrocode for Impacts into Volatile-Rich Targets
NASA Astrophysics Data System (ADS)
Plesko, C. S.; Asphaug, E.; Coker, R. F.; Wohletz, K. H.; Korycansky, D. G.; Gisler, G. R.
2007-12-01
In preparation for a detailed study of large-scale impacts into the Martian surface, we have validated the RAGE hydrocode (Gittings et al., in press, CSD) against a suite of experiments and statistical models. We present comparisons of hydrocode models to centimeter-scale gas gun impacts (Nakazawa et al. 2002), an underground nuclear test (Perret, 1971), and crater scaling laws (Holsapple 1993, O'Keefe and Ahrens 1993). We have also conducted model convergence and uncertainty analyses which will be presented. Results to date are encouraging for our current model goals, and indicate areas where the hydrocode may be extended in the future. This validation work is focused on questions related to the specific problem of large impacts into volatile-rich targets. The overall goal of this effort is to be able to realistically model large-scale Noachian, and possibly post- Noachian, impacts on Mars not so much to model the crater morphology as to understand the evolution of target volatiles in the post-impact regime, to explore how large craters might set the stage for post-impact hydro- geologic evolution both locally (in the crater subsurface) and globally, due to the redistribution of volatiles from the surface and subsurface into the atmosphere. This work is performed under the auspices of IGPP and the DOE at LANL under contracts W-7405-ENG-36 and DE-AC52-06NA25396. Effort by DK and EA is sponsored by NASA's Mars Fundamental Research Program.
NASA Astrophysics Data System (ADS)
Harris, Peter T.
1988-06-01
Large-scale bedforms (2-10 m in vertical and 10^2-10^3 m in horizontal dimensions) found in wide-mouthed estuaries are described. Different bedform types occur depending upon the local availability of sand. With an increasing sand supply, sand ribbons grade into elongate trains of sand waves and then form sandwave fields. Inshore, headland-associated sand banks are formed which multiply into en-echelon sand banks. Based upon a review of data on directions of sand transport from the Bristol Channel and Thames Estuary, U.K., and from Moreton Bay, Australia, charts of ebb- and flood-dominant transport zones are constructed for lower estuarine environments which have undergone different degrees of infilling. Linear sand banks are seen to delimit partially the boundaries between opposing sand transport zones. Transport paths demonstrate how sediments derived from outside of the estuary are dispersed through ebb and flood transport zones, to supply other areas of net deposition. A comparison between different estuaries reveals that variations in the complexity of ebb- and flood-dominant transport zones and the morphologies of large-scale bedforms are coupled with apparent changes in the relative amounts of sand available to each system. A model for the sequential infilling of estuaries and the evolution of large-scale bedforms is presented and applied to the interpretation of present day examples. Vertical sequences predicted to be generated by such bedform evolution are described and discussed, in terms of their preservation in the geological record.
Translational bioinformatics in the cloud: an affordable alternative
2010-01-01
With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073
A spatially explicit suspended-sediment load model for western Oregon
Wise, Daniel R.; O'Connor, Jim
2016-06-27
Knowledge of the regionally important patterns and factors in suspended-sediment sources and transport could support broad-scale, water-quality management objectives and priorities. Because of biases and limitations of this model, however, these results are most applicable for general comparisons and for broad areas such as large watersheds. For example, despite having similar area, precipitation, and land-use, the Umpqua River Basin generates 68 percent more suspended sediment than the Rogue River Basin, chiefly because of the large area of Coast Range sedimentary province in the Umpqua River Basin. By contrast, the Rogue River Basin contains a much larger area of Klamath terrane rocks, which produce significantly less suspended load, although recent fire disturbance (in 2002) has apparently elevated suspended sediment yields in the tributary Illinois River watershed. Fine-scaled analysis, however, will require more intensive, locally focused measurements.
Schmidt, Olga; Hausmann, Axel; Cancian de Araujo, Bruno; Sutrisno, Hari; Peggie, Djunijanti; Schmidt, Stefan
2017-01-01
Here we present a general collecting and preparation protocol for DNA barcoding of Lepidoptera as part of large-scale rapid biodiversity assessment projects, and a comparison with alternative preserving and vouchering methods. About 98% of the sequenced specimens processed using the present collecting and preparation protocol yielded sequences with more than 500 base pairs. The study is based on the first outcomes of the Indonesian Biodiversity Discovery and Information System (IndoBioSys). IndoBioSys is a German-Indonesian research project that is conducted by the Museum für Naturkunde in Berlin and the Zoologische Staatssammlung München, in close cooperation with the Research Center for Biology - Indonesian Institute of Sciences (RCB-LIPI, Bogor).
NASA Astrophysics Data System (ADS)
Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto
2016-02-01
Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capabilities. A numerical approach is needed to predict combustion and particle production in scale-up and optimization processes because experimental observation is difficult and relatively costly. Computational Fluid Dynamics (CFD) can resolve the momentum, energy, and mass transfer, making it more efficient than experiments in terms of time and cost. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was the combustion of LPG as fuel with air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles formed from a silica sol precursor were used for the particle modelling. From several comparisons, i.e. flame contour, temperature distribution, and particle size distribution, the Large Eddy Simulation turbulence model provided the closest agreement with the experimental results.
Rossetto, Maurizio; Kooyman, Robert; Yap, Jia-Yee S.; Laffan, Shawn W.
2015-01-01
Seed dispersal is a key process in plant spatial dynamics. However, consistently applicable generalizations about dispersal across scales are mostly absent because of the constraints on measuring propagule dispersal distances for many species. Here, we focus on fleshy-fruited taxa, specifically taxa with large fleshy fruits and their dispersers across an entire continental rainforest biome. We compare species-level results of whole-chloroplast DNA analyses in sister taxa with large and small fruits, to regional plot-based samples (310 plots), and whole-continent patterns for the distribution of woody species with either large (more than 30 mm) or smaller fleshy fruits (1093 taxa). The pairwise genomic comparison found higher genetic distances between populations and between regions in the large-fruited species (Endiandra globosa), but higher overall diversity within the small-fruited species (Endiandra discolor). Floristic comparisons among plots confirmed lower numbers of large-fruited species in areas where more extreme rainforest contraction occurred, and re-colonization by small-fruited species readily dispersed by the available fauna. Species' distribution patterns showed that larger-fruited species had smaller geographical ranges than smaller-fruited species and locations with stable refugia (and high endemism) aligned with concentrations of large fleshy-fruited taxa, making them a potentially valuable conservation-planning indicator. PMID:26645199
Cunningham, Kenda; Singh, Akriti; Pandey Rana, Pooja; Brye, Laura; Alayon, Silvia; Lapping, Karin; Gautam, Bindu; Underwood, Carol; Klemm, Rolf D W
2017-10-01
The burden of undernutrition in South Asia is greater than anywhere else. Policies and programmatic efforts increasingly address health and non-health determinants of undernutrition. In Nepal, one large-scale integrated nutrition program, Suaahara, aimed to reduce undernutrition among women and children in the 1,000-day period, while simultaneously addressing inequities. In this study, we use household-level process evaluation data (N = 480) to assess levels of exposure to program inputs and levels of knowledge and practices related to health, nutrition, and water, sanitation, and hygiene (WASH). We also assess Suaahara's effect on the differences between disadvantaged (DAG) and non-disadvantaged households in exposure, knowledge, and practice indicators. All regression models were adjusted for potential confounders at the child-, maternal-, and household levels, as well as clustering. We found a higher prevalence of almost all exposure and knowledge indicators and some practice indicators in Suaahara areas versus comparison areas. A higher proportion of DAG households in Suaahara areas reported exposure, were knowledgeable, and practiced optimal behaviors related to nearly all maternal and child health, nutrition, and WASH indicators than DAG households in non-Suaahara areas and sometimes even than non-DAG households in Suaahara areas. Moreover, differences in some of these indicators between DAG and non-DAG households were significantly smaller in Suaahara areas than in comparison areas. These results indicate that large-scale integrated interventions can influence nutrition-related knowledge and practices, while simultaneously reducing inequities. © 2017 John Wiley & Sons Ltd.
Evaluating scaling models in biology using hierarchical Bayesian approaches
Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S
2009-01-01
Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
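To make the notion of a scaling exponent concrete, the sketch below (not from the paper) fits the simplest single-relationship version of an allometric model, y = a*x^b, by ordinary least squares in log-log space; the hierarchical Bayesian machinery described above generalizes this by fitting many such exponents simultaneously across species. The function name, variable names, and synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_allometric_exponent(x, y):
    """Fit y = a * x**b by least squares in log-log space.
    Returns (a, b); b is the allometric scaling exponent."""
    logx, logy = np.log(x), np.log(y)
    b, loga = np.polyfit(logx, logy, 1)   # slope, intercept
    return np.exp(loga), b

# Synthetic data generated with exponent 0.75 plus lognormal noise
rng = np.random.default_rng(0)
mass = np.exp(rng.uniform(0.0, 6.0, 200))                      # e.g. stem mass
height = 2.0 * mass**0.75 * np.exp(rng.normal(0.0, 0.1, 200))  # e.g. plant height
a, b = fit_allometric_exponent(mass, height)
print(f"prefactor a = {a:.2f}, exponent b = {b:.3f}")
```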
Lanham, Brendan S; Vergés, Adriana; Hedge, Luke H; Johnston, Emma L; Poore, Alistair G B
2018-04-01
Coastal urbanization has led to large-scale transformation of estuaries, with artificial structures now commonplace. Boat moorings are known to reduce seagrass cover, but little is known about their effect on fish communities. We used underwater video to quantify abundance, diversity, composition and feeding behaviour of fish assemblages on two scales: with increasing distance from moorings on fine scales, and among locations where moorings were present or absent. Fish were less abundant in close proximity to boat moorings, and the species composition varied on fine scales, leading to lower predation pressure near moorings. There was no relationship at the location with seagrass. On larger scales, we detected no differences in abundance or community composition among locations where moorings were present or absent. These findings show a clear impact of moorings on fish and highlight the importance of fine-scale assessments over location-scale comparisons in the detection of the effects of artificial structures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Comparison of three large-eddy simulations of shock-induced turbulent separation bubbles
NASA Astrophysics Data System (ADS)
Touber, Emile; Sandham, Neil D.
2009-12-01
Three different large-eddy simulation investigations of the interaction between an impinging oblique shock and a supersonic turbulent boundary layer are presented. All simulations made use of the same inflow technique, specifically aimed at avoiding possible low-frequency interference with the shock/boundary-layer interaction system. All simulations were run on relatively wide computational domains and integrated over times greater than twenty-five times the period of the most commonly reported low-frequency shock oscillation, making comparisons possible at both the time-averaged and low-frequency-dynamic levels. The results confirm previous experimental findings that suggested a simple linear relation between the interaction length and the oblique-shock strength when scaled using the boundary-layer thickness and wall-shear stress. All the tested cases show evidence of significant low-frequency shock motions. At the wall, energetic low-frequency pressure fluctuations are observed, mainly in the initial part of the interaction.
Comparison of NGA-West2 directivity models
Spudich, Paul A.; Rowshandel, Badie; Shahi, Shrey; Baker, Jack W.; Chiou, Brian S-J
2014-01-01
Five directivity models have been developed based on data from the NGA-West2 database and based on numerical simulations of large strike-slip and reverse-slip earthquakes. All models avoid the use of normalized rupture dimension, enabling them to scale up to the largest earthquakes in a physically reasonable way. Four of the five models are explicitly “narrow-band” (in which the effect of directivity is maximum at a specific period that is a function of earthquake magnitude). Several strategies for determining the zero-level for directivity have been developed. We show comparisons of maps of the directivity amplification. This comparison suggests that the predicted geographic distributions of directivity amplification are dominated by effects of the models' assumptions, and more than one model should be used for ruptures dipping less than about 65 degrees.
NASA Astrophysics Data System (ADS)
Cook, B.; Anchukaitis, K. J.
2017-12-01
Comparative analyses of paleoclimate reconstructions and climate model simulations can provide valuable insights into past and future climate events. Conducting meaningful and quantitative comparisons, however, can be difficult for a variety of reasons. Here, we use tree-ring based hydroclimate reconstructions to discuss some best practices for paleoclimate-model comparisons, highlighting recent studies that have successfully used this approach. These analyses have improved our understanding of the Medieval-era megadroughts, ocean forcing of large scale drought patterns, and even climate change contributions to future drought risk. Additional work is needed, however, to better reconcile and formalize uncertainties across observed, modeled, and reconstructed variables. In this regard, process based forward models of proxy-systems will likely be a critical tool moving forward.
Review of the outer scale of the atmospheric turbulence
NASA Astrophysics Data System (ADS)
Ziad, Aziz
2016-07-01
The outer scale is a relevant parameter for evaluating the experimental performance of large telescopes. Different techniques have been used to estimate it. In situ measurements with radiosounding balloons have given very small outer scale values. The outer scale has also been estimated directly at ground level from wavefront analysis with High Angular Resolution (HAR) techniques using interferometric, Shack-Hartmann, or more generally AO system data. Dedicated instruments have also been developed for outer scale monitoring, such as the Generalized Seeing Monitor (GSM) and the Monitor of Outer Scale Profile (MOSP). The outer scale values measured with HAR techniques, GSM, and MOSP are broadly consistent with one another and are larger than the in situ results. The main explanation for this difference lies in the definition of the outer scale itself. This paper gives a non-exhaustive review of the different techniques and instruments for measuring the outer scale. Comparisons of outer scale measurements are discussed in the light of the different definitions of this parameter, the associated observable quantities, and the underlying atmospheric turbulence model.
Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.L.; Wilson, J.H.; Arwood, P.C.
The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, "pretest" calculations (for code-to-code comparisons) and "posttest" calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.
Merk, Josef; Schlotz, Wolff; Falter, Thomas
2017-01-01
This study presents a new measure of value systems, the Motivational Value Systems Questionnaire (MVSQ), which is based on a theory of value systems by psychologist Clare W. Graves. The purpose of the instrument is to help people identify their personal hierarchies of value systems and thus become more aware of what motivates and demotivates them in work-related contexts. The MVSQ is a forced-choice (FC) measure, making it quicker to complete and more difficult to intentionally distort, but also more difficult to assess its psychometric properties due to ipsativity of FC data compared to rating scales. To overcome limitations of ipsative data, a Thurstonian IRT (TIRT) model was fitted to the questionnaire data, based on a broad sample of N = 1,217 professionals and students. Comparison of normative (IRT) scale scores and ipsative scores suggested that MVSQ IRT scores are largely freed from restrictions due to ipsativity and thus allow interindividual comparison of scale scores. Empirical reliability was estimated using a sample-based simulation approach which showed acceptable and good estimates and, on average, slightly higher test-retest reliabilities. Further, validation studies provided evidence on both construct validity and criterion-related validity. Scale score correlations and associations of scores with both age and gender were largely in line with theoretically- and empirically-based expectations, and results of a multitrait-multimethod analysis supports convergent and discriminant construct validity. Criterion validity was assessed by examining the relation of value system preferences to departmental affiliation which revealed significant relations in line with prior hypothesizing. These findings demonstrate the good psychometric properties of the MVSQ and support its application in the assessment of value systems in work-related contexts. PMID:28979228
NASA Astrophysics Data System (ADS)
Leifer, Ira; Culling, Daniel; Schneising, Oliver; Farrell, Paige; Buchwitz, Michael; Burrows, John P.
2013-08-01
The potent greenhouse gas, methane, CH4, has a wide variety of anthropogenic and natural sources. Fall, continental-scale (Florida to California) surface CH4 data were collected to investigate the importance of fossil fuel industrial (FFI) emissions in the South US. A total of 6600 measurements along 7020 km of roadways were made by flame ion detection gas chromatography onboard a nearly continuously moving recreational vehicle in 2010. A second, winter survey in Southern California measured CH4 at 2 Hz with a cavity ring-down spectrometer in 2012. Data revealed strong and persistent FFI CH4 sources associated with refining, oil/gas production, a presumed major pipeline leak, and a coal loading plant. Nocturnal CH4 mixing ratios tended to be higher than daytime values for similar sources, sometimes significantly, which was attributed to day/night meteorological differences, primarily changes in the boundary layer height. The highest CH4 mixing ratio (39 ppm) was observed near the Kern River Oil Field, California, which uses steam reinjection. FFI CH4 plume signatures were distinguished as stronger than other sources on local scales. On large (4°) scales, the CH4 trend was better matched spatially with FFI activity than with wetland spatial patterns. Qualitative comparison of surface data with SCIAMACHY and GOSAT satellite retrievals showed agreement of the large-scale CH4 spatial patterns. Comparison with inventory models and seasonal winds suggests for some seasons and some portions of the Gulf of Mexico a non-negligible underestimation of FFI emissions. For other seasons and locations, qualitative interpretation is not feasible. Unambiguous quantitative source attribution is more complex, requiring transport modeling.
Large-Scale Topographic Features on Venus: A Comparison by Geological Mapping in Four Quadrangles
NASA Astrophysics Data System (ADS)
Ivanov, M. A.; Head, J. W.
2002-05-01
We have conducted geological mapping in four quadrangles under the NASA program of geological mapping of Venus. Two quadrangles portray large equidimensional lowlands (Lavinia, V55, and Atalanta, V4, Planitiae) and two more areas are characterized by a large corona (Quetzalpetlatl corona, QC, V66) and Lakshmi Planum (LP, V7). Geological mapping of these large-scale features allows for their broad comparison by both sets of typical structures and sequences of events. The Planitiae share a number of similar characteristics. (1) Lavinia and Atalanta are broad quasi-circular lowlands 1-2 km deep. (2) The central portions of the basins lack both coronae and large volcanoes. (3) Belts of tectonic deformation characterize the central portions of the basins. (4) There is evidence in both lowlands that they subsided predominantly before the emplacement of regional plains. (5) Recent volcanism is shifted toward the periphery of the basins and occurred after or during the late stages of the formation of the lowlands. The above characteristics of the lowlands are better reconciled with a scenario in which their formation is due to a broad-scale mantle downwelling that started relatively early in the visible geologic history of Venus. QC and LP are elevated structures roughly comparable in size. The formation of QC is commonly attributed to large-scale positive mantle diapirism, while the formation of LP remains controversial and both mantle upwelling and downwelling models exist. QC and LP have similar characteristics such as a broadly circular shape in plan view, association with regional highlands, relatively young associated volcanism, and a topographic moat bordering both QC and LP from the north. Despite the above similarities, the striking differences between QC and LP are obvious too. LP is crowned by the highest mountain ranges on Venus, whereas QC is bordered from the north by a common belt of ridges. LP itself makes up a regional highland within the upland of Ishtar Terra, while QC produces a much less significant topographic anomaly against the background of the highland of Lada Terra. Highly deformed, tessera-like terrain apparently makes up the basement of LP, whereas QC formed in a tessera-free area. Volcanic activity is concentrated in the central portion of LP, while QC is a regionally important center of young volcanism. These differences, which probably cannot be accounted for by a simple difference in the size of LP and QC, suggest non-similar modes of formation of the two regional structures and do not favor the upwelling models of the formation of LP.
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele
2017-01-01
SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts both in open ocean and coastal waters of southeastern Italy using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified both in forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparing with a large-scale survey CTD (conductivity-temperature-depth) in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolution (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS). Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing a good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.
Custom fit 3D-printed brain holders for comparison of histology with MRI in marmosets.
Guy, Joseph R; Sati, Pascal; Leibovitch, Emily; Jacobson, Steven; Silva, Afonso C; Reich, Daniel S
2016-01-15
MRI has the advantage of sampling large areas of tissue and locating areas of interest in 3D space in both living and ex vivo systems, whereas histology has the ability to examine thin slices of ex vivo tissue with high detail and specificity. Although both are valuable tools, it is currently difficult to make high-precision comparisons between MRI and histology due to large differences inherent to the techniques. A method combining the advantages would be an asset to understanding the pathological correlates of MRI. 3D-printed brain holders were used to maintain marmoset brains in the same orientation during acquisition of ex vivo MRI and pathologic cutting of the tissue. The results of maintaining this same orientation show that sub-millimeter, discrete neuropathological features in marmoset brain consistently share size, shape, and location between histology and ex vivo MRI, which facilitates comparison with serial imaging acquired in vivo. Existing methods use computational approaches sensitive to data input in order to warp histologic images to match large-scale features on MRI, but the new method requires no warping of images, due to a preregistration accomplished in the technique, and is insensitive to data formatting and artifacts in both MRI and histology. The simple method of using 3D-printed brain holders to match brain orientation during pathologic sectioning and MRI acquisition enables rapid and precise comparison of small features seen on MRI to their underlying histology. Published by Elsevier B.V.
Comparison between the land surface response of the ECMWF model and the FIFE-1987 data
NASA Technical Reports Server (NTRS)
Betts, Alan K.; Ball, John H.; Beljaars, Anton C. M.
1993-01-01
An averaged time series for the surface data for the 15 x 15 km FIFE site was prepared for the summer of 1987. Comparisons with 48-hr forecasts from the ECMWF model for extended periods in July, August, and October 1987 identified model errors in the incoming SW radiation in clear skies, the ground heat flux, the formulation of surface evaporation, the soil-moisture model, and the entrainment at boundary-layer top. The model clear-sky SW flux is too high at the surface by 5-10 percent. The ground heat flux is too large by a factor of 2 to 3 because of the large thermal capacity of the first soil layer (which is 7 cm thick), and a time truncation error. The surface evaporation was near zero in October 1987, rather than of order 70 W/sq m at noon. The surface evaporation falls too rapidly after rainfall, with a time-scale of a few days rather than the 7-10 d (or more) of the observations. On time-scales of more than a few days the specified 'climate layer' soil moisture, rather than the storage of precipitation, has a large control on the evapotranspiration. The boundary-layer-top entrainment is too low. This results in a moist bias in the boundary-layer mixing ratio of order 2 g/kg in forecasts from an experimental analysis with nearly realistic surface fluxes; this is because there is insufficient downward mixing of dry air.
Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin
2014-01-01
Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22x speedup depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with existing methods: To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
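The study itself used Matlab's GPU-enabled functions; the Python sketch below only illustrates the same drop-in array-backend pattern, using CuPy when a CUDA device is available and falling back to NumPy otherwise. The filtering routine, array sizes, and function name are placeholders for illustration, not the authors' LSPS pipeline.

```python
import numpy as np

try:
    import cupy as cp          # GPU array backend, if a CUDA device is available
    xp = cp
except ImportError:            # fall back to CPU-only NumPy
    xp = np

def filter_maps(stack):
    """Toy stand-in for post-hoc processing of a stack of response maps:
    2D FFT, crude low-pass filter, inverse FFT. The same code runs on CPU
    or GPU because both backends expose the same array API."""
    spec = xp.fft.rfft2(stack, axes=(-2, -1))
    spec[..., 20:, :] = 0                      # zero high spatial frequencies (illustrative)
    return xp.fft.irfft2(spec, s=stack.shape[-2:], axes=(-2, -1))

stack = xp.asarray(np.random.rand(64, 256, 256).astype(np.float32))
filtered = filter_maps(stack)
result = cp.asnumpy(filtered) if xp is not np else filtered   # copy back to host memory
```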
Meta-analysis on Macropore Flow Velocity in Soils
NASA Astrophysics Data System (ADS)
Liu, D.; Gao, M.; Li, H. Y.; Chen, X.; Leung, L. R.
2017-12-01
Macropore flow is ubiquitous in soils and an important hydrologic process that is not well explained using traditional hydrologic theories. Macropore Flow Velocity (MFV) is an important parameter used to describe macropore flow and quantify its effects on runoff generation and solute transport. However, the dominant factors controlling MFV are still poorly understood and the typical ranges of MFV measured in the field are not clearly defined. To address these issues, we conducted a meta-analysis based on a database created from 246 experiments on MFV collected from 76 journal articles. For a fair comparison, a conceptually unified definition of MFV is introduced to convert the MFV measured with different approaches and at various scales, including soil core, field, trench, or hillslope scales. The potential controlling factors of MFV considered include scale, travel distance, hydrologic conditions, site factors, macropore morphologies, soil texture, and land use. The results show that MFV is about 2-3 orders of magnitude larger than the corresponding values of saturated hydraulic conductivity. MFV is much larger at the trench and hillslope scale than at the field profile and soil core scales and shows a significant positive correlation with the travel distance. Generally, higher irrigation intensity tends to trigger faster MFV, especially at the field profile scale, where MFV and irrigation intensity have a significant positive correlation. At the trench and hillslope scale, the presence of large macropores (diameter>10 mm) is a key factor determining MFV. The geometric mean of MFV for sites with large macropores was found to be about 8 times larger than that for sites without large macropores. For sites with large macropores, MFV increases with the macropore diameter. However, no noticeable difference in MFV has been observed among different soil textures and land uses. Comparing the existing equations for describing MFV, the Poiseuille equation significantly overestimated the observed values, while the Manning-type equations generated reasonable values. The insights from this study will shed light on future field campaigns and modeling of macropore flow.
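As a hedged illustration of why the two families of equations compared above diverge, the snippet below evaluates a Hagen-Poiseuille mean velocity and a Manning-type estimate for a single water-filled macropore under a unit hydraulic gradient; the radius and roughness coefficient are assumed values for illustration, not results from the meta-analysis.

```python
import math

# Illustrative parameters only (not from the meta-analysis)
rho, g, mu = 1000.0, 9.81, 1.0e-3      # water density, gravity, dynamic viscosity (SI)
r = 1.0e-3                             # macropore radius: 1 mm (assumed)
S = 1.0                                # hydraulic gradient (vertical, gravity-driven)
n = 0.05                               # Manning roughness coefficient (assumed)

# Hagen-Poiseuille mean velocity in a water-filled cylindrical macropore
v_poiseuille = rho * g * S * r**2 / (8.0 * mu)

# Manning-type estimate, hydraulic radius R = r/2 for a full circular conduit
R = r / 2.0
v_manning = (1.0 / n) * R**(2.0 / 3.0) * math.sqrt(S)

print(f"Poiseuille: {v_poiseuille:.2f} m/s, Manning: {v_manning:.3f} m/s")
# Poiseuille gives ~1.2 m/s while the Manning estimate is ~0.13 m/s, mirroring
# the finding that Poiseuille overestimates observed macropore flow velocities.
```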
NASA Astrophysics Data System (ADS)
Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.
2002-11-01
The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l/min (120 gal/min) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
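A minimal sketch of how a dose constant is extracted from the exponential removal model C(D) = C0*exp(-kD) is given below; the dose-concentration pairs are synthetic and only illustrate the fitting step, not EBRF or bench-scale gamma data.

```python
import numpy as np

# Synthetic dose-concentration data (illustrative, roughly following k = 0.5 kGy^-1)
dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # absorbed dose, kGy
conc = np.array([100.0, 61.0, 37.0, 13.5, 1.9])     # contaminant concentration, ug/L

# Fit ln(C) = ln(C0) - k*D by least squares; k is the dose constant (1/kGy)
slope, intercept = np.polyfit(dose, np.log(conc), 1)
k, c0 = -slope, np.exp(intercept)
print(f"dose constant k = {k:.2f} kGy^-1, fitted C0 = {c0:.1f} ug/L")

# Dose required for 99% removal under this exponential model
d99 = np.log(100.0) / k
print(f"dose for 99% removal: {d99:.1f} kGy")
```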
Cruz-Motta, Juan José; Miloslavich, Patricia; Palomo, Gabriela; Iken, Katrin; Konar, Brenda; Pohle, Gerhard; Trott, Tom; Benedetti-Cecchi, Lisandro; Herrera, César; Hernández, Alejandra; Sardi, Adriana; Bueno, Andrea; Castillo, Julio; Klein, Eduardo; Guerra-Castro, Edlin; Gobin, Judith; Gómez, Diana Isabel; Riosmena-Rodríguez, Rafael; Mead, Angela; Bigatti, Gregorio; Knowlton, Ann; Shirayama, Yoshihisa
2010-01-01
Assemblages associated with intertidal rocky shores were examined for large-scale distribution patterns with specific emphasis on identifying latitudinal trends of species richness and taxonomic distinctiveness. Seventy-two sites distributed around the globe were evaluated following the standardized sampling protocol of the Census of Marine Life NaGISA project (www.nagisa.coml.org). There were no clear patterns of standardized estimators of species richness along latitudinal gradients or among Large Marine Ecosystems (LMEs); however, a strong latitudinal gradient in taxonomic composition (i.e., proportion of different taxonomic groups in a given sample) was observed. Environmental variables related to natural influences were strongly related to the distribution patterns of the assemblages on the LME scale, particularly photoperiod, sea surface temperature (SST) and rainfall. In contrast, no environmental variables directly associated with human influences (with the exception of the inorganic pollution index) were related to assemblage patterns among LMEs. Correlations of the natural assemblages with either latitudinal gradients or environmental variables were equally strong, suggesting that neither neutral models nor models based solely on environmental variables sufficiently explain spatial variation of these assemblages at a global scale. Despite the data shortcomings in this study (e.g., unbalanced sample distribution), we show the importance of generating global biological databases for use in large-scale diversity comparisons of rocky intertidal assemblages, to stimulate continued sampling and analyses. PMID:21179546
Spatial scaling of bacterial community diversity at shallow hydrothermal vents: a global comparison
NASA Astrophysics Data System (ADS)
Pop Ristova, P.; Hassenrueck, C.; Molari, M.; Fink, A.; Bühring, S. I.
2016-02-01
Marine shallow hydrothermal vents are extreme environments, often characterized by discharge of fluids with e.g. high temperatures, low pH, and laden with elements toxic to higher organisms. They occur at continental margins around the world's oceans, but represent fragmented, isolated habitats of locally small areal coverage. Microorganisms contribute the main biomass at shallow hydrothermal vent ecosystems and build the basis of the food chain by autotrophic fixation of carbon both via chemosynthesis and photosynthesis, occurring simultaneously. Despite their importance and unique capacity to adapt to these extreme environments, little is known about the spatial scales on which the alpha- and beta-diversity of microbial communities vary at shallow vents, and how the geochemical habitat heterogeneity influences shallow vent biodiversity. Here for the first time we investigated the spatial scaling of microbial biodiversity patterns and their interconnectivity at geochemically diverse shallow vents on a global scale. This study presents data on the comparison of bacterial community structures on large (> 1000 km) and small (0.1 - 100 m) spatial scales as derived from ARISA and Illumina sequencing. Despite the fragmented global distribution of shallow hydrothermal vents, similarity of vent bacterial communities decreased with geographic distance, confirming the ubiquity of distance-decay relationship. Moreover, at all investigated vents, pH was the main factor locally structuring these communities, while temperature influenced both the alpha- and beta-diversity.
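A hedged sketch of the distance-decay analysis mentioned above: pairwise Jaccard similarities between presence/absence community profiles are regressed against geographic distance. The site coordinates and OTU tables below are random placeholders, and the ARISA/Illumina processing steps are omitted entirely.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(10, 2))     # placeholder site positions (km)
otus = rng.random((10, 200)) < 0.3              # placeholder presence/absence, 200 OTUs

def jaccard(a, b):
    """Jaccard similarity between two presence/absence vectors."""
    union = np.sum(a | b)
    return np.sum(a & b) / union if union else 0.0

dists, sims = [], []
for i, j in combinations(range(len(coords)), 2):
    dists.append(np.linalg.norm(coords[i] - coords[j]))
    sims.append(jaccard(otus[i], otus[j]))

# Classic distance-decay fit: ln(similarity) = intercept - slope * distance
slope, intercept = np.polyfit(dists, np.log(np.clip(sims, 1e-6, None)), 1)
print(f"decay slope per km: {slope:.2e}")
```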
Large-Scale Spatial Distribution Patterns of Gastropod Assemblages in Rocky Shores
Miloslavich, Patricia; Cruz-Motta, Juan José; Klein, Eduardo; Iken, Katrin; Weinberger, Vanessa; Konar, Brenda; Trott, Tom; Pohle, Gerhard; Bigatti, Gregorio; Benedetti-Cecchi, Lisandro; Shirayama, Yoshihisa; Mead, Angela; Palomo, Gabriela; Ortiz, Manuel; Gobin, Judith; Sardi, Adriana; Díaz, Juan Manuel; Knowlton, Ann; Wong, Melisa; Peralta, Ana C.
2013-01-01
Gastropod assemblages from nearshore rocky habitats were studied over large spatial scales to (1) describe broad-scale patterns in assemblage composition, including patterns by feeding modes, (2) identify latitudinal pattern of biodiversity, i.e., richness and abundance of gastropods and/or regional hotspots, and (3) identify potential environmental and anthropogenic drivers of these assemblages. Gastropods were sampled from 45 sites distributed within 12 Large Marine Ecosystem regions (LME) following the NaGISA (Natural Geography in Shore Areas) standard protocol (www.nagisa.coml.org). A total of 393 gastropod taxa from 87 families were collected. Eight of these families (9.2%) appeared in four or more different LMEs. Among these, the Littorinidae was the most widely distributed (8 LMEs) followed by the Trochidae and the Columbellidae (6 LMEs). In all regions, assemblages were dominated by few species, the most diverse and abundant of which were herbivores. No latitudinal gradients were evident in relation to species richness or densities among sampling sites. Highest diversity was found in the Mediterranean and in the Gulf of Alaska, while highest densities were found at different latitudes and represented by few species within one genus (e.g. Afrolittorina in the Agulhas Current, Littorina in the Scotian Shelf, and Lacuna in the Gulf of Alaska). No significant correlation was found between species composition and environmental variables (r≤0.355, p>0.05). Contributing variables to this low correlation included invasive species, inorganic pollution, SST anomalies, and chlorophyll-a anomalies. Despite data limitations in this study which restrict conclusions in a global context, this work represents the first effort to sample gastropod biodiversity on rocky shores using a standardized protocol across a wide scale. Our results will generate more work to build global databases allowing for large-scale diversity comparisons of rocky intertidal assemblages. PMID:23967204
ERIC Educational Resources Information Center
Cheng, May Hung May; Wan, Zhi Hong
2016-01-01
Chinese students' excellent science performance in large-scale international comparisons contradicts the stereotype of the Chinese non-productive classroom learning environment and learners. Most of the existing explanations of this paradox are provided from the perspective of teaching and learning in a general sense, but little work can be found…
ERIC Educational Resources Information Center
Ramnarain, Umesh Dewnarain; Chanetsa, Tarisai
2016-01-01
This article reports on an analysis and comparison of three South African Grade 9 (13-14 years) Natural Sciences textbooks for the representation of nature of science (NOS). The analysis was framed by an analytical tool developed and validated by Abd-El-Khalick and a team of researchers in a large-scale study on the high school textbooks in the…
Fire tests for airplane interior materials
NASA Technical Reports Server (NTRS)
Tustin, E. A.
1980-01-01
Large-scale, simulated fire tests of aircraft interior materials were carried out in a salvaged airliner fuselage. Two "design" fire sources were selected: Jet A fuel ignited in the fuselage midsection, and a trash bag fire. Comparisons with six established laboratory fire tests show that some laboratory tests can rank materials according to heat and smoke production, but existing tests do not characterize toxic gas emissions accurately. The report includes test parameters and test details.
ERIC Educational Resources Information Center
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua
2012-01-01
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
2009-02-01
One plasma-derived AT product is Thrombate, produced by Bayer. Recombinant AT (rhAT) is made on a large scale in the milk of transgenic goats and is ... infusions of rhAT to increase AT levels to 200 and 500% of normal, followed by infusions of endotoxin. AT dose-dependently decreased tissue factor ... injury. REFERENCES: 1. Edmunds T, Van Patten SM, Pollock J, et al. Transgenically produced human antithrombin: structural and functional comparison to...
Comparison of the Size of ADF Aircrew and US Army Personnel
2013-09-01
ABSTRACT: Most aircraft that the Australian Defence Force (ADF) acquire are designed using anthropometric data from overseas military populations, and for many acquisitions the aircraft's design has been guided by United States (US) military anthropometric data. The most recent large-scale survey of a US military population for which the data is...
Measuring posttraumatic stress following childbirth: a critical evaluation of instruments.
Stramrood, Claire A I; Huis In 't Veld, Elisabeth M J; Van Pampus, Maria G; Berger, Leonard W A R; Vingerhoets, Ad J J M; Schultz, Willibrord C M Weijmar; Van den Berg, Paul P; Van Sonderen, Eric L P; Paarlberg, K Marieke
2010-03-01
To evaluate instruments used to assess posttraumatic stress disorder (PTSD) following childbirth with both quantitative (reliability analysis and factor analysis) and qualitative (comparison of operationalization) techniques. An unselected population of 428 women completed the Traumatic Event Scale-B (TES-B) and the PTSD Symptom Scale-Self Report (PSS-SR) 2-6 months after delivery. Assessment of internal consistency yielded similar results for the TES-B and PSS-SR (Cronbach's alpha = 0.87 and 0.82, respectively). Factor analysis revealed two rather than three DSM-IV symptom categories for both instruments: childbirth-related factors (re-experiencing/avoidance) and symptoms of depression and anxiety (numbing/hyperarousal). Although the TES-B and the PSS-SR sum scores show a strong relationship (Spearman's rho = 0.78), agreement between the instruments on the identification of PTSD cases is low (kappa = 0.24); the discrepancy between TES-B and PSS-SR is largely due to differences in the instructions to respondents, formulation of items, answer categories, and cut-off values. Large operationalization differences between TES-B and PSS-SR have thus been identified, i.e., in the formulation of questions, answer categories, cut-off values, and instructions to respondents. Comparisons between studies using different instruments for measuring PTSD following childbirth should be made with the utmost caution.
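For readers unfamiliar with the two statistics that drive these conclusions, the sketch below computes Cronbach's alpha for an item-score matrix and Cohen's kappa for agreement between two binary case classifications; the function names and any inputs are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of scale totals
    return k / (k - 1) * (1.0 - item_var / total_var)

def cohens_kappa(a, b):
    """a, b: binary case/non-case classifications from two instruments."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                                            # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())      # chance agreement
    return (po - pe) / (1.0 - pe)
```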
NASA Astrophysics Data System (ADS)
Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.
2003-12-01
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank-order and paired-comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack of fit.
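A minimal sketch of fitting the Bradley-Terry model from a wins matrix using the classic minorization-maximization (Zermelo) iteration is shown below; the toy wins matrix is an assumption for illustration, and the Bradley-Terry-Mallows ranking model and likelihood-ratio confidence intervals used in the study are not reproduced here.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry worth parameters from a wins matrix.

    wins[i, j] = number of times object i was preferred over object j.
    Returns worths p (summing to 1); log(p) provides an interval-type scale.
    Uses the minorization-maximization (Zermelo) iteration."""
    wins = np.asarray(wins, float)
    n = wins + wins.T                      # comparisons per pair
    w = wins.sum(axis=1)                   # total wins per object
    p = np.ones(len(w))
    for _ in range(n_iter):
        denom = n / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = w / denom.sum(axis=1)
        p /= p.sum()
    return p

# Toy example: three images, image 0 preferred most often
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])
print(bradley_terry(wins))
```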
Strecker, Angela L; Casselman, John M; Fortin, Marie-Josée; Jackson, Donald A; Ridgway, Mark S; Abrams, Peter A; Shuter, Brian J
2011-07-01
Species present in communities are affected by the prevailing environmental conditions, and the traits that these species display may be sensitive indicators of community responses to environmental change. However, interpretation of community responses may be confounded by environmental variation at different spatial scales. Using a hierarchical approach, we assessed the spatial and temporal variation of traits in coastal fish communities in Lake Huron over a 5-year time period (2001-2005) in response to biotic and abiotic environmental factors. The association of environmental and spatial variables with trophic, life-history, and thermal traits at two spatial scales (regional basin-scale, local site-scale) was quantified using multivariate statistics and variation partitioning. We defined these two scales (regional, local) on which to measure variation and then applied this measurement framework identically in all 5 study years. With this framework, we found that there was no change in the spatial scales of fish community traits over the course of the study, although there were small inter-annual shifts in the importance of regional basin- and local site-scale variables in determining community trait composition (e.g., life-history, trophic, and thermal). The overriding effects of regional-scale variables may be related to inter-annual variation in average summer temperature. Additionally, drivers of fish community traits were highly variable among study years, with some years dominated by environmental variation and others dominated by spatially structured variation. The influence of spatial factors on trait composition was dynamic, which suggests that spatial patterns in fish communities over large landscapes are transient. Air temperature and vegetation were significant variables in most years, underscoring the importance of future climate change and shoreline development as drivers of fish community structure. Overall, a trait-based hierarchical framework may be a useful conservation tool, as it highlights the multi-scaled interactive effect of variables over a large landscape.
Kyriacou, Demetrios N; Dobrez, Debra; Parada, Jorge P; Steinberg, Justin M; Kahn, Adam; Bennett, Charles L; Schmitt, Brian P
2012-09-01
Rapid public health response to a large-scale anthrax attack would reduce overall morbidity and mortality. However, there is uncertainty about the optimal cost-effective response strategy based on timing of intervention, public health resources, and critical care facilities. We conducted a decision analytic study to compare response strategies to a theoretical large-scale anthrax attack on the Chicago metropolitan area beginning either Day 2 or Day 5 after the attack. These strategies correspond to the policy options set forth by the Anthrax Modeling Working Group for population-wide responses to a large-scale anthrax attack: (1) postattack antibiotic prophylaxis, (2) postattack antibiotic prophylaxis and vaccination, (3) preattack vaccination with postattack antibiotic prophylaxis, and (4) preattack vaccination with postattack antibiotic prophylaxis and vaccination. Outcomes were measured in costs, lives saved, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs). We estimated that postattack antibiotic prophylaxis of all 1,390,000 anthrax-exposed people beginning on Day 2 after attack would result in 205,835 infected victims, 35,049 fulminant victims, and 28,612 deaths. Only 6,437 (18.5%) of the fulminant victims could be saved with the existing critical care facilities in the Chicago metropolitan area. Mortality would increase to 69,136 if the response strategy began on Day 5. Including postattack vaccination with antibiotic prophylaxis of all exposed people reduces mortality and is cost-effective for both Day 2 (ICER=$182/QALY) and Day 5 (ICER=$1,088/QALY) response strategies. Increasing ICU bed availability significantly reduces mortality for all response strategies. We conclude that postattack antibiotic prophylaxis and vaccination of all exposed people is the optimal cost-effective response strategy for a large-scale anthrax attack. Our findings support the US government's plan to provide antibiotic prophylaxis and vaccination for all exposed people within 48 hours of the recognition of a large-scale anthrax attack. Future policies should consider expanding critical care capacity to allow for the rescue of more victims.
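The ranking of strategies above rests on the incremental cost-effectiveness ratio; the snippet below only shows the ratio itself, with placeholder costs and QALYs that are not the study's inputs.

```python
# Hedged sketch of the incremental cost-effectiveness ratio (ICER); all numbers
# below are placeholders, not values from the decision-analytic model.
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """ICER = incremental cost / incremental QALYs of a strategy
    relative to the next-less-effective comparator."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Strategy adding vaccination to antibiotic prophylaxis (illustrative values)
print(f"ICER = ${icer(2.1e9, 1.9e6, 2.0e9, 1.4e6):,.0f} per QALY gained")
```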
Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise
NASA Astrophysics Data System (ADS)
Kocheemoolayil, Joseph; Lele, Sanjiva
2014-11-01
Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.
The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum
NASA Astrophysics Data System (ADS)
Frith, W. J.; Outram, P. J.; Shanks, T.
2005-06-01
We present new evidence for a large deficiency in the local galaxy distribution situated in the ~4000 deg2 APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K bands indicate a deficiency in the local galaxy distribution of ~30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the correlation function observed in the 2dFGRS or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l<30) ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l<100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z ≈ 0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a 'fair sample' of the Universe. Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.
A Discretization Algorithm for Meteorological Data and its Parallelization Based on Hadoop
NASA Astrophysics Data System (ADS)
Liu, Chao; Jin, Wen; Yu, Yuting; Qiu, Taorong; Bai, Xiaoming; Zou, Shuilong
2017-10-01
Meteorological observation data are voluminous, have many attributes whose values are continuous, and contain correlations among elements that applications need to exploit. This paper addresses how to better discretize large meteorological datasets so that the knowledge hidden in them can be mined more effectively, and investigates improvements to discretization algorithms for large-scale data. A discretization algorithm based on information entropy and the inconsistency of meteorological attributes is proposed, and the algorithm is parallelized on the Hadoop platform. Finally, comparison tests validate the effectiveness of the proposed algorithm for discretizing large meteorological data.
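As a hedged illustration of the entropy-based core of such an algorithm, the sketch below finds the single cut point on a continuous attribute that minimizes the weighted class entropy of the two resulting bins; the recursion over bins, the inconsistency criterion, and the Hadoop parallelization described in the paper are omitted, and the toy data are assumptions.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """Return the cut point on a continuous attribute that minimizes the
    weighted class entropy of the two resulting bins (the core step of
    entropy-based discretization; recursion and stopping rules omitted)."""
    order = np.argsort(values)
    v, y = np.asarray(values)[order], np.asarray(labels)[order]
    best, best_e = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        e = (i * entropy(y[:i]) + (len(v) - i) * entropy(y[i:])) / len(v)
        if e < best_e:
            best, best_e = (v[i - 1] + v[i]) / 2.0, e
    return best

# Toy example: temperature values with a binary class label (e.g., rain / no rain)
temp = [12.1, 14.0, 15.5, 18.2, 21.0, 23.4, 25.1, 27.8]
rain = [1, 1, 1, 1, 0, 0, 0, 0]
print(best_cut(temp, rain))   # cut near 19.6
```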
Allometry indicates giant eyes of giant squid are not exceptional.
Schmitz, Lars; Motani, Ryosuke; Oufiero, Christopher E; Martin, Christopher H; McGee, Matthew D; Gamarra, Ashlee R; Lee, Johanna J; Wainwright, Peter C
2013-02-18
The eyes of giant and colossal squid are among the largest eyes in the history of life. It was recently proposed that sperm whale predation is the main driver of eye size evolution in giant squid, on the basis of an optical model that suggested optimal performance in detecting large luminous visual targets such as whales in the deep sea. However, it is poorly understood how the eye size of giant and colossal squid compares to that of other aquatic organisms when scaling effects are considered. We performed a large-scale comparative study that included 87 squid species and 237 species of acanthomorph fish. While squid have larger eyes than most acanthomorphs, a comparison of relative eye size among squid suggests that giant and colossal squid do not have unusually large eyes. After revising constants used in a previous model we found that large eyes perform equally well in detecting point targets and large luminous targets in the deep sea. The eyes of giant and colossal squid do not appear exceptionally large when allometric effects are considered. It is probable that the giant eyes of giant squid result from a phylogenetically conserved developmental pattern manifested in very large animals. Whatever the cause of large eyes, they appear to have several advantages for vision in the reduced light of the deep mesopelagic zone.
Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals
Schmittfull, Marcel; Vlah, Zvonimir
2016-11-28
Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
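The core trick (replacing convolutions by pointwise products after a change of space) can be illustrated with a one-dimensional toy example; this is only a sketch of the general FFT idea, not the paper's reduction of three-dimensional two-loop integrals to radial, spherical-Bessel-type integrals.

```python
# 1D illustration of avoiding a convolution by switching spaces: a circular
# convolution in k-space becomes a pointwise product after an FFT.
import numpy as np

n = 256
k = np.linspace(-5.0, 5.0, n)
P = np.exp(-0.5 * (k / 1.5) ** 2)          # toy 1D "power spectrum"

# Direct circular convolution C(k_i) = sum_j P(k_j) P(k_{i-j}): O(n^2)
C_direct = np.array([sum(P[j] * P[(i - j) % n] for j in range(n)) for i in range(n)])

# Same result via the convolution theorem: transform, square, transform back: O(n log n)
C_fft = np.fft.ifft(np.fft.fft(P) ** 2).real

print(np.allclose(C_direct, C_fft))        # True
```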
NASA Astrophysics Data System (ADS)
Wheeler, C. E.; Mitchard, E. T.; Lewis, S. L.
2017-12-01
Restoring degraded and deforested tropical lands to sequester carbon is widely considered to offer substantial climate change mitigation opportunities, if conducted over large spatial scales. Despite this assertion, explicit estimates of how much carbon could be sequestered because of large-scale restoration are rare and have large uncertainties. This is principally due to the many different characteristics of land available for restoration, and different potential restoration activities, which together cause very different rates of carbon sequestration. For three restoration pathways (natural regeneration of degraded and secondary forest, timber plantations, and agroforestry), we estimate carbon sequestration rates from the published literature. Then, based on tropical restoration commitments made under the Bonn Challenge and using carbon density maps, these carbon sequestration rates were used to predict total pan-tropical carbon sequestration to 2100. Restoration of degraded or secondary forest via natural regeneration offers the greatest carbon sequestration potential, considerably exceeding the carbon captured by either timber plantations or agroforestry. This is predominantly due to naturally regenerating forests representing a more permanent store of carbon in comparison to timber plantations and agroforestry land-use options, which, due to their rotational nature, result in the sequential return of carbon to the atmosphere. If the Bonn Challenge is to achieve its ambition of providing substantial climate change mitigation from restoration it must incorporate large areas of natural regeneration back to an intact forest state, otherwise it stands to be a missed opportunity in helping meet the Paris climate change goals.
Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...
2015-06-19
Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- and large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
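A minimal sketch of the generic target-decoy idea behind protein-level FDR control is shown below; ProteinInferencer's actual scoring model is more elaborate, and the scores and decoy flags here are invented.

```python
# Sketch of a target-decoy FDR estimate at the protein level: walk down the
# score-sorted list, track the decoy/target ratio, then convert to q-values.
def protein_fdr(proteins):
    """proteins: list of (score, is_decoy), sorted by descending score.
    Returns q-values (minimal FDR at which each protein would be accepted)."""
    fdrs, decoys = [], 0
    for i, (score, is_decoy) in enumerate(proteins, start=1):
        decoys += is_decoy
        targets = i - decoys
        fdrs.append(decoys / max(targets, 1))
    # enforce monotonicity from the bottom of the list upward
    qvals, running_min = [], float("inf")
    for f in reversed(fdrs):
        running_min = min(running_min, f)
        qvals.append(running_min)
    return list(reversed(qvals))

# Invented example: three target proteins and two decoys.
hits = [(9.1, False), (8.7, False), (7.9, True), (7.5, False), (6.2, True)]
print(protein_fdr(hits))   # -> [0.0, 0.0, 0.333..., 0.333..., 0.666...]
```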
RESOLVING THE ROTATION MEASURE OF THE M87 JET ON KILOPARSEC SCALES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algaba, J. C.; Asada, K.; Nakamura, M., E-mail: algaba@asiaa.sinica.edu.tw
2016-06-01
We investigate the distribution of Faraday rotation measure (RM) in the M87 jet at arcsecond scales by using archival polarimetric Very Large Array data at 8, 15, 22 and 43 GHz. We resolve the structure of the RM in several knots along the jet for the first time. We derive the power spectrum in the arcsecond-scale jet and find indications that the RM cannot be associated with a turbulent magnetic field with a 3D Kolmogorov spectrum. Our analysis indicates that the RM probed on jet scales has a significant contribution from a Faraday screen associated with the vicinity of the jet, in contrast with that on kiloparsec scales, typically assumed to be disconnected from the jet. Comparison with previous RM analyses suggests that the magnetic fields giving rise to the RMs observed on jet scales have different properties and are much less turbulent than those observed in the lobes.
NASA Technical Reports Server (NTRS)
Mcginnies, W. G. (Principal Investigator); Conn, J. S.; Haase, E. F.; Lepley, L. K.; Musick, H. B.; Foster, K. E.
1975-01-01
The author has identified the following significant results. Research results include a method for determining the reflectivities of natural areas from ERTS data taking into account sun angle and atmospheric effects on the radiance seen by the satellite sensor. Ground truth spectral signature data for various types of scenes, including ground with and without annuals, and various shrubs were collected. Large areas of varnished desert pavement are visible and mappable on ERTS and high altitude aircraft imagery. A large scale and a small scale vegetation pattern were found to be correlated with presence of desert pavement. A comparison of radiometric data with video recordings shows quantitatively that for most areas of desert vegetation, soils are the most influential factor in determining the signature of a scene. Additive and subtractive image processing techniques were applied in the dark room to enhance vegetational aspects of ERTS.
Mother Nature versus human nature: public compliance with evacuation and quarantine.
Manuell, Mary-Elise; Cukor, Jeffrey
2011-04-01
Effectively controlling the spread of contagious illnesses has become a critical focus of disaster planning. It is likely that quarantine will be a key part of the overall public health strategy utilised during a pandemic, an act of bioterrorism or other emergencies involving contagious agents. While the United States lacks recent experience of large-scale quarantines, it has considerable accumulated experience of large-scale evacuations. Risk perception, life circumstance, work-related issues, and the opinions of influential family, friends and credible public spokespersons all play a role in determining compliance with an evacuation order. Although the comparison is not reported elsewhere to our knowledge, this review of the principal factors affecting compliance with evacuations demonstrates many similarities with those likely to occur during a quarantine. Accurate identification and understanding of barriers to compliance allows for improved planning to protect the public more effectively. © 2011 The Author(s). Disasters © Overseas Development Institute, 2011.
Exhaustive identification of steady state cycles in large stoichiometric networks
Wright, Jeremiah; Wagner, Andreas
2008-01-01
Background: Identifying cyclic pathways in chemical reaction networks is important, because such cycles may indicate in silico violation of energy conservation, or the existence of feedback in vivo. Unfortunately, our ability to identify cycles in stoichiometric networks, such as signal transduction and genome-scale metabolic networks, has been hampered by the computational complexity of the methods currently used. Results: We describe a new algorithm for the identification of cycles in stoichiometric networks, and we compare its performance to two others by exhaustively identifying the cycles contained in the genome-scale metabolic networks of H. pylori, M. barkeri, E. coli, and S. cerevisiae. Our algorithm can substantially decrease both the execution time and maximum memory usage in comparison to the two previous algorithms. Conclusion: The algorithm we describe improves our ability to study large, real-world, biochemical reaction networks, although additional methodological improvements are desirable. PMID:18616835
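A toy illustration of what such a cycle looks like algebraically: a steady-state cycle corresponds to a null-space vector of the stoichiometric matrix whose support contains only internal (non-exchange) reactions. The sketch below is a conceptual aid only and does not reproduce the exhaustive enumeration algorithm of the paper.

```python
# Steady-state cycles of a toy stoichiometric network show up as null-space
# vectors of S that carry flux only through internal (non-exchange) reactions.
import numpy as np
from scipy.linalg import null_space

# Rows: metabolites A, B, C. Columns: reactions
#   r1: A -> B,  r2: B -> C,  r3: C -> A   (together an internal cycle)
#   r4: exchange reaction  -> A
S = np.array([
    [-1,  0,  1,  1],   # A
    [ 1, -1,  0,  0],   # B
    [ 0,  1, -1,  0],   # C
])

basis = null_space(S)                  # columns span all steady-state flux vectors
for v in basis.T:
    v = v / np.max(np.abs(v))
    internal_only = abs(v[3]) < 1e-9   # r4 is the only exchange reaction
    print(np.round(v, 3), "internal cycle" if internal_only else "through-flux")
# For this network the single null-space vector is (1, 1, 1, 0): the r1-r2-r3 cycle.
```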
Wiley, Joshua S; Shelley, Jacob T; Cooks, R Graham
2013-07-16
We describe a handheld, wireless low-temperature plasma (LTP) ambient ionization source and its performance on a benchtop and a miniature mass spectrometer. The source, which is inexpensive to build and operate, is battery-powered and utilizes miniature helium cylinders or air as the discharge gas. Comparison of a conventional, large-scale LTP source against the handheld LTP source, which uses less helium and power than the large-scale version, revealed that the handheld source had similar or slightly better analytical performance. Another advantage of the handheld LTP source is the ability to quickly interrogate a gaseous, liquid, or solid sample without requiring any setup time. A small, 7.4-V Li-polymer battery is able to sustain plasma for 2 h continuously, while the miniature helium cylinder supplies gas flow for approximately 8 continuous hours. Long-distance ion transfer was achieved for distances up to 1 m.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified framework of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high-performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective inter-processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
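One of the ingredients mentioned above (an iterative solver accelerated by a factorization-based preconditioner) can be sketched in a few lines; this is a serial toy example with an assumed tridiagonal "stiffness" matrix, not the parallel domain-decomposition software described in the abstract.

```python
# Sketch of a hybrid direct/iterative solve: an incomplete LU factorization
# (the "direct" piece) used as a preconditioner for conjugate gradient.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # toy SPD matrix
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                        # incomplete factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)          # wrap it as a preconditioner
x, info = spla.cg(A, b, M=M)                               # preconditioned conjugate gradient
print(info, np.linalg.norm(A @ x - b))                     # 0 means converged
```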
The global gridded crop model intercomparison: Data and modeling protocols for Phase 1 (v1.0)
Elliott, J.; Müller, C.; Deryng, D.; ...
2015-02-11
We present protocols and input data for Phase 1 of the Global Gridded Crop Model Intercomparison, a project of the Agricultural Model Intercomparison and Improvement Project (AgMIP). The project consists of global simulations of yields, phenologies, and many land-surface fluxes by 12–15 modeling groups for many crops, climate forcing data sets, and scenarios over the historical period from 1948 to 2012. The primary outcomes of the project include (1) a detailed comparison of the major differences and similarities among global models commonly used for large-scale climate impact assessment, (2) an evaluation of model and ensemble hindcasting skill, (3) quantification of key uncertainties from climate input data, model choice, and other sources, and (4) a multi-model analysis of the agricultural impacts of large-scale climate extremes from the historical record.
Experimental Investigation of a Large-Scale Low-Boom Inlet Concept
NASA Technical Reports Server (NTRS)
Hirt, Stefanie M.; Chima, Rodrick V.; Vyas, Manan A.; Wayman, Thomas R.; Conners, Timothy R.; Reger, Robert W.
2011-01-01
A large-scale low-boom inlet concept was tested in the NASA Glenn Research Center 8- x 6- foot Supersonic Wind Tunnel. The purpose of this test was to assess inlet performance, stability and operability at various Mach numbers and angles of attack. During this effort, two models were tested: a dual stream inlet designed to mimic potential aircraft flight hardware integrating a high-flow bypass stream; and a single stream inlet designed to study a configuration with a zero-degree external cowl angle and to permit surface visualization of the vortex generator flow on the internal centerbody surface. During the course of the test, the low-boom inlet concept was demonstrated to have high recovery, excellent buzz margin, and high operability. This paper will provide an overview of the setup, show a brief comparison of the dual stream and single stream inlet results, and examine the dual stream inlet characteristics.
Molecular clouds and the large-scale structure of the galaxy
NASA Technical Reports Server (NTRS)
Thaddeus, Patrick; Stacy, J. Gregory
1990-01-01
The application of molecular radio astronomy to the study of the large-scale structure of the Galaxy is reviewed, and the distribution and characteristic properties of the Galactic population of Giant Molecular Clouds (GMCs), derived primarily from analysis of the Columbia CO survey, and their relation to tracers of Population I and major spiral features are described. The properties of the local molecular interstellar gas are summarized. The CO observing programs currently underway with the Center for Astrophysics 1.2 m radio telescope are described, with an emphasis on projects relevant to future comparison with high-energy gamma-ray observations. Several areas are discussed in which high-energy gamma-ray observations by the EGRET (Energetic Gamma-Ray Experiment Telescope) experiment aboard the Gamma Ray Observatory will directly complement radio studies of the Milky Way, with the prospect of significant progress on fundamental issues related to the structure and content of the Galaxy.
Satellite measurements of large-scale air pollution - Methods
NASA Technical Reports Server (NTRS)
Kaufman, Yoram J.; Ferrare, Richard A.; Fraser, Robert S.
1990-01-01
A technique for deriving large-scale pollution parameters from NIR and visible satellite remote-sensing images obtained over land or water is described and demonstrated on AVHRR images. The method is based on comparison of the upward radiances on clear and hazy days and permits simultaneous determination of aerosol optical thickness with error Delta tau(a) = 0.08-0.15, particle size with error + or - 100-200 nm, and single-scattering albedo with error + or - 0.03 (for albedos near 1), all assuming accurate and stable satellite calibration and stable surface reflectance between the clear and hazy days. In the analysis of AVHRR images of smoke from a forest fire, good agreement was obtained between satellite and ground-based (sun-photometer) measurements of aerosol optical thickness, but the satellite particle sizes were systematically greater than those measured from the ground. The AVHRR single-scattering albedo agreed well with a Landsat albedo for the same smoke.
Hausmann, Axel; Cancian de Araujo, Bruno; Sutrisno, Hari; Peggie, Djunijanti; Schmidt, Stefan
2017-01-01
Here we present a general collecting and preparation protocol for DNA barcoding of Lepidoptera as part of large-scale rapid biodiversity assessment projects, and a comparison with alternative preserving and vouchering methods. About 98% of the sequenced specimens processed using the present collecting and preparation protocol yielded sequences with more than 500 base pairs. The study is based on the first outcomes of the Indonesian Biodiversity Discovery and Information System (IndoBioSys). IndoBioSys is a German-Indonesian research project that is conducted by the Museum für Naturkunde in Berlin and the Zoologische Staatssammlung München, in close cooperation with the Research Center for Biology – Indonesian Institute of Sciences (RCB-LIPI, Bogor). PMID:29134041
Pathways for Off-site Corporate PV Procurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny S
Through July 2017, corporate customers contracted for more than 2,300 MW of utility-scale solar. This paper examines the benefits, challenges, and outlooks for large-scale off-site solar purchasing through four pathways: power purchase agreements, retail choice, utility partnerships (green tariffs and bilateral contracts with utilities), and by becoming a licensed wholesale seller of electricity. Each pathway differs based on where in the United States it is available, the value provided to a corporate off-taker, and the ease of implementation. The paper concludes with a discussion of future pathway comparison, noting that to deploy more corporate off-site solar, new procurement pathways are needed.
Shetty, Dinesh A.; Frankel, Steven H.
2013-01-01
The physical space version of the stretched vortex subgrid scale model [Phys. Fluids 12, 1810 (2000)] is tested in large eddy simulations (LES) of the turbulent lid driven cubic cavity flow. LES is carried out using a higher order finite-difference method [J. Comput. Phys. 229, 8802 (2010)]. The effects of different vortex orientation models and subgrid turbulence spectrums are assessed through comparisons of the LES predictions against direct numerical simulations (DNS) [Phys. Fluids 12, 1363 (2000)]. Three Reynolds numbers 12000, 18000, and 22000 are studied. Good agreement with the DNS data for the mean and fluctuating quantities is observed. PMID:24187423
Photoionization of the valence shells of the neutral tungsten atom
NASA Astrophysics Data System (ADS)
Ballance, C. P.; McLaughlin, B. M.
2015-04-01
Results from large-scale theoretical cross section calculations for the total photoionization (PI) of the 4f, 5s, 5p and 6s orbitals of the neutral tungsten atom using the Dirac Coulomb R-matrix approximation (DARC: Dirac-atomic R-matrix codes) are presented. Comparisons are made with previous theoretical methods and prior experimental measurements. In previous experiments a time-resolved dual laser approach was employed for the photo-absorption of metal vapours and photo-absorption measurements on tungsten in a solid, using synchrotron radiation. The lowest ground state level of neutral tungsten is 5p⁶5d⁴6s² ⁵D_J, with J = 0, and requires only a single dipole matrix for PI. To make a meaningful comparison with existing experimental measurements, we statistically average the large-scale theoretical PI cross sections from the levels associated with the ground state 5p⁶5d⁴6s² ⁵D_J (J = 0, 1, 2, 3, 4) levels and the 5d⁵6s ⁷S₃ excited metastable level. As the experiments have a self-evident metastable component in their ground state measurement, averaging over the initial levels allows for a more consistent and realistic comparison to be made. In the wider context, the absence of many detailed electron-impact excitation (EIE) experiments for tungsten and its multi-charged ion stages allows current PI measurements and theory to provide a road-map for future EIE, ionization and di-electronic cross section calculations by identifying the dominant resonance structure and features across an energy range of hundreds of eV.
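The statistical averaging step presumably weights each initial level by its statistical weight (2J + 1), the standard convention; the sketch below illustrates that arithmetic with placeholder cross-section values, not data from the paper.

```python
# Statistically weighted average of level-resolved cross sections, assuming the
# usual g_J = 2J + 1 weights. The sigma_J arrays are placeholders (only a subset
# of the J = 0..4 ground-state levels is shown), not values from the paper.
import numpy as np

energies = np.linspace(20.0, 60.0, 5)              # photon energies (eV), toy grid
sigma_J = {                                        # PI cross sections per level (Mb), hypothetical
    0: np.array([1.0, 2.0, 3.0, 2.5, 2.0]),
    1: np.array([1.1, 2.1, 2.9, 2.4, 1.9]),
    2: np.array([0.9, 1.8, 3.2, 2.6, 2.1]),
}

weights = {J: 2 * J + 1 for J in sigma_J}          # statistical weight g_J = 2J + 1
total_w = sum(weights.values())
sigma_avg = sum(weights[J] * sigma_J[J] for J in sigma_J) / total_w
print(sigma_avg)
```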
Automated UMLS-Based Comparison of Medical Forms
Dugas, Martin; Fritz, Fleur; Krumm, Rainer; Breil, Bernhard
2013-01-01
Medical forms are very heterogeneous: on a European scale there are thousands of data items in several hundred different systems. To enable data exchange for clinical care and research purposes there is a need to develop interoperable documentation systems with harmonized forms for data capture. A prerequisite in this harmonization process is comparison of forms. So far – to our knowledge – an automated method for comparison of medical forms is not available. A form contains a list of data items with corresponding medical concepts. An automatic comparison needs data types, item names and, especially, items with unique concept codes from medical terminologies. The scope of the proposed method is a comparison of these items by comparing their concept codes (coded in UMLS). Each data item is represented by item name, concept code and value domain. Two items are called identical, if item name, concept code and value domain are the same. Two items are called matching, if only concept code and value domain are the same. Two items are called similar, if their concept codes are the same, but the value domains are different. Based on these definitions an open-source implementation for automated comparison of medical forms in ODM format with UMLS-based semantic annotations was developed. It is available as package compareODM from http://cran.r-project.org. To evaluate this method, it was applied to a set of 7 real medical forms with 285 data items from a large public ODM repository with forms for different medical purposes (research, quality management, routine care). Comparison results were visualized with grid images and dendrograms. Automated comparison of semantically annotated medical forms is feasible. Dendrograms allow a view on clustered similar forms. The approach is scalable for a large set of real medical forms. PMID:23861827
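The identical/matching/similar rules stated above translate directly into code; the sketch below is a Python rendering of those definitions (the released tool is the R package compareODM), and the concept code shown is a placeholder, not a verified UMLS CUI.

```python
# Item-level comparison rules from the abstract: identical, matching, similar.
from dataclasses import dataclass

@dataclass
class Item:
    name: str      # item name as shown on the form
    concept: str   # UMLS concept code (placeholder values used below)
    domain: str    # value domain, e.g. data type or code list

def compare(a: Item, b: Item) -> str:
    if a.concept != b.concept:
        return "different"
    if a.domain != b.domain:
        return "similar"     # same concept, different value domain
    if a.name != b.name:
        return "matching"    # same concept and value domain, different name
    return "identical"       # name, concept and value domain all agree

# Example: the same concept captured under two different item names.
x = Item("Systolic BP", "C0000001", "integer")              # placeholder CUI
y = Item("Blood pressure, systolic", "C0000001", "integer")
print(compare(x, y))   # -> "matching"
```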
Accurate population genetic measurements require cryptic species identification in corals
NASA Astrophysics Data System (ADS)
Sheets, Elizabeth A.; Warner, Patricia A.; Palumbi, Stephen R.
2018-06-01
Correct identification of closely related species is important for reliable measures of gene flow. Incorrectly lumping individuals of different species together has been shown to over- or underestimate population differentiation, but examples highlighting when these different results are observed in empirical datasets are rare. Using 199 single nucleotide polymorphisms, we assigned 768 individuals in the Acropora hyacinthus and A. cytherea morphospecies complexes to each of eight previously identified cryptic genetic species and measured intraspecific genetic differentiation across three geographic scales (within reefs, among reefs within an archipelago, and among Pacific archipelagos). We then compared these calculations to estimated genetic differentiation at each scale with all cryptic genetic species mixed as if we could not tell them apart. At the reef scale, correct genetic species identification yielded lower F_ST estimates and fewer significant comparisons than when species were mixed, raising estimates of short-scale gene flow. In contrast, correct genetic species identification at large spatial scales yielded higher F_ST measurements than mixed-species comparisons, lowering estimates of long-term gene flow among archipelagos. A meta-analysis of published population genetic studies in corals found similar results: F_ST estimates at small spatial scales were lower and significance was found less often in studies that controlled for cryptic species. Our results and these prior datasets controlling for cryptic species suggest that genetic differentiation among local reefs may be lower than what has generally been reported in the literature. Not properly controlling for cryptic species structure can bias population genetic analyses in different directions across spatial scales, and this has important implications for conservation strategies that rely on these estimates.
Experience in using commercial clouds in CMS
NASA Astrophysics Data System (ADS)
Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration
2017-10-01
Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resource provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.
Cummins, Steven; Petticrew, Mark; Higgins, Cassie; Findlay, Anne; Sparks, Leigh
2005-12-01
To assess the effect of a "natural experiment" (the introduction of large-scale food retailing in a deprived Scottish community) on fruit and vegetable consumption, self-reported health, and psychological health. Prospective quasi-experimental design comparing baseline and follow-up data in an "intervention" community with a matched "comparison" community in Glasgow, UK. 412 men and women aged 16 or over for whom follow-up data on fruit and vegetable consumption and GHQ-12 were available. Fruit and vegetable consumption in portions per day, poor self-reported health, and poor psychological health (GHQ-12). Adjusting for age, sex, educational attainment, and employment status, there was no population impact on daily fruit and vegetable consumption, self-reported health, or psychological health. There was some evidence for a net reduction in the prevalence of poor psychological health for residents who directly engaged with the intervention. Government policy has advocated using large-scale food retailing as a social intervention to improve diet and health in poor communities. In contrast with a previous uncontrolled study, this study did not find evidence for a net intervention effect on fruit and vegetable consumption, although there was evidence for an improvement in psychological health for those who directly engaged with the intervention. Although definitive conclusions about the effect of large-scale retailing on diet and health in deprived communities cannot be drawn from non-randomised controlled study designs, evaluations of the impacts of natural experiments may offer the best opportunity to generate evidence about the health impacts of retail interventions in poor communities.
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data. Sampling is perhaps the only feasible solution to these problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
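SARW itself is not specified in the abstract, but two of the baselines it is compared against (simple random walk and Metropolis-Hastings random walk) are standard and can be sketched as follows; the graph is a synthetic stand-in for an OSN.

```python
# Baseline node samplers: simple random walk (degree-biased) and MHRW
# (asymptotically uniform). Not the paper's SARW method.
import random
import networkx as nx

def random_walk_sample(G, start, n):
    node, sample = start, []
    for _ in range(n):
        sample.append(node)
        node = random.choice(list(G.neighbors(node)))
    return sample

def mhrw_sample(G, start, n):
    """Accept a move to neighbor w with prob min(1, deg(v)/deg(w))."""
    node, sample = start, []
    for _ in range(n):
        sample.append(node)
        cand = random.choice(list(G.neighbors(node)))
        if random.random() <= G.degree(node) / G.degree(cand):
            node = cand
    return sample

G = nx.barabasi_albert_graph(10000, 3, seed=1)   # toy scale-free "OSN"
print(len(set(random_walk_sample(G, 0, 5000))), len(set(mhrw_sample(G, 0, 5000))))
```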
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered as the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fit piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary in terms of the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences of the norms due to large-scale structures in each period.
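One plausible reading of the fluctuation definition is sketched below (3-hour smoothed variability referenced to monthly and yearly running means, combined into a vector norm); the authors' exact normalization may differ, and the time series is synthetic.

```python
# Toy "seesaw" fluctuations and norm for a single parameter (solar wind speed).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = pd.date_range("2010-01-01", periods=105120, freq="5min")       # one year, 5-min cadence
v = pd.Series(400 + 50 * rng.standard_normal(len(t)), index=t)     # synthetic speed (km/s)

sigma3h = v.rolling("3h").mean().rolling("3h").std()    # spread of the 3-hour smoothed signal
local_fluct = sigma3h / v.rolling("30D").mean()         # referenced to the monthly running mean
global_fluct = sigma3h / v.rolling("365D").mean()       # referenced to the yearly running mean

norm = np.hypot(local_fluct, global_fluct)              # length of the (local, global) vector
print(norm.dropna().describe())
```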
Summer circulation in the Mexican tropical Pacific
NASA Astrophysics Data System (ADS)
Trasviña, A.; Barton, E. D.
2008-05-01
The main components of large-scale circulation of the eastern tropical Pacific were identified in the mid 20th century, but the details of the circulation at length scales of 10² km or less, the mesoscale field, are less well known particularly during summer. The winter circulation is characterized by large mesoscale eddies generated by intense cross-shore wind pulses. These eddies propagate offshore to provide an important source of mesoscale variability for the eastern tropical Pacific. The summer circulation has not commanded similar attention, the main reason being that the frequent generation of hurricanes in the area renders in situ observations difficult. Before the experiment presented here, the large-scale summer circulation of the Gulf of Tehuantepec was thought to be dominated by a poleward flow along the coast. A drifter-deployment experiment carried out in June 2000, supported by satellite altimetry and wind data, was designed to characterize this hypothesized Costa Rica Coastal Current. We present a detailed comparison between altimetry-estimated geostrophic and in situ currents estimated from drifters. Contrary to expectation, no evidence of a coherent poleward coastal flow across the gulf was found. During the 10-week period of observations, we documented a recurrent pattern of circulation within 500 km of shore, forced by a combination of local winds and the regional-scale flow. Instead of the Costa Rica Coastal Current, we found a summer eddy field capable of influencing large areas of the eastern tropical Pacific. Even in summer, the cross-isthmus wind jet is capable of inducing eddy formation.
Large-Scale Diversity of Slope Fishes: Pattern Inconsistency between Multiple Diversity Indices
Gaertner, Jean-Claude; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A.; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3′- 45°7′ N; 5°3′W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly-admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness. PMID:23843962
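For readers unfamiliar with how index complementarity is assessed, the sketch below computes three common diversity indices per haul and their rank correlations on synthetic count data; the paper uses 11 indices and real trawl data.

```python
# Three common diversity indices per haul and their pairwise rank correlations.
import numpy as np
from scipy.stats import spearmanr

def richness(counts):
    return np.count_nonzero(counts)

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

rng = np.random.default_rng(42)
hauls = rng.poisson(lam=rng.uniform(0.2, 5.0, size=(30, 25)))   # 30 hauls x 25 species (toy)
table = np.array([[richness(h), shannon(h), simpson(h)] for h in hauls])
rho, _ = spearmanr(table)
print(np.round(rho, 2))   # correlation matrix among the three indices
```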
Herbivory drives large-scale spatial variation in reef fish trophic interactions
Longo, Guilherme O; Ferreira, Carlos Eduardo L; Floeter, Sergio R
2014-01-01
Trophic interactions play a critical role in the structure and function of ecosystems. Given the widespread loss of biodiversity due to anthropogenic activities, understanding how trophic interactions respond to natural gradients (e.g., abiotic conditions, species richness) through large-scale comparisons can provide a broader understanding of their importance in changing ecosystems and support informed conservation actions. We explored large-scale variation in reef fish trophic interactions, encompassing tropical and subtropical reefs with different abiotic conditions and trophic structure of the reef fish community. Reef fish feeding pressure on the benthos was determined by combining bite rates on the substrate and the individual biomass per unit of time and area, using video recordings in three sites between latitudes 17°S and 27°S on the Brazilian Coast. Total feeding pressure decreased 10-fold and the composition of functional groups and species shifted from the northern to the southernmost sites. Both patterns were driven by the decline in the feeding pressure of roving herbivores, particularly scrapers, while the feeding pressure of invertebrate feeders and omnivores remained similar. The differential contribution to the feeding pressure across trophic categories, with roving herbivores being more important in the northernmost and southeastern reefs, determined changes in the intensity and composition of fish feeding pressure on the benthos among sites. It also determined the distribution of trophic interactions across different trophic categories, altering the evenness of interactions. Feeding pressure was more evenly distributed at the southernmost than in the southeastern and northernmost sites, where it was dominated by few herbivores. Species and functional groups that exerted higher feeding pressure than predicted by their biomass were identified as critical for their potential to remove benthic biomass. Fishing pressure was unlikely to have driven the large-scale pattern; however, it affected the contribution of some groups on a local scale (e.g., large-bodied parrotfish), highlighting the need to incorporate critical functions into conservation strategies. PMID:25512851
Nakamura, T. K. M.; Hasegawa, H.; Daughton, William Scott; ...
2017-11-17
Magnetic reconnection is believed to be the main driver to transport solar wind into the Earth's magnetosphere when the magnetopause features a large magnetic shear. However, even when the magnetic shear is too small for spontaneous reconnection, the Kelvin–Helmholtz instability driven by a super-Alfvénic velocity shear is expected to facilitate the transport. Although previous kinetic simulations have demonstrated that the non-linear vortex flows from the Kelvin–Helmholtz instability give rise to vortex-induced reconnection and resulting plasma transport, the system sizes of these simulations were too small to allow the reconnection to evolve much beyond the electron scale as recently observed by the Magnetospheric Multiscale (MMS) spacecraft. Here, based on a large-scale kinetic simulation and its comparison with MMS observations, we show for the first time that ion-scale jets from vortex-induced reconnection rapidly decay through self-generated turbulence, leading to a mass transfer rate nearly one order higher than previous expectations for the Kelvin–Helmholtz instability.
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used, which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
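As an example of the kind of method being benchmarked, a compact TOPSIS implementation is sketched below; the decision matrix, weights, and criterion directions are illustrative only.

```python
# Minimal TOPSIS: rank alternatives by closeness to an ideal solution.
import numpy as np

def topsis(X, weights, benefit):
    """X: alternatives x criteria; weights sum to 1; benefit[j] True if larger is better."""
    X = np.asarray(X, dtype=float)
    N = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = N * weights                              # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient per alternative

# Illustrative matrix: price (cost), plus two benefit criteria, for three options.
X = [[250, 16, 12], [200, 16, 8], [300, 32, 16]]
w = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, True, True])
print(topsis(X, w, benefit))
```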
Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives
NASA Astrophysics Data System (ADS)
Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen
2014-03-01
Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for a forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. Finitely extensible nonlinear elastic in the Peterlin approximation model was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with the direct numerical simulation results. Using the LES database of the FHIT in the Newtonian fluid and the polymer solution flows, the polymer effects on some important parameters such as strain, vorticity, drag reduction, and so forth were studied. By extracting the vortex structures and exploring the flatness factor through a high-order correlation function of velocity derivative and wavelet analysis, it can be found that the small-scale vortex structures and small-scale intermittency in the FHIT are all inhibited due to the existence of the polymers. The extended self-similarity scaling law in the polymer solution flow shows no apparent difference from that in the Newtonian fluid flow at the currently simulated ranges of Reynolds and Weissenberg numbers.
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evaluation. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages although the number of measurements required is more than that for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Bimler, David; Kirkland, John; Pichler, Shaun
2004-02-01
The structure of color perception can be examined by collecting judgments about color dissimilarities. In the procedure used here, stimuli are presented three at a time on a computer monitor and the spontaneous grouping of most-similar stimuli into gestalts provides the dissimilarity comparisons. Analysis with multidimensional scaling allows such judgments to be pooled from a number of observers without obscuring the variations among them. The anomalous perceptions of color-deficient observers produce comparisons that are represented well by a geometric model of compressed individual color spaces, with different forms of deficiency distinguished by different directions of compression. The geometrical model is also capable of accommodating the normal spectrum of variation, so that there is greater variation in compression parameters between tests on normal subjects than in those between repeated tests on individual subjects. The method is sufficiently sensitive and the variations sufficiently large that they are not obscured by the use of a range of monitors, even under somewhat loosely controlled conditions.
QuickEval: a web application for psychometric scaling experiments
NASA Astrophysics Data System (ADS)
Van Ngo, Khai; Storvik, Jehans J.; Dokkeberg, Christopher A.; Farup, Ivar; Pedersen, Marius
2015-01-01
QuickEval is a web application for carrying out psychometric scaling experiments. It offers the possibility of running controlled experiments in a laboratory, or large-scale experiments over the web for people all over the world. It is a unique, one-of-a-kind web application, and software that is needed in the image quality field. It is also, to the best of our knowledge, the first software that supports all three of the most common scaling methods: paired comparison, rank order, and category judgement, and the first to support rank order. Hopefully, a side effect of this newly created software is that it will lower the threshold to perform psychometric experiments, improve the quality of the experiments being carried out, make it easier to reproduce experiments, and increase research on image quality both in academia and industry. The web application is available at www.colourlab.no/quickeval.
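Raw paired-comparison choices collected by such a tool are commonly converted to interval scale values with Thurstone Case V scaling; the sketch below shows that standard conversion on invented data and is not QuickEval's internal code.

```python
# Thurstone Case V scaling from a paired-comparison win matrix (toy data).
import numpy as np
from scipy.stats import norm

# wins[i, j] = number of observers who preferred stimulus i over stimulus j
wins = np.array([
    [0, 12, 15],
    [8,  0, 11],
    [5,  9,  0],
])
n_obs = wins + wins.T                      # trials per pair
p = wins / np.maximum(n_obs, 1)            # preference proportions (diag handled below)
np.fill_diagonal(p, 0.5)
p = np.clip(p, 0.05, 0.95)                 # avoid infinite z-scores for unanimous pairs
z = norm.ppf(p)                            # probit transform
scale = z.mean(axis=1)                     # Case V scale value per stimulus
print(scale - scale.min())                 # anchored at zero
```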
A fracture criterion for widespread cracking in thin-sheet aluminum alloys
NASA Technical Reports Server (NTRS)
Newman, J. C., Jr.; Dawicke, D. S.; Sutton, M. A.; Bigelow, C. A.
1993-01-01
An elastic-plastic finite-element analysis was used with a critical crack-tip-opening angle (CTOA) fracture criterion to model stable crack growth in thin-sheet 2024-T3 aluminum alloy panels with single and multiple-site damage (MSD) cracks. Comparisons were made between critical angles determined from the analyses and those measured with photographic methods. Calculated load against crack extension and load against crack-tip displacement on single crack specimens agreed well with test data even for large-scale plastic deformations. The analyses were also able to predict the stable tearing behavior of large lead cracks in the presence of stably tearing MSD cracks. Small MSD cracks significantly reduced the residual strength for large lead cracks.
NASA Technical Reports Server (NTRS)
Meszaros, S. P.
1985-01-01
Visual, scaled comparisons are made among prominent volcanic, tectonic, crater and impact basin features photographed on various planets and moons in the solar system. The volcanic formation Olympus Mons, on Mars, is 27 km tall, while Io volcanic plumes reach 200-300 km altitude. Valles Marineris, a tectonic fault on Mars, is several thousand kilometers long, and Ithaca Chasma on the Saturnian moon Tethys extends two-thirds the circumference of the moon. Craters on the Saturnian moons Tethys and Mimas are large enough to suggest a collision by objects which almost shattered the planetoids. Large meteorite impacts may leave large impact basins or merely ripples, such as found on Callisto, whose icy surface could not support high mountains formed by giant body impacts.
NASA Astrophysics Data System (ADS)
Martin, A. C. H.; Boutin, J.; Hauser, D.; Dinnat, E. P.
2014-08-01
The impact of the ocean surface roughness on the ocean L-band emissivity is investigated using simultaneous airborne measurements from an L-band radiometer (CAROLS) and from a C-band scatterometer (STORM) acquired in the Gulf of Biscay (off the French Atlantic coast) in November 2010. Two synergetic approaches are used to investigate the impact of surface roughness on the L-band brightness temperature (Tb). First, wind derived from the scatterometer measurements is used to analyze the roughness contribution to Tb as a function of wind and compare it with the one simulated by SMOS and Aquarius roughness models. Then residuals from this mean relationship are analyzed in terms of mean square slope derived from the STORM instrument. We show improvement of new radiometric roughness models derived from SMOS and Aquarius satellite measurements in comparison with prelaunch models. The influence of wind azimuth on Tb could not be evidenced from our data set. However, we point out the importance of taking into account large roughness scales (>20 cm) in addition to the small roughness scales (5 cm) rapidly affected by wind to interpret radiometric measurements far from nadir. This was made possible thanks to simultaneous estimates of large and small roughness scales using STORM at small (7-16°) and large (30°) incidence angles.
Graphene/MoS2 hybrid technology for large-scale two-dimensional electronics.
Yu, Lili; Lee, Yi-Hsien; Ling, Xi; Santos, Elton J G; Shin, Yong Cheol; Lin, Yuxuan; Dubey, Madan; Kaxiras, Efthimios; Kong, Jing; Wang, Han; Palacios, Tomás
2014-06-11
Two-dimensional (2D) materials have generated great interest in the past few years as a new toolbox for electronics. This family of materials includes, among others, metallic graphene, semiconducting transition metal dichalcogenides (such as MoS2), and insulating boron nitride. These materials and their heterostructures offer excellent mechanical flexibility, optical transparency, and favorable transport properties for realizing electronic, sensing, and optical systems on arbitrary surfaces. In this paper, we demonstrate a novel technology for constructing large-scale electronic systems based on graphene/molybdenum disulfide (MoS2) heterostructures grown by chemical vapor deposition. We have fabricated high-performance devices and circuits based on this heterostructure, where MoS2 is used as the transistor channel and graphene as contact electrodes and circuit interconnects. We provide a systematic comparison of the graphene/MoS2 heterojunction contact to more traditional MoS2-metal junctions, as well as a theoretical investigation, using density functional theory, of the origin of the Schottky barrier height. The tunability of the graphene work function with electrostatic doping significantly improves the ohmic contact to MoS2. These high-performance large-scale devices and circuits based on this 2D heterostructure pave the way for practical flexible transparent electronics.
Evaporation estimation of rift valley lakes: comparison of models.
Melesse, Assefa M; Abtew, Wossenu; Dessalegne, Tibebe
2009-01-01
Evapotranspiration (ET) accounts for a substantial amount of the water flux in the arid and semi-arid regions of the world. Accurate estimation of ET has been a challenge for hydrologists, mainly because of the spatiotemporal variability of the environmental and physical parameters governing the latent heat flux. In addition, most available ET models depend on intensive meteorological information for ET estimation. Such data are not available at the desired spatial and temporal scales in less developed and remote parts of the world. This limitation has necessitated the development of simple models that are less data intensive and provide ET estimates with an acceptable level of accuracy. Remote sensing approaches can also be applied to large areas where meteorological data are not available and field-scale data collection is costly, time consuming and difficult. The applicability of the Simple Method (Abtew Method) of lake evaporation estimation and of a surface energy balance approach using remote sensing was studied in the Rift Valley region of Ethiopia. The Simple Method and the remote sensing-based lake evaporation estimates were compared to the Penman, Energy Balance, Pan, Radiation and Complementary Relationship Lake Evaporation (CRLE) methods applied in the region. Results indicate good correspondence of the model outputs with those of the above methods. Comparison of the 1986 and 2000 monthly lake ET from the Landsat images to the Simple and Penman Methods shows that the remote sensing and surface energy balance approach is promising for large-scale applications to understand the spatial variation of the latent heat flux.
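As a rough, hedged illustration of how a radiation-based "simple method" estimate of this kind works, the sketch below converts daily solar radiation into an open-water evaporation depth. The coefficient K and the example input are placeholders for illustration only, not the calibrated values of the published Abtew Method.

```python
# Minimal sketch of a radiation-based "simple method" style evaporation estimate.
# K is a placeholder coefficient (assumption); a calibrated value should be used in practice.
LATENT_HEAT = 2.45   # latent heat of vaporization of water, MJ/kg (approximate)
K = 0.53             # placeholder coefficient

def simple_evaporation_mm_per_day(solar_radiation_mj_m2_day):
    """Estimate daily open-water evaporation (mm/day) from solar radiation (MJ m^-2 day^-1)."""
    # 1 kg of water evaporated per m^2 corresponds to a 1 mm depth of water
    return K * solar_radiation_mj_m2_day / LATENT_HEAT

print(simple_evaporation_mm_per_day(22.0))  # roughly 4.8 mm/day for this example input
```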
A Comparison of Optically Measured and Radar-Derived Horizontal Neutral Winds
1990-01-01
ERIC Educational Resources Information Center
Holweger, Nancy; Taylor, Grace
The fifth-grade and eighth-grade science items on a state performance assessment were compared for differential item functioning (DIF) due to gender. The grade 5 sample consisted of 8,539 females and 8,029 males and the grade 8 sample consisted of 7,477 females and 7,891 males. A total of 30 fifth grade items and 26 eighth grade items were…
ERIC Educational Resources Information Center
Wieber, Adrianne Essex; Evoy, Katie; McLaughlin, T. F.; Derby, K. Mark; Kellogg, Ethyl; Williams, Randy Lee; Peterson, Stephanie Marie; Rinaldi, Lisa
2017-01-01
We designed and implemented a modified eight-week Direct Instruction (DI) program intended to teach a third grade student with learning disabilities to tell time. The first objective was to determine whether or not the appearance (interesting or boring) of the worksheet affected performance. These data suggested the use of large-scale clocks and…
Using Model Comparisons to Understand Sources of Nitrogen Delivered to US Coastal Areas
NASA Astrophysics Data System (ADS)
McCrackin, M. L.; Harrison, J.; Compton, J. E.
2011-12-01
Nitrogen loading to water bodies can result in eutrophication-related hypoxia and degraded water quality. The relative contributions of different anthropogenic and natural sources of in-stream N cannot be directly measured at whole-watershed scales; hence, N source attribution estimates at scales beyond a small catchment must rely on models. Although such estimates have been accomplished using individual N loading models, there has not yet been a comparison of source attribution by multiple regional- and continental-scale models. We compared results from two models applied at large spatial scales: Nutrient Export from WatershedS (NEWS) and SPAtially Referenced Regressions On Watersheds (SPARROW). Despite widely divergent approaches to source attribution, NEWS and SPARROW identified the same dominant sources of N for 65% of the modeled drainage area of the continental US. Human activities accounted for over two-thirds of N delivered to the coastal zone. Regionally, the single largest sources of N predicted by both models reflect land-use patterns across the country. Sewage was an important source in densely populated regions along the east and west coasts of the US. Fertilizer and livestock manure were dominant in the Mississippi River Basin, where the bulk of agricultural areas are located. Run-off from undeveloped areas was the largest source of N delivered to coastal areas in the northwestern US. Our analysis shows that comparisons of source apportionment between models can increase confidence in modeled output by revealing areas of agreement and disagreement. We found predictions for agriculture and atmospheric deposition to be comparable between models; however, attribution to sewage was greater by SPARROW than by NEWS, while the reverse was true for natural N sources. Such differences in predictions resulted from differences in model structure and sources of input data. Nonetheless, model comparisons provide strong evidence that anthropogenic activities have a profound effect on N delivered to coastal areas of the US, especially along the Atlantic coast and Gulf of Mexico.
Tracing Galactic Outflows to the Source: Spatially Resolved Feedback in M83 with COS
NASA Astrophysics Data System (ADS)
Aloisi, Alessandra
2016-10-01
Star-formation (SF) feedback plays a vital role in shaping galaxy properties, but there are many open questions about how this feedback is created, propagated, and felt by galaxies. SF-driven feedback can be observationally constrained with rest-frame UV absorption-line spectroscopy that accesses a range of powerful gas density and kinematic diagnostics. Studies at both high and low redshift show clear evidence for large-scale outflows in star-forming galaxies that scale with galaxy SF rate. However, by sampling one sightline or the galaxy as a whole, these studies are not tailored to reveal how the large-scale outflows develop from their ultimate sources at the scale of individual SF regions. We propose the first spatially-resolved COS G130M/G160M (1130-1800 A) study of the ISM in the nearby (4.6 Mpc) face-on spiral starburst M83 using individual young star clusters as background sources. This is the first down-the-barrel study where blueshifted absorptions can be identified directly with outflowing gas in a spatially resolved fashion. The kpc-scale flows sampled by the COS pointings will be anchored to the properties of the large-scale (10-100 kpc) flows thanks to the wealth of multi-wavelength observations of M83 from X-ray to radio. A comparison of COS data with mock spectra from constrained simulations of spiral galaxies with FIRE (Feedback In Realistic Environments; a code with unprecedented 1-100 pc spatial resolution and self-consistent treatments of stellar feedback) will provide an important validation of these simulations and will supply the community with a powerful and well-tested tool for galaxy formation predictions applicable to all redshifts.
Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset
NASA Astrophysics Data System (ADS)
Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.
2017-12-01
Here we present the first national-scale flood risk analyses using high resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, representing a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
PedsQL™ Multidimensional Fatigue Scale in Sickle Cell Disease: Feasibility, Reliability and Validity
Panepinto, Julie A.; Torres, Sylvia; Bendo, Cristiane B.; McCavit, Timothy L.; Dinu, Bogdan; Sherman-Bien, Sandra; Bemrich-Stolz, Christy; Varni, James W.
2013-01-01
Background Sickle cell disease (SCD) is an inherited blood disorder characterized by a chronic hemolytic anemia that can contribute to fatigue and global cognitive impairment in patients. The study objective was to report on the feasibility, reliability, and validity of the PedsQL™ Multidimensional Fatigue Scale in SCD for pediatric patient self-report ages 5–18 years and parent proxy-report for ages 2–18 years. Procedure This was a cross-sectional multi-site study whereby 240 pediatric patients with SCD and 303 parents completed the 18-item PedsQL™ Multidimensional Fatigue Scale. Participants also completed the PedsQL™ 4.0 Generic Core Scales. Results The PedsQL™ Multidimensional Fatigue Scale evidenced excellent feasibility, excellent reliability for the Total Scale Scores (patient self-report α = 0.90; parent proxy-report α = 0.95), and acceptable reliability for the three individual scales (patient self-report α = 0.77–0.84; parent proxy-report α = 0.90–0.97). Intercorrelations of the PedsQL™ Multidimensional Fatigue Scale with the PedsQL™ Generic Core Scales were predominantly in the large (≥ 0.50) range, supporting construct validity. PedsQL™ Multidimensional Fatigue Scale Scores were significantly worse with large effects sizes (≥0.80) for patients with SCD than for a comparison sample of healthy children, supporting known-groups discriminant validity. Confirmatory factor analysis demonstrated an acceptable to excellent model fit in SCD. Conclusions The PedsQL™ Multidimensional Fatigue Scale demonstrated acceptable to excellent measurement properties in SCD. The results demonstrate the relative severity of fatigue symptoms in pediatric patients with SCD, indicating the potential clinical utility of multidimensional assessment of fatigue in patients with SCD in clinical research and practice. PMID:24038960
Automatic extraction of property norm-like data from large text corpora.
Kelly, Colin; Devereux, Barry; Korhonen, Anna
2014-01-01
Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
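The reweighting step described above (a linear combination of frequency and several statistical metrics) can be sketched generically as follows; the metric names and weights are hypothetical placeholders, not the ones used in the paper.

```python
# Generic sketch of scoring candidate concept-relation-feature triples with a
# weighted combination of association metrics (placeholder metrics and weights).
def score_triples(triples, weights):
    """triples: {(concept, relation, feature): {metric_name: value}};
    weights: {metric_name: weight}. Returns triples sorted by combined score."""
    scored = {
        triple: sum(weights[m] * metrics.get(m, 0.0) for m in weights)
        for triple, metrics in triples.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example:
triples = {
    ("car", "require", "petrol"): {"freq": 42, "pmi": 3.1, "tscore": 2.4},
    ("car", "be", "fast"): {"freq": 120, "pmi": 1.2, "tscore": 3.0},
}
weights = {"freq": 0.01, "pmi": 0.5, "tscore": 0.3}
print(score_triples(triples, weights))
```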
Comparison of manual scaled and predicted foE and foF1 critical frequencies. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamache, R.R.; Kersey, W.T.
1990-07-01
The CCIR and Titheridge foE critical frequency prediction routines were tested by comparison with 1875 manually scaled values. The foF1 critical frequency prediction routine of Millman et al. was tested by comparison with 1005 manually scaled values. Plots and statistics of the comparisons are presented and discussed. From the results, recommendations are made to help improve autoscaling.
A new large area scintillator screen for X-ray imaging
NASA Astrophysics Data System (ADS)
Nagarkar, V. V.; Miller, S. R.; Tipnis, S. V.; Lempicki, A.; Brecher, C.; Lingertat, H.
2004-01-01
We report on the development of a new, large area, powdered scintillator screen based on Lu2O3(Eu). As reported earlier, the transparent ceramic form of this material has a very high density of 9.4 g/cm3, a high light output comparable to that of CsI(Tl), and emits in a narrow spectral band centered at about 610 nm. Research into fabrication of this ceramic scintillator in a large area format is currently underway; however, the process is not yet practical for large-scale production. Here we have explored fabrication of large area screens using the precursor powders from which the ceramics are fabricated. To date we have produced screens with areas up to 16 × 16 cm2 and thickness in the range of 18 mg/cm2. This paper outlines the screen fabrication technique and presents its imaging performance in comparison with a commercial Gd2O2S:Tb (GOS) screen.
Vectorial finite elements for solving the radiative transfer equation
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.
2018-06-01
The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of the Kelvin-cell radiation.
NDE application of ultrasonic tomography to a full-scale concrete structure.
Choi, Hajin; Popovics, John S
2015-06-01
Newly developed ultrasonic imaging technology for large concrete elements, based on tomographic reconstruction, is presented. The developed 3-D internal images (velocity tomograms) are used to detect internal defects (polystyrene foam and pre-cracked concrete prisms) that represent structural damage within a large steel reinforced concrete element. A hybrid air-coupled/contact transducer system is deployed. Electrostatic air-coupled transducers are used to generate ultrasonic energy and contact accelerometers are attached on the opposing side of the concrete element to detect the ultrasonic pulses. The developed hybrid testing setup enables collection of a large amount of high-quality, through-thickness ultrasonic data without surface preparation to the concrete. The algebraic reconstruction technique is used to reconstruct p-wave velocity tomograms from the obtained time signal data. A comparison with a one-sided ultrasonic imaging method is presented for the same specimen. Through-thickness tomography shows some benefit over one-sided imaging for highly reinforced concrete elements. The results demonstrate that the proposed through-thickness ultrasonic technique shows great potential for evaluation of full-scale concrete structures in the field.
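The algebraic reconstruction technique mentioned above is, in essence, a Kaczmarz-style row-action solver for the linearized travel-time system. A minimal sketch under that assumption is shown below; the ray-path matrix, travel times, and relaxation factor are hypothetical inputs, not the authors' implementation.

```python
import numpy as np

def art_reconstruct(G, t, n_sweeps=50, relax=0.5):
    """Kaczmarz-style ART: iteratively solve G s = t for the slowness field s.

    G : (n_rays, n_cells) matrix of ray-path lengths through each cell (hypothetical input)
    t : (n_rays,) measured travel times
    """
    s = np.zeros(G.shape[1])
    for _ in range(n_sweeps):
        for i in range(G.shape[0]):
            gi = G[i]
            norm = gi @ gi
            if norm > 0.0:
                # project the current estimate onto the hyperplane defined by ray i
                s += relax * (t[i] - gi @ s) / norm * gi
    return s  # a P-wave velocity tomogram follows as 1/s where s > 0
```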
NASA Astrophysics Data System (ADS)
Ravi, Sujith; Sharratt, Brenton S.; Li, Junran; Olshevski, Stuart; Meng, Zhongju; Zhang, Jianguo
2016-10-01
Novel carbon sequestration strategies such as large-scale land application of biochar may provide sustainable pathways to increase the terrestrial storage of carbon. Biochar has a long residence time in the soil, and hence comprehensive studies are urgently needed to quantify the environmental impacts of large-scale biochar application. In particular, black carbon emissions from soils amended with biochar may counteract the negative emission potential due to impacts on air quality, climate, and biogeochemical cycles. We investigated, using wind tunnel experiments, the particulate matter emission potential of a sand and two agriculturally important soils amended with different concentrations of biochar, in comparison to control soils. Our results indicate that biochar application considerably increases particulate emissions, possibly by two mechanisms: the accelerated emission of fine biochar particles, and the generation and emission of fine biochar particles resulting from abrasion of large biochar particles by sand grains. Our study highlights the importance of considering the background soil properties (e.g., texture) and geomorphological processes (e.g., aeolian transport) for biochar-based carbon sequestration programs.
NASA Astrophysics Data System (ADS)
Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.
Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk, surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code, we follow the evolution of a young nuclear star cluster in terms of discrete mass loss and energy injection from stellar processes. This naturally leads to a filamentary large-scale torus component, where cold gas is able to flow radially inwards. The filaments open out into a dense and very turbulent disk structure. In a post-processing step, we calculate observable quantities like spectral energy distributions or images with the help of the 3D radiative transfer code MC3D. Good agreement is found in comparisons with data, due to the existence of almost dust-free lines of sight through the large-scale component and the large column densities caused by the dense disk.
NASA Astrophysics Data System (ADS)
Steinberger, Bernhard; Conrad, Clinton
2017-04-01
Two large seismically slow lower mantle regions beneath the Pacific and Africa are sometimes referred to as "superplumes". This name evokes associations with large-scale active upwellings; however, it is not clear whether these are real, or whether regular mantle plumes simply occur more frequently in these regions. Here we study the implications of new results on dynamic topography, which would be associated with active upwellings, on this question. Recently, Hoggard et al. (2016) developed a detailed model of marine residual topography, after subtracting isostatic crustal topography. Combining this with results from continents, a global model can be expanded in spherical harmonics. Comparison with dynamic topography derived from mantle flow models inferred from seismic tomography (Steinberger, 2016) yields overall good agreement and similar power spectra, except at spherical harmonic degree two, where mantle flow models predict about six times as much power as is inferred from observations: mantle flow models feature two large-scale antipodal upwellings at the seismically slow regions, whereas the actual topography gives only little indication of these. We will discuss what this discrepancy could mean and how it could be resolved.
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Walker, Kevin P.
1991-01-01
A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
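As a generic illustration of the kind of implicit, iterative time integration the abstract refers to (not the HYPELA/MARC implementation itself), a single backward-Euler step solved by Newton iteration can be sketched as follows.

```python
import numpy as np

def backward_euler_step(f, jac, y, t, dt, tol=1e-10, max_iter=20):
    """One implicit (backward Euler) step for y' = f(t, y), solved by Newton iteration.

    f   : right-hand side function returning an array
    jac : Jacobian df/dy returning an (n, n) array
    """
    y_new = y.copy()                                   # initial guess: previous state
    for _ in range(max_iter):
        residual = y_new - y - dt * f(t + dt, y_new)   # implicit equation driven to zero
        if np.linalg.norm(residual) < tol:
            break
        J = np.eye(len(y)) - dt * jac(t + dt, y_new)
        y_new = y_new - np.linalg.solve(J, residual)
    return y_new
```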
NASA Technical Reports Server (NTRS)
Groesbeck, D. E.; Huff, R. G.; Vonglahn, U. H.
1977-01-01
Small-scale circular, noncircular, single- and multi-element nozzles with flow areas as large as 122 sq cm were tested with cold airflow at exit Mach numbers from 0.28 to 1.15. The effects of multi-element nozzle shape and element spacing on jet Mach number decay were studied in an effort to reduce the noise caused by jet impingement on externally blown flap (EBF) STOL aircraft. The jet Mach number decay data are well represented by empirical relations. Jet spreading and Mach number decay contours are presented for all configurations tested.
Final report on RMO Vickers key comparison COOMET M.H-K1
NASA Astrophysics Data System (ADS)
Aslanyan, E.; Menelao, F.; Herrmann, K.; Aslanyan, A.; Pivovarov, V.; Galat, E.; Dovzhenko, Y.; Zhamanbalin, M.
2013-01-01
This report describes a COOMET key comparison on Vickers hardness scales involving five National Metrology Institutes: PTB (Germany), BelGIM (Belarus), NSC IM (Ukraine), KazInMetr (Kazakhstan) and VNIIFTRI (Russia). The pilot laboratory was VNIIFTRI, and PTB acted as the linking institute to key comparisons CCM.H-K1.b and CCM.H-K1.c conducted for the Vickers hardness scales HV1 and HV30, respectively. The comparison was also conducted for the HV5 Vickers hardness scale, since this scale is most frequently used in practice in Russia and CIS countries that work according to GOST standards. In the key comparison, two sets of hardness reference blocks for the Vickers hardness scales HV1, HV5 and HV30 consisting each of three hardness reference blocks with hardness levels of 450 HV and 750 HV were used. The measurement results and uncertainty assessments for HV1 and HV30 hardness scales, as announced by BelGIM, NSC IM, KazInMetr and VNIIFTRI, are in good agreement with the key comparison reference values of CCM.H-K1.b and CCM.H-K1.c. The comparison results for the HV5 hardness scale are viewed as additional information, since up to today no CCM key comparisons on this scale have yet been carried out.
Detecting similarities among distant homologous proteins by comparison of domain flexibilities.
Pandini, Alessandro; Mauri, Giancarlo; Bordogna, Annalisa; Bonati, Laura
2007-06-01
Aim of this work is to assess the informativeness of protein dynamics in the detection of similarities among distant homologous proteins. To this end, an approach to perform large-scale comparisons of protein domain flexibilities is proposed. CONCOORD is confirmed as a reliable method for fast conformational sampling. The root mean square fluctuation of alpha carbon positions in the essential dynamics subspace is employed as a measure of local flexibility and a synthetic index of similarity is presented. The dynamics of a large collection of protein domains from ASTRAL/SCOP40 is analyzed and the possibility to identify relationships, at both the family and the superfamily levels, on the basis of the dynamical features is discussed. The obtained picture is in agreement with the SCOP classification, and furthermore suggests the presence of a distinguishable familiar trend in the flexibility profiles. The results support the complementarity of the dynamical and the structural information, suggesting that information from dynamics analysis can arise from functional similarities, often partially hidden by a static comparison. On the basis of this first test, flexibility annotation can be expected to help in automatically detecting functional similarities otherwise unrecoverable.
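For reference, the per-residue flexibility index used above (root mean square fluctuation of alpha-carbon positions) can be computed directly from an aligned conformational ensemble; the sketch below omits the projection onto the essential-dynamics subspace and assumes a hypothetical coordinate array.

```python
import numpy as np

def rmsf(coords):
    """coords: (n_frames, n_residues, 3) alpha-carbon coordinates of an aligned ensemble."""
    mean_pos = coords.mean(axis=0)                         # average structure
    disp = coords - mean_pos                               # per-frame displacements
    return np.sqrt((disp ** 2).sum(axis=-1).mean(axis=0))  # per-residue fluctuation
```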
Comparison of Aero-Propulsive Performance Predictions for Distributed Propulsion Configurations
NASA Technical Reports Server (NTRS)
Borer, Nicholas K.; Derlaga, Joseph M.; Deere, Karen A.; Carter, Melissa B.; Viken, Sally A.; Patterson, Michael D.; Litherland, Brandon L.; Stoll, Alex M.
2017-01-01
NASA's X-57 "Maxwell" flight demonstrator incorporates distributed electric propulsion technologies in a design that will achieve a significant reduction in energy used in cruise flight. A substantial portion of these energy savings come from beneficial aerodynamic-propulsion interaction. Previous research has shown the benefits of particular instantiations of distributed propulsion, such as the use of wingtip-mounted cruise propellers and leading edge high-lift propellers. However, these benefits have not been reduced to a generalized design or analysis approach suitable for large-scale design exploration. This paper discusses the rapid, "design-order" toolchains developed to investigate the large, complex tradespace of candidate geometries for the X-57. Due to the lack of an appropriate, rigorous set of validation data, the results of these tools were compared to three different computational flow solvers for selected wing and propulsion geometries. The comparisons were conducted using a common input geometry, but otherwise different input grids and, when appropriate, different flow assumptions to bound the comparisons. The results of these studies showed that the X-57 distributed propulsion wing should be able to meet the as-designed performance in cruise flight, while also meeting or exceeding targets for high-lift generation in low-speed flight.
Comparison and validation of gridded precipitation datasets for Spain
NASA Astrophysics Data System (ADS)
Quintana-Seguí, Pere; Turco, Marco; Míguez-Macho, Gonzalo
2016-04-01
In this study, two gridded precipitation datasets are compared and validated in Spain: the recently developed SAFRAN dataset and the Spain02 dataset. These are validated using rain gauges and they are also compared to the low resolution ERA-Interim reanalysis. The SAFRAN precipitation dataset has been recently produced, using the SAFRAN meteorological analysis, which is extensively used in France (Durand et al. 1993, 1999; Quintana-Seguí et al. 2008; Vidal et al. 2010) and which has recently been applied to Spain (Quintana-Seguí et al. 2015). SAFRAN uses an optimal interpolation (OI) algorithm and uses all available rain gauges from the Spanish State Meteorological Agency (Agencia Estatal de Meteorología, AEMET). The product has a spatial resolution of 5 km and it spans from September 1979 to August 2014. This dataset has been produced mainly to be used in large-scale hydrological applications. Spain02 (Herrera et al. 2012, 2015) is another high quality precipitation dataset for Spain based on a dense network of quality-controlled stations, and it has different versions at different resolutions. In this study we used the version with a resolution of 0.11°. The product spans from 1971 to 2010. Spain02 is well tested and widely used, mainly, but not exclusively, for RCM validation and statistical downscaling. ERA-Interim is a well known global reanalysis with a spatial resolution of ~79 km. It has been included in the comparison because it is a widely used product for continental and global scale studies and also in smaller scale studies in data-poor countries. Thus, its comparison with higher resolution products of a data-rich country, such as Spain, allows us to quantify the errors made when using such datasets for national-scale studies, in line with some of the objectives of the EU-FP7 eartH2Observe project. The comparison shows that SAFRAN and Spain02 perform similarly, even though their underlying principles are different. Both products are largely better than ERA-Interim, which has a much coarser representation of the relief, which is crucial for precipitation. These results are a contribution to the Spanish Case Study of the eartH2Observe project, which is focused on the simulation of drought processes in Spain using Land-Surface Models (LSMs). This study will also be helpful in the Spanish MARCO project, which aims at improving the ability of RCMs to simulate hydrometeorological extremes.
NASA Astrophysics Data System (ADS)
Ebert, A.; Herwegh, M.; Karl, R.; Edwin, G.; Decrouez, D.
2007-12-01
In the upper crust, shear zones are widespread and appear at different scales. Although deformation conditions, shear zone history, and displacements vary in time and space between shear zones and also within them, similar trends in the evolution of large- to micro-scale fabrics can be observed in all shear zones. Microstructural analyses of calcite mylonites from Naxos and various Helvetic nappes show that microstructures from different metamorphic zones vary considerably on the outcrop scale and even on the sample scale. However, grain sizes tend to increase with metamorphic degree in the case of both Naxos and the Helvetic nappes. Although deformation conditions (e.g., deformation temperature, strain rate, and shear zone geometry, i.e., shear zone width and rock type above/below the thrust) vary between the different tectonic settings, microstructural trends (e.g., grain size) correlate with each other. This is in contrast to many previous studies, where no corrections for second-phase contents have been applied. In an Arrhenius-type diagram, the grain growth trends of calcite of all studied shear zones fit on a single trend, independent of the dimensions of the localized large-scale structures, which are in the dm to m range for the Helvetic thrusts and in the km range for the marble suite of Naxos. The calcite grain size increases continuously from a few μm to >2 mm with a temperature increase from <300°C to >700°C. From a field geologist's point of view, this is an important observation because it shows that natural, dynamically stabilized steady-state microfabrics can be used to estimate temperature conditions during deformation, although the tectonic settings are different (e.g., strain rate, fluid flow). The reason for this agreement might be related to a scale dependence of the shear zone dimensions, where the widths increase with increasing metamorphic conditions. In this sense, the deformation volumes affected by localization must be closely linked to the strength of the affected rocks. In comparison to experiments, similar microstructural trends are observed. Here, however, shifts of these trends occur due to the higher strain rates.
Estimation of pairwise sequence similarity of mammalian enhancers with word neighbourhood counts.
Göke, Jonathan; Schulz, Marcel H; Lasserre, Julia; Vingron, Martin
2012-03-01
The identity of cells and tissues is to a large degree governed by transcriptional regulation. A major part is accomplished by the combinatorial binding of transcription factors at regulatory sequences, such as enhancers. Even though binding of transcription factors is sequence-specific, estimating the sequence similarity of two functionally similar enhancers is very difficult. However, a similarity measure for regulatory sequences is crucial to detect and understand functional similarities between two enhancers and will facilitate large-scale analyses like clustering, prediction and classification of genome-wide datasets. We present the standardized alignment-free sequence similarity measure N2, a flexible framework that is defined for word neighbourhoods. We explore the usefulness of adding reverse complement words as well as words including mismatches into the neighbourhood. On simulated enhancer sequences as well as functional enhancers in mouse development, N2 is shown to outperform previous alignment-free measures. N2 is flexible, faster than competing methods and less susceptible to single sequence noise and the occurrence of repetitive sequences. Experiments on the mouse enhancers reveal that enhancers active in different tissues can be separated by pairwise comparison using N2. N2 represents an improvement over previous alignment-free similarity measures without compromising speed, which makes it a good candidate for large-scale sequence comparison of regulatory sequences. The software is part of the open-source C++ library SeqAn (www.seqan.de) and a compiled version can be downloaded at http://www.seqan.de/projects/alf.html. Supplementary data are available at Bioinformatics online.
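As a simplified illustration of alignment-free, word-count-based sequence comparison (a plain cosine similarity over k-mer counts, not the standardized N2 statistic or its neighbourhood extensions), consider:

```python
from collections import Counter
import math

def kmer_counts(seq, k=5):
    """Count all overlapping words of length k in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_word_similarity(seq_a, seq_b, k=5):
    """Cosine similarity between the k-mer count vectors of two sequences."""
    a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical example:
print(cosine_word_similarity("ACGTGCATTGACCA", "ACGTTCATTGACGA", k=4))
```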
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
Statistical Measures of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard
1993-12-01
To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale of ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega = 1, b = 1.5, sigma_8 = 1) at the 99% confidence level because this model has insufficient power on scales lambda > 30 h^-1 Mpc. An unbiased open-universe CDM model (Omega h = 0.2) and a biased CDM model with non-zero cosmological constant (Omega h = 0.24, lambda_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast delta rho/rho = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2 sigma) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.
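The power-spectrum measurement referred to above amounts to averaging the squared Fourier amplitudes of the density-contrast field in shells of wavenumber; a minimal sketch for a gridded field is given below. The normalization convention, grid, and box size are illustrative assumptions, not the survey analysis itself.

```python
import numpy as np

def isotropic_power_spectrum(delta, box_size, n_bins=16):
    """Shell-averaged power spectrum of a 3D density-contrast field on a cubic grid.

    delta    : (n, n, n) array of density contrast (hypothetical input)
    box_size : physical side length of the box (e.g. in h^-1 Mpc)
    """
    n = delta.shape[0]
    delta_k = np.fft.fftn(delta) / n**3                    # Fourier modes
    power = (np.abs(delta_k) ** 2).ravel() * box_size**3   # one common normalization choice
    k_1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k_1d, k_1d, k_1d, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(k_mag[k_mag > 0].min(), k_mag.max(), n_bins + 1)
    idx = np.digitize(k_mag, edges)
    pk = np.array([power[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, n_bins + 1)])
    return 0.5 * (edges[1:] + edges[:-1]), pk
```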
Danis, Ildiko; Scheuring, Noemi; Papp, Eszter; Czinner, Antal
2012-06-01
A new instrument for assessing depressive mood, the first version of the Depression Scale Questionnaire (DS1K), was published in 2008 by Halmai et al. This scale was used in our large-sample study of parents of young children, carried out in the framework of the For Healthy Offspring project. The original questionnaire was developed in small samples, so our aim was to assist further development of the instrument through a psychometric analysis of the data in our large sample (n=1164). The DS1K scale was chosen to measure the parents' mood and mental state in the For Healthy Offspring project. The questionnaire was completed by 1063 mothers and 328 fathers, yielding a heterogeneous sample with respect to age and socio-demographic status. Analyses included descriptive statistics, assessment of the scale's internal consistency, and group comparisons. Results were checked in both our original and multiply imputed datasets. According to our results, the reliability of the scale was much worse than in the original study (Cronbach's alpha: 0.61 versus 0.88). Detailed item analysis made clear that two items contributed to the reduced internal consistency. We assumed a problem related to misreading in the case of one of these items; this assumption was checked by cross-analysis according to assumed reading level. The reliability of the scale increased in both the lower and higher education level groups if we did not include one or both of these problematic items. However, as the number of items decreased, the relative sensitivity of the scale was also reduced, with fewer persons categorized in the risk group compared to the original scale. As an alternative solution, we suggest that the authors redefine the problematic items and retest the reliability of the measure in a sample with diverse socio-demographic characteristics.
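For reference, Cronbach's alpha, the reliability index reported above, can be computed directly from an item-response matrix; a minimal sketch with a hypothetical data array:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# e.g. alpha with and without a suspect item (data and suspect_idx are hypothetical):
# cronbach_alpha(data), cronbach_alpha(np.delete(data, suspect_idx, axis=1))
```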
Large ejecta fragments from asteroids. [Abstract only
NASA Technical Reports Server (NTRS)
Asphaug, E.
1994-01-01
The asteroid 4 Vesta, with its unique basaltic crust, remains a key mystery of planetary evolution. A localized olivine feature suggests excavation of subcrustal material in a crater or impact basin comparable in size to the planetary radius (R_Vesta ≈ 280 km). Furthermore, a 'clan' of small asteroids associated with Vesta (by spectral and orbital similarities) may be ejecta from this impact and direct parents of the basaltic achondrites. To escape, these smaller (about 4-7 km) asteroids had to be ejected at speeds greater than the escape velocity, v_esc ≈ 350 m/s. This evidence that large fragments were ejected at high speed from Vesta has not been reconciled with the present understanding of impact physics. Analytical spallation models predict that an impactor capable of ejecting these 'chips off Vesta' would be almost the size of Vesta! Such an impact would lead to the catastrophic disruption of both bodies. A simpler analysis is outlined, based on comparison with cratering on Mars, and it is shown that Vesta could survive an impact capable of ejecting kilometer-scale fragments at sufficient speed. To what extent does Vesta survive the formation of such a large crater? This is best addressed using a hydrocode such as SALE 2D with centroidal gravity to predict velocities subsequent to impact. The fragmentation outcome and velocities subsequent to the impact are described to demonstrate that Vesta survives without large-scale disassembly or overturning of the crust. Vesta and its clan represent a valuable dataset for testing fragmentation hydrocodes such as SALE 2D and SPH 3D at planetary scales. Resolution sufficient to directly model spallation 'chips' on a body 100 times as large is now marginally possible on modern workstations. These boundaries are important in near-surface ejection processes and in large-scale disruption leading to asteroid families and stripped cores.
NASA Astrophysics Data System (ADS)
Endo, S.; Lin, W.; Jackson, R. C.; Collis, S. M.; Vogelmann, A. M.; Wang, D.; Oue, M.; Kollias, P.
2017-12-01
Tropical convection is one of the main drivers of the climate system and recognized as a major source of uncertainty in climate models. High-resolution modeling is performed with a focus on the deep convection cases during the active monsoon period of the TWP-ICE field campaign to explore ways to improve the fidelity of convection permitting tropical simulations. Cloud resolving model (CRM) simulations are performed with WRF modified to apply flexible configurations for LES/CRM simulations. We have enhanced the capability of the forcing module to test different implementations of large-scale vertical advective forcing, including a function for optional use of large-scale thermodynamic profiles and a function for the condensate advection. The baseline 3D CRM configurations are, following Fridlind et al. (2012), driven by observationally-constrained ARM forcing and tested with diagnosed surface fluxes and fixed sea-surface temperature and prescribed aerosol size distributions. After the spin-up period, the simulations follow the observed precipitation peaks associated with the passages of precipitation systems. Preliminary analysis shows that the simulation is generally not sensitive to the treatment of the large-scale vertical advection of heat and moisture, while more noticeable changes in the peak precipitation rate are produced when thermodynamic profiles above the boundary layer were nudged to the reference profiles from the forcing dataset. The presentation will explore comparisons with observationally-based metrics associated with convective characteristics and examine the model performance with a focus on model physics, doubly-periodic vs. nested configurations, and different forcing procedures/sources. A radar simulator will be used to understand possible uncertainties in radar-based retrievals of convection properties. Fridlind, A. M., et al. (2012), A comparison of TWP-ICE observational data with cloud-resolving model results, J. Geophys. Res., 117, D05204, doi:10.1029/2011JD016595.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways of removing path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods have been proposed that exploit the redundancy of event-station pairs and use stacking to obtain systematic source parameter estimates for a large number of events at the same time. This allows large quantities of events to be examined systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, I compare results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods, in completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.
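The spectral-ratio step at the core of the EGF approach can be illustrated with a minimal sketch; the omega-square (Brune-type) ratio and the simple grid-search fit below are a generic formulation, not the specific fitting procedures being compared in this study.

```python
import numpy as np

def brune_ratio(f, moment_ratio, fc_large, fc_small):
    """Theoretical large/small event spectral ratio for omega-square (Brune) source spectra."""
    return moment_ratio * (1.0 + (f / fc_small) ** 2) / (1.0 + (f / fc_large) ** 2)

def fit_corner_frequencies(f, observed_ratio, moment_ratio, fc_grid):
    """Grid-search the pair of corner frequencies that best matches an observed spectral ratio."""
    best, best_misfit = None, np.inf
    for fc_l in fc_grid:
        for fc_s in fc_grid:
            if fc_s <= fc_l:   # the smaller event should have the higher corner frequency
                continue
            model = brune_ratio(f, moment_ratio, fc_l, fc_s)
            misfit = np.sum((np.log(observed_ratio) - np.log(model)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (fc_l, fc_s), misfit
    return best
```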
NASA Astrophysics Data System (ADS)
Subramanian, A. C.; Lavers, D.; Matsueda, M.; Shukla, S.; Cayan, D. R.; Ralph, M.
2017-12-01
Atmospheric rivers (ARs) - elongated plumes of intense moisture transport - are a primary source of hydrological extremes, water resources and impactful weather along the West Coast of North America and Europe. There is strong demand in the water management, societal infrastructure and humanitarian sectors for reliable sub-seasonal forecasts, particularly of extreme events, such as floods and droughts so that actions to mitigate disastrous impacts can be taken with sufficient lead-time. Many recent studies have shown that ARs in the Pacific and the Atlantic are modulated by large-scale modes of climate variability. Leveraging the improved understanding of how these large-scale climate modes modulate the ARs in these two basins, we use the state-of-the-art multi-model forecast systems such as the North American Multi-Model Ensemble (NMME) and the Subseasonal-to-Seasonal (S2S) database to help inform and assess the probabilistic prediction of ARs and related extreme weather events over the North American and European West Coasts. We will present results from evaluating probabilistic forecasts of extreme precipitation and AR activity at the sub-seasonal scale. In particular, results from the comparison of two winters (2015-16 and 2016-17) will be shown, winters which defied canonical El Niño teleconnection patterns over North America and Europe. We further extend this study to analyze probabilistic forecast skill of AR events in these two basins and the variability in forecast skill during certain regimes of large-scale climate modes.
Ionospheric Alfvén resonator and aurora: Modeling of MICA observations
NASA Astrophysics Data System (ADS)
Tulegenov, B.; Streltsov, A. V.
2017-07-01
We present results from a numerical study of small-scale, intense magnetic field-aligned currents observed in the vicinity of the discrete auroral arc by the Magnetosphere-Ionosphere Coupling in the Alfvén Resonator (MICA) sounding rocket launched from Poker Flat, Alaska, on 19 February 2012. The goal of the MICA project was to investigate the hypothesis that such currents can be produced inside the ionospheric Alfvén resonator by the ionospheric feedback instability (IFI) driven by the system of large-scale magnetic field-aligned currents interacting with the ionosphere. The trajectory of the MICA rocket crossed two discrete auroral arcs and detected packages of intense, small-scale currents at the edges of these arcs, in the most favorable location for the development of the ionospheric feedback instability, predicted by the IFI theory. Simulations of the reduced MHD model derived in the dipole magnetic field geometry with realistic background parameters confirm that IFI indeed generates small-scale ULF waves inside the ionospheric Alfvén resonator with frequency, scale size, and amplitude showing a good, quantitative agreement with the observations. The comparison between numerical results and observations was performed by "flying" a virtual MICA rocket through the computational domain, and this comparison shows that, for example, the waves generated in the numerical model have frequencies in the range from 0.30 to 0.45 Hz, and the waves detected by the MICA rocket have frequencies in the range from 0.18 to 0.50 Hz.
LES with and without explicit filtering: comparison and assessment of various models
NASA Astrophysics Data System (ADS)
Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele
2000-11-01
The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. Here we compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
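For reference, the first term of that reconstruction series, the tensor-diffusivity (gradient) model, is commonly written for a filter of width Δ with second moment Δ²/12 (e.g. Gaussian or top hat) as below; the notation follows standard usage rather than the paper's own.

```latex
\tau_{ij} \;\approx\; \frac{\bar{\Delta}^{2}}{12}\,
\frac{\partial \bar{u}_{i}}{\partial x_{k}}\,
\frac{\partial \bar{u}_{j}}{\partial x_{k}}
```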
NASA Astrophysics Data System (ADS)
Baumgartner, Peter O.
A database on Middle Jurassic-Early Cretaceous radiolarians consisting of first and final occurrences of 110 species in 226 samples from 43 localities was used to compute Unitary Associations and probabilistic ranking and scaling (RASC), in order to test deterministic versus probabilistic quantitative biostratigraphic methods. Because the Mesozoic radiolarian fossil record is mainly dissolution-controlled, the sequence of events differs greatly from section to section. The scatter of local first and final appearances along a time scale is large compared to the species range; it is asymmetrical, with a maximum near the ends of the range and it is non-random. Thus, these data do not satisfy the statistical assumptions made in ranking and scaling. Unitary Associations produce maximum ranges of the species relative to each other by stacking cooccurrence data from all sections and therefore compensate for the local dissolution effects. Ranking and scaling, based on the assumption of a normal random distribution of the events, produces average ranges which are for most species much shorter than the maximum UA-ranges. There are, however, a number of species with similar ranges in both solutions. These species are believed to be the most dissolution-resistant and, therefore, the most reliable ones for the definition of biochronozones. The comparison of maximum and average ranges may be a powerful tool to test reliability of species for biochronology. Dissolution-controlled fossil data yield high crossover frequencies and therefore small, statistically insignificant interfossil distances. Scaling has not produced a useful sequence for this type of data.
Studying effects of non-equilibrium radiative transfer via HPC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Daniel
This report presents slides on Ph.D. Research Goals; Local Thermodynamic Equilibrium (LTE) Implications; Calculating an Opacity; Opacity: Pictographic Representation; Opacity: Pictographic Representation; Opacity: Pictographic Representation; Collisional Radiative Modeling; Radiative and Collisional Excitation; Photo and Electron Impact Ionization; Autoionization; The Rate Matrix; Example: Total Photoionization rate; The Rate Coefficients; inlinlte version 1.1; inlinlte: Verification; New capabilities: Rate Matrix – Flexibility; Memory Option Comparison; Improvements over previous DCA solver; Inter- and intra-node load balancing; Load Balance – Full Picture; Load Balance – Full Picture; Load Balance – Internode; Load Balance – Scaling; Description; Performance; xRAGE Simulation; Post-process @ 2hr; Post-process @ 4hr; Post-process @ 8hr; Takeaways; Performance for 1 realization; Motivation for QOI; Multigroup Er; Transport and NLTE large effects (1mm, 1keV); Transport large effect, NLTE lesser (1mm, 750eV); Blastwave Diagnostic – Description & Performance; Temperature Comparison; NLTE has effect on dynamics at wall; NLTE has lesser effect in the foam; Global Takeaways; The end.
Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain
2015-06-09
Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
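As a point of reference, replica swaps in temperature replica exchange MD are commonly accepted with the standard Metropolis criterion below (a textbook expression, not quoted from this paper), where β = 1/(k_B T) and U is the potential energy of each configuration.

```latex
% Acceptance probability for swapping configurations x_i and x_j
% between replicas at inverse temperatures \beta_i and \beta_j:
p_{\mathrm{acc}} \;=\; \min\!\Big\{1,\;
  \exp\!\big[(\beta_i - \beta_j)\,\big(U(x_i) - U(x_j)\big)\big]\Big\}
```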
A comparison of wake characteristics of model and prototype buildings in transverse winds
NASA Technical Reports Server (NTRS)
Logan, E., Jr.; Phataraphruk, P.; Chang, J.
1978-01-01
Previously measured mean velocity and turbulence intensity profiles in the wake of a 26.8-m long building 3.2 m high and transverse to the wind direction in an atmospheric boundary layer several hundred meters thick were compared with profiles at corresponding stations downstream of a 1/50-scale model on the floor of a large meteorological wind tunnel in a boundary layer 0.61 m in thickness. The validity of using model wake data to predict full scale data was determined. Preliminary results are presented which indicate that disparities result from differences in relative depth of logarithmic layers, surface roughness, and the proximity of upstream obstacles.
Solar X-ray Astronomy Sounding Rocket Program
NASA Technical Reports Server (NTRS)
Moses, J. Daniel
1989-01-01
Several broad objectives were pursued by the development and flight of the High Resolution Soft X-Ray Imaging Sounding Rocket Payload, followed by the analysis of the resulting data and by comparison with both ground-based and space-based observations from other investigators. The scientific objectives were: to study the thermal equilibrium of active region loop systems by analyzing the X-ray observations to determine electron temperatures, densities, and pressures; to record the changes in the large-scale coronal structures from the maximum and descending phases of Cycle 21 to the ascending phase of Cycle 22; and to extend the study of small-scale coronal structures through the minimum of Cycle 21 with new emphasis on correlative observations.
Hobart, J; Cano, S
2009-02-01
In this monograph we examine the added value of new psychometric methods (Rasch measurement and Item Response Theory) over traditional psychometric approaches by comparing and contrasting their psychometric evaluations of existing sets of rating scale data. We have concentrated on Rasch measurement rather than Item Response Theory because we believe that it is the more advantageous method for health measurement from a conceptual, theoretical and practical perspective. Our intention is to provide an authoritative document that describes the principles of Rasch measurement and the practice of Rasch analysis in a clear, detailed, non-technical form that is accurate and accessible to clinicians and researchers in health measurement. A comparison was undertaken of traditional and new psychometric methods in five large sets of rating scale data: (1) evaluation of the Rivermead Mobility Index (RMI) in data from 666 participants in the Cannabis in Multiple Sclerosis (CAMS) study; (2) evaluation of the Multiple Sclerosis Impact Scale (MSIS-29) in data from 1725 people with multiple sclerosis; (3) evaluation of test-retest reliability of MSIS-29 in data from 150 people with multiple sclerosis; (4) examination of the use of Rasch analysis to equate scales purporting to measure the same health construct in 585 people with multiple sclerosis; and (5) comparison of relative responsiveness of the Barthel Index and Functional Independence Measure in data from 1400 people undergoing neurorehabilitation. Both Rasch measurement and Item Response Theory are conceptually and theoretically superior to traditional psychometric methods. Findings from each of the five studies show that Rasch analysis is empirically superior to traditional psychometric methods for evaluating rating scales, developing rating scales, analysing rating scale data, understanding and measuring stability and change, and understanding the health constructs we seek to quantify. There is considerable added value in using Rasch analysis rather than traditional psychometric methods in health measurement. Future research directions include the need to reproduce our findings in a range of clinical populations, detailed head-to-head comparisons of Rasch analysis and Item Response Theory, and the application of Rasch analysis to clinical practice.
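For readers unfamiliar with the method, the dichotomous Rasch model at the core of such analyses takes the standard form below (a textbook expression, not reproduced from the monograph), where θ_n is the ability or health level of person n and b_i is the difficulty of item i.

```latex
% Dichotomous Rasch model: probability that person n endorses/passes item i.
P(X_{ni} = 1 \mid \theta_n, b_i) \;=\;
  \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
\qquad\Longleftrightarrow\qquad
\ln\frac{P(X_{ni}=1)}{P(X_{ni}=0)} \;=\; \theta_n - b_i
```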
Estimation of Snow Parameters from Dual-Wavelength Airborne Radar
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert; Iguchi, Toshio; Detwiler, Andrew
1997-01-01
Estimation of snow characteristics from airborne radar measurements would complement in situ measurements. While in situ data provide more detailed information than radar, they are limited in their space-time sampling. In the absence of significant cloud water contents, dual-wavelength radar data can be used to estimate two parameters of a drop size distribution if the snow density is assumed. To estimate, rather than assume, a snow density is difficult, however, and represents a major limitation in the radar retrieval. There are a number of ways that this problem can be investigated: direct comparisons with in situ measurements, examination of the large-scale characteristics of the retrievals and their comparison to cloud model outputs, use of LDR measurements, and comparisons to the theoretical results of Passarelli (1978) and others. In this paper we address the first approach and, in part, the second.
A three-term conjugate gradient method under the strong-Wolfe line search
NASA Astrophysics Data System (ADS)
Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa
2017-08-01
Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method, named the Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) method, is proposed for unconstrained optimization; its search direction always satisfies the sufficient descent condition. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
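A minimal sketch of how a three-term conjugate gradient loop under a strong-Wolfe line search can be organised is shown below. The beta and theta coefficients are illustrative placeholders (a PRP-style beta plus a simple third term), not the TTRMIL formulas, which the abstract does not spell out; the line search relies on SciPy's strong-Wolfe implementation.

```python
import numpy as np
from scipy.optimize import line_search

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic three-term CG loop with a strong-Wolfe line search (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # strong-Wolfe step length (c1, c2 are the usual Wolfe constants)
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                     # line search failed; restart from steepest descent
            alpha, d = 1e-4, -g
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        beta = g_new @ y / (g @ g)            # PRP-style coefficient (placeholder)
        theta = g_new @ d / (g @ g)           # third-term coefficient (placeholder)
        d = -g_new + beta * d - theta * y     # three-term search direction
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(three_term_cg(rosen, rosen_der, np.zeros(4)))
```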
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full-wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
NASA Astrophysics Data System (ADS)
Hardesty, R. Michael; Brewer, W. Alan; Sandberg, Scott P.; Weickmann, Ann M.; Shepson, Paul B.; Cambaliza, Maria; Heimburger, Alexie; Davis, Kenneth J.; Lauvaux, Thomas; Miles, Natasha L.; Sarmiento, Daniel P.; Deng, A. J.; Gaudet, Brian; Karion, Anna; Sweeney, Colm; Whetstone, James
2016-06-01
A compact commercial Doppler lidar has been deployed in Indianapolis for two years to measure wind profiles and mixing layer properties as part of a project to improve greenhouse gas measurements from large area sources. The lidar uses vertical velocity variance and aerosol structure to measure mixing layer depth. Comparisons with aircraft and the NOAA HRDL lidar generally indicate good performance, although sensitivity might be an issue under low aerosol conditions.
2011-02-03
focused upon the tropospheric forcing, for example the role of blocking systems (large-scale, quasi-stationary, high-pressure systems that may steer...disruptions of the stratosphere may in turn perturb the troposphere and even affect surface weather. In early February 2009, London received heavy snowfall...global measurements from twelve SSW periods, found cooling in the equatorial lower stratosphere and upper troposphere that is associated with increased
A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video
2011-06-01
orders of magnitude larger than existing datasets such as CAVIAR [7]. The TRECVID 2008 airport dataset [16] contains 100 hours of video, but it provides only... entire human figure (e.g., above shoulder), amounting to 500% human to video... (some statistics are approximate, obtained from the CAVIAR 1st scene and...) ...and diversity in both collection sites and viewpoints. In comparison to surveillance datasets such as CAVIAR [7] and TRECVID [16] shown in Fig. 3
Air Mass Origin in the Arctic and its Response to Future Warming
NASA Technical Reports Server (NTRS)
Orbe, Clara; Newman, Paul A.; Waugh, Darryn W.; Holzer, Mark; Oman, Luke; Polvani, Lorenzo M.; Li, Feng
2014-01-01
We present the first climatology of air mass origin in the Arctic in terms of rigorously defined air mass fractions that partition air according to where it last contacted the planetary boundary layer (PBL). Results from a present-day climate integration of the GEOSCCM general circulation model reveal that the Arctic lower troposphere below 700 mb is dominated year round by air whose last PBL contact occurred poleward of 60°N (Arctic air, or air of Arctic origin). By comparison, approx. 63% of the Arctic troposphere above 700 mb originates in the NH midlatitude PBL (midlatitude air). Although seasonal changes in the total fraction of midlatitude air are small, there are dramatic changes in where that air last contacted the PBL, especially above 700 mb. Specifically, during winter air in the Arctic originates preferentially over the oceans, approx. 26% in the East Pacific, and approx. 20% in the Atlantic PBL. By comparison, during summer air in the Arctic last contacted the midlatitude PBL primarily over land, overwhelmingly so in Asia (approx. 40%) and, to a lesser extent, in North America (approx. 24%). Seasonal changes in air-mass origin are interpreted in terms of seasonal variations in the large-scale ventilation of the midlatitude boundary layer and lower troposphere, namely changes in the midlatitude tropospheric jet and associated transient eddies during winter and large-scale convective motions over midlatitudes during summer.
NASA Astrophysics Data System (ADS)
Priebe, Elizabeth H.; Neville, C. J.; Rudolph, D. L.
2018-03-01
The spatial coverage of hydraulic conductivity (K) values for large-scale groundwater investigations is often poor because of the high costs associated with hydraulic testing and the large areas under investigation. Domestic water wells are ubiquitous and their well logs represent an untapped resource of information that includes mandatory specific-capacity tests, from which K can be estimated. These specific-capacity tests are routinely conducted at such low pumping rates that well losses are normally insignificant. In this study, a simple and practical approach to augmenting high-quality K values with reconnaissance-level K values from water-well specific-capacity tests is assessed. The integration of lesser-quality K values from specific-capacity tests with a high-quality K data set is assessed through comparisons at two different scales: study-area-wide (a 600-km2 area in Ontario, Canada) and in a single geological formation within a portion of the broader study area (200 km2). Results of the comparisons demonstrate that reconnaissance-level K estimates from specific-capacity tests approximate the ranges and distributions of the high-quality K values. Sufficient detail about the physical basis and assumptions that are invoked in the development of the approach is presented here so that it can be applied with confidence by practitioners seeking to enhance their spatial coverage of K values with specific-capacity tests.
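As an illustration of how a specific-capacity test can be turned into a reconnaissance-level K estimate, the sketch below applies a common Cooper-Jacob-based approach (fixed-point iteration for transmissivity, followed by division by aquifer thickness). The parameter values are hypothetical, and the paper's exact procedure, including any corrections, may differ.

```python
import math

def k_from_specific_capacity(Q, s, t, r_w, S, b, n_iter=50):
    """Q [m3/s], s drawdown [m], t pumping time [s], r_w well radius [m],
    S storativity [-], b aquifer thickness [m]. Returns K [m/s]."""
    sc = Q / s                      # specific capacity
    T = sc                          # initial guess for transmissivity [m2/s]
    for _ in range(n_iter):         # fixed-point iteration on the Cooper-Jacob drawdown equation
        T = (sc / (4 * math.pi)) * math.log(2.25 * T * t / (r_w**2 * S))
    return T / b

# hypothetical domestic-well test: 1 L/s for 1 hour with 5 m of drawdown
print(k_from_specific_capacity(Q=1e-3, s=5.0, t=3600, r_w=0.08, S=1e-4, b=20.0))
```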
NASA Astrophysics Data System (ADS)
Matthes, J. H.; Dietze, M.; Fox, A. M.; Goring, S. J.; McLachlan, J. S.; Moore, D. J.; Poulter, B.; Quaife, T. L.; Schaefer, K. M.; Steinkamp, J.; Williams, J. W.
2014-12-01
Interactions between ecological systems and the atmosphere are the result of dynamic processes with system memories that persist from seconds to centuries. Adequately capturing long-term biosphere-atmosphere exchange within earth system models (ESMs) requires an accurate representation of changes in plant functional types (PFTs) through time and space, particularly at timescales associated with ecological succession. However, most model parameterization and development have occurred using datasets that span less than a decade. We tested the ability of ESMs to capture the ecological dynamics observed in paleoecological and historical data spanning the last millennium. Focusing on an area from the Upper Midwest to New England, we examined differences in the magnitude and spatial pattern of PFT distributions and ecotones between historic datasets and the CMIP5 inter-comparison project's large-scale ESMs. We then conducted a 1000-year model inter-comparison using six state-of-the-art biosphere models at sites that bridged regional temperature and precipitation gradients. The distribution of ecosystem characteristics in modeled climate space reveals widely disparate relationships between modeled climate and vegetation that led to large differences in long-term biosphere-atmosphere fluxes for this region. Model simulations revealed that both the interaction between climate and vegetation and the representation of ecosystem dynamics within models were important controls on biosphere-atmosphere exchange.
Pérez-Rodríguez, Gael; Glez-Peña, Daniel; Azevedo, Nuno F; Pereira, Maria Olívia; Fdez-Riverola, Florentino; Lourenço, Anália
2015-03-01
Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. Implementation favours free and open-source third-parties, such as the R statistical package, and reaches for the Web services of the BiofOmics database to enable public experiment deposition. First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web publishable reports, and the comparison of results between laboratories. A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under LGPL license. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Safaei Pirooz, Amir A.; Flay, Richard G. J.
2018-03-01
We evaluate the accuracy of the speed-up provided in several wind-loading standards by comparison with wind-tunnel measurements and numerical predictions, which are carried out at a nominal scale of 1:500 and full-scale, respectively. Airflow over two- and three-dimensional bell-shaped hills is numerically modelled using the Reynolds-averaged Navier-Stokes method with a pressure-driven atmospheric boundary layer and three different turbulence models. Investigated in detail are the effects of grid size on the speed-up and flow separation, as well as the resulting uncertainties in the numerical simulations. Good agreement is obtained between the numerical prediction of speed-up, as well as the wake region size and location, with that according to large-eddy simulations and the wind-tunnel results. The numerical results demonstrate the ability to predict the airflow over a hill with good accuracy with considerably less computational time than for large-eddy simulation. Numerical simulations for a three-dimensional hill show that the speed-up and the wake region decrease significantly when compared with the flow over two-dimensional hills due to the secondary flow around three-dimensional hills. Different hill slopes and shapes are simulated numerically to investigate the effect of hill profile on the speed-up. In comparison with more peaked hill crests, flat-topped hills have a lower speed-up at the crest up to heights of about half the hill height, for which none of the standards gives entirely satisfactory values of speed-up. Overall, the latest versions of the National Building Code of Canada and the Australian and New Zealand Standard give the best predictions of wind speed over isolated hills.
NASA Astrophysics Data System (ADS)
Khani, Sina; Porté-Agel, Fernando
2017-12-01
The performance of the modulated-gradient subgrid-scale (SGS) model is investigated using large-eddy simulation (LES) of the neutral atmospheric boundary layer within the Weather Research and Forecasting model. Since the model includes a finite-difference scheme for spatial derivatives, the discretization errors may affect the simulation results. We focus here on understanding the effects of finite-difference schemes on the momentum balance and the mean velocity distribution, and the requirement (or not) of the ad hoc canopy model. We find that, unlike the Smagorinsky and turbulent kinetic energy (TKE) models, the calculated mean velocity and vertical shear using the modulated-gradient model are in good agreement with Monin-Obukhov similarity theory, without the need for an extra near-wall canopy model. The structure of the near-wall turbulent eddies is better resolved using the modulated-gradient model in comparison with the classical Smagorinsky and TKE models, which are too dissipative and yield unrealistic smoothing of the smallest resolved scales. Moreover, the SGS fluxes obtained from the modulated-gradient model are much smaller near the wall in comparison with those obtained from the regular Smagorinsky and TKE models. The apparent inability of the LES model to reproduce the mean streamwise component of the momentum balance using the total (resolved plus SGS) stress near the surface is probably due to the effect of the discretization errors, which can be calculated a posteriori using the Taylor-series expansion of the resolved velocity field. Overall, we demonstrate that the modulated-gradient model is less dissipative and yields more accurate results in comparison with the classical Smagorinsky model, with similar computational costs.
Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies
Papanikolaou, Panagiotis N.; Christidi, Georgia D.; Ioannidis, John P.A.
2006-01-01
Background Information on major harms of medical interventions comes primarily from epidemiologic studies performed after licensing and marketing. Comparison with data from large-scale randomized trials is occasionally feasible. We compared evidence from randomized trials with that from epidemiologic studies to determine whether they give different estimates of risk for important harms of medical interventions. Methods We targeted well-defined, specific harms of various medical interventions for which data were already available from large-scale randomized trials (> 4000 subjects). Nonrandomized studies involving at least 4000 subjects addressing these same harms were retrieved through a search of MEDLINE. We compared the relative risks and absolute risk differences for specific harms in the randomized and nonrandomized studies. Results Eligible nonrandomized studies were found for 15 harms for which data were available from randomized trials addressing the same harms. Comparisons of relative risks between the study types were feasible for 13 of the 15 topics, and of absolute risk differences for 8 topics. The estimated increase in relative risk differed more than 2-fold between the randomized and nonrandomized studies for 7 (54%) of the 13 topics; the estimated increase in absolute risk differed more than 2-fold for 5 (62%) of the 8 topics. There was no clear predilection for randomized or nonrandomized studies to estimate greater relative risks, but usually (75% [6/8]) the randomized trials estimated larger absolute excess risks of harm than the nonrandomized studies did. Interpretation Nonrandomized studies are often conservative in estimating absolute risks of harms. It would be useful to compare and scrutinize the evidence on harms obtained from both randomized and nonrandomized studies. PMID:16505459
Olszewski, John; Winona, Linda; Oshima, Kevin H
2005-04-01
The use of ultrafiltration as a concentration method to recover viruses from environmental waters was investigated. Two ultrafiltration systems (hollow fiber and tangential flow) in large- (100 L) and small-scale (2 L) configurations were able to recover greater than 50% of multiple viruses (bacteriophage PP7 and T1 and poliovirus type 2) simultaneously from waters of varying turbidity (10-157 nephelometric turbidity units (NTU)). Mean recoveries (n = 3) in ground and surface water by the large-scale hollow fiber ultrafiltration system (100 L) were comparable to recoveries observed in the small-scale system (2 L). Recovery of seeded viruses in highly turbid waters from small-scale tangential flow (2 L) (screen and open channel) and hollow fiber ultrafilters (2 L) (small pilot) was greater than 70%. Clogging occurred in the hollow fiber pencil module, and in the screen and open channel filters when particulate concentrations exceeded 1.6 g/L and 5.5 g/L (dry mass), respectively. The small pilot module was able to filter all concentrates without clogging. The small pilot hollow fiber ultrafilter was used to test recovery of seeded viruses from surface waters from different geographical regions in 10-L volumes. Recoveries >70% were observed from all locations.
NASA Astrophysics Data System (ADS)
Cancio, Antonio C.; Redd, Jeremy J.
2017-03-01
The scaling of neutral atoms to large Z, combining periodicity with a gradual trend to homogeneity, is a fundamental probe of density functional theory, one that has driven recent advances in understanding both the kinetic and exchange-correlation energies. Although research focus is normally upon the scaling of integrated energies, insights can also be gained from energy densities. We visualise the scaling of the positive-definite kinetic energy density (KED) in closed-shell atoms, in comparison to invariant quantities based upon the gradient and Laplacian of the density. We notice a striking fit of the KED within the core of any atom to a gradient expansion using both the gradient and the Laplacian, appearing as an asymptotic limit around which the KED oscillates. The gradient expansion is qualitatively different from that derived from first principles for a slowly varying electron gas and is correlated with a nonzero Pauli contribution to the KED near the nucleus. We propose and explore orbital-free meta-GGA models for the kinetic energy to describe these features, with some success, but the effects of quantum oscillations in the inner shells of atoms make a complete parametrisation difficult. We discuss implications for improved orbital-free description of molecular properties.
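For orientation, the conventional second-order gradient expansion of the kinetic energy density, which retains both a gradient and a Laplacian term, reads as follows in Hartree atomic units (the textbook form; the fitted coefficients described in the abstract are determined empirically and need not coincide with these).

```latex
% Thomas-Fermi term plus second-order gradient and Laplacian corrections:
\tau^{\mathrm{GEA}}(\mathbf r) \;=\;
  \frac{3}{10}(3\pi^2)^{2/3}\, n^{5/3}
  \;+\; \frac{1}{72}\,\frac{|\nabla n|^2}{n}
  \;+\; \frac{1}{6}\,\nabla^2 n
```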
Planck data versus large scale structure: Methods to quantify discordance
NASA Astrophysics Data System (ADS)
Charnock, Tom; Battye, Richard A.; Moss, Adam
2017-06-01
Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allows for an update in the quantification of any tension between large and small scales. This paper is intended, primarily, as a discussion on the quantifications of discordance when comparing the parameter constraints of a model when given two different data sets. We consider Kullback-Leibler divergence, comparison of Bayesian evidences and other statistics which are sensitive to the mean, variance and shape of the distributions. However, as a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement and any tension only persists to a minor degree. In particular, we find the parameter constraints from the combination of LSS probes which are most discrepant with the Planck 2015 +Pol +BAO parameter distributions can be quantified at a ˜2.55 σ tension using the method introduced in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508]. If instead we use the distributions constrained by the combination of LSS probes which are in greatest agreement with those from Planck 2015 +Pol +BAO this tension is only 0.76 σ .
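The sketch below illustrates, in one dimension and under a Gaussian assumption, two of the kinds of statistics discussed above for quantifying discordance between data sets: a Kullback-Leibler divergence and a simple difference-in-means tension. It is only a toy illustration with hypothetical numbers, not the multi-dimensional machinery used in the paper.

```python
import numpy as np

def kl_gaussian(mu1, s1, mu2, s2):
    """KL(p1 || p2) for two 1-D Gaussian posteriors."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def n_sigma_tension(mu1, s1, mu2, s2):
    """Difference of means in units of the combined standard deviation."""
    return abs(mu1 - mu2) / np.sqrt(s1**2 + s2**2)

# hypothetical constraints on the same parameter from two probes (illustrative only)
print(kl_gaussian(0.83, 0.013, 0.76, 0.025))
print(n_sigma_tension(0.83, 0.013, 0.76, 0.025))
```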
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
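The kind of solver comparison described above can be reproduced in miniature with the open-source PyAMG package, as in the sketch below: a sparse Poisson test system is solved with classical (Ruge-Stuben) algebraic multigrid and, for contrast, with an unpreconditioned conjugate gradient baseline. This is only an illustrative stand-in for the authors' MODFLOW 2000 integration and their PCG2 comparison.

```python
import time
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((300, 300), format='csr')   # 2-D Poisson test matrix
b = np.random.default_rng(0).standard_normal(A.shape[0])

t0 = time.perf_counter()
ml = pyamg.ruge_stuben_solver(A)                       # classical (Ruge-Stuben) AMG hierarchy
x_amg = ml.solve(b, tol=1e-8)
t_amg = time.perf_counter() - t0

t0 = time.perf_counter()
x_cg, info = cg(A, b)                                  # unpreconditioned CG baseline
t_cg = time.perf_counter() - t0

print(f"AMG residual {np.linalg.norm(b - A @ x_amg):.2e} in {t_amg:.2f} s")
print(f"CG  residual {np.linalg.norm(b - A @ x_cg):.2e} in {t_cg:.2f} s")
```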
NASA Astrophysics Data System (ADS)
Wang, Y.; Wei, F.; Feng, X.
2013-12-01
Recent observations revealed a scale-invariant dissipation process in the fast ambient solar wind, while numerical simulations indicated that the dissipation process in collisionless reconnection was multifractal. Here, we investigate the properties of turbulent fluctuations in regions where magnetic reconnection prevails. It is found that these regions exhibit large magnetic field shear angles and obvious intermittent structures. The deduced scaling exponents in the dissipation subrange show a multifractal scaling. In comparison, in nearby regions where magnetic reconnection is less prevalent, we find smaller magnetic field shear angles, fewer intermittent structures, and, most importantly, a monofractal dissipation process. These results provide additional observational evidence for previous observational and simulation work, and they also imply that magnetic dissipation in solar wind magnetic reconnection might be caused by an intermittent cascade acting as a multifractal process.
Solar radiation variability over La Réunion island and associated larger-scale dynamics
NASA Astrophysics Data System (ADS)
Mialhe, Pauline; Morel, Béatrice; Pohl, Benjamin; Bessafi, Miloud; Chabriat, Jean-Pierre
2017-04-01
This study aims to examine the solar radiation variability over La Réunion island and its relationship with large-scale circulation. The Satellite Application Facility on Climate Monitoring (CM SAF) produces a Shortwave Incoming Solar radiation (SIS) data record called Solar surfAce RAdiation Heliosat - East (SARAH-E). A comparison to in situ observations from Météo-France measurement networks quantifies the skill of the SARAH-E grids, which we use as our dataset. As a first step, mean irradiance cycles are calculated to describe the diurnal-seasonal SIS behaviour over La Réunion island. By analogy with climate anomalies, instantaneous deviations are computed after removal of the mean states. Finally, we associate these anomalies with larger-scale atmospheric dynamics over the South West Indian Ocean by applying multivariate clustering analyses (Hierarchical Ascending Classification, k-means).
Numerical Investigation of Dual-Mode Scramjet Combustor with Large Upstream Interaction
NASA Technical Reports Server (NTRS)
Mohieldin, T. O.; Tiwari, S. N.; Reubush, David E. (Technical Monitor)
2004-01-01
A dual-mode scramjet combustor configuration with significant upstream interaction is investigated numerically. The possibility of scaling the domain to accelerate convergence and reduce the computational time is explored. The supersonic combustor configuration was selected to provide an understanding of key features of upstream interaction and to identify physical and numerical issues relating to modeling of dual-mode configurations. The numerical analysis was performed with vitiated air at a freestream Mach number of 2.5 using hydrogen as the sonic injectant. Results are presented for two-dimensional models and a three-dimensional jet-to-jet symmetric geometry. Comparisons are made with experimental results. Two-dimensional and three-dimensional results show a substantial oblique shock train reaching upstream of the fuel injectors. Flow characteristics slow numerical convergence, while the upstream interaction slowly increases with further iterations. As the flow field develops, the symmetry assumption breaks down. A large separation zone develops and extends further upstream of the step. This asymmetric flow structure is not seen in the experimental data. Results obtained using a sub-scale domain (both two-dimensional and three-dimensional) qualitatively recover the flow physics obtained from full-scale simulations. All results show that numerical modeling using a scaled geometry provides good agreement with full-scale numerical results and experimental results for this configuration. This study supports the argument that numerical scaling is useful in simulating dual-mode scramjet combustor flowfields and could provide an excellent convergence acceleration technique for dual-mode simulations.
Characterization of double continuum formulations of transport through pore-scale information
NASA Astrophysics Data System (ADS)
Porta, G.; Ceriotti, G.; Bijeljic, B.
2016-12-01
Information on pore-scale characteristics is becoming increasingly available at unprecedented levels of detail from modern visualization/data-acquisition techniques. These advancements are not completely matched by corresponding developments of operational procedures according to which we can engineer theoretical findings aiming at improving our ability to reduce the uncertainty associated with the outputs of continuum-scale models to be employed at large scales. We present here a modeling approach which rests on pore-scale information to achieve a complete characterization of a double continuum model of transport and fluid-fluid reactive processes. Our model makes full use of pore-scale velocity distributions to identify mobile and immobile regions. We do so on the basis of a pointwise (in the pore space) evaluation of the relative strength of advection and diffusion time scales, as rendered by spatially variable values of local Péclet numbers. After mobile and immobile regions are demarcated, we build a simplified unit cell which is employed as a representative proxy of the real porous domain. This model geometry is then employed to simplify the computation of the effective parameters embedded in the double continuum transport model, while retaining relevant information from the pore-scale characterization of the geometry and velocity field. We document results which illustrate the applicability of the methodology to predict transport of a passive tracer within two- and three-dimensional media upon comparison with direct pore-scale numerical simulation of transport in the same geometrical settings. We also show preliminary results about the extension of this model to fluid-fluid reactive transport processes. In this context, we focus on results obtained in two-dimensional porous systems. We discuss the impact of critical quantities required as input to our modeling approach to obtain continuum-scale outputs. We identify the key limitations of the proposed methodology and discuss its capability also in comparison with alternative approaches grounded, e.g., on nonlocal and particle-based approximations.
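One way to make the pointwise advection-diffusion comparison explicit is through a local Péclet number of the form below, where λ is a characteristic pore length scale and D_m is the molecular diffusion coefficient; the threshold Pe* separating mobile from immobile regions is a modelling choice. This is an illustrative rendering consistent with the description above, not necessarily the exact definition used by the authors.

```latex
% Local advective and diffusive time scales and the resulting pointwise Peclet number:
t_{\mathrm{adv}}(\mathbf x) = \frac{\lambda}{|\mathbf u(\mathbf x)|}, \qquad
t_{\mathrm{diff}} = \frac{\lambda^2}{D_m}, \qquad
\mathrm{Pe}(\mathbf x) = \frac{t_{\mathrm{diff}}}{t_{\mathrm{adv}}(\mathbf x)}
  = \frac{|\mathbf u(\mathbf x)|\,\lambda}{D_m}
\;\;\Rightarrow\;\;
\text{mobile where } \mathrm{Pe}(\mathbf x) > \mathrm{Pe}^{*},\ \text{immobile otherwise.}
```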
NASA Astrophysics Data System (ADS)
Gorokhovski, Mikhael; Zamansky, Rémi
2018-03-01
Consistently with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high Reynolds number flows. In this paper, we decompose the instantaneous particle acceleration in its systematic and residual parts. The first part is given by the steady-drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. In our paper, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled from a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model which is given by a random walk on the unit sphere. We propose specific models for particles with diameter smaller and larger size than the Kolmogorov scale. In the case of the small particles, the model is assessed by comparison with direct numerical simulation (DNS). Results showed that by introducing this modeling, the particle acceleration statistics from DNS is predicted fairly well, in contrast with the standard LES approach. For the particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and dependence of its variance consistent with experiments.
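Schematically, and consistent with the description above, the decomposition can be written as a resolved steady-drag term plus a residual stochastic acceleration; for a small heavy particle the response time is the usual Stokes time. The residual term a' is the part the paper models via a log-normal process for the dissipation rate along the particle path together with a random-walk orientation; the exact closure is not reproduced here.

```latex
% Resolved drag plus residual (stochastic) acceleration for a particle at x_p with velocity v_p:
\frac{d\mathbf v_p}{dt} \;=\;
  \frac{\tilde{\mathbf u}(\mathbf x_p,t) - \mathbf v_p}{\tau_p} \;+\; \mathbf a',
\qquad
\tau_p \;=\; \frac{\rho_p\, d_p^2}{18\,\rho_f\,\nu}
\quad\text{(Stokes response time for } d_p \lesssim \eta\text{)}
```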
Genetic drift at expanding frontiers promotes gene segregation
Hallatschek, Oskar; Hersen, Pascal; Ramanathan, Sharad; Nelson, David R.
2007-01-01
Competition between random genetic drift and natural selection plays a central role in evolution: whereas nonbeneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions. PMID:18056799
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.
2008-12-01
NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System
NASA Astrophysics Data System (ADS)
Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.
2009-04-01
NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). 
The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
Hele-Shaw scaling properties of low-contrast Saffman-Taylor flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiFrancesco, M. W.; Maher, J. V.
1989-07-01
We have measured variations of Saffman-Taylor flows by changing dimensionless surface tension B alone and by changing B in conjunction with changes in dimensionless viscosity contrast A. Our low-aspect-ratio cell permits close study of the linear- and early nonlinear-flow regimes. Our critical binary-liquid sample allows study of very low values of A. The predictions of linear stability analysis work well for predicting which length scales are important, but discrepancies are observed for growth rates. We observe an empirical scaling law for growth of the Fourier modes of the patterns in the linear regime. The observed front propagation velocity for side-wall disturbances is constantly 2±1 in dimensionless units, a value consistent with the predictions of Langer and of van Saarloos. Patterns in both the linear and nonlinear regimes collapse impressively under the scaling suggested by the Hele-Shaw equations. Violations of scaling due to wetting phenomena are not evident here, presumably because the wetting properties of the two phases of the critical binary liquid are so similar; thus direct comparison with large-scale Hele-Shaw simulations should be meaningful.
Spectral saliency via automatic adaptive amplitude spectrum analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan
2016-03-01
Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect the visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also reserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance of quantitative and qualitative comparisons is evaluated by three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
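A minimal sketch of the core frequency-domain idea described above is given below: the amplitude spectrum is smoothed at a single scale while the phase is kept, which suppresses repeated (non-salient) patterns. The paper's contribution, automatic scale selection and adaptive weighted combination of the resulting maps, is not reproduced here; the parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(image, sigma_freq=3.0, sigma_space=4.0):
    """image: 2-D float array (grayscale). Returns a saliency map at one filter scale."""
    F = np.fft.fft2(image)
    amplitude, phase = np.abs(F), np.angle(F)
    # Smooth the amplitude spectrum to suppress spiky (repetitive, non-salient) components
    smoothed = gaussian_filter(amplitude, sigma_freq)
    # Keep the phase, rebuild the spectrum, return to the spatial domain
    saliency = np.abs(np.fft.ifft2(smoothed * np.exp(1j * phase))) ** 2
    return gaussian_filter(saliency, sigma_space)      # mild spatial smoothing for display
```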
NASA Astrophysics Data System (ADS)
Knippling, K.; Nava, O.; Emmons, D. J., II; Dao, E. V.
2017-12-01
Geolocation techniques are used to track the source of uncooperative high-frequency emitters. Traveling ionospheric disturbances (TIDs) make geolocation particularly difficult due to large perturbations in the local ionospheric electron density profiles. Angle of arrival (AoA) and ionosonde virtual height measurements collected at White Sands Missile Range, New Mexico, in January 2014 are analyzed during a medium-scale TID (MSTID). MSTID characteristics are extracted from the measurements, and a comparison between the data sets is performed, providing a measure of the correlation as a function of distance between the ionosonde and AoA circuit midpoints. The results of this study may advance real-time geolocation techniques through the implementation of a time-varying mirror model height.
An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics
NASA Technical Reports Server (NTRS)
Baluja, Shumeet
1995-01-01
This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.
Precision Photometry and Astrometry from Pan-STARRS
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Pan-STARRS Team
2018-01-01
The Pan-STARRS 3pi Survey has been calibrated with excellent precision for both astrometry and photometry. The Pan-STARRS Data Release 1, opened to the public on 2016 Dec 16, provides photometry in 5 well-calibrated, well-defined bandpasses (grizy) astrometrically registered to the Gaia frame. Comparisons with other surveys illustrate the high quality of the calibration and provide tests of remaining systematic errors in both Pan-STARRS and those external surveys. With photometry and astrometry of roughly 3 billion astronomical objects, the Pan-STARRS DR1 has substantial overlap with Gaia, SDSS, 2MASS and other surveys. I will discuss the astrometric tie between Pan-STARRS DR1 and Gaia and show comparisons between Pan-STARRS and other large-scale surveys.
Pleistocene Lake Bonneville as an analog for extraterrestrial lakes and oceans: Chapter 21
Chan, M.A.; Jewell, P.; Parker, T.J.; Ormo, J.; Okubo, Chris; Komatsu, G.
2016-01-01
Geomorphic confirmation for a putative ancient Mars ocean relies on analog comparisons of coastal-like features such as shoreline feature attributes and temporal scales of process formation. Pleistocene Lake Bonneville is one of the few large, geologically young, terrestrial lake systems that exemplify well-preserved shoreline characteristics that formed quickly, on the order of a thousand years or less. Studies of Lake Bonneville provide two essential analog considerations for interpreting shorelines on Mars: (1) morphological variations in expression depend on constructional vs erosional processes, and (2) shorelines are not always correlative at an equipotential elevation across a basin due to isostasy, heat flow, wave setup, fetch, and other factors. Although other large terrestrial lake systems display supporting evidence for geomorphic comparisons, Lake Bonneville encompasses the most integrated examples of preserved coastal features related to basin history, sediment supply, climate, and fetch, all within the context of a detailed hydrograph. These collective terrestrial lessons provide a framework to evaluate possible boundary conditions for ancient Mars hydrology and large water body environmental feedbacks. This knowledge of shoreline characteristics, processes, and environments can support explorations of habitable environments and guide future mission explorations.
Butler, J B; Vaillancourt, R E; Potts, B M; Lee, D J; King, G J; Baten, A; Shepherd, M; Freeman, J S
2017-05-22
Previous studies suggest genome structure is largely conserved between Eucalyptus species. However, it is unknown if this conservation extends to more divergent eucalypt taxa. We performed comparative genomics between the eucalypt genera Eucalyptus and Corymbia. Our results will facilitate transfer of genomic information between these important taxa and provide further insights into the rate of structural change in tree genomes. We constructed three high density linkage maps for two Corymbia species (Corymbia citriodora subsp. variegata and Corymbia torelliana) which were used to compare genome structure between both species and Eucalyptus grandis. Genome structure was highly conserved between the Corymbia species. However, the comparison of Corymbia and E. grandis suggests large (from 1-13 MB) intra-chromosomal rearrangements have occurred on seven of the 11 chromosomes. Most rearrangements were supported through comparisons of the three independent Corymbia maps to the E. grandis genome sequence, and to other independently constructed Eucalyptus linkage maps. These are the first large scale chromosomal rearrangements discovered between eucalypts. Nonetheless, in the general context of plants, the genomic structure of the two genera was remarkably conserved; adding to a growing body of evidence that conservation of genome structure is common amongst woody angiosperms.
On the distribution of local dissipation scales in turbulent flows
NASA Astrophysics Data System (ADS)
May, Ian; Morshed, Khandakar; Venayagamoorthy, Karan; Dasi, Lakshmi
2014-11-01
Universality of dissipation scales in turbulence relies on self-similar scaling and large scale independence. We show that the probability density function of dissipation scales, Q (η) , is analytically defined by the two-point correlation function, and the Reynolds number (Re). We also present a new analytical form for the two-point correlation function for the dissipation scales through a generalized definition of a directional Taylor microscale. Comparison of Q (η) predicted within this framework and published DNS data shows excellent agreement. It is shown that for finite Re no single similarity law exists even for the case of homogeneous isotropic turbulence. Instead a family of scaling is presented, defined by Re and a dimensionless local inhomogeneity parameter based on the spatial gradient of the rms velocity. For moderate Re inhomogeneous flows, we note a strong directional dependence of Q (η) dictated by the principal Reynolds stresses. It is shown that the mode of the distribution Q (η) significantly shifts to sub-Kolmogorov scales along the inhomogeneous directions, as in wall bounded turbulence. This work extends the classical Kolmogorov's theory to finite Re homogeneous isotropic turbulence as well as the case of inhomogeneous anisotropic turbulence.
Profitability and sustainability of small - medium scale palm biodiesel plant
NASA Astrophysics Data System (ADS)
Solikhah, Maharani Dewi; Kismanto, Agus; Raksodewanto, Agus; Peryoga, Yoga
2017-06-01
Mandatory biodiesel blending at 20% (B20) has been in force since January 2016, creating a huge market for the biodiesel industry. Building a large-scale biodiesel plant (>100,000 tons/year) is most favorable for producers because it yields lower production costs, which poses a challenge for small- to medium-scale biodiesel plants. However, current biodiesel plants in Indonesia are located mainly in Java and Sumatra and distribute biodiesel across the country, incurring additional transportation costs from area to area; this gives small- to medium-scale plants an opportunity to compete with large ones. This paper discusses the profitability of small- to medium-scale biodiesel plants with a capacity of 50 tons/day using CPO and its derivatives. The study performs an economic analysis of plant scenarios using stearin, PFAD, or multiple feedstocks as raw material, and also compares the feasibility of the scenarios with respect to transportation cost and selling price. The economic assessment shows that profitability is highly sensitive to raw material price, so securing the raw material supply and considering multi-feedstock operation are important for small- to medium-scale plants to remain sustainable. It was concluded that small- to medium-scale biodiesel plants will be profitable and sustainable if they are connected to a palm oil mill, have a captive market, and are located at least 200 km from other biodiesel plants. The use of multiple feedstocks could increase the IRR from 18.68% to 56.52%.
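As a rough illustration of the discounted cash-flow arithmetic behind IRR figures like those quoted above, the sketch below computes an internal rate of return by bisection; the plant cost and revenue figures are hypothetical placeholders, not values from the study.

```python
# Illustrative only: minimal IRR calculation for a hypothetical small-scale
# biodiesel plant cash-flow series (placeholder numbers, not from the study).

def npv(rate, cash_flows):
    """Net present value of yearly cash flows, with cash_flows[0] at year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection; assumes NPV changes sign on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid                      # root lies in the lower half
        else:
            lo = mid                      # root lies in the upper half
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical 50 t/day plant: initial investment followed by annual net revenues.
flows = [-5_000_000] + [1_200_000] * 10
print(f"IRR ~ {irr(flows):.1%}")          # roughly 20% for these placeholder flows
```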
Ozawa, Sachiko; Grewal, Simrun; Bridges, John F P
2016-04-01
Community-based health insurance (CBHI) schemes have been introduced in low- and middle-income countries to increase health service utilization and provide financial protection from high healthcare expenditures. We assess the impact of household size on decisions to enroll in CBHI and demonstrate how to correct for group disparity in scale (i.e. variance differences). A discrete choice experiment was conducted across five CBHI attributes. Preferences were elicited through forced-choice paired comparison choice tasks designed based on D-efficiency. Differences in preferences were examined between small (1-4 family members) and large (5-12 members) households using conditional logistic regression. Swait and Louviere test was used to identify and correct for differences in scale. One-hundred and sixty households were surveyed in Northwest Cambodia. Increased insurance premium was associated with disutility [odds ratio (OR) 0.61, p < 0.01], while significant increase in utility was noted for higher hospital fee coverage (OR 10.58, p < 0.01), greater coverage of travel and meal costs (OR 4.08, p < 0.01), and more frequent communication with the insurer (OR 1.33, p < 0.01). While the magnitude of preference for hospital fee coverage appeared larger for the large household group (OR 14.15) compared to the small household group (OR 8.58), differences in scale were observed (p < 0.05). After adjusting for scale (k, ratio of scale between large to small household groups = 1.227, 95 % confidence interval 1.002-1.515), preference differences by household size became negligible. Differences in stated preferences may be due to scale, or variance differences between groups, rather than true variations in preference. Coverage of hospital fees, travel and meal costs are given significant weight in CBHI enrollment decisions regardless of household size. Understanding how community members make decisions about health insurance can inform low- and middle-income countries' paths towards universal health coverage.
Gut Microbiota Dynamics during Dietary Shift in Eastern African Cichlid Fishes
Baldo, Laura; Riera, Joan Lluís; Tooming-Klunderud, Ave; Albà, M. Mar; Salzburger, Walter
2015-01-01
The gut microbiota structure reflects both the host's phylogenetic history and a signature of adaptation to the host's ecological, mainly trophic, niches. African cichlid fishes, with their array of closely related species that underwent a rapid dietary niche radiation, offer a particularly interesting system to explore the relative contribution of these two factors in nature. Here we surveyed the intra- and interspecific natural variation of the gut microbiota of five cichlid species from the monophyletic tribe Perissodini of Lake Tanganyika, whose members transitioned from being zooplanktivorous to feeding primarily on fish scales. The outgroup riverine species Astatotilapia burtoni, largely omnivorous, was also included in the study. Fusobacteria, Firmicutes and Proteobacteria represented the dominant components of the gut microbiota in all 30 specimens analysed, according to two distinct 16S rRNA markers. All members of the Perissodini tribe showed a homogeneous pattern of microbial alpha and beta diversity, with no significant qualitative differences, despite changes in diet. The recent dietary shift from zooplankton- to scale-eating is reflected simply in a significant enrichment of Clostridium taxa in scale-eaters, where these bacteria might be involved in scale metabolism. Comparison with the omnivorous species A. burtoni suggests that, with increased host phylogenetic distance and/or increasing herbivory, the gut microbiota also begins to differentiate at the qualitative level. The cichlids show a large conserved core of taxa and a small set of core OTUs (on average 13–15%), remarkably stable even in captivity, and putatively favoured both by restricted microbial transmission among related hosts (putatively enhanced by mouthbrooding behavior) and by common host constraints. This study sets the basis for a future large-scale investigation of the gut microbiota of cichlids and its adaptation during the host adaptive radiation. PMID:25978452
Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett
2016-08-01
The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to studying flow, but it requires cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas of 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower-SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates with true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, may in future improve CFD coronary flow simulations by providing sophisticated boundary conditions, and enables investigations of stenosis phantoms.
Modeling of Nonlinear Optical Response in Gaseous Media and Its Comparison with Experiment
NASA Astrophysics Data System (ADS)
Xia, Yi
This thesis demonstrates the modeling and application of the nonlinear optical response with the Metastable Electronic State Approach (MESA) in ultrashort laser propagation and verifies the accuracy of MESA through extensive comparison with experimental data. MESA is derived from quantum mechanics to describe the nonlinear off-resonant optical response together with strong-field ionization in a gaseous medium. Conventional light-matter interaction models are based on a piecewise approach in which the Kerr effect and multi-photon ionization are treated as independent nonlinear responses. In contrast, MESA is self-consistent because the responses of freed and bound electrons are microscopically linked. It can also be easily coupled to the Unidirectional Pulse Propagation Equations (UPPE) for large-scale simulation of experiments. This work tests the implementation of the MESA model in simulations of nonlinear phase transients of ultrashort pulse propagation in a gaseous medium. The phase transient has been measured through Single-Shot Supercontinuum Spectral Interferometry, a technique that achieves high temporal resolution (10 fs) and spatial resolution (5 μm). Our comparison between simulation and experiment provides a quantitative test of the MESA model including post-adiabatic corrections. This is the first time such a comparison has been achieved for a theory suitable for large-scale numerical simulation of modern nonlinear-optics experiments. In more than one respect, ours is a first-of-a-kind achievement. In particular: • A large amount of data is compared. We compare the nonlinear response induced by different pump intensities in Ar and nitrogen. The data sets are three-dimensional, comprising two transverse spatial dimensions and one axial temporal dimension, and reflect the whole structure of the nonlinear response, including the interplay between Kerr and plasma-induced effects. The spatial and temporal resolutions are a few micrometers and several femtoseconds, respectively. • The regime of light-matter interaction investigated here lies between the strong-field and perturbative regimes, where the pulse intensity can induce a nonlinear refractive index change and partial ionization of the dielectric medium. Such regimes are difficult to study both experimentally and theoretically. • MESA is a quantum-based model, yet it retains the same computational complexity as conventional light-matter interaction models, containing the responses of both bound and continuum states in a single self-consistent "package". It is therefore fair to say that this experiment-theory comparison sets a new standard for nonlinear light-matter interaction models and their verification in the area of extreme nonlinear optics.
NASA Astrophysics Data System (ADS)
Draper, Martin; Usera, Gabriel
2015-04-01
The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for this case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of neutrally stratified atmospheric boundary layers over heterogeneous terrain". Water Resources Research, 2006, 42, W01409 (18p). [4] J. Kleissl, M. Parlange, C. Meneveau. "Field experimental study of dynamic Smagorinsky models in the atmospheric surface layer". Journal of the Atmospheric Sciences, 2004, 61, 2296-2307. [5] E. Bou-Zeid, N. Vercauteren, M.B. Parlange, C. Meneveau. "Scale dependence of subgrid-scale model coefficients: An a priori study". Physics of Fluids, 2008, 20, 115106. [6] G. Kirkil, J. Mirocha, E. Bou-Zeid, F.K. Chow, B. Kosovic. "Implementation and evaluation of dynamic subfilter-scale stress models for large-eddy simulation using WRF". Monthly Weather Review, 2012, 140, 266-284. [7] S. Radhakrishnan, U. Piomelli. "Large-eddy simulation of oscillating boundary layers: model comparison and validation". Journal of Geophysical Research, 2008, 113, C02022. [8] G. Usera, A. Vernet, J.A. Ferré. "A parallel block-structured finite volume method for flows in complex geometry with sliding interfaces". Flow, Turbulence and Combustion, 2008, 81, 471-495. [9] Y.-T. Wu, F. Porté-Agel. "Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations". Boundary-Layer Meteorology, 2011, 138, 345-366.
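For context on the models compared in this abstract, the relations below sketch the standard Smagorinsky closure and the scale-dependence parameter that the dynamic procedures estimate; the notation is the conventional one from the LES literature, not copied from the cited papers.

```latex
% Conventional LES notation; a sketch for context, not equations taken from the cited papers.
\begin{equation}
  \tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk}
    = -2\,(C_{s,\Delta}\Delta)^{2}\,|\bar{S}|\,\bar{S}_{ij},
  \qquad
  \beta = \frac{C_{s,2\Delta}^{2}}{C_{s,\Delta}^{2}}.
\end{equation}
```

Under this reading, the standard SM prescribes a constant C_s, the SIDM determines C_{s,Δ} dynamically while assuming β = 1 (scale invariance), and the SDDM also estimates β, the scale-dependence parameter whose behavior the abstract discusses.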
Development of large-scale functional brain networks in children.
Supekar, Kaustubh; Musen, Mark; Menon, Vinod
2009-07-01
The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.
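As an illustration of the graph-theoretic measures named in the abstract above (path length, clustering coefficient, regional connectivity), the sketch below computes them with networkx on a small synthetic graph standing in for a thresholded functional-connectivity network; the graph and its parameters are placeholders, not the study's data.

```python
# A minimal sketch (not the authors' pipeline) of the network metrics named above.
import networkx as nx

# Placeholder "brain" graph: 90 nodes, small-world wiring; guaranteed connected.
G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)

path_length = nx.average_shortest_path_length(G)   # characteristic path length
clustering = nx.average_clustering(G)              # mean clustering coefficient
degree = dict(G.degree())                          # crude proxy for regional connectivity

print(f"L = {path_length:.2f}, C = {clustering:.2f}, "
      f"max degree = {max(degree.values())}")
```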
Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.
2002-01-01
Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.
Load Balancing Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pearce, Olga Tkachyshyn
2014-12-01
The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
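As a toy illustration of the work-redistribution idea described in this abstract (not the dissertation's own algorithms), the sketch below assigns weighted work items to processors with a greedy longest-processing-time-first heuristic.

```python
# Greedy LPT load balancing: heaviest items go to the currently least-loaded processor.
import heapq

def balance(work_items, n_procs):
    """Return (assignment per processor, resulting makespan) for weighted items."""
    loads = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(loads)
    assignment = {p: [] for p in range(n_procs)}
    for item in sorted(work_items, reverse=True):   # largest items first
        load, p = heapq.heappop(loads)              # least-loaded processor
        assignment[p].append(item)
        heapq.heappush(loads, (load + item, p))
    return assignment, max(load for load, _ in loads)

items = [8.0, 7.5, 6.0, 5.5, 3.0, 2.5, 2.0, 1.0]
assignment, makespan = balance(items, 3)
print(assignment, makespan)
```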
SOLAR SYSTEM MOONS AS ANALOGS FOR COMPACT EXOPLANETARY SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, Stephen R.; Hinkel, Natalie R.; Raymond, Sean N., E-mail: skane@ipac.caltech.edu
2013-11-01
The field of exoplanetary science has experienced a recent surge of new systems that is largely due to the precision photometry provided by the Kepler mission. The latest discoveries have included compact planetary systems in which the orbits of the planets all lie relatively close to the host star, which presents interesting challenges in terms of formation and dynamical evolution. The compact exoplanetary systems are analogous to the moons orbiting the giant planets in our solar system, in terms of their relative sizes and semimajor axes. We present a study that quantifies the scaled sizes and separations of the solar system moons with respect to their hosts. We perform a similar study for a large sample of confirmed Kepler planets in multi-planet systems. We show that a comparison between the two samples leads to a similar correlation between their scaled sizes and separation distributions. The different gradients of the correlations may be indicative of differences in the formation and/or long-term dynamics of moon and planetary systems.
Unsteady density and velocity measurements in the 6 foot x 6 foot wind tunnel
NASA Technical Reports Server (NTRS)
Rose, W. C.; Johnson, D. A.
1980-01-01
The methods used and the results obtained in four aero-optic tests are summarized. It is concluded that the rather large values of density fluctuation appear to be the result of a Mach number much higher than that of the freestream and of the violent turbulence in the flow as it separates from the turret. A representative comparison of fairing-on versus fairing-off rms density fluctuation indicates essentially no effect at M = 0.62 and a small effect at M = 0.95. These data indicate that some slight improvement in optical quality can be expected with the addition of a fairing, although at M = 0.62 its effect would be nil. Fairings are very useful in controlling pressure loads on turrets, but will not have first-order effects on optical quality. Scale sizes increase dramatically with increasing azimuth angle for a representative condition. Since both scale sizes and fluctuation levels increase (total turbulence path length also increases) with azimuth angle, substantial optical degradation might be expected. For shorter wavelengths, large degradations occur.
Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence
NASA Astrophysics Data System (ADS)
Johnson, Perry; Meneveau, Charles
2017-11-01
The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
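For context, the Förster relation that underlies FRET-efficiency calculations of the kind described above is written below in standard notation; this is a sketch rather than the authors' exact estimator, with r the instantaneous inter-dye distance, R_0 the Förster radius, and the average taken over the N frames of a simulated trajectory.

```latex
% Standard Förster relation; a sketch in conventional notation, not the paper's exact estimator.
\begin{equation}
  E(r) = \frac{1}{1 + \left(r/R_0\right)^{6}},
  \qquad
  \langle E \rangle = \frac{1}{N}\sum_{i=1}^{N} E(r_i).
\end{equation}
```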
NASA Astrophysics Data System (ADS)
Pochanart, Pakpong; Akimoto, Hajime; Maksyutov, Shamil; Staehelin, Johannes
An innovative and effective method using isentropic trajectory analysis based on the residence time of air masses over the polluted region of Europe was successfully applied to categorize surface ozone amounts at Arosa, Switzerland during 1996-1997. The "European representative" background ozone seasonal cycle at Arosa is associated with long-range transport of North Atlantic air masses, and displays the spring maximum-summer minimum with an annual average of 35 ppb. The photochemical ozone production due to the intense large-scale anthropogenic emission over Europe is estimated as high as 20 ppb in summer, whereas it is insignificant in winter. European sources contribute an annual net ozone production of 9-12 ppb at Arosa. Comparison with the selected regional representative site in Western Europe shows similar results indicating that the categorized ozone data at Arosa by this technique could be regarded as a representative for northern hemispheric mid-latitudes.
NASA Technical Reports Server (NTRS)
Pandey, P. C.
1982-01-01
Eight subsets using two to five frequencies of the SEASAT scanning multichannel microwave radiometer are examined to determine their potential in the retrieval of atmospheric water vapor content. Analysis indicates that the information in the 18 and 21 GHz channels is optimal for water vapor retrieval. A comparison with radiosonde observations gave an rms accuracy of approximately 0.40 g/cm². The rms accuracy of precipitable water using different subsets was within 10 percent. Global maps of precipitable water over oceans using two- and five-channel retrievals (the average of the two- and five-channel retrievals) are given. Study of these maps reveals global moisture distributions associated with oceanic currents and the large-scale general circulation of the atmosphere. A stable feature of the large-scale circulation is noted. The precipitable water is maximum over the Bay of Bengal and in the North Pacific over the Kuroshio Current, and shows a general latitudinal pattern.
Monte Carlo modelling of large scale NORM sources using MCNP.
Wallace, J D
2013-12-01
The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine are investigated to guide decisions relating to representative geometries. In addition, the impact of source to detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP based model, indicate a soil cylinder of greater than 20 m diameter and of no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, are needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicate that smaller source sizes may be used to achieve a representative response at shorter source to detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
The analysis of MAI in large scale MIMO-CDMA system
NASA Astrophysics Data System (ADS)
Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona
2016-12-01
Recently, technological development has driven rapid growth in the volume of data carried by cellular services, which in turn demands higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained with this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission, to take advantage of a wider range of quality indicators. In the simulations we increased the number of antennas, observed how the performance of the system changed and, based on that, compared a conventional MIMO and a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received image.
Ávila, Sérgio P; Cordeiro, Ricardo; Madeira, Patrícia; Silva, Luís; Medeiros, António; Rebelo, Ana C; Melo, Carlos; Neto, Ana I; Haroun, Ricardo; Monteiro, António; Rijsdijk, Kenneth; Johnson, Markes E
2018-01-01
Past climate changes provide important clues for advancement of studies on current global change biology. We have tested large-scale biogeographic patterns through four marine groups from twelve Atlantic Ocean archipelagos and searched for patterns between species richness/endemism and littoral area, age, isolation, latitude and mean annual sea-surface temperatures. Species richness is strongly correlated with littoral area. Two reinforcing effects take place during glacial episodes: i) species richness is expected to decrease (in comparison with interglacial periods) due to the local disappearance of sandy/muddy-associated species; ii) because littoral area is minimal during glacial episodes, area per se induces a decrease on species richness (by extirpation/extinction of marine species) as well as affecting speciation rates. Maximum speciation rates are expected to occur during the interglacial periods, whereas immigration rates are expected to be higher at the LGM. Finally, sea-level changes are a paramount factor influencing marine biodiversity of animals and plants living on oceanic islands. Copyright © 2017 Elsevier Ltd. All rights reserved.
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. We investigate, through a comparison of N-body simulations with various extensions of perturbation theory beyond the linear regime, the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial at scales as large as k > 0.07 h Mpc⁻¹. This bias propagates to estimates of the gravitational growth index as well as Ω_m and the equation-of-state parameter, and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.
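For reference, the linear Kaiser relation mentioned above takes the standard form below (a sketch in conventional notation; the paper's exact parameterization may differ), with b the galaxy bias, f the linear growth rate, μ the cosine of the angle to the line of sight, and γ the gravitational growth index.

```latex
% Linear Kaiser redshift-space power spectrum and the growth-index parameterization.
\begin{equation}
  P_s(k,\mu) = \bigl(b + f\mu^{2}\bigr)^{2} P_m(k),
  \qquad
  f(z) \simeq \Omega_m(z)^{\gamma}.
\end{equation}
```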
Large-scale 3D modeling of projectile impact damage in brittle plates
NASA Astrophysics Data System (ADS)
Seagraves, A.; Radovitzky, R.
2015-10-01
The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.
Abdul Wahab, Muhammad Azmi; Fromont, Jane; Gomez, Oliver; Fisher, Rebecca; Jones, Ross
2017-09-15
Changes in turbidity, sedimentation and light over a two-year, large-scale capital dredging program at Onslow, northwestern Australia, were quantified to assess their effects on filter feeder communities, in particular sponges. Community functional morphological composition was quantified using towed video surveys, while dive surveys allowed for assessments of species composition and chlorophyll content. Onslow is relatively diverse, recording 150 sponge species. The area was naturally turbid (mean P80 of 1.1 NTU), with inshore sites recording 6.5× higher turbidity than offshore localities, likely influenced by the Ashburton River discharge. Turbidity and sedimentation increased by up to 146% and 240%, respectively, through dredging, with corresponding decreases in light levels. The effects of dredging were variable, and despite existing caveats (i.e., a bleaching event and the passing of a cyclone), the persistence of sponges and the absence of a pronounced response post-dredging suggest environmental filtering or passive adaptation acquired pre-dredging may have benefited these communities. Copyright © 2017. Published by Elsevier Ltd.
A Comparison of MMPI--2 measures of Psychopathic Deviance in a Forensic Setting
ERIC Educational Resources Information Center
Sellbom, Martin; Ben-Porath, Yossef S.; Stafford, Kathleen P.
2007-01-01
We examined the convergent and discriminant validity of the Minnesota Multiphasic Personality Inventory--2 (MMPI--2) measures of psychopathy, including the Clinical Scale 4, Restructured Clinical Scale 4 (RC4), Content Scale Antisocial Practices (ASP), and Personality Psychopathology Five Scale Disconstraint (DISC). Comparisons of the empirical…
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Foster, Lucas E.
1994-01-01
A small-scale laboratory magnetic suspension system, the Large Angle Magnetic Suspension Test Fixture (LAMSTF), has been constructed at NASA Langley Research Center. This paper first presents some recent developments in the mathematical modelling of the system, particularly in the area of eddy current effects. It is shown that these effects are significant, but may be amenable to modelling and measurement. Next, a theoretical framework is presented, together with a comparison of computed and experimental data. Finally, some control aspects are discussed, together with an illustration that the major design objective of LAMSTF, a controlled 360 deg rotation about the vertical axis, has been accomplished.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Dacheng; Department of Aeronautics, Fujian Key Laboratory for Plasma and Magnetic Resonance, School of Physics and Mechanical and Electrical Engineering, Xiamen University, Xiamen, Fujian 361005; Zhao Di
2011-04-18
This letter reports a stable air surface barrier discharge device for large-area sterilization applications at room temperature. This design may result in visually uniform plasmas with the electrode area scaled up (or down) to the required size. A comparison of the survival rates of Escherichia coli from air, N₂ and O₂ surface barrier discharge plasmas is presented, and the air surface plasma, consisting of strong filamentary discharges, can efficiently kill Escherichia coli. Optical emission measurements indicate that reactive species such as O and OH generated in the room temperature air plasmas play a significant role in the sterilization process.
Large-eddy simulation of a backward facing step flow using a least-squares spectral element method
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Mittal, Rajat
1996-01-01
We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. Smagorinsky model with Van Driest near wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method on numerical parameters before it is applied to complex engineering problems.
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the –server parameter de/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used that allows mutual comparison of all the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
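As a concrete illustration of one of the MCDM methods named above, the sketch below implements a minimal TOPSIS ranking; the decision matrix, weights, and benefit/cost flags are illustrative placeholders rather than the study's test scenario.

```python
# Minimal TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation per criterion
    v = m * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)          # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)           # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness; higher is better

# Placeholder decision problem: 3 alternatives, criteria = [cost, memory, speed].
X = np.array([[250., 16., 12.], [200., 16., 8.], [300., 32., 16.]])
scores = topsis(X, weights=np.array([0.4, 0.3, 0.3]),
                benefit=np.array([False, True, True]))
print(scores.argsort()[::-1])   # alternatives ranked from best to worst
```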
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme that treats the small and large eddies differently was proposed. Using mathematical developments, estimates of all the parameters involved in the algorithm were derived, making it a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
Schott, Benjamin; Traub, Manuel; Schlagenhauf, Cornelia; Takamiya, Masanari; Antritter, Thomas; Bartschat, Andreas; Löffler, Katharina; Blessing, Denis; Otte, Jens C; Kobitski, Andrei Y; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf; Stegmaier, Johannes
2018-04-01
State-of-the-art light-sheet and confocal microscopes allow recording of entire embryos in 3D and over time (3D+t) for many hours. Fluorescently labeled structures can be segmented and tracked automatically in these terabyte-scale 3D+t images, resulting in thousands of cell migration trajectories that provide detailed insights to large-scale tissue reorganization at the cellular level. Here we present EmbryoMiner, a new interactive open-source framework suitable for in-depth analyses and comparisons of entire embryos, including an extensive set of trajectory features. Starting at the whole-embryo level, the framework can be used to iteratively focus on a region of interest within the embryo, to investigate and test specific trajectory-based hypotheses and to extract quantitative features from the isolated trajectories. Thus, the new framework provides a valuable new way to quantitatively compare corresponding anatomical regions in different embryos that were manually selected based on biological prior knowledge. As a proof of concept, we analyzed 3D+t light-sheet microscopy images of zebrafish embryos, showcasing potential user applications that can be performed using the new framework.
NASA Technical Reports Server (NTRS)
Beacom, John Francis; Dominik, Kurt G.; Melott, Adrian L.; Perkins, Sam P.; Shandarin, Sergei F.
1991-01-01
Results are presented from a series of gravitational clustering simulations in two dimensions. These simulations are a significant departure from previous work, since in two dimensions one can have large dynamic range in both length scale and mass using present computer technology. Controlled experiments were conducted by varying the slope of power-law initial density fluctuation spectra and varying cutoffs at large k, while holding constant the phases of individual Fourier components and the scale of nonlinearity. Filaments are found in many different simulations, even with pure power-law initial conditions. By direct comparison, filaments, called 'second-generation pancakes' are shown to arise as a consequence of mild nonlinearity on scales much larger than the correlation length and are not relics of an initial lattice or due to sparse sampling of the Fourier components. Bumps of low amplitude in the two-point correlation are found to be generic but usually only statistical fluctuations. Power spectra are much easier to relate to initial conditions, and seem to follow a simple triangular shape (on log-log plot) in the nonlinear regime. The rms density fluctuation with Gaussian smoothing is the most stable indicator of nonlinearity.
A comparison of VLSI architecture of finite field multipliers using dual, normal or standard basis
NASA Technical Reports Server (NTRS)
Hsu, I. S.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Reed, I. S.
1987-01-01
Three different finite field multipliers are presented: (1) a dual basis multiplier due to Berlekamp; (2) a Massey-Omura normal basis multiplier; and (3) the Scott-Tavares-Peppard standard basis multiplier. These algorithms are chosen because each has its own distinct features that make it most suitable in different areas. Finally, they are implemented on silicon chips with nitride metal oxide semiconductor technology so that the multiplier most desirable for very-large-scale integration (VLSI) implementations can readily be ascertained.
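As a software analogue of the standard-basis arithmetic such circuits implement (a sketch, not the hardware designs themselves), the snippet below multiplies two elements of GF(2^m) by shift-and-add polynomial multiplication with modular reduction; the AES field GF(2^8) and its reduction polynomial are used purely as a familiar example.

```python
# Standard-basis multiplication in GF(2^m): polynomial product reduced modulo
# an irreducible polynomial (software sketch of the arithmetic, not the VLSI design).
def gf_mul(a, b, m=8, irreducible=0x11B):
    """Multiply field elements a, b in GF(2^m); 0x11B is the AES polynomial for m=8."""
    result = 0
    for _ in range(m):
        if b & 1:                 # add (XOR) the current shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):          # reduce when degree reaches m
            a ^= irreducible
    return result

print(hex(gf_mul(0x57, 0x83)))    # 0xc1, the classic worked example in AES's GF(2^8)
```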
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Ten major studies are included in this report; a separate abstract was prepared for each. In addition, there are 4 appendices related to reliability, namely: (A) Load and Generation Uncertainty, by Norton Savage, DOE; (B) Comparison of Service-Interruption Cost Studies, by Dr. Gay Lamb, DOE; (C) Impact of Large-Scale Fuel-Supply Disruptions on Regional Electric-Power Reliability, by Anthony J. Como and Mark Gielecki, DOE; and (D) Reliability Effects of the 1980 Florida Conservation Act, by William E. Scott and Thomas R. Hitz, Jr., DOE. (LCL)
2013-04-29
...during the field campaign. Westerlies were observed in both the Northern and Southern Hemispheres in the central and eastern Indian Ocean, with anomalous SSH associated with strong ... For the first time, it is demonstrated that subseasonal SSS variations in the central Indian Ocean can be monitored by Aquarius measurements, based on the comparison ...
Cosmological perturbations in the DGP braneworld: Numeric solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoso, Antonio; Koyama, Kazuya; Silva, Fabio P.
2008-04-15
We solve for the behavior of cosmological perturbations in the Dvali-Gabadadze-Porrati (DGP) braneworld model using a new numerical method. Unlike some other approaches in the literature, our method uses no approximations other than linear theory and is valid on large scales. We examine the behavior of late-universe density perturbations for both the self-accelerating and normal branches of DGP cosmology. Our numerical results can form the basis of a detailed comparison between the DGP model and cosmological observations.
Dynamic Simulation of AN Helium Refrigerator
NASA Astrophysics Data System (ADS)
Deschildre, C.; Barraud, A.; Bonnay, P.; Briend, P.; Girard, A.; Poncet, J. M.; Roussel, P.; Sequeira, S. E.
2008-03-01
A dynamic simulation of an existing large-scale refrigerator has been performed using the software Aspen Hysys®. The model comprises the typical equipment of a cryogenic system: heat exchangers, expanders, helium phase separators and cold compressors. It represents the 400 W @ 1.8 K Test Facility located at CEA Grenoble. This paper describes the model development and shows the possibilities and limitations of the dynamic module of Aspen Hysys®. A comparison between simulation results and experimental data is then presented; a simulation of the cooldown process was also performed.
Stability region maximization by decomposition-aggregation method. [Skylab stability
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Cuk, S. M.
1974-01-01
The aim of this work is to improve the estimates of the stability regions by formulating and solving a proper maximization problem. The solution of the problem provides the best estimate of the maximal value of the structural parameter and at the same time yields the optimum comparison system, which can be used to determine the degree of stability of the Skylab. The analysis procedure is completely computerized, resulting in a flexible and powerful tool for stability considerations of large-scale linear as well as nonlinear systems.
Numerical simulation of cloud and precipitation structure during GALE IOP-2
NASA Technical Reports Server (NTRS)
Robertson, F. R.; Perkey, D. J.; Seablom, M. S.
1988-01-01
A regional scale model, LAMPS (Limited Area Mesoscale Prediction System), is used to investigate cloud and precipitation structure that accompanied a short wave system during a portion of GALE IOP-2. A comparison of satellite imagery and model fields indicates that much of the large mesoscale organization of condensation has been captured by the simulation. In addition to reproducing a realistic phasing of two baroclinic zones associated with a split cold front, a reasonable simulation of the gross mesoscale cloud distribution has been achieved.
Impulsive effects of phase-locked pulse pairs on nuclear motion in the electronic ground state
NASA Astrophysics Data System (ADS)
Cina, J. A.; Smith, T. J.
1993-06-01
The nonlinear effects of ultrashort phase-locked electronically resonant pulse pairs on the ground-state nuclear motion are investigated theoretically. The pulse-pair propagator, momentum impulse, and displacement are determined in the weak-field limit for pulse pairs separated by a time delay short on a nuclear time scale. Possible application to large-amplitude vibrational excitation of the 104 cm⁻¹ mode of α-perylene is considered and comparisons are made to other Raman excitation methods.
Physical models of polarization mode dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menyuk, C.R.; Wai, P.K.A.
The effect of randomly varying birefringence on light propagation in optical fibers is studied theoretically in the parameter regime that will be used for long-distance communications. In this regime, the birefringence is large and varies very rapidly in comparison to the nonlinear and dispersive scale lengths. We determine the polarization mode dispersion, and we show that physically realistic models yield the same result for polarization mode dispersion as earlier heuristic models that were introduced by Poole. We also prove an ergodic theorem.
Pierce, B.S.; Eble, C.F.; Stanton, R.W.
1995-01-01
The proximate, petrographic, palynologic, and plant tissue data from two sets of samples indicate a high-ash, gelocollinite- and liptinite-rich coal consisting of a relatively diverse paleoflora, including lycopsid trees, small lycopsids, tree ferns, small ferns, pteridosperms, and rare calamites and cordaites. The relatively high ash yields, the relatively thin subunits, and the large-scale vertical variations in palynomorph floras suggest that the study area was at the edge of the paleopeat-forming environment. -from Authors
Full-scale results for TAM limestone injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, S.
1996-12-31
Information is outlined on the use of thermally active marble (TAM) sorbents in boilers. Data are presented on: the comparison of TAM to limestone; NOVACON process development history; CFB test history; the CFB pilot-scale test; the full-scale CFB trial; the August 1996 CFB demonstration; Foster Wheeler Mount Carmel sorbent feed rate comparison and Ca:S comparison; unburned carbon in ash; and advantages and savings in CFB boilers.
SETTER: web server for RNA structure comparison
Čech, Petr; Svozil, Daniel; Hoksza, David
2012-01-01
The recent discoveries of regulatory non-coding RNAs changed our view of RNA as a simple information transfer molecule. Understanding the architecture and function of active RNA molecules requires methods for comparing and analyzing their 3D structures. While structural alignment of short RNAs is achievable in a reasonable amount of time, large structures represent a much bigger challenge. Here, we present the SETTER web server for pairwise RNA structure comparison utilizing the SETTER (SEcondary sTructure-based TERtiary Structure Similarity Algorithm) algorithm. The SETTER method divides an RNA structure into a set of non-overlapping structural elements called generalized secondary structure units (GSSUs). The SETTER algorithm scales as O(n²) with the size of a GSSU and as O(n) with the number of GSSUs in the structure. This scaling gives SETTER its high speed, as the average size of the GSSU remains constant irrespective of the size of the structure. However, the favorable speed of the algorithm does not compromise its accuracy. The SETTER web server, together with the stand-alone implementation of the SETTER algorithm, is freely accessible at http://siret.cz/setter. PMID:22693209
Kamarei, Fahimeh; Vajda, Péter; Guiochon, Georges
2013-09-20
This paper compares two methods used for the preparative purification of a mixture of (S)-, and (R)-naproxen on a Whelk-O1 column, using either high performance liquid chromatography or supercritical fluid chromatography. The adsorption properties of both enantiomers were measured by frontal analysis, using methanol-water and methanol-supercritical carbon dioxide mixtures as the mobile phases. The measured adsorption data were modeled, providing the adsorption isotherms and their parameters, which were derived from the nonlinear fit of the isotherm models to the experimental data points. The model used was a Bi-Langmuir isotherm, similar to the model used in many enantiomeric separations. These isotherms were used to calculate the elution profiles of overloaded elution bands, assuming competitive Bi-Langmuir behavior of the two enantiomers. The analysis of these profiles provides the basis for a comparison between supercritical fluid chromatographic and high performance liquid chromatographic preparative scale separations. It permits an illustration of the advantages and disadvantages of these methods and a discussion of their potential performance. Copyright © 2013 Elsevier B.V. All rights reserved.
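For reference, a competitive Bi-Langmuir isotherm of the kind invoked above is sketched below in generic notation (the paper's fitted parameter values are not reproduced); the two terms are commonly interpreted as non-selective and enantioselective adsorption sites.

```latex
% Competitive Bi-Langmuir isotherm for a two-component (enantiomer) system; generic symbols.
\begin{equation}
  q_i(C_1, C_2) =
      \frac{q_{s,1}\, b_{1,i}\, C_i}{1 + b_{1,1} C_1 + b_{1,2} C_2}
    + \frac{q_{s,2}\, b_{2,i}\, C_i}{1 + b_{2,1} C_1 + b_{2,2} C_2},
  \qquad i = 1, 2.
\end{equation}
```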
Uncertainties in ecosystem service maps: a comparison on the European scale.
Schulp, Catharina J E; Burkhard, Benjamin; Maes, Joachim; Van Vliet, Jasper; Verburg, Peter H
2014-01-01
Safeguarding the benefits that ecosystems provide to society is increasingly included as a target in international policies. To support such policies, ecosystem service maps are made. However, there is little attention for the accuracy of these maps. We made a systematic review and quantitative comparison of ecosystem service maps on the European scale to generate insights in the uncertainty of ecosystem service maps and discuss the possibilities for quantitative validation. Maps of climate regulation and recreation were reasonably similar while large uncertainties among maps of erosion protection and flood regulation were observed. Pollination maps had a moderate similarity. Differences among the maps were caused by differences in indicator definition, level of process understanding, mapping aim, data sources and methodology. Absence of suitable observed data on ecosystem services provisioning hampers independent validation of the maps. Consequently, there are, so far, no accurate measures for ecosystem service map quality. Policy makers and other users need to be cautious when applying ecosystem service maps for decision-making. The results illustrate the need for better process understanding and data acquisition to advance ecosystem service mapping, modelling and validation.
NASA Technical Reports Server (NTRS)
Waugh, Darryn W.; Plumb, R. Alan
1994-01-01
We present a trajectory technique, contour advection with surgery (CAS), for tracing the evolution of material contours in a specified (including observed) evolving flow. CAS uses the algorithms developed by Dritschel for contour dynamics/surgery to trace the evolution of specified contours. The contours are represented by a series of particles, which are advected by a specified, gridded wind distribution. The resolution of the contours is preserved by continually adjusting the number of particles, and fine-scale features are produced that are not present in the input data (and cannot easily be generated using standard trajectory techniques). The reliability of the CAS procedure, and its dependence on the spatial and temporal resolution of the wind field, are examined by comparisons with high-resolution numerical data (from contour dynamics calculations and from a general circulation model) and with routine stratospheric analyses. These comparisons show that the large-scale motions dominate the deformation field and that CAS can accurately reproduce small scales from low-resolution wind fields. The CAS technique therefore enables examination of atmospheric tracer transport at previously unattainable resolution.
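A minimal sketch of the contour-advection idea (not Dritschel's contour dynamics/surgery code): a contour is a ring of particles advected through a wind field, and nodes are re-inserted wherever neighbouring particles separate beyond a threshold, so contour resolution is preserved as fine scales develop. The analytic wind field, time stepping, and thresholds below are placeholders; CAS interpolates gridded analysed winds and uses more careful node management.

```python
import numpy as np

def wind(x, y, t):
    """Placeholder analytic wind; in CAS this would be interpolated from gridded analyses."""
    return -y + 0.3 * np.sin(x + 0.5 * t), x

def advect(x, y, t, dt):
    u, v = wind(x, y, t)                      # simple forward-Euler step for brevity
    return x + dt * u, y + dt * v

def renode(x, y, dmax=0.05):
    """Insert midpoints wherever adjacent nodes are farther apart than dmax."""
    xs, ys = [x[0]], [y[0]]
    for i in range(1, len(x)):
        if np.hypot(x[i] - x[i - 1], y[i] - y[i - 1]) > dmax:
            xs.append(0.5 * (x[i] + x[i - 1]))
            ys.append(0.5 * (y[i] + y[i - 1]))
        xs.append(x[i])
        ys.append(y[i])
    return np.array(xs), np.array(ys)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = np.cos(theta), np.sin(theta)           # initial material contour
for step in range(200):
    x, y = advect(x, y, t=step * 0.02, dt=0.02)
    x, y = renode(x, y)
print(len(x), "nodes after advection")        # node count grows as fine scales develop
```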
Yokoyama, V Y; Miller, G T; Hartsell, P L; Leesch, J G
2000-06-01
In total, 30,491 1-d-old codling moth, Cydia pomonella (L.), eggs on May Grand nectarines in two large-scale tests, and 17,410 eggs on Royal Giant nectarines in four on-site confirmatory tests, were controlled with 100% mortality after fumigation with a methyl bromide quarantine treatment (48 g/m3 for 2 h at > or = 21 degrees C and 50% volume chamber load) on fruit in shipping containers for export to Japan. Across all tests, percentage sorption (mean +/- SEM) ranged from 34.7 +/- 6.2 to 46.5 +/- 2.5, and concentration multiplied by time (CT) products ranged from 54.3 +/- 0.9 to 74.5 +/- 0.6 g.h/m3. In large-scale tests with May Grand nectarines, inorganic bromide residues 48 h after fumigation ranged from 6.8 +/- 0.7 to 6.9 +/- 0.5 ppm, below the U.S. Environmental Protection Agency tolerance of 20 ppm; organic bromide residues were < 0.01 ppm after 1 d and < 0.001 ppm after 3 d in storage at 0-1 degree C. After completion of the large-scale and on-site confirmatory test requirements, fumigation of 10 nectarine cultivars in shipping containers for export to Japan was approved in 1995. Comparison of LD50s developed for methyl bromide on 1-d-old codling moth eggs on May Grand and Summer Grand nectarines in 1997 with those developed for nine cultivars over the previous 11 yr showed no significant differences in codling moth response among the cultivars.
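The concentration-time (CT) products quoted above (in g.h/m3) are the time integral of the measured fumigant concentration over the exposure period. A trivial sketch of that calculation with hypothetical chamber readings (the actual monitoring data are not reproduced here):

```python
import numpy as np

hours = np.array([0.0, 0.5, 1.0, 1.5, 2.0])           # sampling times during the 2-h exposure
conc_g_m3 = np.array([48.0, 38.0, 33.0, 30.0, 28.0])  # hypothetical headspace concentrations
ct = np.trapz(conc_g_m3, hours)                        # trapezoidal integration, g.h/m3
print(f"CT product: {ct:.1f} g.h/m3")
```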
Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siegel, Charles M.; Daily, Jeffrey A.; Vishnu, Abhinav
Machine Learning and Data Mining (MLDM) algorithms are becoming ubiquitous in model learning from the large volumes of data generated by simulations, experiments and handheld devices. Deep Learning algorithms -- a class of MLDM algorithms -- are applied for automatic feature extraction and for learning non-linear models in unsupervised and supervised settings. Naturally, several libraries that support large-scale Deep Learning -- such as TensorFlow and Caffe -- have become popular. In this paper, we present novel techniques to accelerate the convergence of Deep Learning algorithms by conducting low-overhead removal of redundant neurons -- apoptosis of neurons -- which do not contribute to model learning, during the training phase itself. We provide in-depth theoretical underpinnings of our heuristics (bounding accuracy loss and handling apoptosis of several neuron types), and present methods to conduct adaptive neuron apoptosis. We implement our proposed heuristics with the recently introduced TensorFlow and its recently proposed MPI extension. Our performance evaluation on two different clusters -- one with Intel Haswell multi-core systems and the other with NVIDIA GPUs, both connected with InfiniBand -- indicates the efficacy of the proposed heuristics and implementations. Specifically, we are able to improve the training time for several datasets by 2-3x, while reducing the number of parameters by up to 30x (4-5x on average) on datasets such as ImageNet classification. For the Higgs Boson dataset, our implementation improves the classification accuracy (measured by Area Under the Curve, AUC) from 0.88 to 0.94, while reducing the number of parameters by 3x in comparison to the existing literature, and achieves a 2.44x speedup in comparison to the default (no apoptosis) algorithm.
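A hedged illustration of the general pruning idea (not the paper's TensorFlow/MPI heuristics): hidden units whose outgoing weights are negligible are removed during training, shrinking the adjacent weight matrices. The threshold rule, matrix shapes and names below are hypothetical.

```python
import numpy as np

def apoptose(W_in, W_out, rel_threshold=0.1):
    """Drop hidden units whose outgoing weight norm is small relative to the layer mean.
    W_in: (n_inputs, n_hidden), W_out: (n_hidden, n_outputs)."""
    importance = np.linalg.norm(W_out, axis=1)              # one score per hidden unit
    keep = importance > rel_threshold * importance.mean()   # prune clearly inactive units
    return W_in[:, keep], W_out[keep, :], keep

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(784, 256)), rng.normal(size=(256, 10))
W2 *= (rng.random(256) > 0.3)[:, None]                      # simulate some near-dead units
W1, W2, kept = apoptose(W1, W2)
print(f"hidden units kept: {kept.sum()} of {len(kept)}")
```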
The relationship between cavum septum pellucidum and psychopathic traits in a large forensic sample.
Crooks, Dana; Anderson, Nathaniel E; Widdows, Matthew; Petseva, Nia; Koenigs, Michael; Pluto, Charles; Kiehl, Kent A
2018-04-01
Cavum septum pellucidum (CSP) is a neuroanatomical variant of the septum pellucidum that is considered a marker for disrupted brain development. Several small-sample studies have reported CSP to be related to disruptive behavior, persistent antisocial traits, and even psychopathy. However, no large-scale samples have comprehensively examined the relationship between CSP, psychopathic traits, and antisocial behavior in forensic samples. Here we test hypotheses about the presence of CSP and its relationship to psychopathic traits in incarcerated males (N = 1432). We also examined the incidence of CSP in two non-incarcerated male control samples for comparison (N = 208 and 125). The sample was ethnically and racially diverse, with a mean age of 33.1 years and an average IQ of 96.96. CSP was evaluated via structural magnetic resonance imaging. CSP was measured by length (number of 1.0 mm slices) in continuous analyses, and classified as absent (0) or present (1+ mm), as well as by size (absent (0), small (1-3), medium (4-5), or large (6+ mm)) for comparison with prior work. The Wechsler Adult Intelligence Scale (WAIS-III), Structured Clinical Interview (SCID-I/P), and Hare Psychopathy Checklist-Revised (PCL-R) were used to assess IQ, substance dependence, and psychopathy, respectively. CSP length was positively associated with PCL-R total, Factor 1 (interpersonal/affective) and Facets 1 (interpersonal) and 2 (affective). CSP was no more prevalent among inmates than among non-incarcerated controls, with similar distributions of size. These results support the hypotheses that abnormal septal/limbic development may contribute to dimensional affective/interpersonal traits of psychopathy, but CSP is not closely associated with antisocial behavior, per se. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Taguchi, Masakazu
2017-09-01
This study compares large-scale dynamical variability in the extratropical stratosphere, such as major stratospheric sudden warmings (MSSWs), among the Japanese 55-year Reanalysis (JRA-55) family data sets. The JRA-55 family consists of three products: the standard JRA-55 reanalysis product (STDD) and two sub-products, JRA-55C (CONV) and JRA-55AMIP (AMIP). CONV assimilates only conventional surface and upper-air observations, without assimilation of satellite observations, whereas AMIP runs the same numerical weather prediction model without assimilating any observational data. A comparison of the occurrence of MSSWs in Northern Hemisphere (NH) winter shows that, compared to STDD, CONV delays several MSSWs by 1 to 4 days and also misses a few MSSWs. CONV also misses the Southern Hemisphere (SH) MSSW in September 2002. AMIP shows significantly fewer MSSWs in NH winter and, in particular, lacks MSSWs with a high aspect ratio of the polar vortex, i.e., cases in which the vortex is highly stretched or split. A further examination of daily geopotential height differences between STDD and CONV reveals occasional peaks in both hemispheres that are separate from MSSWs. The delayed and missed MSSW cases have smaller height differences in magnitude than such peaks. The height differences for those MSSWs include large contributions from the zonal component, which reflects underestimation of the weakening of the zonal-mean polar night jet in CONV. We also explore strong planetary wave forcings and associated polar vortex weakenings for STDD and AMIP. We find a lower frequency of strong wave forcings and weaker vortex responses to such wave forcings in AMIP, consistent with the lower MSSW frequency.
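MSSWs are commonly identified by a wintertime reversal of the zonal-mean zonal wind at 10 hPa and 60° latitude. A sketch of such a detection step on a synthetic daily wind series is given below; the separation rule and the synthetic data are illustrative assumptions, not necessarily the exact criterion used in this study.

```python
import numpy as np

def mssw_onsets(u_60deg_10hpa, is_winter, separation_days=20):
    """Return day indices where the daily zonal-mean wind first turns easterly (u < 0),
    requiring successive events to be separated by at least `separation_days`."""
    onsets, last = [], -10**9
    for day, (u, winter) in enumerate(zip(u_60deg_10hpa, is_winter)):
        if winter and u < 0 and day - last >= separation_days:
            onsets.append(day)
            last = day
    return onsets

days = np.arange(360)
# synthetic westerly jet with one sharp reversal near day 200
u = 25 + 10 * np.sin(days / 30) - 40 * np.exp(-((days - 200) / 5) ** 2)
print(mssw_onsets(u, is_winter=np.ones_like(days, dtype=bool)))
```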
NASA Astrophysics Data System (ADS)
Skiles, M.
2017-12-01
The ability to accurately measure and manage the natural snow water reservoir in mountainous regions has its challenges, namely the mapping of snowpack depth and snow water equivalent (SWE). Presented here is a scalable method that differentially maps snow depth using Structure from Motion (SfM), a photogrammetric technique that uses 2D images to create a 3D model/Digital Surface Model (DSM). There are challenges in applying SfM to snow: relatively uniform snow brightness can make it difficult to produce the quality images needed for processing, and vegetation can limit the ability to 'see' through the canopy to map both the ground and the snow beneath. New techniques implemented in the method to adapt to these challenges will be demonstrated. Results include a time series at (1) the plot scale, imaged with an unmanned aerial vehicle (DJI Phantom 2 adapted with a Sony A5100) over the Utah Department of Transportation Atwater Study Plot in Little Cottonwood Canyon, UT, and (2) the mountain watershed scale, imaged from the RGB camera aboard the Airborne Snow Observatory (ASO) over the headwaters of the Uncompahgre River in the San Juan Mountains, CO. At the plot scale we present comparisons to measured snow depth, and at the watershed scale we present comparisons to the ASO lidar DSM. This method is of interest due to its low cost relative to lidar, making it an accessible tool for snow research and the management of water resources. With advancing unmanned aerial vehicle technology there are implications for scalability to map snow depth, and SWE, across large basins.
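The differential mapping step amounts to subtracting a snow-free surface model from a snow-on surface model on a common grid. A minimal sketch with synthetic rasters (real SfM DSMs would be co-registered, georeferenced grids):

```python
import numpy as np

def snow_depth(dsm_snow_on, dsm_snow_free, min_depth=0.0):
    """Per-cell snow depth in metres; negative differences (noise, vegetation) are clipped."""
    depth = dsm_snow_on - dsm_snow_free
    return np.clip(depth, min_depth, None)

rng = np.random.default_rng(2)
ground = 2500 + rng.normal(scale=0.5, size=(100, 100))          # bare-earth elevations, m
snow_on = ground + np.abs(rng.normal(1.2, 0.3, size=(100, 100)))  # snow-on surface, m
print(f"mean depth: {snow_depth(snow_on, ground).mean():.2f} m")
```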
Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.
2018-01-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
High Pressure Steam Oxidation of Alloys for Advanced Ultra-Supercritical Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holcomb, Gordon R.
A steam oxidation test was conducted at 267 ± 17 bar and 670°C for 293 hr. A comparison test was run at 1 bar. All of the alloys showed an increase in scale thickness and oxidation rate with pressure, and TP304H and IN625 had very large increases. Fine-grained TP304H at 267 bar behaved like a coarse-grained alloy, indicating that high pressure increases the critical Cr level needed to form and maintain a chromia scale. At 267 bar, H230, H263, H282, IN617 and IN740 had kp values one to two orders of magnitude higher than at 1 bar. IN625 had a four-order-of-magnitude increase in kp at 267 bar compared to 1 bar. Possible causes of the increased oxidation rates with increased pressure were examined, including increased solid-state diffusion within the oxide scale and an increased critical Cr content required to establish and maintain a chromia scale.
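The kp values compared above are parabolic rate constants, for which scale thickness grows as x = sqrt(kp*t). A worked illustration with made-up kp values (not the measured ones) shows how a two-order-of-magnitude change in kp translates to scale thickness:

```python
import math

def scale_thickness_um(kp_um2_per_h, hours):
    """Parabolic oxide growth: thickness = sqrt(kp * t)."""
    return math.sqrt(kp_um2_per_h * hours)

kp_1bar, kp_267bar = 0.01, 1.0   # hypothetical values, um^2/h, a factor of 100 apart
t = 293.0                        # test duration, h
print(scale_thickness_um(kp_1bar, t), scale_thickness_um(kp_267bar, t))
# thickness scales with sqrt(kp), so a 100x increase in kp gives roughly a 10x thicker scale
```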
Tuncbag, Nurcan; Gursoy, Attila; Nussinov, Ruth; Keskin, Ozlem
2011-08-11
Prediction of protein-protein interactions at the structural level on the proteome scale is important because it allows prediction of protein function, helps drug discovery and takes steps toward genome-wide structural systems biology. We provide a protocol (termed PRISM, protein interactions by structural matching) for large-scale prediction of protein-protein interactions and assembly of protein complex structures. The method consists of two components: rigid-body structural comparisons of target proteins to known template protein-protein interfaces and flexible refinement using a docking energy function. The PRISM rationale follows our observation that globally different protein structures can interact via similar architectural motifs. PRISM predicts binding residues by using structural similarity and evolutionary conservation of putative binding residue 'hot spots'. Ultimately, PRISM could help to construct cellular pathways and functional, proteome-scale annotation. PRISM is implemented in Python and runs in a UNIX environment. The program accepts Protein Data Bank-formatted protein structures and is available at http://prism.ccbb.ku.edu.tr/prism_protocol/.
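The rigid-body structural comparison step can be illustrated with a generic Kabsch superposition of matched residue coordinates (a textbook alignment sketch, not the PRISM code or its interface scoring):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD after optimal rigid-body superposition of two (n, 3) coordinate sets."""
    P, Q = P - P.mean(0), Q - Q.mean(0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))        # correct for a possible reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

rng = np.random.default_rng(4)
template_interface = rng.normal(size=(30, 3))                 # template interface residues
theta = np.deg2rad(30)                                        # rotate + perturb to mimic a target
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
target_surface = template_interface @ Rz.T + 0.2 * rng.normal(size=(30, 3))
print(f"interface RMSD: {kabsch_rmsd(target_surface, template_interface):.2f} A")
```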
NASA Astrophysics Data System (ADS)
Gerke, Kirill M.; Vasilyev, Roman V.; Khirevich, Siarhei; Collins, Daniel; Karsanina, Marina V.; Sizonenko, Timofey O.; Korost, Dmitry V.; Lamontagne, Sébastien; Mallants, Dirk
2018-05-01
Permeability is one of the fundamental properties of porous media and is required for large-scale Darcian fluid flow and mass transport models. Whilst permeability can be measured directly at a range of scales, there are increasing opportunities to evaluate permeability from pore-scale fluid flow simulations. We introduce the free software Finite-Difference Method Stokes Solver (FDMSS) that solves the Stokes equation using a finite-difference method (FDM) directly on voxelized 3D pore geometries (i.e. without meshing). Based on explicit convergence studies, validation on sphere packings with analytically known permeabilities, and comparison against lattice-Boltzmann and other published FDM studies, we conclude that FDMSS provides a computationally efficient and accurate basis for single-phase pore-scale flow simulations. By implementing an efficient parallelization and code optimization scheme, permeability inferences can now be made from 3D images of up to 10^9 voxels using modern desktop computers. Case studies demonstrate the broad applicability of the FDMSS software for both natural and artificial porous media.
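Once the pore-scale Stokes velocity field is obtained, permeability follows from Darcy's law, k = mu*<u>*L/dP, where <u> is the superficial velocity along the flow axis. A sketch of this post-processing step with placeholder numbers (not FDMSS output):

```python
def darcy_permeability(mean_velocity_m_s, viscosity_pa_s, pressure_drop_pa, length_m):
    """Permeability in m^2 from the superficial (volume-averaged) velocity along the flow axis."""
    return viscosity_pa_s * mean_velocity_m_s * length_m / pressure_drop_pa

k = darcy_permeability(mean_velocity_m_s=1.2e-5, viscosity_pa_s=1.0e-3,
                       pressure_drop_pa=100.0, length_m=1.0e-3)
print(f"k = {k:.3e} m^2  ({k / 9.869e-13:.2f} darcy)")
```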