ERIC Educational Resources Information Center
Oliveri, Maria Elena; von Davier, Matthias
2014-01-01
In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…
Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J
2017-11-01
Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse consequences on mental health. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large district health boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and Child and Adolescent inpatient services compared to the seven other large district health boards. Admission Health of the Nation Outcome Scales scores for Child and Adolescent community services did not change significantly, while admission Health of the Nation Outcome Scales scores for Alcohol and Drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable, with the exception of admissions to inpatient services for the group aged 4-17 years, which showed a large increase.
The Canterbury earthquakes were followed by an increase in Health of the Nation Outcome Scales scores for attendees of local mental health services compared to other large district health boards. This suggests that patients presented with greater degrees of psychiatric distress, social disruption, behavioural change and impairment as a result of the earthquakes.
ERIC Educational Resources Information Center
Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu
2015-01-01
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
Magnetic fields are observed to exist on all scales in many astrophysical sources such as stars, galaxies, and accretion discs. Understanding the origin of large-scale magnetic fields, whereby the field emerges on spatial scales large compared to the fluctuations, has been a particularly long-standing challenge. Our physics objectives are: 1) what are the minimum ingredients for large-scale dynamo growth? 2) could a large-scale magnetic field grow out of turbulence and be sustained despite the presence of dissipation? These questions are fundamental for understanding the large-scale dynamo in both laboratory and astrophysical plasmas. Here, we report major new findings in the area of the large-scale dynamo (magnetic field generation).
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distribution of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace well the large-scale velocity field and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters of 689 +/- 178 km/s on an approximately 150/h Mpc scale (Lauer & Postman 1994) is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).
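The bulk motions compared above are vector averages of tracer peculiar velocities. A minimal sketch of that statistic, using made-up cluster velocities rather than the simulation outputs analysed in the paper:

```python
import numpy as np

def bulk_velocity(velocities):
    """Magnitude of the vector-averaged peculiar velocity (the bulk
    motion) of a tracer sample, in the same units as the input."""
    return float(np.linalg.norm(np.asarray(velocities).mean(axis=0)))

# Hypothetical cluster peculiar velocities in km/s (vx, vy, vz)
sample = [[400.0, 0.0, 0.0], [0.0, 400.0, 0.0], [200.0, -200.0, 0.0]]
v_bulk = bulk_velocity(sample)
```

Note that randomly oriented velocities largely cancel in the vector average, which is why a coherent bulk flow as large as the Lauer & Postman value is a strong constraint.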
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
Franklin, Jessica M; Rassen, Jeremy A; Bartels, Dorothee B; Schneeweiss, Sebastian
2014-01-01
Nonrandomized safety and effectiveness studies are often initiated immediately after the approval of a new medication, but patients prescribed the new medication during this period may be substantially different from those receiving an existing comparator treatment. Restricting the study to comparable patients after data have been collected is inefficient in prospective studies with primary collection of outcomes. We discuss design and methods for evaluating covariate data to assess the comparability of treatment groups, identify patient subgroups that are not comparable, and decide when to transition to a large-scale comparative study. We demonstrate methods in an example study comparing Cox-2 inhibitors during their postmarketing period (1999-2005) with nonselective nonsteroidal anti-inflammatory drugs (NSAIDs). Graphical checks of propensity score distributions in each treatment group showed substantial problems with overlap in the initial cohorts. In the first half of 1999, >40% of patients were in the region of nonoverlap on the propensity score, and across the study period this fraction never dropped below 10% (the a priori decision threshold for transitioning to the large-scale study). After restricting to patients with no prior NSAID use, <1% of patients were in the region of nonoverlap, indicating that a large-scale study could be initiated in this subgroup and few patients would need to be trimmed from analysis. A sequential study design that uses pilot data to evaluate treatment selection can guide the efficient design of large-scale outcome studies with primary data collection by focusing on comparable patients.
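The overlap check described in this abstract can be sketched as follows. This is a simplified illustration with synthetic propensity scores, not the authors' implementation: real analyses estimate the scores from patient covariates, and trimming rules other than the simple min/max common support used here are common.

```python
import numpy as np

def nonoverlap_fraction(ps_treated, ps_control):
    """Fraction of all patients whose propensity score falls outside
    the common support (region of overlap) of the two groups."""
    lo = max(ps_treated.min(), ps_control.min())
    hi = min(ps_treated.max(), ps_control.max())
    scores = np.concatenate([ps_treated, ps_control])
    outside = (scores < lo) | (scores > hi)
    return outside.mean()

# Hypothetical scores mimicking poor overlap early in the marketing period
rng = np.random.default_rng(0)
early_treated = rng.uniform(0.4, 1.0, 500)   # new-drug users skew high
early_control = rng.uniform(0.0, 0.6, 500)
frac = nonoverlap_fraction(early_treated, early_control)
```

A decision rule like the one in the study would compare `frac` against a prespecified threshold (10% in the abstract) before transitioning to the large-scale study.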
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed performance equal to that of other scales predicting ELVO, despite being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
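A minimal sketch of the PASS scoring rule as described above, treating each of the three NIHSS items as a binary abnormal/normal flag (the underlying NIHSS items are graded on multi-point scales; collapsing them to booleans is a simplification for illustration):

```python
def pass_score(loc_abnormal, gaze_palsy, arm_weakness):
    """Prehospital Acute Stroke Severity (PASS) scale: one point per
    abnormal item (NIHSS level-of-consciousness questions, gaze
    palsy/deviation, arm weakness).  Items passed as booleans,
    True = abnormal."""
    return int(loc_abnormal) + int(gaze_palsy) + int(arm_weakness)

def suspect_elvo(score, cutoff=2):
    """Apply the optimal cut point from the study: >= 2 abnormal items."""
    return score >= cutoff
```

For example, a patient with gaze deviation and arm weakness but intact level of consciousness scores 2 and would be flagged for suspected ELVO.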
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Linking Large-Scale Reading Assessments: Measuring International Trends over 40 Years
ERIC Educational Resources Information Center
Strietholt, Rolf; Rosén, Monica
2016-01-01
Since the start of the new millennium, international comparative large-scale studies have become one of the most well-known areas in the field of education. However, the International Association for the Evaluation of Educational Achievement (IEA) has already been conducting international comparative studies for about half a century. The present…
Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.
DOT National Transportation Integrated Search
2012-06-01
This research program conducted a large experimental program, which consisted of the design, construction, curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice region, and then compared ...
Method for revealing biases in precision mass measurements
NASA Astrophysics Data System (ADS)
Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.
2013-02-01
A practical method for the quantification of systematic errors of large-scale automatic mass comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and a 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. By comparing the two sets of results, the biases of any comparator can be easily revealed. These large-scale comparator biases were determined over a 16-month period, and for the 1 kg loads a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with the mass measurements are found to remain within a range of ±30 mK, which has only a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly from the functioning of the environmental control at the measurement location.
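The bias-revealing step, comparing the large-scale comparator's readings against mass differences established beforehand with the high-resolution comparator, reduces to a subtraction. A sketch with hypothetical numbers (the values below are illustrative, not the published data):

```python
import numpy as np

def comparator_biases(measured_diffs, reference_diffs):
    """Bias of a large-scale mass comparator, estimated as the deviation
    of its readings from mass differences established with a
    high-resolution (negligible-bias) comparator.  All values in mg."""
    return np.asarray(measured_diffs) - np.asarray(reference_diffs)

# Hypothetical 1 kg combined loads: reference vs. large-scale readings (mg)
reference = np.array([0.10, -0.25, 0.05, 0.30])
measured = np.array([0.45, -0.05, 0.40, 0.65])
biases = comparator_biases(measured, reference)
```

Tracking such biases over time, as done in the study over 16 months, is what reveals systematic patterns like the seasonal variation reported.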
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R
2008-01-01
A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production, the state-of-the-art technology at the plant, was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion, different digestate handling solutions were considered because of the large amount of digestate. For land application, a minimum of 36,000 ha of available agricultural area and 600,000 m(3) of storage volume would be needed. Secondly, membrane purification of the digestate was investigated, consisting of a decanter, microfiltration, and reverse osmosis. As a third option, aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.
Large-scale environments of narrow-line Seyfert 1 galaxies
NASA Astrophysics Data System (ADS)
Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.
2017-09-01
Studying the large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with those of other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources are clearly different compared to BLS1 galaxies; thus it is improbable that BLS1 galaxies could be the parent population of NLS1 galaxies, unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous and, furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Large Scale Metal Additive Techniques Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W
2016-01-01
In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet for the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post processing, as well as potential applications. This paper surveys the current state of the art of large-scale metal additive technology, with a focus on expanding the geometric limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics (including secondary filaments that often run orthogonally to the primary filament) and possibly small-scale variations in the local magnetic field direction could be the dominant factors for explaining the final orientation of each core.
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m(2)/MWh, which is more than two times larger than that of small-scale hydropower, where the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters, small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution for them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5 <u'_i u'_i>, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes.
This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
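The MKE definition quoted in this abstract translates directly into code. A minimal sketch, assuming the mesoscale velocity components are available on a grid covering one large-scale model cell (a discrete stand-in for the horizontal averaging operator):

```python
import numpy as np

def mesoscale_kinetic_energy(u, v, w):
    """Mean mesoscale kinetic energy (MKE) per unit mass,
    E = 0.5 * <u_i' u_i'>, where primes are mesoscale deviations from
    the grid-cell (large-scale) mean and <.> is the grid-scale
    horizontal average, here approximated by the array mean."""
    up, vp, wp = u - u.mean(), v - v.mean(), w - w.mean()
    return 0.5 * np.mean(up**2 + vp**2 + wp**2)

# Hypothetical sea-breeze-like circulation across a grid cell
x = np.linspace(0.0, 2.0 * np.pi, 256)
mke = mesoscale_kinetic_energy(np.sin(x), np.zeros_like(x), 0.1 * np.cos(x))
```

A spatially uniform flow has zero MKE by construction, since all perturbations about the cell mean vanish.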
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in the parameters is restored if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to larger wave numbers in the nonlinear regime can be included.
Large-scale influences in near-wall turbulence.
Hutchins, Nicholas; Marusic, Ivan
2007-03-15
Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
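The modulation diagnostic described above rests on splitting a velocity signal into large- and small-scale parts and correlating the former with the envelope of the latter. A crude sketch using a moving-average filter in place of the spectral filtering and Hilbert-transform envelope used in this line of work:

```python
import numpy as np

def modulation_coefficient(u, window):
    """Correlation between the large-scale part of a velocity signal
    and the envelope of its small-scale residual -- a simple stand-in
    for the amplitude-modulation diagnostic (the published analyses use
    spectral filtering and a Hilbert-transform envelope)."""
    kernel = np.ones(window) / window
    u_large = np.convolve(u, kernel, mode='same')   # low-pass part
    u_small = u - u_large                           # small-scale residual
    envelope = np.convolve(np.abs(u_small), kernel, mode='same')
    return np.corrcoef(u_large, envelope)[0, 1]

# Synthetic signal: a slow wave modulating the amplitude of a fast wave
t = np.arange(4000)
slow = np.sin(2.0 * np.pi * t / 1000.0)
fast = np.sin(2.0 * np.pi * t / 10.0)
u = slow + 0.2 * (1.0 + 0.8 * slow) * fast
r = modulation_coefficient(u, window=100)
```

A coefficient near zero would indicate no modulation; the synthetic signal above is built to be strongly modulated, so `r` comes out close to one.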
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
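A common way to obtain the area-mean divergence from a ring of dropsondes is to fit the horizontal wind as a linear function of position and read off du/dx + dv/dy. The sketch below uses that regression approach with synthetic sonde data; the exact NARVAL2 processing may differ:

```python
import numpy as np

def horizontal_divergence(x, y, u, v):
    """Least-squares estimate of the area-mean horizontal divergence
    du/dx + dv/dy from sonde positions (x, y in m) and winds (u, v in
    m/s), assuming the wind varies linearly across the sounding array."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef_u, *_ = np.linalg.lstsq(A, u, rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef_u[1] + coef_v[2]   # du/dx + dv/dy

# Hypothetical sondes dropped around a ~100 km diameter circle
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
x, y = 5.0e4 * np.cos(theta), 5.0e4 * np.sin(theta)
u = 0.5 + 2.0e-5 * x           # imposed du/dx = 2e-5 s^-1
v = -1.0 - 1.0e-5 * y          # imposed dv/dy = -1e-5 s^-1
div = horizontal_divergence(x, y, u, v)
```

Integrating the divergence profile in the vertical (with mass continuity) then yields the large-scale vertical velocity profile.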
Michael D. Ulyshen; James L. Hanula
2009-01-01
Large-scale experimental manipulations of deadwood are needed to better understand its importance to animal communities in managed forests. In this experiment, we compared the abundance, species richness, diversity, and composition of arthropods in 9.3-ha plots in which either (1) all coarse woody debris was removed, (2) a large number of logs were added, (3) a large...
Attributes and Behaviors of Performance-Centered Systems.
ERIC Educational Resources Information Center
Gery, Gloria
1995-01-01
Examines attributes, characteristics, and behaviors of performance-centered software packages that are emerging in the consumer software marketplace and compares them with large-scale systems software being designed by internal information systems staffs and vendors of large-scale software designed for financial, manufacturing, processing, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-07-06
Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
BactoGeNIE: A large-scale comparative genome visualization for big displays
Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...
2015-08-13
The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.
BactoGeNIE: a large-scale comparative genome visualization for big displays
2015-01-01
Background The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021
Rotation invariant fast features for large-scale recognition
NASA Astrophysics Data System (ADS)
Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd
2012-10-01
We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF while producing large-scale retrieval results that are comparable to SIFT. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large-scale natural phenomena needs to be improved, and new observing platforms are expected. In recent years we have studied the concept of the Moon as an Earth-observation platform. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, ocean change, land-surface dynamics and solid-Earth dynamics. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.
Participation in International Large-Scale Assessments from a US Perspective
ERIC Educational Resources Information Center
Plisko, Valena White
2013-01-01
International large-scale assessments (ILSAs) play a distinct role in the United States' decentralized federal education system. Separate from national and state assessments, they offer an external, objective measure for the United States to assess student performance comparatively with other countries and over time. The US engagement in ILSAs…
NASA Astrophysics Data System (ADS)
Federico, Ivan; Pinardi, Nadia; Coppini, Giovanni; Oddo, Paolo; Lecci, Rita; Mossa, Michele
2017-01-01
SANIFS (Southern Adriatic Northern Ionian coastal Forecasting System) is a coastal-ocean operational system based on the unstructured grid finite-element three-dimensional hydrodynamic SHYFEM model, providing short-term forecasts. The operational chain is based on a downscaling approach starting from the large-scale system for the entire Mediterranean Basin (MFS, Mediterranean Forecasting System), which provides initial and boundary condition fields to the nested system. The model is configured to provide hydrodynamics and active tracer forecasts both in open ocean and coastal waters of southeastern Italy using a variable horizontal resolution from the open sea (3-4 km) to coastal areas (50-500 m). Given that the coastal fields are driven by a combination of both local (also known as coastal) and deep-ocean forcings propagating along the shelf, the performance of SANIFS was verified both in forecast and simulation mode, first (i) on the large and shelf-coastal scales by comparison with a large-scale CTD (conductivity-temperature-depth) survey in the Gulf of Taranto and then (ii) on the coastal-harbour scale (Mar Grande of Taranto) by comparison with CTD, ADCP (acoustic Doppler current profiler) and tide gauge data. Sensitivity tests were performed on initialization conditions (mainly focused on spin-up procedures) and on surface boundary conditions by assessing the reliability of two alternative datasets at different horizontal resolution (12.5 and 6.5 km). The SANIFS forecasts at a lead time of 1 day were compared with the MFS forecasts, highlighting that SANIFS is able to retain the large-scale dynamics of MFS. The large-scale dynamics of MFS are correctly propagated to the shelf-coastal scale, improving the forecast accuracy (+17 % for temperature and +6 % for salinity compared to MFS).
Moreover, the added value of SANIFS was assessed on the coastal-harbour scale, which is not covered by the coarse resolution of MFS, where the fields forecasted by SANIFS reproduced the observations well (temperature RMSE equal to 0.11 °C). Furthermore, SANIFS simulations were compared with hourly time series of temperature, sea level and velocity measured on the coastal-harbour scale, showing good agreement. Simulations in the Gulf of Taranto described a circulation mainly characterized by an anticyclonic gyre with the presence of cyclonic vortexes in shelf-coastal areas. A surface water inflow from the open sea to Mar Grande characterizes the coastal-harbour scale.
Measured acoustic characteristics of ducted supersonic jets at different model scales
NASA Technical Reports Server (NTRS)
Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.
1993-01-01
A large-scale (about 25× enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared, and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range which may be used to demonstrate the validity of scaling methods employed in the investigation of this phenomenon.
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Manifestations of dynamo driven large-scale magnetic field in accretion disks of compact objects
NASA Technical Reports Server (NTRS)
Chagelishvili, G. D.; Chanishvili, R. G.; Lominadze, J. G.; Sokhadze, Z. A.
1991-01-01
A nonlinear theory of the turbulent dynamo was developed, showing that in accretion disks of compact objects the generated large-scale magnetic field (when generation takes place) has a practically toroidal configuration. Its energy density can be much higher than the energy density of turbulent pulsations, and it becomes comparable with the thermal energy density of the medium. On this basis, the manifestations to which the large-scale magnetic field can lead during accretion onto black holes and gravimagnetic rotators, respectively, are presented.
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, demonstrating the linear-scaling memory and CPU requirements of the CEM. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
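The diagonalization-free idea behind a Chebyshev expansion method can be illustrated with a short sketch: a smooth function of a symmetric matrix is built from the Chebyshev three-term recursion using only matrix-matrix products. This is a generic NumPy illustration of the expansion technique, not the authors' tight-binding code; the test function, the Gershgorin spectral bound, and all parameter choices are assumptions.

```python
import numpy as np

def chebyshev_matrix_function(H, f, order=400):
    """Approximate f(H) for symmetric H without diagonalizing it."""
    n = H.shape[0]
    # Crude Gershgorin bound so the rescaled spectrum lies in [-1, 1]
    radii = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
    emin = float(np.min(np.diag(H) - radii))
    emax = float(np.max(np.diag(H) + radii))
    a, b = (emax - emin) / 2.0, (emax + emin) / 2.0
    Hs = (H - b * np.eye(n)) / a

    # Chebyshev coefficients of f on [-1, 1] via Gauss-Chebyshev nodes
    j = np.arange(order)
    theta = np.pi * (j + 0.5) / order
    fvals = f(a * np.cos(theta) + b)
    c = (2.0 / order) * (np.cos(np.outer(j, theta)) @ fvals)
    c[0] *= 0.5

    # Three-term recursion T_{k+1} = 2 Hs T_k - T_{k-1}:
    # only matrix products, no eigensolve
    Tprev, Tcur = np.eye(n), Hs.copy()
    F = c[0] * Tprev + c[1] * Tcur
    for k in range(2, order):
        Tprev, Tcur = Tcur, 2.0 * Hs @ Tcur - Tprev
        F += c[k] * Tcur
    return F

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40))
H = (A + A.T) / 2.0
fermi = lambda e: 1.0 / (1.0 + np.exp(2.0 * e))  # smooth occupation function

P_cheb = chebyshev_matrix_function(H, fermi)

# Reference answer via explicit diagonalization
w, V = np.linalg.eigh(H)
P_ref = (V * fermi(w)) @ V.T
print(np.max(np.abs(P_cheb - P_ref)))  # tiny residual vs. exact result
```

The expansion order needed grows with how sharp f is relative to the spectral width, which is why smooth (finite-temperature) occupation functions are the natural target for such methods.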
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrana, Alexandra; Johnson, Matthew C.; Harris, Mary-Jean, E-mail: aterrana@perimeterinstitute.ca, E-mail: mharris8@perimeterinstitute.ca, E-mail: mjohnson@perimeterinstitute.ca
Due to cosmic variance we cannot learn any more about large-scale inhomogeneities from the primary cosmic microwave background (CMB) alone. More information on large scales is essential for resolving large angular scale anomalies in the CMB. Here we consider cross correlating the large-scale kinetic Sunyaev Zel'dovich (kSZ) effect and probes of large-scale structure, a technique known as kSZ tomography. The statistically anisotropic component of the cross correlation encodes the CMB dipole as seen by free electrons throughout the observable Universe, providing information about long wavelength inhomogeneities. We compute the large angular scale power asymmetry, constructing the appropriate transfer functions, and estimate the cosmic variance limited signal to noise for a variety of redshift bin configurations. The signal to noise is significant over a large range of power multipoles and numbers of bins. We present a simple mode counting argument indicating that kSZ tomography can be used to estimate more modes than the primary CMB on comparable scales. A basic forecast indicates that a first detection could be made with next-generation CMB experiments and galaxy surveys. This paper motivates a more systematic investigation of how close to the cosmic variance limit it will be possible to get with future observations.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global-scale climate change.
NASA Astrophysics Data System (ADS)
Tang, Zhanqi; Jiang, Nan
2018-05-01
This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were carried out at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. We find that the large-scale fluctuations perform a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions near the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was made for small-scale frequency modulation; however, the dissipative-scale frequency modulation seems to be independent of the cylindrical perturbation. The phase-relationship observation indicated that the cylindrical perturbation shortens the time shifts between the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The integral time scale dependence of the phase relationship between the small/dissipative scales and the large scales is also discussed. Furthermore, the discrepancy between small- and dissipative-scale time shifts relative to the large-scale motions was examined, indicating that the small-scale amplitude/frequency variations lead those of the dissipative scales.
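The amplitude-modulation coefficient referred to above is commonly computed as the correlation between the large-scale signal and the low-pass-filtered envelope of the small-scale signal. Here is a minimal NumPy sketch of that procedure on a synthetic signal; the cutoff, the frequencies, and the signal itself are illustrative assumptions, not the study's data or exact method.

```python
import numpy as np

def fft_lowpass(x, cutoff_bins):
    """Keep only the lowest cutoff_bins Fourier modes of a real signal."""
    X = np.fft.rfft(x)
    X[cutoff_bins:] = 0.0
    return np.fft.irfft(X, n=len(x))

def amplitude_modulation_coeff(u, cutoff_bins=20):
    """Correlation between the large-scale part of u and the
    low-pass-filtered envelope of its small-scale part."""
    uL = fft_lowpass(u, cutoff_bins)       # large-scale signal
    uS = u - uL                            # small-scale residual
    # Envelope of the small scales via the analytic signal (FFT method)
    n = len(uS)
    X = np.fft.fft(uS)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0                        # n is assumed even here
    env = np.abs(np.fft.ifft(X * h))
    envL = fft_lowpass(env - env.mean(), cutoff_bins)
    return np.corrcoef(uL, envL)[0, 1]

# Synthetic test: small scales whose amplitude follows the large scale
n = 4096
t = np.linspace(0.0, 1.0, n, endpoint=False)
large = np.sin(2 * np.pi * 5 * t)
small = (1.0 + 0.8 * large) * np.sin(2 * np.pi * 400 * t)
R = amplitude_modulation_coeff(large + small)
print(R)  # close to 1 for this strongly modulated signal
```

A near-zero R would indicate no amplitude modulation; in experimental boundary-layer data the coefficient varies with wall-normal position, which is the kind of profile the study compares across perturbed and unperturbed cases.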
ERIC Educational Resources Information Center
Cizek, Gregory J.
2009-01-01
Reliability and validity are two characteristics that must be considered whenever information about student achievement is collected. However, those characteristics--and the methods for evaluating them--differ in large-scale testing and classroom testing contexts. This article presents the distinctions between reliability and validity in the two…
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Subramanian, Kandaswamy
2013-02-01
The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M1,c = (k1/k2)^2, where k1 and k2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos based on the otherwise rapid turbulent diffusion of large-scale fields require re-thinking since only the large-scale non-helical field is so diffused in a closed system. Boundary terms could however provide potential pathways for rapid change of the large-scale helical field. (2) Since M1,c ≪ 1 for k1 ≪ k2, the presence of long-lived ordered large-scale helical fields as in extragalactic jets do not guarantee that the magnetic field dominates the kinetic energy.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, which processed each postal-code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
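The computational trade-off described above, processing one subregion at a time so that regions can later be farmed out to parallel workers, can be sketched with a toy region-local model. Everything below (the infection dynamics, the region names, the parameters) is an illustrative assumption, not the authors' hepatitis C model.

```python
import random

def step_region(agents, p_transmit, rng):
    """Advance one region one step: each infected agent contacts one
    random peer in the same region and may infect it."""
    infected = [i for i, s in enumerate(agents) if s == "I"]
    for i in infected:
        j = rng.randrange(len(agents))
        if agents[j] == "S" and rng.random() < p_transmit:
            agents[j] = "I"
    return agents

def simulate(regions, steps=10, p_transmit=0.3, seed=42):
    """Slide over regions one at a time.  Because interactions are
    region-local, each region's update is independent within a step and
    could be dispatched to a separate worker process."""
    rng = random.Random(seed)
    for _ in range(steps):
        for name in sorted(regions):
            step_region(regions[name], p_transmit, rng)
    return regions

# Three small postal-code-like regions, one seed infection each
regions = {f"R{k}": ["I"] + ["S"] * 49 for k in range(3)}
out = simulate(regions)
counts = {name: sum(s == "I" for s in agents) for name, agents in out.items()}
print(counts)
```

In a real SRA the regions would be handed to a process pool (e.g. `multiprocessing.Pool.map` over region names), with cross-region contacts handled at region boundaries; the sketch only shows why region-locality makes that decomposition valid.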
Sharma, Hitt J; Patil, Vishwanath D; Lalwani, Sanjay K; Manglani, Mamta V; Ravichandran, Latha; Kapre, Subhash V; Jadhav, Suresh S; Parekh, Sameer S; Ashtagi, Girija; Malshe, Nandini; Palkar, Sonali; Wade, Minal; Arunprasath, T K; Kumar, Dinesh; Shewale, Sunil D
2012-01-11
Hib vaccine can be easily incorporated into the EPI vaccination schedule, as the immunization schedule of Hib is similar to that of DTP vaccine. To meet the global demand for Hib vaccine, SIIL scaled up the Hib conjugate manufacturing process. This study was conducted in Indian infants to assess and compare the immunogenicity and safety of the DTwP-HB+Hib (Pentavac(®)) vaccine of SIIL manufactured at large scale with the 'same vaccine' manufactured at a smaller scale. 720 infants aged 6-8 weeks were randomized (2:1 ratio) to receive 0.5 ml of Pentavac(®) vaccine from two different lots, one produced with the scaled-up process and the other with the small-scale process. Serum samples obtained before and at one month after the 3rd dose of vaccine from both groups were tested for IgG antibody response by ELISA and compared to assess non-inferiority. Neither immunological interference nor increased reactogenicity was observed in either of the vaccine groups. All infants developed protective antibody titres to diphtheria, tetanus and Hib disease. For the hepatitis B antigen, one child from each group remained sero-negative. The response to pertussis was 88% in the large-scale group vis-à-vis 87% in the small-scale group. Non-inferiority was concluded for all five components of the vaccine. No serious adverse event was reported in the study. The scaled-up vaccine achieved a comparable response in terms of safety and immunogenicity to the small-scale vaccine and therefore can be easily incorporated into the routine childhood vaccination programme. Copyright © 2011 Elsevier Ltd. All rights reserved.
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet sun: (1) Magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules. (2) The large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation. (3) The heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
NASA Astrophysics Data System (ADS)
Fischer, P. D.; Brown, M. E.; Trumbo, S. K.; Hand, K. P.
2017-01-01
We present spatially resolved spectroscopic observations of Europa’s surface at 3-4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest quality spatially resolved reflectance spectra of Europa’s surface at 3-4 μm. The observations spatially resolve Europa’s large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and the selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for large-scale RAP. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
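The foraging and random behaviors described above can be sketched in a few lines. This toy minimizes a standard benchmark (the sphere function) rather than a reliability-redundancy problem, and substitutes a plain Gaussian mutation for the paper's cloud-generator operator, so all names and parameters are illustrative assumptions.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def fish_swarm_sketch(dim=5, n_fish=20, iters=200, visual=0.5, seed=1):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
             for _ in range(n_fish)]
    best = list(min(swarm, key=sphere))
    for _ in range(iters):
        for k, fish in enumerate(swarm):
            # Foraging: probe a point within the visual range, move if better
            probe = [v + rng.uniform(-visual, visual) for v in fish]
            if sphere(probe) < sphere(fish):
                swarm[k] = probe
            elif rng.random() < 0.1:
                # Random behavior: a mutation step to escape stagnation
                # (a Gaussian stand-in for the cloud-generator operator)
                swarm[k] = [v + rng.gauss(0.0, visual) for v in fish]
        cand = min(swarm, key=sphere)
        if sphere(cand) < sphere(best):
            best = list(cand)
    return best

best = fish_swarm_sketch()
print(sphere(best))
```

A full NAFSA would add the reproductive behavior (selection and crossover between fish) and handle the discrete redundancy variables and reliability constraints of the RAP; the sketch only shows the accept-if-better foraging move and the mutation-style random move.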
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Labat, David; Jourde, Hervé; Lecoq, Nicolas; Mazzilli, Naomi
2017-04-01
The French karst observatory network SNO KARST is a national initiative of the National Institute for Earth Sciences and Astronomy (INSU) of the National Center for Scientific Research (CNRS). It is also part of OZCAR, the new French research infrastructure for observation of the critical zone. SNO KARST comprises several karst sites distributed across conterminous France, located in different physiographic and climatic contexts (Mediterranean, Pyrenean, Jura mountain, western and northwestern shores near the Atlantic or the English Channel). This allows the scientific community to develop advanced research and experiments dedicated to improving understanding of the hydrological functioning of karst catchments. Here we used several sites of SNO KARST to assess the hydrological response of karst catchments to long-term variation of large-scale atmospheric circulation. Using NCEP reanalysis products and karst discharge, we analyzed the links between large-scale circulation and karst water resources variability. As karst hydrosystems are highly heterogeneous media, they behave differently across different time-scales: we explore the large-scale/local-scale relationships according to time-scale using a wavelet multiresolution approach applied to both karst hydrological variables and large-scale climate fields such as sea level pressure (SLP). The different wavelet components of karst discharge in response to the corresponding wavelet components of climate fields are either 1) compared to physico-chemical/geochemical responses at karst springs, or 2) interpreted in terms of hydrological functioning by comparing discharge wavelet components to internal components obtained from precipitation/discharge models using the KARSTMOD conceptual modeling platform of SNO KARST.
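The multiresolution idea, splitting a discharge series into additive components by time-scale before comparing each with the matching component of a climate field, can be sketched with a simple Haar block-average decomposition. This is a stand-in for whatever wavelet family the authors actually used; the synthetic "discharge" series and level count are assumptions.

```python
import numpy as np

def haar_components(x, levels):
    """Additive multiresolution split: component j captures variability
    between time-scales 2**(j-1) and 2**j samples; the final 'smooth'
    holds everything slower.  Components plus smooth sum exactly to x.
    Requires len(x) to be a multiple of 2**levels."""
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 ** levels == 0
    comps, prev = [], x
    for j in range(1, levels + 1):
        w = 2 ** j
        smooth = x.reshape(-1, w).mean(axis=1).repeat(w)  # block means
        comps.append(prev - smooth)
        prev = smooth
    return comps, prev

# Synthetic "discharge": a slow seasonal cycle plus a fast component
n = 512
t = np.arange(n)
q = np.sin(2 * np.pi * t / 128) + 0.3 * np.sin(2 * np.pi * t / 8)
comps, smooth = haar_components(q, levels=6)

recon = np.sum(comps, axis=0) + smooth
print(np.allclose(recon, q))  # True: the split is exactly additive

# Variance by scale shows which time-scales carry each signal component
variances = [float(np.var(c)) for c in comps]
```

In the study's setting, the component of discharge at a given scale would be correlated with (or regressed on) the same-scale component of the SLP field, so that slow climate-driven variability is not masked by fast hydrological noise.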
Successful scaling-up of self-sustained pyrolysis of oil palm biomass under pool-type reactor.
Idris, Juferi; Shirai, Yoshihito; Andou, Yoshito; Mohd Ali, Ahmad Amiruddin; Othman, Mohd Ridzuan; Ibrahim, Izzudin; Yamamoto, Akio; Yasuda, Nobuhiko; Hassan, Mohd Ali
2016-02-01
An appropriate technology for waste utilisation, especially for the large amount of abundant pressed-shredded oil palm empty fruit bunch (OPEFB), is important for the oil palm industry. Self-sustained pyrolysis, whereby the oil palm biomass is combusted by itself to provide the heat for pyrolysis without an electrical heater, is preferable owing to its simplicity, ease of operation and low energy requirement. In this study, biochar production under self-sustained pyrolysis of oil palm biomass in the form of oil palm empty fruit bunch was tested in a 3-t large-scale pool-type reactor. During the pyrolysis process, the biomass was loaded layer by layer when smoke appeared on the top, to minimise the entrance of oxygen. This method significantly increased the yield of biochar. In our previous report, we tested a 30-kg pilot-scale capacity under self-sustained pyrolysis and found that the higher heating value (HHV) obtained was 22.6-24.7 MJ kg⁻¹ with a 23.5%-25.0% yield. In this scaled-up study, the 3-t large-scale procedure produced an HHV of 22.0-24.3 MJ kg⁻¹ with a 30%-34% yield on a wet-weight basis. The maximum self-sustained pyrolysis temperature for the large-scale procedure can reach between 600 °C and 700 °C. We conclude that large-scale biochar production under self-sustained pyrolysis was conducted successfully, as the biochar produced was comparable with that from the medium-scale procedure and from other studies using an electrical heating element, making it an appropriate technology for waste utilisation, particularly for the oil palm industry. © The Author(s) 2015.
Coronal hole evolution by sudden large scale changes
NASA Technical Reports Server (NTRS)
Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.
1978-01-01
Sudden shifts in coronal-hole boundaries observed by the S-054 X-ray telescope on Skylab between May and November 1973, within 1 day of central meridian passage (CMP) of the holes, at latitudes not exceeding 40 deg, are compared with the long-term evolution of coronal-hole area. It is found that large-scale shifts in boundary locations can account for most if not all of the evolution of coronal holes. The temporal and spatial scales of these large-scale changes imply that they are the result of a physical process occurring in the corona. It is concluded that coronal holes evolve by magnetic-field lines opening when the holes are growing, and by field lines closing as the holes shrink.
ERIC Educational Resources Information Center
Feuer, Michael J.
2011-01-01
Few arguments about education are as effective at galvanizing public attention and motivating political action as those that compare the performance of students with their counterparts in other countries and that connect academic achievement to economic performance. Because data from international large-scale assessments (ILSA) have a powerful…
Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment
ERIC Educational Resources Information Center
Cui, Ying; Mousavi, Amin
2015-01-01
The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
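The l[subscript z] statistic used above has a standard closed form: the standardized log-likelihood of a response pattern under the fitted IRT model. A minimal sketch, assuming a simple logistic model supplies the item probabilities (the example item parameters below are hypothetical):

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z.
    responses: 0/1 item scores; probs: model-implied P(correct) per item.
    Large negative values flag response patterns that misfit the model."""
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / math.sqrt(var)
```

A Guttman-reversed pattern (missing easy items while answering hard ones) scores far lower than a model-consistent pattern, which is the basis for flagging misfitting students.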
ERIC Educational Resources Information Center
Cresswell, John; Schwantner, Ursula; Waters, Charlotte
2015-01-01
This report reviews the major international and regional large-scale educational assessments, including international surveys, school-based surveys and household-based surveys. The report compares and contrasts the cognitive and contextual data collection instruments and implementation methods used by the different assessments in order to identify…
Prototype Vector Machine for Large Scale Semi-Supervised Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Kwok, James T.; Parvin, Bahram
2009-04-29
Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
NASA Astrophysics Data System (ADS)
Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.
2015-12-01
The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.
Effects of large-scale wind driven turbulence on sound propagation
NASA Technical Reports Server (NTRS)
Noble, John M.; Bass, Henry E.; Raspet, Richard
1990-01-01
Acoustic measurements made in the atmosphere show significant fluctuations in amplitude and phase resulting from interaction with time-varying meteorological conditions. The observed variations include short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for the long-term variation is a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scales of the large-scale turbulence are 200 meters and greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Because the vortex pair is large compared to the scale of the present experiment, its effect on the acoustic field can be modeled as a sound speed of the atmosphere that varies with time. The model produces the same trends and variations in phase observed experimentally.
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
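In homogenization of ecological diffusion of this kind, the large-scale effective coefficient is (under the usual assumptions) a harmonic average of the fast-varying small-scale motility, so slow habitat dominates the large-scale rate. A minimal sketch; the function name and cell-based averaging below are illustrative, not the authors' implementation:

```python
import numpy as np

def effective_motility(mu_cells):
    """Homogenized (large-scale) motility as the harmonic mean of the
    small-scale motility values mu(x) over one averaging cell. Slow
    habitat (small mu) dominates, matching residence-time intuition:
    animals pile up where they move slowly."""
    mu = np.asarray(mu_cells, dtype=float)
    return mu.size / np.sum(1.0 / mu)
```

Note that the harmonic mean is always at most the arithmetic mean, so mixing fast and slow habitat yields a large-scale rate closer to the slow one.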
Large-Scale Outflows in Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E. J. M.; Baum, S. A.
1995-12-01
Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or by a circumnuclear starburst.
Evaluation of Large-Scale Public-Sector Reforms: A Comparative Analysis
ERIC Educational Resources Information Center
Breidahl, Karen N.; Gjelstrup, Gunnar; Hansen, Hanne Foss; Hansen, Morten Balle
2017-01-01
Research on the evaluation of large-scale public-sector reforms is rare. This article sets out to fill that gap in the evaluation literature and argues that it is of vital importance since the impact of such reforms is considerable and they change the context in which evaluations of other and more delimited policy areas take place. In our…
ERIC Educational Resources Information Center
Gerstein, Dean R.; Johnson, Robert A.
This report compares the research methods, provider and patient characteristics, and outcome results from four large-scale followup studies of drug treatment during the 1990s: (1) the California Drug and Alcohol Treatment Assessment (CALDATA); (2) Services Research Outcomes Study (SROS); (3) National Treatment Improvement Evaluation Study (NTIES);…
ERIC Educational Resources Information Center
Flanagan, Helen E.; Perry, Adrienne; Freeman, Nancy L.
2012-01-01
File review data were used to explore the impact of a large-scale publicly funded Intensive Behavioral Intervention (IBI) program for young children with autism. Outcomes were compared for 61 children who received IBI and 61 individually matched children from a waitlist comparison group. In addition, predictors of better cognitive outcomes were…
Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards
ERIC Educational Resources Information Center
Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.
2011-01-01
This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search nine rewards placed in…
Fabio, Anthony; Geller, Ruth; Bazaco, Michael; Bear, Todd M; Foulds, Abigail L; Duell, Jessica; Sharma, Ravi
2015-01-01
Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents, and may influence rates of violence within communities. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits, we conducted a telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence, safety, and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons, and survey results were compared to census and police data. Residents of the neighborhoods with large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime actually decreased in both neighborhoods. Large-scale economic developments have a direct influence on the perception of violence, even when actual violence rates decline.
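As a toy illustration of the chi-square comparisons described above, the Pearson statistic for a contingency table of neighborhood by perceived-crime-change counts can be computed directly. The counts and function below are hypothetical, not the study's data:

```python
import numpy as np

def chi_square_statistic(observed):
    """Pearson chi-square statistic for an r x c contingency table
    (e.g. neighborhood x perceived-crime-change counts). Expected counts
    assume independence of rows and columns."""
    O = np.asarray(observed, dtype=float)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return np.sum((O - E) ** 2 / E)
```

The statistic is compared against a chi-square distribution with (r-1)(c-1) degrees of freedom; in practice `scipy.stats.chi2_contingency` wraps this computation and the p-value.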
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water situations, real-time simulation of large-scale floods is very important for flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is applied to large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
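The paper's scheme is an unstructured Godunov-type finite volume method; as a much simpler stand-in, the sketch below advances the 1D shallow water equations with a Lax-Friedrichs step and a wet/dry guard on the velocity, and conserves mass exactly on a periodic domain. All names and parameters are illustrative:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def lf_step(h, hu, dx, dt, eps=1e-6):
    """One Lax-Friedrichs update of the 1D shallow water equations on a
    periodic domain, with a wet/dry guard: velocity is zeroed in cells
    whose depth is below eps so near-dry fronts cannot blow up."""
    u = np.where(h > eps, hu / np.maximum(h, eps), 0.0)
    f1 = hu                           # mass flux
    f2 = hu * u + 0.5 * G * h ** 2    # momentum flux (hydrostatic pressure)
    def advance(q, f):
        return (0.5 * (np.roll(q, 1) + np.roll(q, -1))
                - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1)))
    return advance(h, f1), advance(hu, f2)
```

Because the flux differences telescope over the periodic domain, total water volume is conserved to machine precision, a basic sanity check for any flood solver.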
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and in flood risk analyses, and changes in rainfall will be a critical factor determining the overall impact of climate change. We analyse long series (10 years) of daily precipitation in different regions, presenting Fourier energy density spectra and morphological spectra (i.e., probability distribution functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local rain gauges in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This makes it possible to define two characteristic scales (duration and spatial extension) for the precipitating regions embedded in large mesoscale convective systems (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Lévy-stable distribution (power-law decrease), which does not define a characteristic scale (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Lévy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Lévy-stable behavior of precipitating MCS is compared with results from the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test different parameterizations of deep convection in forecast models.
Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh
2011-01-01
Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...
Numerical Simulations of Homogeneous Turbulence Using Lagrangian-Averaged Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Mohseni, Kamran; Shkoller, Steve; Kosovic, Branko; Marsden, Jerrold E.; Carati, Daniele; Wray, Alan; Rogallo, Robert
2000-01-01
The Lagrangian-averaged Navier-Stokes (LANS) equations are numerically evaluated as a turbulence closure. They are derived from a novel Lagrangian averaging procedure on the space of all volume-preserving maps and can be viewed as a numerical algorithm which removes the energy content from the small scales (smaller than some a priori fixed spatial scale alpha) using a dispersive rather than dissipative mechanism, thus maintaining the crucial features of the large scale flow. We examine the modeling capabilities of the LANS equations for decaying homogeneous turbulence, ascertain their ability to track the energy spectrum of fully resolved direct numerical simulations (DNS), compare the relative energy decay rates, and compare LANS with well-accepted large eddy simulation (LES) models.
NASA Astrophysics Data System (ADS)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single-catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and facilitate consistent tools for the creation and analysis of waterways over extensive areas. However, such analyses are rarely undertaken over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation in large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within it), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
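Waterway delineation from a DEM typically starts with flow-direction and flow-accumulation rasters; cells whose accumulation exceeds a threshold form the stream network. A minimal D8 sketch (illustrative only, not the GIS workflow used in the study):

```python
import numpy as np

def d8_flow_accumulation(elev):
    """D8 flow accumulation: each cell drains entirely to its
    steepest-descent neighbor; accumulation counts the number of
    upstream cells (including the cell itself)."""
    nrows, ncols = elev.shape
    acc = np.ones(elev.shape, dtype=float)
    # Process cells from highest to lowest so upstream totals are final
    # before being passed downstream.
    for idx in np.argsort(elev, axis=None)[::-1]:
        i, j = divmod(idx, ncols)
        best, target = 0.0, None
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < nrows and 0 <= nj < ncols:
                    slope = (elev[i, j] - elev[ni, nj]) / np.hypot(di, dj)
                    if slope > best:
                        best, target = slope, (ni, nj)
        if target is not None:
            acc[target] += acc[i, j]
    return acc
```

On a uniformly tilted plane every cell ultimately drains to the low corner, so that corner accumulates the whole grid, while ridge-top cells keep an accumulation of one.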
Downed woody fuel loading dynamics of a large-scale blowdown in northern Minnesota, U.S.A.
C.W. Woodall; L.M. Nagel
2007-01-01
On July 4, 1999, a large-scale blowdown occurred in the Boundary Waters Canoe Area Wilderness (BWCAW) of northern Minnesota, affecting up to 150,000 ha of forest. To further understand the relationship between downed woody fuel loading, stand processes, and disturbance effects, this study compares fuel loadings defined by three strata: (1) blowdown areas of the BWCAW (n...
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Wilcox, J. M.; Svalgaard, L.; Scherrer, P. H.; Mcintosh, P. S.
1977-01-01
Two methods of observing the neutral line of the large-scale photospheric magnetic field are compared: neutral line positions inferred from H-alpha photographs (McIntosh and Nolte, 1975) and observations of the photospheric magnetic field made with low spatial resolution (three minutes) and high sensitivity using the Stanford magnetograph. The comparison is found to be very favorable.
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2016-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology, size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and small…
Using Relational Reasoning to Learn about Scientific Phenomena at Unfamiliar Scales
ERIC Educational Resources Information Center
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S.; Shipley, Thomas F.
2017-01-01
Many scientific theories and discoveries involve reasoning about extreme scales, removed from human experience, such as time in geology and size in nanoscience. Thus, understanding scale is central to science, technology, engineering, and mathematics. Unfortunately, novices have trouble understanding and comparing sizes of unfamiliar large and…
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields, taken from the Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than the differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
Measuring the Large-scale Solar Magnetic Field
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.
2017-12-01
The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used to predict the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, the various observing methods all measure something slightly different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross-comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.
Development and Validation of a Spanish Version of the Grit-S Scale
Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.
2018-01-01
This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single-factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705
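Internal-consistency evaluations of this kind conventionally report Cronbach's alpha. A minimal sketch of the computation (the data shapes are hypothetical, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly correlated items yield alpha = 1; weakly correlated items pull alpha down, which is why a short subscale with heterogeneous items can show low internal consistency while the full scale remains acceptable.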
The large-scale organization of metabolic networks
NASA Astrophysics Data System (ADS)
Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.
2000-10-01
In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
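Scale-free networks of the kind described have degree distributions P(k) ~ k^(-gamma). The exponent is commonly estimated with the maximum-likelihood formula popularized by Newman; a minimal sketch (the estimator is standard, but the function name and data below are ours):

```python
import math

def powerlaw_exponent_mle(degrees, kmin=1.0):
    """Maximum-likelihood estimate of gamma for a continuous power-law
    degree distribution P(k) ~ k^(-gamma), k >= kmin:
    gamma = 1 + n / sum(ln(k_i / kmin))."""
    logs = [math.log(k / kmin) for k in degrees if k >= kmin]
    return 1.0 + len(logs) / sum(logs)
```

Unlike a least-squares fit to a log-log histogram, the MLE is unbiased with respect to binning choices, which matters when comparing exponents across the 43 metabolic networks.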
Capabilities of the Large-Scale Sediment Transport Facility
2016-04-01
experiments in wave/current environments. INTRODUCTION: The LSTF (Figure 1) is a large-scale laboratory facility capable of simulating conditions comparable to low-wave-energy coasts. The facility was constructed to address deficiencies in existing methods for calculating longshore sediment transport. The LSTF consists of a 30 m wide, 50 m long, 1.4 m deep basin. Waves are generated by four digitally controlled wave makers capable of producing
ERIC Educational Resources Information Center
van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf
2009-01-01
This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…
A new method of presentation the large-scale magnetic field structure on the Sun and solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field occurring within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed over the solar disc from Earth. We have estimated the period over which day-to-day full-disc magnetograms must be collected. An original algorithm was applied to resolve the time variations of spherical harmonics that reflect the evolution of the large-scale magnetic field within a solar rotation period. This method of presenting the magnetic field can be useful when direct magnetograph observations are lacking, for example because of bad weather. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun, the origin of the heliospheric current sheet and solar wind streams. The results have been compared with in situ spacecraft observations and geomagnetic activity. We show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100
2015-01-15
In this paper, we study systematically the effects of the forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope ["An examination of forcing in direct numerical simulations of turbulence," Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted from the altered dissipation rate and Reynolds number, except when the forcing time scale is made unrealistically large, yielding a Taylor-microscale flow Reynolds number of 30 or less. We then study the effects of the forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of the altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds-number dependence of turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of small-scale turbulence, in the turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and the dynamics of inertial particles, a conditional analysis has been performed, showing that regions of higher collision rate of inertial particles are well correlated with regions of lower vorticity. Regions of higher concentration of particle pairs at contact are found to be highly correlated with regions of high energy dissipation rate.
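In Eswaran-Pope-style forcing, each forced low-wavenumber mode is driven by an Ornstein-Uhlenbeck process whose correlation time is the forcing time scale studied above. A single discretized update step might look like the following sketch (function name and Euler discretization are illustrative, not the paper's code):

```python
import numpy as np

def ou_update(b, dt, t_force, sigma, noise):
    """One Euler step of an Ornstein-Uhlenbeck process with correlation
    time t_force and stationary standard deviation sigma, as used per
    forced mode in Eswaran-Pope-style large-scale forcing. Larger
    t_force means the forcing decorrelates more slowly."""
    return b * (1.0 - dt / t_force) + sigma * np.sqrt(2.0 * dt / t_force) * noise
```

With the noise switched off the state decays as (1 - dt/t_force)^n, the discrete analogue of exp(-t/t_force), which is exactly the memory that couples the forcing time scale to the large-scale flow statistics.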
Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.
Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan
2017-08-15
Neuro-biology inspired Spiking Neural Networks (SNNs) enable efficient learning and recognition tasks. To achieve a large-scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we demonstrated by physics simulation an LIF neuron based on a novel 4-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated-INPN), with excellent improvement in area and power compared to conventional analog circuit implementations. In this paper, we propose and experimentally demonstrate a compact conventional 3-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) to replace the 4-terminal gated-INPN device. The impact ionization (II) induced floating-body effect in the SOI-MOSFET is used to capture leaky integrate-and-fire (LIF) neuron behavior and to demonstrate the dependence of spiking frequency on input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10^11 neuron) large neural networks.
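The charge-discharge dynamics the authors exploit can be illustrated with the textbook LIF model. The sketch below is a hypothetical software analogue, not the paper's device model: the function name and all parameter values are illustrative. It shows the key behavior the MOSFET reproduces, spiking frequency rising with input drive, with no spikes below the threshold-crossing input level.

```python
# Minimal leaky integrate-and-fire (LIF) sketch (hypothetical parameters;
# the paper realizes these dynamics with a floating-body SOI-MOSFET).
def lif_spike_count(i_in, t_end=0.5, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Integrate dv/dt = -v/tau + i_in by forward Euler; count spikes."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (-v / tau + i_in)   # leak toward rest, charge by input
        if v >= v_th:                 # threshold crossing: spike and reset
            spikes += 1
            v = v_reset
    return spikes
```

Because the steady-state potential is tau * i_in, inputs with tau * i_in below the threshold never spike, while stronger inputs spike at an increasing rate, the input-frequency dependence the experiment demonstrates.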
NASA Astrophysics Data System (ADS)
Kröger, Knut; Creutzburg, Reiner
2013-05-01
The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance EnCase Forensic 7 regarding performance, functionality, usability and capability. We show how these software tools work with large forensic images and how capable they are of examining complex, big-data scenarios.
mySyntenyPortal: an application package to construct websites for synteny block analysis.
Lee, Jongin; Lee, Daehwan; Sim, Mikang; Kwon, Daehong; Kim, Juyeon; Ko, Younhee; Kim, Jaebum
2018-06-05
Advances in sequencing technologies have facilitated large-scale comparative genomics based on whole-genome sequencing. Constructing and investigating conserved genomic regions among multiple species (called synteny blocks) is essential in comparative genomics, but it requires significant amounts of computational resources and time, in addition to bioinformatics skills. Many web interfaces have been developed to make such tasks easier; however, these interfaces cannot be customized for users who want to use their own set of genome sequences or their own definition of synteny blocks. To resolve this limitation, we present mySyntenyPortal, a stand-alone application package for constructing websites for synteny block analyses from users' own genome data. mySyntenyPortal provides both command-line and web-based interfaces to build and manage websites for large-scale comparative genomic analyses. The websites can also be easily published and accessed by other users. To demonstrate the usability of mySyntenyPortal, we present an example study building websites to compare the genomes of three mammalian species (human, mouse, and cow) and show how they can be easily utilized to identify potential genes affected by genome rearrangements. mySyntenyPortal will contribute to extended comparative genomic analyses based on large-scale whole-genome sequences by providing unique functionality to support the easy creation of interactive websites for synteny block analyses from users' own genome data.
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud-system-resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Large-scale particle acceleration by magnetic reconnection during solar flares
NASA Astrophysics Data System (ADS)
Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.
2017-12-01
Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain large-scale particle acceleration during solar flares. While great effort has been spent studying the acceleration mechanism in small-scale kinetic simulations, few studies have made predictions for acceleration at the large scales comparable to the flare reconnection region. Here we present a new approach to this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as these influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may explain the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.
Multi-level discriminative dictionary learning with application to large scale image classification.
Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua
2015-10-01
The sparse coding technique has shown flexibility and capability in image representation and analysis, and is a powerful tool in many visual applications. Recent work has shown that incorporating the properties of the task (such as discrimination for classification) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at the lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. Experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.
A family of conjugate gradient methods for large-scale nonlinear equations.
Feng, Dexiang; Sun, Min; Wang, Xueyong
2017-01-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. Each iteration requires little storage, and its subproblem can be easily solved. Compared with existing solution methods for this problem, global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
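As a rough illustration of the projection framework such methods share (not the specific family proposed in the paper), the sketch below combines a derivative-free backtracking line search, a hyperplane projection step in the style of Solodov and Svaiter, and a Fletcher-Reeves-type conjugate gradient direction with a restart safeguard. All parameter values and the function name are illustrative assumptions.

```python
import numpy as np

def cg_projection_solve(F, x0, sigma=1e-4, rho=0.5, tol=1e-8, max_iter=500):
    """Hypothetical sketch of a CG projection method for monotone F(x)=0."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx                                     # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # Backtracking search for z = x + a*d with -F(z).d >= sigma*a*||d||^2.
        a = 1.0
        while True:
            z = x + a * d
            Fz = F(z)
            if -Fz @ d >= sigma * a * (d @ d) or a < 1e-12:
                break
            a *= rho
        if np.linalg.norm(Fz) < tol:
            x, Fx = z, Fz
            break
        # Project x onto the hyperplane {y : F(z).(y - z) = 0}, which
        # separates x from the solution set when F is monotone.
        x_new = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        Fx_new = F(x_new)
        beta = (Fx_new @ Fx_new) / (Fx @ Fx)    # Fletcher-Reeves-type parameter
        d = -Fx_new + beta * d
        if Fx_new @ d > -1e-12:                 # safeguard: restart if not descent
            d = -Fx_new
        x, Fx = x_new, Fx_new
    return x
```

For a monotone mapping such as F(x) = x + sin(x), the projection step keeps the iterates Fejér monotone with respect to the solution set, which is what lets convergence be argued without Lipschitz continuity.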
Self-consistency tests of large-scale dynamics parameterizations for single-column modeling
Edman, Jacob P.; Romps, David M.
2015-03-18
Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
Post-match Perceived Exertion, Feeling and Wellness in Professional Soccer Players.
Fessi, Mohamed Saifeddin; Moalla, Wassim
2018-01-18
The aim of this study was to assess post-match perceived exertion, feeling and wellness according to the match outcome (winning, drawing or losing) in professional soccer players. Twelve outfield players were followed during 52 official matches where the outcomes (win, draw or lose) were noted. Following each match, players completed both a 10-point rating of perceived exertion (RPE) scale and an 11-point rating of perceived feeling scale. Ratings of perceived sleep quality, stress, fatigue and muscle soreness were collected separately on a 7-point scale the day following each match. Player RPE was very largely higher following a loss compared to a draw or a win, and higher by a small magnitude after a draw compared to a win. Players felt more pleasure after a win compared to a draw or a loss, and more displeasure after a loss compared to a draw. Players reported largely to moderately better perceived sleep quality, less stress and less fatigue following a win compared to a draw or a loss, and moderately worse perceived sleep quality, higher stress and higher fatigue following a draw compared to a loss. In contrast, only a trivial-to-small change was observed in perceived muscle soreness between outcomes. Match outcomes therefore moderately to largely affect RPE, perceived feeling, sleep quality, stress and fatigue, whereas perceived muscle soreness remains high regardless of the match outcome. Winning a match decreases the strain and improves both pleasure and wellness in professional soccer players.
Similarity spectra analysis of high-performance jet aircraft noise.
Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M
2013-04-01
Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.
Weinstein, Daniel; Launay, Jacques; Pearce, Eiluned; Dunbar, Robin I. M.; Stewart, Lauren
2016-01-01
Over our evolutionary history, humans have faced the problem of how to create and maintain social bonds in progressively larger groups compared to those of our primate ancestors. Evidence from historical and anthropological records suggests that group music-making might act as a mechanism by which this large-scale social bonding could occur. While previous research has shown effects of music-making on social bonds in small group contexts, the question of whether this effect 'scales up' to larger groups is particularly important when considering the potential role of music for large-scale social bonding. The current study recruited individuals from a community choir that met in both small (n = 20-80) and large (a 'megachoir' combining individuals from the smaller subchoirs, n = 232) group contexts. Participants gave self-report measures (via a survey) of social bonding and had pain threshold measurements taken (as a proxy for endorphin release) before and after 90 minutes of singing. Results showed that feelings of inclusion, connectivity, positive affect, and measures of endorphin release all increased across singing rehearsals, and that the influence of group singing on pain thresholds was comparable in the large versus small group contexts. Levels of social closeness were greater both pre- and post-singing in the small choir condition; however, the large choir condition showed a greater change in social closeness than the small condition. The finding that singing together fosters social closeness, even in large contexts where individuals are not known to each other, is consistent with evolutionary accounts that emphasize the role of music in social bonding, particularly in the context of creating larger cohesive groups than other primates are able to manage. PMID:27158219
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, P. D.; Brown, M. E.; Trumbo, S. K.
2017-01-01
We present spatially resolved spectroscopic observations of Europa's surface at 3-4 μm obtained with the near-infrared spectrograph and adaptive optics system on the Keck II telescope. These are the highest-quality spatially resolved reflectance spectra of Europa's surface at 3-4 μm. The observations resolve Europa's large-scale compositional units at a resolution of several hundred kilometers. The spectra show distinct features and geographic variations associated with known compositional units; in particular, large-scale leading-hemisphere chaos shows a characteristic longward shift in peak reflectance near 3.7 μm compared to icy regions. These observations complement previous spectra of large-scale chaos, and can aid efforts to identify the endogenous non-ice species.
Mackey, Aaron J; Pearson, William R
2004-10-01
Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. They are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which serves as the basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extending and using seqdb_demo for the storage of sequence similarity search results, and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
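The pattern of subsetting a sequence library and storing search results can be sketched with an in-memory SQLite database. The schema below is a hypothetical miniature, not the actual seqdb_demo schema; it only illustrates selecting a taxon-restricted library subset and querying stored similarity hits.

```python
import sqlite3

# Hypothetical seqdb_demo-style miniature: a protein table plus a table
# of stored similarity-search results (table and column names invented).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
CREATE TABLE hit (query TEXT, subject TEXT, evalue REAL,
                  FOREIGN KEY(subject) REFERENCES protein(acc));
""")
cur.executemany("INSERT INTO protein VALUES (?,?,?)",
                [("P1", "E. coli", "MKV..."), ("P2", "H. sapiens", "MAD..."),
                 ("P3", "E. coli", "MST...")])
# Library subset most likely to contain homologs (here: one taxon);
# shrinking the effective database improves search statistics.
subset = cur.execute(
    "SELECT acc FROM protein WHERE taxon = ?", ("E. coli",)).fetchall()
# Store search results, then rank stored hits by significance.
cur.executemany("INSERT INTO hit VALUES (?,?,?)",
                [("Q1", "P1", 1e-30), ("Q1", "P3", 2e-5)])
best = cur.execute(
    "SELECT subject FROM hit WHERE query='Q1' ORDER BY evalue LIMIT 1"
).fetchone()[0]
```

Once hits live in a table, the comparative-genomic questions the unit mentions become joins and aggregations over `protein` and `hit` rather than ad hoc file parsing.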
Integration and segregation of large-scale brain networks during short-term task automatization
Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes
2016-01-01
The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095
Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System
NASA Astrophysics Data System (ADS)
Yue, Liu; Hang, Mend
2018-01-01
With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to improve wind power availability and grid stability, but the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding this SSO problem caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI) and sub-synchronous resonance (SSR). SSO modelling and analysis methods are then categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.
Completing the mechanical energy pathways in turbulent Rayleigh-Bénard convection.
Gayen, Bishakhdatta; Hughes, Graham O; Griffiths, Ross W
2013-09-20
A new, more complete view of the mechanical energy budget for Rayleigh-Bénard convection is developed and examined using three-dimensional numerical simulations at large Rayleigh numbers and a Prandtl number of 1. The driving role of available potential energy is highlighted. The relative magnitudes of different energy conversions or pathways change significantly over the range of Rayleigh numbers Ra ~ 10^7-10^13. At Ra < 10^7, small-scale turbulent motions are energized directly from available potential energy via the turbulent buoyancy flux, and kinetic energy is dissipated at comparable rates by both the large- and small-scale motions. In contrast, at Ra ≥ 10^10 most of the available potential energy goes into kinetic energy of the large-scale flow, which undergoes shear instabilities that sustain small-scale turbulence. The irreversible mixing is largely confined to the unstable boundary layer, its rate exactly equal to the generation of available potential energy by the boundary fluxes, and the mixing efficiency is 50%.
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; OBrien, James J.
2010-01-01
This study assesses the regional-scale summer precipitation produced by dynamical downscaling of analyzed large-scale fields. The main goal is to investigate how much smaller-scale precipitation information the regional model adds beyond what the large-scale fields resolve. The modeling region covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina), where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse-resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Center for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations: there is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced at 93% of the grid points (1,697 of the 1,821 in the five states). The temporal correlation also improves at 92% of the grid points (1,675), such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, whereas the R2 product does not properly reproduce the second mode. The spatial patterns for wet versus dry summer years are also successfully simulated by the RSM. For shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2.
Correlation and categorical classification (above/near/below average) for the monthly frequency of heavy precipitation days is also significantly improved by the RSM.
Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme
NASA Astrophysics Data System (ADS)
Veljović, K.; Rajković, B.; Mesinger, F.
2009-04-01
Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that supplies its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help from the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme; the other is the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be, and is, compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification: forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via the spectral decomposition that is a standard RCM verification method.
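The verification idea can be sketched as an equitable-threat-score computation over a wind-speed threshold. The function below uses the standard 2x2 contingency-table definitions of ETS and frequency bias; it is an illustration of the generic scores, not the paper's exact bias-adjustment procedure.

```python
def threat_scores(forecast, observed, threshold):
    """Equitable threat score and frequency bias for threshold exceedance
    (sketch of the generic definitions, applied here to wind speeds)."""
    hits = misses = false_alarms = n = 0
    for f, o in zip(forecast, observed):
        fe, oe = f >= threshold, o >= threshold
        hits += fe and oe                     # forecast yes, observed yes
        misses += (not fe) and oe             # forecast no, observed yes
        false_alarms += fe and (not oe)       # forecast yes, observed no
        n += 1
    hits_random = (hits + misses) * (hits + false_alarms) / n  # chance hits
    denom = hits + misses + false_alarms - hits_random
    ets = (hits - hits_random) / denom if denom else 0.0
    bias = (hits + false_alarms) / (hits + misses) if (hits + misses) else 0.0
    return ets, bias
```

A perfect exceedance forecast scores ETS = 1 and bias = 1; a forecast no better than chance scores ETS near 0, which is what makes the score usable for comparing RCM and driver-model wind fields.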
The results we have at this point are somewhat limited, in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among the very few done using forecast, as opposed to reanalysis or analysis, global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to take place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while enjoying the advantage of a scheme that is significantly less demanding than relaxation, given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large-scale nudging, seems to be just about the same as that of the driver model, or, in the terminology of Castro et al., the Eta RCM does not lose the "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled for demonstrations of hybrids, showing audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn rate data for the development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer these questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
Cache Coherence Protocols for Large-Scale Multiprocessors
1990-09-01
and is compared with the other protocols for large-scale machines. In later analysis, this coherence method is designated by the acronym OCPD. [Remaining text is fragmentary OCR: Table 4.2, Transaction Types and Costs, lists private read and write miss costs for OCPD; Figure 4-2 shows processor utilizations of the Weather program.]
Comparing multi-module connections in membrane chromatography scale-up.
Yu, Zhou; Karkaria, Tishtar; Espina, Marianela; Hunjun, Manjeet; Surendran, Abera; Luu, Tina; Telychko, Julia; Yang, Yan-Ping
2015-07-20
Membrane chromatography is increasingly used for protein purification in the biopharmaceutical industry. Membrane adsorbers are often pre-assembled by manufacturers as ready-to-use modules. In large-scale protein manufacturing settings, the use of multiple membrane modules for a single batch is often required due to the large quantity of feed material. The question as to how multiple modules can be connected to achieve optimum separation and productivity has been previously approached using model proteins and mass transport theories. In this study, we compare the performance of multiple membrane modules in series and in parallel in the production of a protein antigen. Series connection was shown to provide superior separation compared to parallel connection in the context of competitive adsorption.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
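The LSS idea of deriving endmember spectra from sample pixels with known abundances, followed by unconstrained unmixing of a new pixel, can be sketched as two least-squares solves. All spectra and abundances below are synthetic illustrations, not values from the study.

```python
import numpy as np

# LSS sketch: with sample pixels whose endmember abundances are known,
# endmember spectra E solve the linear system A @ E ≈ R in least squares.
rng = np.random.default_rng(0)
E_true = np.array([[0.6, 0.5, 0.4],      # hypothetical impervious spectrum (3 bands)
                   [0.1, 0.4, 0.2],      # hypothetical vegetation spectrum
                   [0.3, 0.3, 0.5]])     # hypothetical soil spectrum
A = rng.dirichlet(np.ones(3), size=50)   # known sample abundances (rows sum to 1)
R = A @ E_true                           # observed sample reflectances
E_est, *_ = np.linalg.lstsq(A, R, rcond=None)   # derived endmember signatures
# Unconstrained SMA for a new pixel: solve E^T f ≈ r for fractions f.
r = np.array([0.35, 0.40, 0.35])
f, *_ = np.linalg.lstsq(E_est.T, r, rcond=None)
```

The fully constrained variant the paper also tests would add sum-to-one and non-negativity constraints on f, turning the second solve into a constrained least-squares problem.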
Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj
2014-01-01
Generating genomic resources in the form of molecular markers is imperative in molecular breeding for crop improvement. Although the large-scale development and application of microsatellite markers has been reported in the model crop foxtail millet, no such large-scale study has been conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto the 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested functions for ∼71.5% of the ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs representing the foxtail millet genome and the different functional groups showed a high level of cross-genera amplification, at an average of ∼85%, in eight millet and five non-millet species. The efficacy of the ILP markers for distinguishing foxtail millet is demonstrated by the observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of the physically mapped ILP markers demonstrated a substantial percentage of sequence-based orthology and a syntenic relationship between foxtail millet chromosomes and those of sorghum (∼50%), maize (∼46%), rice (∼21%) and Brachypodium (∼21%). Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki
2015-06-01
Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.
Gap Test Calibrations and Their Scaling
NASA Astrophysics Data System (ADS)
Sandusky, Harold
2012-03-01
Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations and their scaling are compared for other donors with PMMA gaps and for various donors in water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
Gap Test Calibrations and Their Scaling
NASA Astrophysics Data System (ADS)
Sandusky, Harold
2011-06-01
Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.
NASA Technical Reports Server (NTRS)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.;
2017-01-01
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty also tends to be low. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. These examples indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales.
However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Multilayered sandwich-like architecture containing large-scale faceted Al–Cu–Fe quasicrystal grains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Dongxia; He, Zhanbing, E-mail: hezhanbing@ustb.edu.cn
Faceted quasicrystals are structurally special compared with traditional crystals. Although the application of faceted quasicrystals has been expected, wide-scale application has not occurred owing to the limited exposure of the facets. Using a facile method of heat treatment, we synthesize a multilayered sandwich-like structure with each layer composed of large-scale pentagonal dodecahedra of Al–Cu–Fe quasicrystals. Moreover, there are channels between the adjacent Al–Cu–Fe layers that serve to increase the exposure of the facets of quasicrystals. Scanning electron microscopy, transmission electron microscopy, and X-ray diffraction are used to characterize the multilayered architecture, and the generation mechanisms of this special structure are also discussed. - Highlights: • A multilayered sandwich-like structure is produced by a facile method. • Each layer is covered by large-scale faceted Al–Cu–Fe quasicrystals. • There are channels between the adjacent Al–Cu–Fe layers.
The Use of Weighted Graphs for Large-Scale Genome Analysis
Zhou, Fang; Toivonen, Hannu; King, Ross D.
2014-01-01
There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
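The core idea above can be illustrated with a hypothetical sketch: rather than comparing genomes pairwise, many genomes are folded into a single weighted graph whose edge weights count how often two enzymes co-occur in the same genome (the genome names and the tiny enzyme sets below are invented for illustration, not data from the study).

```python
from collections import Counter
from itertools import combinations

# Hypothetical presence/absence data: genome -> set of enzymes (EC numbers)
genomes = {
    "genome_A": {"EC1.1.1.1", "EC2.7.1.1", "EC4.1.2.13"},
    "genome_B": {"EC1.1.1.1", "EC2.7.1.1"},
    "genome_C": {"EC2.7.1.1", "EC4.1.2.13"},
}

# Build the weighted co-occurrence graph in one pass over all genomes:
# each edge weight summarizes evidence across the whole genome collection,
# so the cost scales with the number of genomes, not with genome pairs.
edge_weights = Counter()
for enzymes in genomes.values():
    for pair in combinations(sorted(enzymes), 2):
        edge_weights[pair] += 1

for (u, v), w in sorted(edge_weights.items()):
    print(u, "--", v, "weight", w)
```

A phylogenetic analysis would then operate on this single summary graph instead of on all pairwise genome comparisons.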
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1993-01-01
Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10^9-10^10 solar masses and (1-5) x 10^14 solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.
Characteristics of Mini-Magnetospheres Formed by Paleo-Magnetic Fields of Mars
NASA Technical Reports Server (NTRS)
Ness, N. F.; Krymskii, A. M.; Crider, D. H.; Breus, T. K.; Acuna, M. H.; Hinson, D.; Barashyan, K. K.
2003-01-01
The intensely and non-uniformly magnetized crustal sources generate an effective large-scale magnetic field. In the Southern hemisphere the strongest crustal fields lead to the formation of large-scale mini-magnetospheres. In the Northern hemisphere, the crustal fields are rather weak and there are only isolated mini-magnetospheres. Re-connection with the interplanetary magnetic field (IMF) occurs in many localized regions. This may occur not only in cusp-like structures above nearly vertical field anomalies but also in halos extending several hundreds of kilometers from these sources. Re-connection will permit solar wind (SW) and more energetic particles to precipitate into and heat the neutral atmosphere. Electron density profiles of the ionosphere of Mars derived from radio occultation data obtained by the Radio Science Mars Global Surveyor (MGS) experiment are concentrated in the near-polar regions. The effective scale-height of the neutral atmosphere density in the vicinity of the ionization peak has been derived for each of the profiles studied. The effective scale-heights have been compared with the crustal magnetic fields measured by the MGS Magnetometer/Electron Reflectometer (MAG/ER) experiment. A significant difference between the large-scale mini-magnetospheres and regions outside of them has been found. The neutral atmosphere is cooler inside the large-scale mini-magnetospheres. It appears that outside of the cusps the strong crustal magnetic fields prevent additional heating of the neutral atmosphere by direct interaction of the SW. The scale-height of the neutral atmosphere density derived from the experiment with the MGS Accelerometer has been compared with MAG/ER data. The scale-height was found to be usually larger than the mean value near the boundaries of potential mini-magnetospheres and around cusps. This may indicate that the paleo-magnetic/IMF field re-connection is characteristic of the mini-magnetospheres at Mars.
Hunt, Geoffrey; Moloney, Molly; Fazio, Adam
2012-01-01
Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. 
Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numeric simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^{-1/2} k^{-1} scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ∼ H^{-1/2} k^{-1/2}. This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ∼ E^{-1/2} k^{-1} scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
Generation of a Large-scale Magnetic Field in a Convective Full-sphere Cross-helicity Dynamo
NASA Astrophysics Data System (ADS)
Pipin, V. V.; Yokoi, N.
2018-05-01
We study the effects of the cross-helicity in full-sphere large-scale mean-field dynamo models of a 0.3 M⊙ star rotating with a period of 10 days. In exploring several dynamo scenarios that stem from magnetic field generation by the cross-helicity effect, we found that the cross-helicity provides a natural generation mechanism for the large-scale axisymmetric and nonaxisymmetric magnetic field. Therefore, rotating stars with convective envelopes can produce a large-scale magnetic field generated solely by the turbulent cross-helicity effect (we call it a γ²-dynamo). Using mean-field models we compare the properties of the large-scale magnetic field organization that stems from dynamo mechanisms based on the kinetic helicity (associated with the α² dynamos) and cross-helicity. For fully convective stars, both generation mechanisms can maintain large-scale dynamos even for a solid-body rotation law inside the star. Nonaxisymmetric magnetic configurations become preferable when the cross-helicity and the α-effect operate independently of each other. This corresponds to situations with purely γ² or α² dynamos. The combination of these scenarios, i.e., the γ²α² dynamo, can generate preferably axisymmetric, dipole-like magnetic fields with strengths of several kG. Thus, we found a new dynamo scenario that is able to generate an axisymmetric magnetic field even in the case of solid-body rotation of the star. We discuss the possible applications of our findings to stellar observations.
Dynamic Smagorinsky model on anisotropic grids
NASA Technical Reports Server (NTRS)
Scotti, A.; Meneveau, C.; Fatica, M.
1996-01-01
Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
NASA Astrophysics Data System (ADS)
Li, Jing; Song, Ningfang; Yang, Gongliu; Jiang, Rui
2016-07-01
In the initial alignment process of a strapdown inertial navigation system (SINS), large misalignment angles introduce a nonlinear estimation problem, which is usually handled with the scaled unscented Kalman filter (SUKF). In this paper, the problem of large misalignment angles in SINS alignment is further investigated, and a strong tracking scaled unscented Kalman filter (STSUKF) with fixed parameters is proposed to improve convergence speed; however, these parameters are artificially constructed and uncertain in real applications. To further improve alignment stability and reduce the burden of parameter selection, this paper proposes a fuzzy adaptive strategy combined with STSUKF (FUZZY-STSUKF). An initial alignment scheme for large misalignment angles based on FUZZY-STSUKF is accordingly designed and verified by simulations and a turntable experiment. The results show that the scheme improves the accuracy and convergence speed of SINS initial alignment compared with schemes based on SUKF and STSUKF.
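The scaled unscented transform at the core of SUKF can be sketched in a few lines: sigma points are spread around the mean using a Cholesky factor of the scaled covariance, and weighted so that they reproduce the mean and covariance exactly. The scaling parameters α, β, κ below are conventional textbook defaults, not the fixed or fuzzy-adapted values from the paper.

```python
import numpy as np

def sigma_points(m, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma points and weights of the scaled unscented transform."""
    n = m.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)          # (n+lam) P = L @ L.T
    pts = np.vstack([m, m + L.T, m - L.T])         # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wc = wm.copy()                                  # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1 - alpha**2 + beta
    return pts, wm, wc

m = np.array([0.0, 1.0])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])
pts, wm, wc = sigma_points(m, P)

# The weighted sigma points reproduce the mean and covariance
m_rec = wm @ pts
P_rec = (pts - m_rec).T @ (wc[:, None] * (pts - m_rec))
print(np.allclose(m_rec, m), np.allclose(P_rec, P))
```

In a full SUKF the sigma points are propagated through the nonlinear attitude/velocity dynamics and re-combined with these weights at every filter step.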
Finite difference and Runge-Kutta methods for solving vibration problems
NASA Astrophysics Data System (ADS)
Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi
2017-11-01
The vibration of a storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of a building is large, then the result is a large scale system of second order ordinary differential equations. The large scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
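The comparison can be sketched for a small undamped 2-storey system M x'' + K x = 0 (the mass and stiffness values below are assumed for illustration): the central difference method steps the displacements directly, while Heun's method integrates the equivalent first-order system in (x, v).

```python
import numpy as np

M = np.diag([1.0, 1.0])                      # storey masses (assumed)
K = np.array([[200.0, -100.0],
              [-100.0, 100.0]])              # storey stiffness matrix (assumed)
acc = lambda x: -np.linalg.solve(M, K @ x)   # x'' = -M^{-1} K x
h, steps = 1e-3, 5000
x0, v0 = np.array([0.01, 0.0]), np.zeros(2)  # initial displacement and velocity

# Central difference: x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n),
# started with the Taylor step x_{-1} = x_0 - h v_0 + (h^2/2) a(x_0)
x_prev, x_cur = x0 - h * v0 + 0.5 * h**2 * acc(x0), x0.copy()
for _ in range(steps):
    x_prev, x_cur = x_cur, 2 * x_cur - x_prev + h**2 * acc(x_cur)

# Heun's method on the first-order system y' = (v, a(x))
f = lambda y: np.concatenate([y[2:], acc(y[:2])])
y = np.concatenate([x0, v0])
for _ in range(steps):
    k1 = f(y)
    k2 = f(y + h * k1)
    y = y + 0.5 * h * (k1 + k2)

print("max displacement difference:", np.abs(x_cur - y[:2]).max())
```

Both schemes are second-order accurate, so for a well-resolved time step their solutions agree closely; the forward difference and Euler methods, being first-order, drift much faster.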
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ~8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
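The basic mode-resampling idea, stripped of the nonlinear mode-coupling correction that is the paper's actual contribution, can be illustrated in 1D: draw a Gaussian field in Fourier space, then replace only the large-scale (low-k) modes with a fresh Gaussian draw of the same power while leaving the small-scale modes fixed. The power spectrum and cutoff below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
k = np.fft.rfftfreq(n, d=1.0)                  # mode frequencies of a periodic box
power = np.where(k > 0, k**-2.0, 0.0)          # assumed toy power spectrum

def gaussian_modes(rng):
    """Complex Gaussian Fourier amplitudes with the toy power spectrum."""
    amp = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    return amp * np.sqrt(power / 2)

field_k = gaussian_modes(rng)
k_cut = 0.05                                   # modes with k < k_cut are "large scale"
large = (k > 0) & (k < k_cut)

# Resample: new realization of the large-scale modes, small scales untouched
resampled_k = field_k.copy()
resampled_k[large] = gaussian_modes(rng)[large]

field = np.fft.irfft(field_k)
resampled = np.fft.irfft(resampled_k)
print("small-scale modes identical:", np.allclose(field_k[~large], resampled_k[~large]))
```

The full algorithm additionally propagates the new large-scale modes into the small scales (local growth modulation and large-scale displacements), which is what allows it to reproduce the nonlinear covariance.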
Herbivorous fishes, ecosystem function and mobile links on coral reefs
NASA Astrophysics Data System (ADS)
Welsh, J. Q.; Bellwood, D. R.
2014-06-01
Understanding large-scale movement of ecologically important taxa is key to both species and ecosystem management. Those species responsible for maintaining functional connectivity between habitats are often called mobile links and are regarded as essential elements of resilience. By providing connectivity, they support resilience across spatial scales. Most marine organisms, including fishes, have long-term, biogeographic-scale connectivity through larval movement. Although most reef species are highly site attached after larval settlement, some taxa may also be able to provide rapid, reef-scale connectivity as adults. On coral reefs, the identity of such taxa and the extent of their mobility are not yet known. We use acoustic telemetry to monitor the movements of Kyphosus vaigiensis, one of the few reef fishes that feeds on adult brown macroalgae. Unlike other benthic herbivorous fish species, it also exhibits large-scale (>2 km) movements. Individual K. vaigiensis cover, on average, a 2.5 km length of reef (11 km maximum) each day. These large-scale movements suggest that this species may act as a mobile link, providing functional connectivity, should the need arise, and helping to support functional processes across habitats and spatial scales. An analysis of published studies of home ranges in reef fishes found a consistent relationship between home range size and body length. K. vaigiensis is the sole herbivore to depart significantly from the expected home range-body size relationship, with home range sizes more comparable to exceptionally mobile large pelagic predators rather than other reef herbivores. While the large-scale movements of K. vaigiensis reveal its potential capacity to enhance resilience over large areas, it also emphasizes the potential limitations of small marine reserves to protect some herbivore populations.
Large-Scale Weather Disturbances in Mars’ Southern Extratropics
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.
2015-11-01
Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas).
The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations and these are presented.
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion produces an enhancement of power relative to ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections suppresses power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.; Stone, C.M.; Krieg, R.D.
Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described, and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than corresponding computed rates.
A large-scale perspective on stress-induced alterations in resting-state networks
NASA Astrophysics Data System (ADS)
Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron
2016-02-01
Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how this effect relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change, between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.
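The enrichment step described above asks whether the significant parcel-pairs concentrate between two anatomic structures more often than chance predicts. One common way to formalize this is a one-sided hypergeometric test; the sketch below is an assumption about the style of test, not the paper's exact statistic:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided hypergeometric tail P(X >= k): the probability of seeing
    at least k structure-linked pairs among n significant pairs, when K of
    the N possible parcel-pairs link that pair of structures."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

A small resulting p-value would mark the structure pair (e.g., thalamus-cortex) as significantly over-represented among the stress-altered connections.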
Electron drift in a large scale solid xenon
Yoo, J.; Jaskierny, W. F.
2015-08-21
A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is about a factor of two faster than that in the liquid phase.
TARGET Publication Guidelines | Office of Cancer Genomics
Like other NCI large-scale genomics initiatives, TARGET is a community resource project and data are made available rapidly after validation for use by other researchers. To act in accord with the Fort Lauderdale principles and support the continued prompt public release of large-scale genomic data prior to publication, researchers who plan to prepare manuscripts containing descriptions of TARGET pediatric cancer data that would be of comparable scope to an initial TARGET disease-specific comprehensive, global analysis publication, and journal editors who receive such manuscripts, are…
NASA Astrophysics Data System (ADS)
Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis
2018-02-01
We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver using a one-way coupling approach, feeding waves into the latter via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
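The core quantity here, internal variability estimated from an initial-condition ensemble, can be sketched as the across-member spread at each time, averaged over the record. This is a deliberately simplified stand-in for the paper's full statistical framework, with hypothetical function names:

```python
import statistics

def internal_variability(ensemble):
    """Estimate internal variability from an initial-condition ensemble:
    each member differs only in its initial conditions, so the
    across-member standard deviation at each time step, averaged over
    time, measures variability generated internally by the system."""
    n_time = len(ensemble[0])
    spreads = []
    for t in range(n_time):
        vals = [member[t] for member in ensemble]
        spreads.append(statistics.stdev(vals))  # across-member spread at time t
    return sum(spreads) / n_time
```

An "ensemble of opportunity" estimate would pool such spreads from small ensembles of several models and scenarios rather than one large single-model ensemble.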
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method, multiscale recurrence quantification analysis (MSRQA), to analyze the structure of order recurrence plots. MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems, successfully describing their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.
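The two ingredients named in the abstract, coarse-graining over time scales and order (permutation) patterns, can be combined in a minimal sketch of the idea. This is an illustrative reconstruction under stated assumptions (recurrence rate as the RQA measure, non-overlapping averaging for the multiscale step), not the authors' implementation:

```python
def coarse_grain(x, s):
    """Multiscale step: average non-overlapping windows of length s."""
    n = len(x) // s
    return [sum(x[i * s:(i + 1) * s]) / s for i in range(n)]

def order_pattern(window):
    """Rank-order (permutation) pattern of a window, ties broken stably."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def order_recurrence_rate(x, m=3):
    """Fraction of time-point pairs sharing the same order pattern:
    the simplest quantifier of an order recurrence plot."""
    pats = [order_pattern(x[i:i + m]) for i in range(len(x) - m + 1)]
    n = len(pats)
    hits = sum(1 for i in range(n) for j in range(n) if pats[i] == pats[j])
    return hits / (n * n)

def msrqa(x, scales=(1, 2, 3), m=3):
    """Recurrence rate of the coarse-grained series at each scale."""
    return {s: order_recurrence_rate(coarse_grain(x, s), m) for s in scales}
```

Comparing the per-scale profile returned by `msrqa` across systems is the kind of multiscale comparison the abstract describes for the stock market indexes.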
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA), where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales despite CESM being a global general circulation model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
Using MHD Models for Context for Multispacecraft Missions
NASA Astrophysics Data System (ADS)
Reiff, P. H.; Sazykin, S. Y.; Webster, J.; Daou, A.; Welling, D. T.; Giles, B. L.; Pollock, C.
2016-12-01
The use of global MHD models such as BATS-R-US to provide context to data from widely spaced multispacecraft mission platforms is gaining in popularity and in effectiveness. Examples are shown, primarily from the Magnetospheric Multiscale Mission (MMS) program compared to BATS-R-US. We present several examples of large-scale magnetospheric configuration changes such as tail dipolarization events and reconfigurations after a sector boundary crossing which are made much more easily understood by placing the spacecraft in the model fields. In general, the models can reproduce the large-scale changes observed by the various spacecraft but sometimes miss small-scale or rapid time changes.
Equating in Small-Scale Language Testing Programs
ERIC Educational Resources Information Center
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan
2017-01-01
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Spotted Towhee population dynamics in a riparian restoration context
Stacy L. Small; Frank R., III Thompson; Geoffery R. Geupel; John Faaborg
2007-01-01
We investigated factors at multiple scales that might influence nest predation risk for Spotted Towhees (Pipilo maculates) along the Sacramento River, California, within the context of large-scale riparian habitat restoration. We used the logistic-exposure method and Akaike's information criterion (AIC) for model selection to compare predator...
Does deep ocean mixing drive upwelling or downwelling of abyssal waters?
NASA Astrophysics Data System (ADS)
Ferrari, R. M.; McDougall, T. J.; Mashayek, A.; Nikurashin, M.; Campin, J. M.
2016-02-01
It is generally understood that small-scale mixing, such as is caused by breaking internal waves, drives upwelling of the densest ocean waters that sink to the ocean bottom at high latitudes. However the observational evidence that the turbulent fluxes generated by small-scale mixing in the stratified ocean interior are more vigorous close to the ocean bottom than above implies that small-scale mixing converts light waters into denser ones, thus driving a net sinking of abyssal water. Using a combination of numerical models and observations, it will be shown that abyssal waters return to the surface along weakly stratified boundary layers, where the small-scale mixing of density decays to zero. The net ocean meridional overturning circulation is thus the small residual of a large sinking of waters, driven by small-scale mixing in the stratified interior, and a comparably large upwelling, driven by the reduced small-scale mixing along the ocean boundaries.
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products with each other and with ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
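Verification of a gridded product against gauges typically starts from a few scalar scores. A minimal sketch of three standard ones (mean bias, RMSE, Pearson correlation), with a hypothetical function name; the study's actual verification criteria may be richer:

```python
import math

def verify(product, gauges):
    """Basic verification of a gridded product sampled at gauge locations
    against the gauge observations: mean bias, RMSE, and Pearson r."""
    n = len(gauges)
    bias = sum(p - g for p, g in zip(product, gauges)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(product, gauges)) / n)
    mp = sum(product) / n
    mg = sum(gauges) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(product, gauges))
    sp = math.sqrt(sum((p - mp) ** 2 for p in product))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gauges))
    return bias, rmse, cov / (sp * sg)
```

Computing these scores at several temporal aggregations (daily, monthly, annual) gives the kind of multiresolution comparison the abstract describes.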
Godin, Bruno; Nagle, Nick; Sattler, Scott; Agneessens, Richard; Delcarte, Jérôme; Wolfrum, Edward
2016-01-01
For biofuel production processes to be economically efficient, it is essential to maximize the production of monomeric carbohydrates from the structural carbohydrates of feedstocks. One strategy for maximizing carbohydrate production is to identify less recalcitrant feedstock cultivars by performing some type of experimental screening on a large and diverse set of candidate materials, or by identifying genetic modifications (random or directed mutations or transgenic plants) that provide decreased recalcitrance. Economic efficiency can also be increased using additional pretreatment processes such as deacetylation, which uses dilute NaOH to remove the acetyl groups of hemicellulose prior to dilute acid pretreatment. In this work, we used a laboratory-scale screening tool that mimics relevant thermochemical pretreatment conditions to compare the total sugar yield of three near-isogenic brown midrib (bmr) mutant lines and the wild-type (WT) sorghum cultivar. We then compared results obtained from the laboratory-scale screening pretreatment assay to a large-scale pretreatment system. After pretreatment and enzymatic hydrolysis, the bmr mutants had higher total sugar yields than the WT sorghum cultivar. Increased pretreatment temperatures increased reactivity for all sorghum samples, reducing the differences observed at lower reaction temperatures. Deacetylation prior to dilute acid pretreatment increased the total sugar yield for all four sorghum samples, and reduced the differences in total sugar yields among them, but solubilized a sizable fraction of the non-structural carbohydrates. The general trends of increased total sugar yield in the bmr mutant compared to the WT seen at the laboratory scale were observed at the large-scale system. However, in the larger reactor system, the measured total sugar yields were lower and the difference in total sugar yield between the WT and bmr sorghum was larger.
Sorghum bmr mutants, which have a reduced lignin content, showed higher total sugar yields than the WT cultivar after dilute acid pretreatment and enzymatic hydrolysis. Deacetylation prior to dilute acid pretreatment increased the total sugar yield for all four sorghum samples. However, since deacetylation also solubilizes a large fraction of the non-structural carbohydrates, the ability to derive value from these solubilized sugars will depend greatly on the proposed conversion process.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small-scale and medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
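The l1-regularized deconvolution model at the heart of this approach can be illustrated on a toy problem. The sketch below solves it with ISTA, a simple proximal-gradient method substituted here for the paper's PDIPM solver; the convolution kernel, spike locations, and regularization weight are all made up for illustration:

```python
import numpy as np

def ista_l1_deconv(H, y, lam=0.1, iters=500):
    """Minimize 0.5*||H f - y||^2 + lam*||f||_1 via ISTA
    (a basic proximal-gradient stand-in for the paper's PDIPM)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(iters):
        g = H.T @ (H @ f - y)              # gradient of the data-fit term
        z = f - g / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return f

# Toy convolution matrix: each column holds a short decaying impulse response
n = 40
kernel = np.array([1.0, 0.6, 0.3])
H = np.zeros((n, n))
for i in range(n):
    for k, h in enumerate(kernel):
        if i + k < n:
            H[i + k, i] = h

f_true = np.zeros(n)
f_true[[8, 25]] = [2.0, 1.5]               # two sparse "impacts"
y = H @ f_true                             # simulated response
f_hat = ista_l1_deconv(H, y, lam=0.01, iters=2000)
```

Because the true force is sparse, the l1 penalty recovers isolated spikes, whereas an l2 (Tikhonov) penalty would smear energy across many components.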
Lagrangian-averaged model for magnetohydrodynamic turbulence and the absence of bottlenecks.
Pietarila Graham, Jonathan; Mininni, Pablo D; Pouquet, Annick
2009-07-01
We demonstrate that, for the case of quasiequipartition between the velocity and the magnetic field, the Lagrangian-averaged magnetohydrodynamics (LAMHD) alpha model reproduces well both the large-scale and the small-scale properties of turbulent flows; in particular, it displays no increased (superfilter) bottleneck effect with its ensuing enhanced energy spectrum at the onset of the subfilter scales. This is in contrast to the case of the neutral fluid, in which the Lagrangian-averaged Navier-Stokes alpha model is somewhat limited in its applications because of the formation of spatial regions with no internal degrees of freedom and subsequent contamination of superfilter-scale spectral properties. We argue that, as the Lorentz force breaks the conservation of circulation and enables spectrally nonlocal energy transfer (associated with Alfvén waves), it is responsible for the absence of a viscous bottleneck in magnetohydrodynamics (MHD), as compared to the fluid case. As LAMHD preserves Alfvén waves and the circulation properties of MHD, there is also no (superfilter) bottleneck found in LAMHD, making this method capable of large reductions in required numerical degrees of freedom; specifically, we find a reduction factor of approximately 200 when compared to a direct numerical simulation on a large grid of 1536^3 points at the same Reynolds number.
Disaggregated Effects of Device on Score Comparability
ERIC Educational Resources Information Center
Davis, Laurie; Morrison, Kristin; Kong, Xiaojing; McBride, Yuanyuan
2017-01-01
The use of tablets for large-scale testing programs has transitioned from concept to reality for many state testing programs. This study extended previous research on score comparability between tablets and computers with high school students to compare score distributions across devices for reading, math, and science and to evaluate device…
Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patchett, John M; Ahrens, James P; Lo, Li - Ta
2010-10-15
Extremely large scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software based ray tracing, software based rasterization and hardware accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU and CPU based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.
Large Eddy Simulation of a Turbulent Jet
NASA Technical Reports Server (NTRS)
Webb, A. T.; Mansour, Nagi N.
2001-01-01
Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.
The HI Content of Galaxies as a Function of Local Density and Large-Scale Environment
NASA Astrophysics Data System (ADS)
Thoreen, Henry; Cantwell, Kelly; Maloney, Erin; Cane, Thomas; Brough Morris, Theodore; Flory, Oscar; Raskin, Mark; Crone-Odekon, Mary; ALFALFA Team
2017-01-01
We examine the HI content of galaxies as a function of environment, based on a catalogue of 41527 galaxies that are part of the 70% complete Arecibo Legacy Fast-ALFA (ALFALFA) survey. We use nearest-neighbor methods to characterize local environment, and a modified version of the algorithm developed for the Galaxy and Mass Assembly (GAMA) survey to classify large-scale environment as group, filament, tendril, or void. We compare the HI content in these environments using statistics that include both HI detections and the upper limits on detections from ALFALFA. The large size of the sample allows us to statistically compare the HI content in different environments for early-type as well as late-type galaxies. This work is supported by NSF grants AST-1211005 and AST-1637339, the Skidmore Faculty-Student Summer Research program, and the Schupf Scholars program.
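The nearest-neighbor characterization of local environment mentioned above is commonly implemented as a density estimate from the distance to the k-th nearest neighbor. A hedged 2-D sketch (the survey's actual estimator, neighbor count, and distance metric may differ):

```python
import math

def knn_density(points, k=3):
    """Local projected density at each point from the distance d_k to its
    k-th nearest neighbour: sigma_k = k / (pi * d_k**2). Brute force,
    fine for illustration; real surveys use spatial indexing."""
    dens = []
    for i, (xi, yi) in enumerate(points):
        dists = sorted(math.hypot(xi - xj, yi - yj)
                       for j, (xj, yj) in enumerate(points) if j != i)
        dk = dists[k - 1]
        dens.append(k / (math.pi * dk * dk))
    return dens
```

Galaxies in groups then receive high local densities, while void galaxies receive low ones, independent of the large-scale group/filament/tendril/void classification.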
NASA Astrophysics Data System (ADS)
Onishchenko, O. G.; Pokhotelov, O. A.; Astafieva, N. M.
2008-06-01
The review deals with a theoretical description of the generation of zonal winds and vortices in a turbulent barotropic atmosphere. These large-scale structures largely determine the dynamics and transport processes in planetary atmospheres. The role of nonlinear effects in the formation of mesoscale vortical structures (cyclones and anticyclones) is examined. A new mechanism for zonal wind generation in planetary atmospheres is discussed. It is based on the parametric generation of convective cells by finite-amplitude Rossby waves. Weakly turbulent spectra of Rossby waves are considered. The theoretical results are compared with results of satellite microwave monitoring of the Earth's atmosphere.
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed.
DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
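The linear scaling described above comes from comparing each new genome only against the current set of family models rather than against every other genome. A minimal sketch of that idea follows; this is not DeNoGAP's actual code, and crude 3-mer count centroids stand in for the pipeline's iteratively refined hidden Markov models.

```python
# Illustrative sketch (not DeNoGAP's code): iterative, profile-based homology
# assignment. Each new protein is compared only against the current family
# profiles (3-mer count centroids standing in for the refined HMMs), so work
# grows with the number of families, not quadratically with all-vs-all pairs.
from collections import Counter

def kmer_profile(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(p, q):
    shared = sum((p & q).values())  # overlap of k-mer counts
    return shared / max(1, min(sum(p.values()), sum(q.values())))

def assign_families(genomes, threshold=0.5):
    families = []  # list of (profile, members) pairs
    for genome_id, proteins in genomes.items():
        for name, seq in proteins.items():
            prof = kmer_profile(seq)
            best, best_sim = None, 0.0
            for fam in families:  # cost per protein ~ number of families
                s = similarity(fam[0], prof)
                if s > best_sim:
                    best, best_sim = fam, s
            if best is not None and best_sim >= threshold:
                best[0].update(prof)  # iteratively refine the family profile
                best[1].append((genome_id, name))
            else:
                families.append((prof, [(genome_id, name)]))
    return families

# Hypothetical toy input: two tiny "genomes" sharing one near-identical protein.
genomes = {
    "g1": {"p1": "MKTAYIAKQRQISFVKSHFSRQ", "p2": "MLSDQDVQRVLNALKK"},
    "g2": {"p1": "MKTAYIAKQRQISFVKSHFSRN", "p3": "MGGHHHHHHSSGLV"},
}
fams = assign_families(genomes)
```

The two near-identical `p1` sequences collapse into one family while the unrelated proteins each seed their own, which is the behavior an HMM-based assignment would also produce at this scale.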
Microfilament-Eruption Mechanism for Solar Spicules
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, Ronald L.
2017-01-01
Recent studies indicate that solar coronal jets result from eruption of small-scale filaments, or "minifilaments" (Sterling et al. 2015, Nature, 523, 437; Panesar et al., ApJL, 832, L7). In many respects, these coronal jets appear to be small-scale versions of long-recognized large-scale solar eruptions that are often accompanied by eruption of a large-scale filament and that produce solar flares and coronal mass ejections (CMEs). In coronal jets, a jet-base bright point (JBP), which is often observed to accompany the jet and which sits on the magnetic neutral line from which the minifilament erupts, corresponds to the solar flare of larger-scale eruptions, which occurs at the neutral line from which the large-scale filament erupts. Large-scale eruptions are relatively uncommon (approximately 1 per day) and occur with relatively large-scale erupting filaments (approximately 10^5 kilometers long). Coronal jets are more common (approximately hundreds per day), but occur from erupting minifilaments of smaller size (approximately 10^4 kilometers long). It is known that solar spicules are much more frequent (many millions per day) than coronal jets. Just as coronal jets are small-scale versions of large-scale eruptions, here we suggest that solar spicules might in turn be small-scale versions of coronal jets; we postulate that the spicules are produced by eruptions of "microfilaments" of length comparable to the width of observed spicules (approximately 300 kilometers). A plot of the estimated number of the three respective phenomena (flares/CMEs, coronal jets, and spicules) occurring on the Sun at a given time, against the average sizes of the erupting filaments, minifilaments, and putative microfilaments, results in a size distribution that can be fitted with a power law within the estimated uncertainties.
The counterparts of the flares of large-scale eruptions and the JBPs of jets might be weak, pervasive, transient brightenings observed in Hinode/CaII images, and the production of spicules by microfilament eruptions might explain why spicules spin, as do coronal jets. The expected small-scale neutral lines from which the microfilaments would be expected to erupt would be difficult to detect reliably with current instrumentation, but might be apparent with instrumentation of the near future. A full report on this work appears in Sterling and Moore 2016, ApJL, 829, L9.
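The power-law fit mentioned above can be sketched numerically. The values below are only the abstract's order-of-magnitude occurrence rates and sizes ("hundreds per day" and "many millions per day" are assumed here to be 300 and 3e6); the actual fit appears in Sterling and Moore 2016, ApJL, 829, L9.

```python
# Hedged illustration of the size-frequency fit: occurrence rate vs. size of
# the erupting filament for flares/CMEs, coronal jets, and spicules.
# All numbers are assumed order-of-magnitude values from the abstract.
import numpy as np

sizes_km = np.array([1e5, 1e4, 300.0])       # filament, minifilament, microfilament
rate_per_day = np.array([1.0, 300.0, 3e6])   # rough occurrence rates

slope, intercept = np.polyfit(np.log10(sizes_km), np.log10(rate_per_day), 1)
print(f"power-law index ~ {slope:.2f}")       # rate scales as size**slope
```

With these assumed numbers the index comes out between -3 and -2, i.e. smaller eruptions are steeply more frequent, consistent with the single power-law picture proposed in the abstract.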
NASA Astrophysics Data System (ADS)
Wagener, T.
2017-12-01
Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can arise because the future is not like the past (e.g. due to climate change), because the historical data are unreliable (e.g. because they are imperfectly recorded from proxies or missing), or because they are scarce (either because measurements are not available at the right scale or there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment-scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items in a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment at very large scale. The experimental results demonstrate that our variants of LSH achieve robust performance, with better recall compared with "vanilla" LSH, even when using the same amount of space.
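The core LSH idea the abstract relies on can be sketched in a few lines. This is a generic random-hyperplane LSH for cosine similarity, not the paper's Hadoop implementation or any of its four evaluated variants.

```python
# Minimal random-hyperplane LSH sketch: similar vectors receive similar bit
# signatures, so candidate neighbors can be found by bucket lookup instead of
# scanning the whole database. Illustrative only; not the paper's variants.
import numpy as np

rng = np.random.default_rng(0)

def signatures(X, n_bits=16):
    planes = rng.normal(size=(X.shape[1], n_bits))  # random hyperplanes
    return (X @ planes > 0).astype(np.uint8)        # one bit per hyperplane

def lsh_buckets(sig):
    buckets = {}
    for i, row in enumerate(sig):
        buckets.setdefault(row.tobytes(), []).append(i)
    return buckets

X = rng.normal(size=(100, 32))                      # toy database
X[1] = X[0] + 0.01 * rng.normal(size=32)            # near-duplicate of item 0
sig = signatures(X)
buckets = lsh_buckets(sig)
```

Because items 0 and 1 are nearly collinear, their signatures agree on almost every bit, so they almost always share a bucket; production systems use many such tables to trade space for recall, which is the trade-off the abstract's variants address.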
A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA
NASA Astrophysics Data System (ADS)
Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing
Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared to a Pentium Dual CPU with double SSE threads.
Can limited area NWP and/or RCM models improve on large scales inside their domain?
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as this would obviously be nudging in the wrong direction whenever the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable the development of large scales improved over those of the driver model. This would typically include higher resolution, but does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members, initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme.
Accuracy in forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales. The average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run, and the driver global and nested Eta at about the same resolution during the first 10 days of the experiment, both verification measures generally demonstrate an advantage for the Eta, in particular during and after the passage of a deep upper-tropospheric trough crossing the Rockies during days 2-6 of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to an even greater degree by the number of "wins" of one model vs. another. Thus, at day 4.5, when the trough had just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better on rms scores, losing to Eta/eta by 18 vs. 3, but winning over Eta/sigma by 12 to 9. Examples of the wind plots behind these results are shown, and additional reasons possibly helping or not helping the results summarized are discussed.
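The Equitable Threat Score used as the verification measure above has a standard 2x2 contingency-table form, sketched below. The adjustment to unit bias (the "a" in ETSa) follows a separate procedure of Mesinger's and is not reproduced here.

```python
# Standard Equitable Threat Score (Gilbert skill score) from a 2x2
# contingency table of a yes/no forecast (e.g., wind speed > 45 m/s).
# The unit-bias adjustment used for ETSa is not implemented in this sketch.
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total  # chance hits
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

print(equitable_threat_score(50, 0, 0, 50))    # perfect forecast -> 1.0
print(equitable_threat_score(25, 25, 25, 25))  # no skill over chance -> 0.0
```

A perfect forecast scores 1, and a forecast no better than random scores 0, which is why the score is called "equitable".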
Allometric scaling of UK urban emissions: interpretation and implications for air quality management
NASA Astrophysics Data System (ADS)
MacKenzie, Rob; Barnes, Matt; Whyatt, Duncan; Hewitt, Nick
2016-04-01
Allometry uncovers structures and patterns by relating the characteristics of complex systems to a measure of scale. We present an allometric analysis of air quality for UK urban settlements, beginning with emissions and moving on to consider air concentrations. We consider both airshed-average 'urban background' concentrations (cf. those derived from satellites for NO2) and local pollution 'hotspots'. We show that there is a strong and robust scaling (with respect to population) of the non-point-source emissions of the greenhouse gases carbon dioxide and methane, as well as the toxic pollutants nitrogen dioxide, PM2.5, and 1,3-butadiene. The scaling of traffic-related emissions is not simply a reflection of road length, but rather results from the socio-economic patterning of road-use. The recent controversy regarding diesel vehicle emissions is germane to our study but does not affect our overall conclusions. We next develop an hypothesis for the population-scaling of airshed-average air concentrations, with which we demonstrate that, although average air quality is expected to be worse in large urban centres compared to small urban centres, the overall effect is an economy of scale (i.e., large cities reduce the overall burden of emissions compared to the same population spread over many smaller urban settlements). Our hypothesis explains satellite-derived observations of airshed-average urban NO2 concentrations. The theory derived also explains which properties of nature-based solutions (urban greening) can make a significant contribution at city scale, and points to a hitherto unforeseen opportunity to make large cities cleaner than smaller cities in absolute terms with respect to their airshed-average pollutant concentration.
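Allometric scaling of the kind described above is typically estimated as the exponent of a power law, E = a * P^beta, fitted by log-log regression. The sketch below uses synthetic data with an assumed exponent; the UK emissions values themselves are not reproduced here.

```python
# Log-log regression sketch for an allometric (population) scaling law.
# Synthetic settlements with an assumed exponent beta = 1.1; illustrative only.
import numpy as np

rng = np.random.default_rng(42)
population = 10 ** rng.uniform(3.5, 7, size=200)    # settlement populations
true_beta = 1.1                                     # assumed scaling exponent
emissions = 0.02 * population ** true_beta * rng.lognormal(0, 0.1, 200)

beta, log_a = np.polyfit(np.log(population), np.log(emissions), 1)
print(f"fitted exponent ~ {beta:.2f}")
```

An exponent above 1 means per-capita emissions grow with city size even while, as the abstract argues, concentrating the same population in fewer large cities can still lower the total burden relative to many small settlements.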
NASA Astrophysics Data System (ADS)
Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.
2002-11-01
The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min^-1 (120 gal min^-1) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences, such as a dose rate effect, are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
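The "dose constant" in the exponential removal models mentioned above is the rate k in C(D) = C0 * exp(-k * D), recoverable from concentration-vs-dose data by a log-linear fit. The numbers below are synthetic, not the paper's measurements.

```python
# Fitting a dose constant k for the exponential removal model
# C(D) = C0 * exp(-k * D). Synthetic, noiseless data with assumed values
# (k = 0.6 per kGy, C0 = 100); illustrative only.
import numpy as np

dose_kGy = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # absorbed doses
k_true, c0 = 0.6, 100.0                          # assumed dose constant, conc.
conc = c0 * np.exp(-k_true * dose_kGy)

slope, intercept = np.polyfit(dose_kGy, np.log(conc), 1)
k_est = -slope
print(f"fitted dose constant ~ {k_est:.3f} per kGy")
```

Comparing k fitted from electron-beam runs against k fitted from bench-scale gamma runs is exactly the comparison the paper makes; differences between the two are what motivate the dose-rate discussion.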
NASA Astrophysics Data System (ADS)
Tang, Zhanqi; Jiang, Nan; Zheng, Xiaobo; Wu, Yanhua
2016-05-01
Hot-wire measurements of a turbulent boundary layer flow perturbed by a wall-mounted cylindrical roughness element (CRE) are carried out in this study. The cylindrical element protrudes into the logarithmic layer, similar to those employed in turbulent boundary layers by Ryan et al. (AIAA J 49:2210-2220, 2011. doi: 10.2514/1.j051012) and Zheng and Longmire (J Fluid Mech 748:368-398, 2014. doi: 10.1017/jfm.2014.185) and in turbulent channel flow by Pathikonda and Christensen (AIAA J 53:1-10, 2014. doi: 10.2514/1.j053407). Similar effects on both the mean velocity and the Reynolds stress are observed downstream of the CRE perturbation. The series of hot-wire data are decomposed into large- and small-scale fluctuations, and the characteristics of the large- and small-scale bursting processes are examined by comparing the bursting duration, period and frequency between the CRE-perturbed and unperturbed cases. The results indicate that the CRE perturbation has a significant impact on both the large- and small-scale structures, though through different scenarios. Moreover, the large-scale bursting process imposes a modulation on the bursting events of the small-scale fluctuations, and the overall trend of this modulation is not essentially sensitive to the CRE perturbation, even though the modulation extent is modified. Conditionally averaged fluctuations are also plotted, which further confirms the robustness of the bursting modulation in the present experiments.
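The decomposition into large- and small-scale fluctuations described above is commonly done with a spectral cutoff filter. The sketch below uses an assumed cutoff frequency and synthetic data; the study's actual cutoff wavelength is not reproduced here.

```python
# Splitting a hot-wire velocity signal into large- and small-scale parts with
# a sharp spectral cutoff. Cutoff frequency and signal are assumed/synthetic.
import numpy as np

def scale_decompose(u, dt, f_cut):
    U = np.fft.rfft(u - u.mean())
    f = np.fft.rfftfreq(len(u), dt)
    large = np.fft.irfft(np.where(f < f_cut, U, 0), n=len(u))   # low-pass part
    small = np.fft.irfft(np.where(f >= f_cut, U, 0), n=len(u))  # high-pass part
    return large, small

rng = np.random.default_rng(1)
t = np.arange(4096) * 1e-4                       # assumed 10 kHz sampling
u = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.normal(size=t.size)
large, small = scale_decompose(u, 1e-4, 200.0)   # assumed 200 Hz cutoff
```

Because the two filters partition the spectrum, the parts sum back to the mean-removed signal exactly, which makes statistics such as bursting duration and frequency separable by scale.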
A geographic comparison of selected large-scale planetary surface features
NASA Technical Reports Server (NTRS)
Meszaros, S. P.
1984-01-01
Photographic and cartographic comparisons of geographic features on Mercury, the Moon, Earth, Mars, Ganymede, Callisto, Mimas, and Tethys are presented. Planetary structures caused by impacts, volcanism, tectonics, and other natural forces are included. Each feature is discussed individually and then those of similar origin are compared at the same scale.
On the Cross-Country Comparability of Indicators of Socioeconomic Resources in PISA
ERIC Educational Resources Information Center
Pokropek, Artur; Borgonovi, Francesca; McCormick, Carina
2017-01-01
Large-scale international assessments rely on indicators of the resources that students report having in their homes to capture the financial capital of their families. The scaling methodology currently used to develop the Programme for International Student Assessment (PISA) background indices is designed to maximize within-country comparability…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machicoane, Nathanaël; Volk, Romain
We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, and whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowledge of how size and density affect these time scales is crucial for understanding particle dynamics and may permit modeling them as stochastic processes using two-time models (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain an unbiased Lagrangian integral time. This is particularly useful in accessing the scale separation for such particles and comparing it to the case of fluid particles in a similar configuration.
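The Lagrangian integral time mentioned above is the integral of the normalized velocity autocorrelation. The sketch below extracts it from a synthetic Ornstein-Uhlenbeck velocity series standing in for tracked-particle data; the de-biasing method of the study is not shown.

```python
# Estimating a Lagrangian integral time T_L from a velocity autocorrelation.
# Synthetic OU process with <v(t)v(0)> ~ exp(-t/T_L), T_L = 1.0 (assumed).
import numpy as np

rng = np.random.default_rng(7)
dt, T_L, n = 0.01, 1.0, 200_000
xi = rng.normal(size=n)
v = np.zeros(n)
for i in range(1, n):  # discrete OU update
    v[i] = v[i - 1] * (1 - dt / T_L) + np.sqrt(2 * dt / T_L) * xi[i]

def integral_time(v, dt, max_lag=2000):
    v = v - v.mean()
    c = np.array([np.dot(v[:len(v) - k], v[k:]) / (len(v) - k)
                  for k in range(max_lag)])
    rho = c / c[0]                              # normalized autocorrelation
    stop = np.argmax(rho <= 0) if (rho <= 0).any() else len(rho)
    return rho[:stop].sum() * dt                # integrate to first zero crossing

T_est = integral_time(v, dt)
print(f"estimated integral time ~ {T_est:.2f}")
```

Truncating the integral at the first zero crossing is a common practical choice; with a record thousands of integral times long, the estimate lands close to the true value of 1.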
Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo
2012-02-08
Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is much critical for the formation of the pure h-BN nanosheets as well as the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased 3 times compared to that without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been implemented in the MMF, and two MMF experiments were carried out, one with the new scheme and one with the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging of the large-scale forcings toward the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water content) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and deficiencies of the MMF simulations and provide guidance on how to improve the MMF and its microphysics.
A 100,000 Scale Factor Radar Range.
Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser
2017-12-19
The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
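The electromagnetic scale-model relations behind this technique are simple: wavelengths shrink by the geometric scale factor s, and radar cross section scales as s squared. The radar band and RCS value below are assumptions for illustration, not the paper's numbers.

```python
# Scale-model radar relations for a factor-100,000 model (illustrative).
# Assumed full-scale band (S-band, 10 cm) and a hypothetical model RCS.
s = 100_000                           # geometric scale factor from the paper
full_scale_wavelength_m = 0.10        # assumed full-scale radar wavelength
model_wavelength_m = full_scale_wavelength_m / s   # ~1 micron: near infrared

sigma_model_m2 = 2.5e-8               # hypothetical RCS measured on the model
sigma_full_m2 = sigma_model_m2 * s**2  # RCS scales as s^2 back to full scale
print(model_wavelength_m, sigma_full_m2)
```

The first line of output shows why a 100,000:1 model pushes a centimetric radar wavelength into the near infrared, which is what makes the tabletop optical setup possible.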
Hofmeister series salts enhance purification of plasmid DNA by non-ionic detergents
Lezin, George; Kuehn, Michael R.; Brunelli, Luca
2011-01-01
Ion-exchange chromatography is the standard technique used for plasmid DNA purification, an essential molecular biology procedure. Non-ionic detergents (NIDs) have been used for plasmid DNA purification, but it is unclear whether Hofmeister series salts (HSS) change the solubility and phase separation properties of specific NIDs, enhancing plasmid DNA purification. After scaling up NID-mediated plasmid DNA isolation, we established that NIDs in HSS solutions minimize plasmid DNA contamination with protein. In addition, large-scale NID/HSS solutions eliminated LPS contamination of plasmid DNA more effectively than Qiagen ion-exchange columns. Large-scale NID isolation/NID purification generated increased yields of high quality DNA compared to alkali isolation/column purification. This work characterizes how HSS enhance NID-mediated plasmid DNA purification, and demonstrates that NID phase transition is not necessary for LPS removal from plasmid DNA. Specific NIDs such as IGEPAL CA-520 can be utilized for rapid, inexpensive and efficient laboratory-based large-scale plasmid DNA purification, outperforming Qiagen-based column procedures. PMID:21351074
Large-Scale, High-Resolution Neurophysiological Maps Underlying fMRI of Macaque Temporal Lobe
Papanastassiou, Alex M.; DiCarlo, James J.
2013-01-01
Maps obtained by functional magnetic resonance imaging (fMRI) are thought to reflect the underlying spatial layout of neural activity. However, previous studies have not been able to directly compare fMRI maps to high-resolution neurophysiological maps, particularly in higher level visual areas. Here, we used a novel stereo microfocal x-ray system to localize thousands of neural recordings across monkey inferior temporal cortex (IT), construct large-scale maps of neuronal object selectivity at subvoxel resolution, and compare those neurophysiology maps with fMRI maps from the same subjects. While neurophysiology maps contained reliable structure at the sub-millimeter scale, fMRI maps of object selectivity contained information at larger scales (>2.5 mm) and were only partly correlated with raw neurophysiology maps collected in the same subjects. However, spatial smoothing of neurophysiology maps more than doubled that correlation, while a variety of alternative transforms led to no significant improvement. Furthermore, raw spiking signals, once spatially smoothed, were as predictive of fMRI maps as local field potential signals. Thus, fMRI of the inferior temporal lobe reflects a spatially low-passed version of neurophysiology signals. These findings strongly validate the widespread use of fMRI for detecting large (>2.5 mm) neuronal domains of object selectivity but show that a complete understanding of even the most pure domains (e.g., faces vs nonface objects) requires investigation at fine scales that can currently only be obtained with invasive neurophysiological methods. PMID:24048850
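The central result above, that smoothing the neurophysiology map more than doubles its correlation with the fMRI map, can be illustrated with a toy model in which the fMRI signal is a low-passed view of the fine-scale map. The 1-D maps, moving-average filter, and all numbers below are assumptions, not the study's pipeline.

```python
# Toy version of the smoothing analysis: smoothing a fine-scale
# "neurophysiology" map raises its correlation with a low-passed "fMRI" map.
# 1-D maps and a simple moving-average filter are assumed for illustration.
import numpy as np

rng = np.random.default_rng(3)

def smooth(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

coarse = smooth(rng.normal(size=2000), 150)   # large-scale selectivity pattern
coarse /= coarse.std()                        # unit-variance large-scale map
neuro = coarse + 0.5 * rng.normal(size=2000)  # plus fine (sub-voxel) structure
fmri = smooth(coarse, 100)                    # spatially low-passed measurement

r_raw = np.corrcoef(neuro, fmri)[0, 1]
r_smooth = np.corrcoef(smooth(neuro, 100), fmri)[0, 1]
print(f"raw r = {r_raw:.2f}, smoothed r = {r_smooth:.2f}")
```

Smoothing suppresses the fine-scale component that the fMRI-like map never contained, so the correlation rises, mirroring the paper's conclusion that fMRI reflects a spatially low-passed version of the neurophysiology signal.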
ERIC Educational Resources Information Center
Chudgar, Amita; Luschei, Thomas F.
2016-01-01
The objective of this commentary is to call attention to the feasibility and importance of large-scale, systematic, quantitative analysis in international and comparative education research. We contend that although many existing databases are under- or unutilized in quantitative international-comparative research, these resources present the…
NASA Astrophysics Data System (ADS)
Venegas-González, Alejandro; Chagas, Matheus Peres; Anholetto Júnior, Claudio Roberto; Alvares, Clayton Alcarde; Roig, Fidel Alejandro; Tomazello Filho, Mario
2016-01-01
We explored the relationship between tree growth in two tropical species and local and large-scale climate variability in Southeastern Brazil. Tree ring width chronologies of Tectona grandis (teak) and Pinus caribaea (Caribbean pine) trees were compared with local indices (Water Requirement Satisfaction Index, WRSI; Standardized Precipitation Index, SPI; and Palmer Drought Severity Index, PDSI) and large-scale climate indices that track equatorial Pacific sea surface temperature (Trans-Niño Index, TNI, and Niño-3.4, N3.4) and atmospheric circulation variations in the Southern Hemisphere (Antarctic Oscillation, AAO). Teak trees showed positive correlation with the three local indices in the current summer and fall. A significant correlation between the WRSI index and Caribbean pine was observed in the dry season preceding tree ring formation. The influence of large-scale climate patterns was observed only for TNI and AAO: there was a radial growth reduction in the months preceding the growing season with positive values of the TNI in teak trees, and a radial growth increase (decrease) during December (March) to February (May) of the previous (current) growing season with the positive phase of the AAO in teak (Caribbean pine) trees. The development of a new dendroclimatological study in Southeastern Brazil sheds light on local and large-scale climate influence on tree growth in recent decades, contributing to future climate change studies.
Large-scale impacts of herbivores on the structural diversity of African savannas
Asner, Gregory P.; Levick, Shaun R.; Kennedy-Bowdoin, Ty; Knapp, David E.; Emerson, Ruth; Jacobson, James; Colgan, Matthew S.; Martin, Roberta E.
2009-01-01
African savannas are undergoing management intensification, and decision makers are increasingly challenged to balance the needs of large herbivore populations with the maintenance of vegetation and ecosystem diversity. Ensuring the sustainability of Africa's natural protected areas requires information on the efficacy of management decisions at large spatial scales, but often neither experimental treatments nor large-scale responses are available for analysis. Using a new airborne remote sensing system, we mapped the three-dimensional (3-D) structure of vegetation at a spatial resolution of 56 cm throughout 1640 ha of savanna after 6-, 22-, 35-, and 41-year exclusions of herbivores, as well as in unprotected areas, across Kruger National Park in South Africa. Areas in which herbivores were excluded over the short term (6 years) contained 38%–80% less bare ground compared with those that were exposed to mammalian herbivory. Over the longer term (>22 years), the 3-D structure of woody vegetation differed significantly between protected and accessible landscapes, with up to 11-fold greater woody canopy cover in the areas without herbivores. Our maps revealed two scales of ecosystem response to herbivore consumption, one broadly mediated by geologic substrate and the other mediated by hillslope-scale variation in soil nutrient availability and moisture conditions. Our results are the first to quantitatively illustrate the extent to which herbivores can affect the 3-D structural diversity of vegetation across large savanna landscapes. PMID:19258457
Channel optimization of high-intensity laser beams in millimeter-scale plasmas.
Ceurvorst, L; Savin, A; Ratan, N; Kasim, M F; Sadler, J; Norreys, P A; Habara, H; Tanaka, K A; Zhang, S; Wei, M S; Ivancic, S; Froula, D H; Theobald, W
2018-04-01
Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^{18}W/cm^{2}) kilojoule laser pulses through large density scale length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models describing hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
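The key property claimed above, a gradient cost that is effectively independent of the number of parameters, can be illustrated on a toy problem. The sketch below is an illustration only, not the paper's implementation: it applies discrete adjoint sensitivity analysis to a hypothetical one-parameter decay model dx/dt = -k x integrated with explicit Euler, obtains the gradient of a least-squares objective with a single backward pass, and checks it against finite differences.

```python
import numpy as np

def simulate(k, x0, h, n_steps):
    # Explicit Euler for the hypothetical decay model dx/dt = -k*x.
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        x[n + 1] = x[n] * (1.0 - h * k)
    return x

def loss_and_adjoint_grad(k, x0, h, n_steps, meas_idx, y):
    """Least-squares loss J = sum (x[t_i] - y_i)^2 and dJ/dk via the
    discrete adjoint of the Euler recurrence (one backward sweep)."""
    x = simulate(k, x0, h, n_steps)
    r = x[meas_idx] - y
    J = np.sum(r ** 2)
    meas = dict(zip(meas_idx, r))
    lam = 0.0    # adjoint variable: dJ/dx[n]
    dJdk = 0.0
    for n in range(n_steps, 0, -1):
        if n in meas:
            lam += 2.0 * meas[n]          # measurement contribution at step n
        dJdk += lam * (-h * x[n - 1])     # direct k-dependence of step n
        lam *= (1.0 - h * k)              # pull adjoint back to step n-1
    return J, dJdk
```

A single forward and a single backward sweep give the exact gradient of the discretized objective; with p parameters the backward sweep would still be one pass, which is the scaling advantage the abstract describes.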
Axions, neutrinos and strings: The formation of structure in an SO(10) universe
NASA Technical Reports Server (NTRS)
Stecker, F. W.
1984-01-01
In a class of grand unified theories containing SO(10), cosmologically significant axion and neutrino energy densities are obtainable naturally. To obtain large-scale structure, both components of dark matter are considered to exist with comparable energy densities, and both inflationary and non-inflationary scenarios are considered, with and without vacuum strings. It is shown that inflation may be compatible with recent observations of the mass density within galaxy clusters and superclusters, especially if strings are present.
Wake profile measurements of fixed and oscillating flaps
NASA Technical Reports Server (NTRS)
Owen, F. K.
1984-01-01
Although the potential of laser velocimetry for the non-intrusive measurement of complex shear flows has long been recognized, there have been few applications outside small, closely controlled laboratory situations, and measurements in large-scale, high-speed wind tunnels remain a complex task. This study was undertaken to support an investigation of periodic flows produced by an oscillating edge flap in the Ames eleven-foot wind tunnel. The potential for laser velocimeter measurements in large-scale production facilities is evaluated, and the results are compared with hot-wire flow field measurements.
Axions, neutrinos and strings - The formation of structure in an SO(10) universe
NASA Technical Reports Server (NTRS)
Stecker, F. W.
1986-01-01
In a class of grand unified theories containing SO(10), cosmologically significant axion and neutrino energy densities are obtainable naturally. To obtain large-scale structure, both components of dark matter are considered to exist with comparable energy densities, and both inflationary and non-inflationary scenarios are considered, with and without vacuum strings. It is shown that inflation may be compatible with recent observations of the mass density within galaxy clusters and superclusters, especially if strings are present.
A parallel orbital-updating based plane-wave basis method for electronic structure calculations
NASA Astrophysics Data System (ADS)
Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui
2017-11-01
Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, aimed at solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly attractive for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.
Global properties of the plasma in the outer heliosphere. I - Large-scale structure and evolution
NASA Technical Reports Server (NTRS)
Barnes, A.; Mihalov, J. D.; Gazis, P. R.; Lazarus, A. J.; Belcher, J. W.; Gordon, G. S., Jr.; Mcnutt, R. L., Jr.
1992-01-01
Pioneers 10 and 11 and Voyager 2 carry active plasma analyzers as they proceed through heliocentric distances of the order of 30-50 AU, facilitating comparative studies of the global character of the outer solar wind and its variation over the solar cycle. Careful study of these data shows that the solar wind ion temperature remains constant beyond 15 AU and that there may be large-scale variations of temperature with celestial longitude and heliographic latitude. There has thus far been no indication of a heliospheric terminal shock.
Mesh refinement in a two-dimensional large eddy simulation of a forced shear layer
NASA Technical Reports Server (NTRS)
Claus, R. W.; Huang, P. G.; Macinnes, J. M.
1989-01-01
A series of large eddy simulations are made of a forced shear layer and compared with experimental data. Several mesh densities were examined to separate the effect of numerical inaccuracy from modeling deficiencies. The turbulence model that was used to represent small scale, 3-D motions correctly predicted some gross features of the flow field, but appears to be structurally incorrect. The main effect of mesh refinement was to act as a filter on the scale of vortices that developed from the inflow boundary conditions.
Large-scale shell-model calculations for 32-39P isotopes
NASA Astrophysics Data System (ADS)
Srivastava, P. C.; Hirsch, J. G.; Ermamatov, M. J.; Kota, V. K. B.
2012-10-01
In this work, the structure of 32-39P isotopes is described in the framework of state-of-the-art large-scale shell-model calculations, employing the code ANTOINE with three modern effective interactions: SDPF-U, SDPF-NR, and the extended pairing plus quadrupole-quadrupole-type force with inclusion of the monopole interaction (EPQQM). Protons are restricted to the sd shell, while neutrons are active in the sd-pf valence space. Results for positive- and negative-parity level energies and electromagnetic observables are compared with the available experimental data.
Can a science-based definition of acupuncture improve clinical outcomes?
Priebe, Ted; Stumpf, Steven H; Zalunardo, Rod
2017-05-01
Research on acupuncture has been muddled by attempts to bridge the ancient with the modern. Barriers to effectiveness research are reflected in recurring conflicts that include disagreement on the use of the most basic terms, lack of standard intervention controls, and the absence of functional measures for assessing treatment effect. Acupuncture research has stalled at the "placebo barrier", wherein acupuncture is "no better than placebo". The most widely recognized comparative effectiveness research in acupuncture does not compare acupuncture treatment protocols within groups, thereby turning large-scale effectiveness studies into large-scale efficacy trials. Too often, research in acupuncture attempts to tie outcomes to traditional belief systems, thereby limiting the usefulness of the research. The acupuncture research paradigm needs to focus more closely on a scientific definition of treatments and outcomes that compares protocols in terms of prevalent clinical issues, such as relative effectiveness for treating pain.
Numerical study of axial turbulent flow over long cylinders
NASA Technical Reports Server (NTRS)
Neves, J. C.; Moin, P.; Moser, R. D.
1991-01-01
The effects of transverse curvature are investigated by means of direct numerical simulations of turbulent axial flow over cylinders. Two cases of Reynolds number of about 3400 and layer-thickness-to-cylinder-radius ratios of 5 and 11 were simulated. All essential turbulence scales were resolved in both calculations, and a large number of turbulence statistics were computed. The results are compared with the plane channel results of Kim et al. (1987) and with experiments. With transverse curvature the skin friction coefficient increases and the turbulence statistics, when scaled with wall units, are lower than in the plane channel. The momentum equation provides a scaling that collapses the cylinder statistics, and allows the results to be interpreted in light of the plane channel flow. The azimuthal and radial length scales of the structures in the flow are of the order of the cylinder diameter. Boomerang-shaped structures with large spanwise length scales were observed in the flow.
Nanoliter-Scale Protein Crystallization and Screening with a Microfluidic Droplet Robot
Zhu, Ying; Zhu, Li-Na; Guo, Rui; Cui, Heng-Jun; Ye, Sheng; Fang, Qun
2014-01-01
Large-scale screening of hundreds or even thousands of crystallization conditions with low sample consumption is urgently needed in current structural biology research. Here we describe a fully automated droplet robot for nanoliter-scale crystallization screening that combines the advantages of the automated robotics technique for protein crystallization screening and the droplet-based microfluidic technique. A semi-contact dispensing method was developed to achieve flexible, programmable and reliable liquid-handling operations for nanoliter-scale protein crystallization experiments. We applied the droplet robot to large-scale screening of crystallization conditions of five soluble proteins and one membrane protein with 35–96 different crystallization conditions, the study of volume effects on protein crystallization, and the determination of phase diagrams of two proteins. The volume of each droplet reactor is only ca. 4–8 nL. Protein consumption is reduced 50–500-fold compared with current crystallization stations. PMID:24854085
Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V
2008-10-01
International Consensus Criteria (ICC) consider a right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). The Spencer Logarithmic Scale (SLS) offers more grades of RLS, with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in the detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic stroke or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for the detection of a large RLS. ICC and SLS criteria yielded false positive rates of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using Spencer Scale grade III or higher can halve the number of false positive TCD diagnoses in predicting a large RLS on TEE.
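The two grading schemes and the reported diagnostic statistics can be transcribed directly. The sketch below is an illustrative transcription, not a clinical tool: it encodes the SLS and ICC microbubble-count cutoffs quoted above (the qualitative "shower" and "curtain" appearances are passed as flags, since they are not count-based) and recomputes the diagnostic metrics from the reported confusion matrix (24 TP, 48 TN, 4 FP, 2 FN).

```python
def sls_grade(n):
    """Spencer Logarithmic Scale grade from microbubble count (0 = negative)."""
    for upper, grade in [(0, 0), (10, 1), (30, 2), (100, 3), (300, 4)]:
        if n <= upper:
            return grade
    return 5  # grade V: >300 microbubbles

def icc_grade(n, shower=False, curtain=False):
    """International Consensus Criteria grade; qualitative appearances
    ("shower" -> grade II, "curtain" -> grade III) override the count."""
    if curtain:
        return 3
    if shower or n > 20:
        return 2
    return 0 if n == 0 else 1

def diagnostic_metrics(tp, tn, fp, fn):
    """Standard diagnostic-test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

Running `diagnostic_metrics(24, 48, 4, 2)` reproduces the figures quoted in the abstract (sensitivity 92.3%, specificity 92.3%, PPV 85.7%, NPV 96%, accuracy 92.3%).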
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a "Big Data" problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and with multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (28,100 km2, with 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex satellite data processing workflows required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
On identifying relationships between the flood scaling exponent and basin attributes.
Medhi, Hemanta; Tripathi, Shivam
2015-07-01
Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which the snowfall dominates the total precipitation.
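The central quantity in the abstract, the flood scaling exponent, is the slope of an ordinary least squares fit in log-log space under the power-law model Q = c·A^θ. A minimal sketch follows, with made-up drainage areas and flows rather than the paper's gauging records:

```python
import numpy as np

def flood_scaling_exponent(areas_km2, peak_flows):
    """OLS fit of log(Q) = log(c) + theta*log(A) over a homogeneous region.
    Returns (theta, log_c); theta is the flood scaling exponent."""
    x = np.log(np.asarray(areas_km2, dtype=float))
    y = np.log(np.asarray(peak_flows, dtype=float))
    theta, log_c = np.polyfit(x, y, 1)
    return theta, log_c
```

With observed flood quantiles in place of the synthetic flows below, the fitted theta is what the stepwise regression in the paper relates to basin attributes.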
Determining erosion relevant soil characteristics with a small-scale rainfall simulator
NASA Astrophysics Data System (ADS)
Schindewolf, M.; Schmidt, J.
2009-04-01
The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models at the regional scale is limited not least by their high parameter demands. Although the EROSION 3D simulation model operates with a comparably low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid-1990s from large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. Therefore a new campaign of experiments is essential to produce additional information, especially with respect to the effects of new soil management practices (e.g. long-term conservation tillage, no tillage). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating rainfall nozzles. Veejet 80/100 nozzles (Spraying Systems Co., Wheaton, IL) are used to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives for the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is the restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the impact of surface runoff on particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator which allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream.
Thus it is possible to overcome the plot length limitation of 3 m while reproducing nearly the same flow conditions as in rainfall experiments on standard plots. The simulator has been extensively applied to plots with different soil types, crop types and management systems. Comparison with existing data sets obtained by large-scale rainfall simulations shows that the results can be adequately reproduced by the applied combination of the small-scale rainfall simulator and sediment-loaded water influx.
Mouse Activity across Time Scales: Fractal Scenarios
Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.
2014-01-01
In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity, a typical 1/f complex pattern, while for large time intervals there is anti-correlation. The high correlation over short intervals, in both the waking state and SWS, is related to highly coordinated muscle activity. In the waking state we associate high correlation both with muscle activity and with stereotyped mouse movements (grooming, walking, etc.). Conversely, the anti-correlation observed over large time scales during SWS appears related to a feedback autonomic response. The transition from the correlated regime at short scales to the anti-correlated regime at large scales during SWS is set by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time-scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each with a characteristic time correlation. Finally, we compare scaling properties of body acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to those reported previously for the human heartbeat. Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
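Detrended Fluctuation Analysis itself is compact enough to sketch. The version below is a simplification (first-order DFA with non-overlapping windows), not necessarily the authors' exact variant: it integrates the mean-removed signal, removes a linear trend in each window of size s, and fits the scaling exponent α from log F(s) versus log s, where α ≈ 0.5 indicates uncorrelated noise and larger values indicate persistent correlations.

```python
import numpy as np

def dfa(signal, scales):
    """First-order DFA: fluctuation function F(s) for each window size s."""
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, mean-removed series
    F = []
    for s in scales:
        n_win = len(profile) // s
        sq = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

def scaling_exponent(scales, F):
    """Slope alpha of log F(s) vs log s."""
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

On white noise this recovers α near 0.5, and on its cumulative sum (Brownian-like motion) a much larger exponent, the same qualitative contrast the abstract draws between short-scale correlated and long-scale anti-correlated regimes.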
Zhou, Juntuo; Liu, Huiying; Liu, Yang; Liu, Jia; Zhao, Xuyang; Yin, Yuxin
2016-04-19
Recent advances in mass spectrometers, which have yielded higher resolution and faster scanning speeds, have expanded their application in metabolomics of diverse diseases. Using a quadrupole-Orbitrap LC-MS system, we developed an efficient large-scale quantitative method targeting 237 metabolites involved in various metabolic pathways using scheduled, parallel reaction monitoring (PRM). We assessed the dynamic range, linearity, reproducibility, and system suitability of the PRM assay by measuring concentration curves, biological samples, and clinical serum samples. The quantification performances of the PRM and MS1-based assays on the Q-Exactive were compared, as was that of the MRM assay on a QTRAP 6500. The PRM assay monitoring 237 polar metabolites showed greater reproducibility and quantitative accuracy than MS1-based quantification, and also greater flexibility in post-acquisition assay refinement than the MRM assay on the QTRAP 6500. We present a workflow for convenient PRM data processing using the Skyline software, which is free of charge. In this study we have established a reliable PRM methodology on a quadrupole-Orbitrap platform for large-scale targeted metabolomics, which provides a new choice for basic and clinical metabolomics studies.
NASA Astrophysics Data System (ADS)
Happel, T.; Navarro, A. Bañón; Conway, G. D.; Angioni, C.; Bernert, M.; Dunne, M.; Fable, E.; Geiger, B.; Görler, T.; Jenko, F.; McDermott, R. M.; Ryter, F.; Stroth, U.
2015-03-01
Additional electron cyclotron resonance heating (ECRH) is used in an ion-temperature-gradient instability dominated regime to increase R/LTe in order to approach the trapped-electron-mode instability regime. The radial ECRH deposition location determines to a large degree the effect on R/LTe. Accompanying scale-selective turbulence measurements at perpendicular wavenumbers k⊥ = 4-18 cm^-1 (k⊥ρs = 0.7-4.2) show a pronounced increase of large-scale density fluctuations close to the ECRH radial deposition location at mid-radius, along with a reduction in the phase velocity of large-scale density fluctuations. Measurements are compared with results from linear and non-linear flux-matched gyrokinetic (GK) simulations with the gyrokinetic code GENE. Linear GK simulations show a reduction of the phase velocity, indicating a pronounced change in the character of the dominant instability. Comparing measurement and non-linear GK simulation, as a central result, agreement is obtained in the shape of radial turbulence level profiles. However, the turbulence intensity increases with additional heating in the experiment, while gyrokinetic simulations show a decrease.
Biased Tracers in Redshift Space in the EFT of Large-Scale Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Ashley; Senatore, Leonardo; Jennings, Elise
2016-10-28
The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one-loop order, and we make the associated code publicly available. We compare the multipoles ℓ = 0, 2 of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at z = 0.67. For the samples we compare to, which have number densities of n = 3.8 x 10^-2 (h Mpc^-1)^3 and n = 3.9 x 10^-4 (h Mpc^-1)^3, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to k ≈ 0.43 h Mpc^-1, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. This formalism is thus ready to start to be compared directly to observational data.
Galaxy clustering and the origin of large-scale flows
NASA Technical Reports Server (NTRS)
Juszkiewicz, R.; Yahil, A.
1989-01-01
Peebles's 'cosmic virial theorem' is extended from its original range of validity at small separations, where hydrostatic equilibrium holds, to large separations, in which linear gravitational stability theory applies. The rms pairwise velocity difference at separation r is shown to depend on the spatial galaxy correlation function xi(x) only for x less than r. Gravitational instability theory can therefore be tested by comparing the two up to the maximum separation for which both can reliably be determined, and there is no dependence on the poorly known large-scale density and velocity fields. With the expected improvement in the data over the next few years, however, this method should yield a reliable determination of omega.
Optimal estimation and scheduling in aquifer management using the rapid feedback control method
NASA Astrophysics Data System (ADS)
Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric
2017-12-01
Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observations in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases so rapidly with the size of the problem that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term the Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear, uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach to feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement over basic LQG control, whose computational cost scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm, with small and controllable losses in the accuracy of the state and parameter estimation.
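For reference, the LQG baseline against which the RFC is compared is the generic textbook construction: a steady-state discrete-time LQR feedback gain K computed by backward Riccati iteration, applied to a Kalman-filter state estimate via the separation principle (u = -K x̂). The sketch below shows only the Riccati/gain step on a toy system; it is not the RFC method, and the cost of propagating the full Riccati matrices is what drives the quadratic scaling mentioned above.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=1000):
    """Steady-state discrete-time LQR gain K via backward Riccati iteration.
    Feedback u = -K x minimizes sum(x'Qx + u'Ru) for x_{t+1} = A x_t + B u_t."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)   # (R + B'PB)^-1 B'PA
        P = Q + A.T @ P @ (A - B @ K)               # Riccati recursion
    return K
```

For the scalar system A = B = Q = R = 1 the fixed point is analytic: P converges to the golden ratio and K to (√5 - 1)/2 ≈ 0.618, giving a stable closed loop.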
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution so that two large-scale climatic indices can be incorporated at the same time, and we apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation is, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is checked using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models which forecast transition probabilities from a current to a future SPI class.
The results show that the three proposed models outperform the two traditional models and involving large-scale climatic indices can improve the forecasting accuracy.
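The conditioning step at the heart of such a model can be sketched compactly. The bivariate case below is an illustrative reduction (the paper conditions jointly on the SPI and climatic indices with a larger covariance matrix); the class edges, lag correlation, and current SPI value are hypothetical, not fitted values from the Luanhe basin data.

```python
from statistics import NormalDist

def conditional_normal(mu, cov, x_obs):
    """Condition a bivariate normal [future SPI, current SPI] on the observed current value.
    Returns (mean, std) of future SPI given current SPI = x_obs."""
    mu_f, mu_c = mu
    var_c = cov[1][1]
    cov_fc = cov[0][1]
    mean = mu_f + cov_fc / var_c * (x_obs - mu_c)
    var = cov[0][0] - cov_fc ** 2 / var_c
    return mean, var ** 0.5

def transition_probs(mean, std, edges):
    """Probability mass of the future SPI landing in each class delimited by edges."""
    nd = NormalDist(mean, std)
    bounds = [float("-inf")] + list(edges) + [float("inf")]
    return [nd.cdf(b) - nd.cdf(a) for a, b in zip(bounds, bounds[1:])]

# Hypothetical example: current SPI is -1.2, lag correlation 0.6, standard SPI class edges.
mean, std = conditional_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], -1.2)
probs = transition_probs(mean, std, [-1.5, -0.5, 0.5, 1.5])
```

Extending this to include climatic indices is the standard multivariate-normal conditioning formula with a larger covariance matrix; the class-integration step is unchanged.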
Higher-level simulations of turbulent flows
NASA Technical Reports Server (NTRS)
Ferziger, J. H.
1981-01-01
The fundamentals of large eddy simulation are considered and the approaches to it are compared. Subgrid scale models and the development of models for the Reynolds-averaged equations are discussed as well as the use of full simulation in testing these models. Numerical methods used in simulating large eddies, the simulation of homogeneous flows, and results from full and large scale eddy simulations of such flows are examined. Free shear flows are considered with emphasis on the mixing layer and wake simulation. Wall-bounded flow (channel flow) and recent work on the boundary layer are also discussed. Applications of large eddy simulation and full simulation in meteorological and environmental contexts are included along with a look at the direction in which work is proceeding and what can be expected from higher-level simulation in the future.
The Chandra Deep Wide-Field Survey: Completing the new generation of Chandra extragalactic surveys
NASA Astrophysics Data System (ADS)
Hickox, Ryan
2016-09-01
Chandra X-ray surveys have revolutionized our view of the growth of black holes across cosmic time. Recently, fundamental questions have emerged about the connection of AGN to their host large scale structures that clearly demand a wide, deep survey over a large area, comparable to the recent extensive Chandra surveys in smaller fields. We propose the Chandra Deep Wide-Field Survey (CDWFS) covering the central 6 sq. deg in the Bootes field, totaling 1.025 Ms (building on 550 ks from the HRC GTO program). CDWFS will efficiently probe a large cosmic volume, allowing us to carry out accurate new investigations of the connections between black holes and their large-scale structures, and will complete the next generation surveys that comprise a key part of Chandra's legacy.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comparing Validity Evidence of Two ECERS-R Scoring Systems
ERIC Educational Resources Information Center
Zeng, Songtian
2017-01-01
Over 30 states have adopted the Early Childhood Environmental Rating Scale-Revised (ECERS-R) as a component of their program quality assessment systems, but the use of the ECERS-R on such a large scale has raised important questions about implementation. One of the most pressing questions centers upon decisions users must make between two scoring…
Effect of Carboxymethylation on the Rheological Properties of Hyaluronan
Wendling, Rian J.; Christensen, Amanda M.; Quast, Arthur D.; Atzet, Sarah K.; Mann, Brenda K.
2016-01-01
Chemical modifications made to hyaluronan to enable covalent crosslinking to form a hydrogel, or to attach other molecules, may also alter its physical properties, which have physiological importance. Here we created carboxymethyl hyaluronan (CMHA) with varying degrees of modification and investigated the effect on the viscosity of CMHA solutions. Viscosity decreased initially as modification increased, with a minimum viscosity at about 30–40% modification, followed by an increase in viscosity around 45–50% modification. The pH of the solution had a variable effect on viscosity, depending on the degree of carboxymethyl modification and the buffer. The presence of phosphates in the buffer led to decreased viscosity. We also compared large-scale production lots of CMHA to lab-scale lots and found that large-scale production required extended reaction times to achieve the same degree of modification. Finally, thiolated CMHA was disulfide-crosslinked to create hydrogels with increased viscosity and shear-thinning behavior compared to CMHA solutions. PMID:27611817
Factors Affecting Volunteering among Older Rural and City Dwelling Adults in Australia
ERIC Educational Resources Information Center
Warburton, Jeni; Stirling, Christine
2007-01-01
In the absence of large scale Australian studies of volunteering among older adults, this study compared the relevance of two theoretical approaches--social capital theory and sociostructural resources theory--to predict voluntary activity in relation to a large national database. The paper explores volunteering by older people (aged 55+) in order…
Targeted enrichment strategies for next-generation plant biology
Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua Udall
2012-01-01
The dramatic advances offered by modern DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs under fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, the ILP model becomes intractable for large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which yields a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
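As a toy illustration of why exact search blows up while a decomposition-style heuristic stays cheap, the sketch below solves a hypothetical fixed-cost grid coverage instance both ways. The coverage rule, grid size, and uniform fixed cost (so cost minimization reduces to counting open sites) are invented for illustration, not taken from the paper's formulation.

```python
import itertools

# Hypothetical toy instance: every demand cell must have an open facility within
# Chebyshev distance 1 on a 4x4 grid; all facilities share the same fixed cost,
# so minimizing total fixed cost means minimizing the number of open sites.
demand = [(0, 0), (0, 3), (2, 1), (3, 3)]
sites = [(r, c) for r in range(4) for c in range(4)]

def covers(site, cell):
    return max(abs(site[0] - cell[0]), abs(site[1] - cell[1])) <= 1

def exact(sites, demand):
    """Brute-force optimum (stand-in for the ILP) -- exponential in instance size."""
    for k in range(1, len(sites) + 1):
        for combo in itertools.combinations(sites, k):
            if all(any(covers(s, d) for s in combo) for d in demand):
                return list(combo)

def greedy(sites, demand):
    """Decomposition-style heuristic: repeatedly open the site covering the most
    still-uncovered demand cells (assumes every demand cell is coverable)."""
    open_sites, uncovered = [], set(demand)
    while uncovered:
        best = max(sites, key=lambda s: sum(covers(s, d) for d in uncovered))
        open_sites.append(best)
        uncovered -= {d for d in uncovered if covers(best, d)}
    return open_sites

opt = exact(sites, demand)
heur = greedy(sites, demand)
```

On this instance the heuristic happens to match the optimum (3 open sites); in general greedy covering only approximates it, which mirrors the paper's reported small optimality loss.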
Large scale mass redistribution and surface displacement from GRACE and SLR
NASA Astrophysics Data System (ADS)
Cheng, M.; Ries, J. C.; Tapley, B. D.
2012-12-01
Mass transport between the atmosphere, ocean and solid earth results in temporal variations in the Earth's gravity field and in loading-induced deformation of the Earth. Recent space-borne observations, such as the GRACE mission, are providing extremely high precision temporal variations of the gravity field. Results from 10 years of GRACE data have shown significant annual variations of large-scale vertical and horizontal displacements over the Amazon, the Himalayan region, South Asia, Africa, and Russia, with amplitudes of a few mm. Improved understanding from monitoring and modeling of the large-scale mass redistribution and the Earth's response is critical for all studies in the geosciences, in particular for determination of the Terrestrial Reference System (TRS), including geocenter motion. This paper will report results for the observed seasonal variations in the 3-dimensional surface displacements of SLR and GPS tracking stations and compare them with predictions from the time series of GRACE monthly gravity solutions.
NASA Astrophysics Data System (ADS)
Peruani, Fernando
2016-11-01
Bacteria, chemically driven rods, and motility assays are examples of active (i.e. self-propelled) Brownian rods (ABR). The physics of ABR, despite their ubiquity in experimental systems, remains poorly understood. Here, we review the large-scale properties of collections of ABR moving in a dissipative medium. We address the problem by presenting three different models of decreasing complexity, which we refer to as models I, II, and III. Comparing models I, II, and III, we disentangle the roles of activity and interactions. In particular, we learn that in two dimensions, when steric or volume-exclusion effects are ignored, large-scale nematic order seems to be possible, while steric interactions prevent the formation of orientational order at large scales. The macroscopic behavior of ABR results from the interplay between active stresses and local alignment. Depending on where we locate ourselves in parameter space, ABR exhibit a zoology of macroscopic patterns that ranges from polar and nematic bands to dynamic aggregates.
NASA Astrophysics Data System (ADS)
Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua
2015-12-01
Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky wave ground backscatter radar are both capable of measuring the large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly demonstrates that both methods output accurate information on ionospheric electron density and thereby provide reliable approaches to ionospheric sounding. Our study can improve current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large-scale distribution of galaxies have been calculated with the method given by Wen et al. [1]. In our analysis, samples are taken from the CfA redshift survey [2] in the northern and southern galactic hemispheres, respectively, and results from the two regions are compared with each other. There are significant differences between the distributions in these two regions. However, our analyses do show some common features of the distributions in the two regions: all subsamples show distinctly multi-level fractal character. Combining this with results from analyses of IRAS galaxy samples and of pencil-beam redshift surveys [3,4], we suggest that multi-level fractal structure is most likely a general and important character of the large-scale distribution of galaxies. The possible implications of this character are discussed.
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance and are suitable for deployment in very large-scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
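For intuition about the mechanics, here is a minimal random-hyperplane (cosine) LSH sketch in plain Python. The paper evaluates LSH variants on Hadoop; this single-machine toy only illustrates the band/table idea, and the vectors and parameters are invented.

```python
import random

def simhash_tables(dim, n_tables, bits_per_table, seed=0):
    """Random-hyperplane LSH: each table hashes a vector to a tuple of sign bits."""
    rng = random.Random(seed)
    return [[[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits_per_table)]
            for _ in range(n_tables)]

def hash_vector(planes, v):
    """One sign bit per hyperplane: 1 if the vector lies on its positive side."""
    return tuple(1 if sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0 else 0
                 for p in planes)

def build_index(tables, items):
    """One hash bucket dictionary per table, mapping hash tuple -> item names."""
    index = [{} for _ in tables]
    for name, v in items.items():
        for t, planes in enumerate(tables):
            index[t].setdefault(hash_vector(planes, v), set()).add(name)
    return index

def query(tables, index, v):
    """Candidate set: union of items colliding with v in at least one table."""
    out = set()
    for t, planes in enumerate(tables):
        out |= index[t].get(hash_vector(planes, v), set())
    return out

tables = simhash_tables(dim=4, n_tables=8, bits_per_table=4)
items = {"a": [1, 0, 0, 0], "a2": [0.9, 0.1, 0, 0], "b": [-1, 0, 0, 0]}
index = build_index(tables, items)
cands = query(tables, index, [1, 0.05, 0, 0])
```

More tables raise recall (more chances to collide) at the cost of more space, which is the recall/space trade-off the abstract refers to.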
The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey
NASA Astrophysics Data System (ADS)
Squires, Gordon K.; Lubin, L. M.; Gal, R. R.
2007-05-01
We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies, which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to successfully identify the Cl 1604 supercluster at z = 0.9, a large-scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large-scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large-scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.
Experimental investigation of large-scale vortices in a freely spreading gravity current
NASA Astrophysics Data System (ADS)
Yuan, Yeping; Horner-Devine, Alexander R.
2017-10-01
A series of laboratory experiments are presented to compare the dynamics of constant-source buoyant gravity currents propagating into laterally confined (channelized) and unconfined (spreading) environments. The plan-form structure of the spreading current and the vertical density and velocity structures on the interface are quantified using the optical thickness method and a combined particle image velocimetry and planar laser-induced fluorescence method, respectively. With lateral boundaries, the buoyant current thickness is approximately constant and Kelvin-Helmholtz instabilities are generated within the shear layer. The buoyant current structure is significantly different in the spreading case. As the current spreads laterally, nonlinear large-scale vortex structures are observed at the interface, which maintain a coherent shape as they propagate away from the source. These structures are continuously generated near the river mouth, have amplitudes close to the buoyant layer thickness, and propagate offshore at speeds approximately equal to the internal wave speed. The observed depth and propagation speed of the instabilities match well with the fastest growing mode predicted by linear stability analysis, but with a shorter wavelength. The spreading flows have much higher vorticity, which is aggregated within the large-scale structures. Secondary instabilities are generated on the leading edge of the braids between the large-scale vortex structures and ultimately break and mix on the lee side of the structures. Analysis of the vortex dynamics shows that lateral stretching intensifies the vorticity in the spreading currents, contributing to higher vorticity within the large-scale structures in the buoyant plume. The large-scale instabilities and vortex structures observed in the present study provide new insights into the origin of internal frontal structures frequently observed in coastal river plumes.
Traveling Weather Disturbances in Mars Southern Extratropics: Sway of the Great Impact Basins
NASA Technical Reports Server (NTRS)
Hollingsworth, Jeffery L.
2016-01-01
As on Earth, between late autumn and early spring the middle and high latitudes of Mars' atmosphere support strong mean thermal contrasts between the equator and poles (i.e., "baroclinicity"). Data collected during the Viking era and observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports vigorous, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Within a rapidly rotating, differentially heated, shallow atmosphere such as on Earth and Mars, such large-scale, extratropical weather disturbances are critical components of the global circulation. These wave-like disturbances act as agents in the transport of heat and momentum, and moreover generalized tracer quantities (e.g., atmospheric dust, water vapor and water-ice clouds), between low and high latitudes of the planet. The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a high-resolution Mars global climate model (Mars GCM). This global circulation model imposes interactively lifted (and radiatively active) dust based on a threshold value of the instantaneous surface stress. Compared to observations, the model exhibits a reasonable "dust cycle" (i.e., globally averaged, a more dusty atmosphere during southern spring and summer occurs). In contrast to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense synoptically. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather disturbances are examined.
Simulations that adopt Mars' full topography, compared to simulations that utilize synthetic topographies emulating the essential large-scale features of the southern middle latitudes, indicate that Mars' transient barotropic/baroclinic eddies are significantly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). In addition, the occurrence of a southern storm zone in late winter and early spring is keyed particularly to the western hemisphere via orographic influences arising from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate fundamental differences among these simulations, and these are described.
Traveling Weather Disturbances in Mars' Southern Extratropics: Sway of the Great Impact Basins
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.
2016-04-01
Mahjouri, Najmeh; Ardestani, Mojtaba
2011-01-01
In this paper, cooperative and non-cooperative methodologies are developed for a large-scale water allocation problem in Southern Iran. The water shares of the water users and their net benefits are determined using optimization models with economic objectives, subject to the physical and environmental constraints of the system. The results of the two methodologies are compared on the basis of the total economic benefit obtained, and the role of cooperation in utilizing a shared water resource is demonstrated. In both cases, the water quality in the rivers satisfies the standards. Comparing the results of the two approaches shows the importance of acting cooperatively to achieve maximum revenue in utilizing a surface water resource while the river water quantity and quality issues are addressed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marrinan, Thomas; Leigh, Jason; Renambot, Luc
Mixed presence collaboration involves remote collaboration between multiple collocated groups. This paper presents the design and results of a user study that focused on mixed presence collaboration using large-scale tiled display walls. The research was conducted in order to compare data synchronization schemes for multi-user visualization applications. Our study compared three techniques for sharing data between display spaces with varying constraints and affordances. The results provide empirical evidence that using data sharing techniques with continuous synchronization between the sites leads to improved collaboration for a search and analysis task between remotely located groups. We have also identified aspects of synchronized sessions that result in increased remote collaborator awareness and parallel task coordination. It is believed that this research will lead to better utilization of large-scale tiled display walls for distributed group work.
Comparison of concentric needle versus hooked-wire electrodes in the canine larynx.
Jaffe, D M; Solomon, N P; Robinson, R A; Hoffman, H T; Luschei, E S
1998-05-01
The use of a specific electrode type in laryngeal electromyography has not been standardized. Laryngeal electromyography is usually performed with hooked-wire electrodes or concentric needle electrodes. Hooked-wire electrodes have the advantage of allowing laryngeal movement with ease and comfort, whereas the concentric needle electrodes have benefits from a technical aspect and may be advanced, withdrawn, or redirected during attempts to appropriately place the electrode. This study examines whether hooked-wire electrodes permit more stable recordings than standard concentric needle electrodes at rest and after large-scale movements of the larynx and surrounding structures. A histologic comparison of tissue injury resulting from placement and removal of the two electrode types is also made by evaluation of the vocal folds. Electrodes were percutaneously placed into the thyroarytenoid muscles of 10 adult canines. Amplitude of electromyographic activity was measured and compared during vagal stimulation before and after large-scale laryngeal movements. Signal consistency over time was examined. Animals were killed and vocal fold injury was graded and compared histologically. Waveform morphology did not consistently differ between electrode types. The variability of electromyographic amplitude was greater for the hooked-wire electrode (p < 0.05), whereas the mean amplitude measures before and after large-scale laryngeal movements did not differ (p > 0.05). Inflammatory responses and hematoma formation were also similar. Waveform morphology of electromyographic signals registered from both electrode types show similar complex action potentials. There is no difference between the hooked-wire electrode and the concentric needle electrode in terms of electrode stability or vocal fold injury in the thyroarytenoid muscle after large-scale laryngeal movements.
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar wind parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
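A schematic reading of the local/global fluctuation construction can be sketched as below. The window lengths, normalization, and toy series are assumptions (simplified stand-ins for the three-hour, monthly, and yearly windows on one-minute data), not the paper's exact definitions.

```python
from statistics import pstdev, fmean
from math import hypot

def trailing(values, i, window):
    """Trailing window ending at index i (shorter near the start of the series)."""
    return values[max(0, i - window + 1): i + 1]

def seesaw_norms(series, w_std, w_local, w_global):
    """For each sample: sigma over a short window, normalized by a medium-window mean
    (local coordinate) and by a long-window mean (global coordinate); the seesaw norm
    is the Euclidean norm of the (local, global) pair."""
    norms = []
    for i in range(len(series)):
        sigma = pstdev(trailing(series, i, w_std))
        local_f = sigma / fmean(trailing(series, i, w_local))
        global_f = sigma / fmean(trailing(series, i, w_global))
        norms.append(hypot(local_f, global_f))
    return norms

# Toy solar-wind speed series: quiet wind near 400 km/s, with a jump
# (e.g. a large-scale transient arrival) at index 60.
series = [400.0] * 60 + [700.0] * 40
norms = seesaw_norms(series, w_std=5, w_local=20, w_global=80)
```

The norm stays at zero in the quiet interval and spikes at the jump, which is the "high seesaw norms flag large-scale structures" behavior described above.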
Differentiating unipolar and bipolar depression by alterations in large-scale brain networks.
Goya-Maldonado, Roberto; Brodmann, Katja; Keil, Maria; Trost, Sarah; Dechent, Peter; Gruber, Oliver
2016-02-01
Misdiagnosing bipolar depression can lead to very deleterious consequences of mistreatment. Although depressive symptoms may be similarly expressed in unipolar and bipolar disorder, changes in specific brain networks could be very distinct, being therefore informative markers for the differential diagnosis. We aimed to characterize specific alterations in candidate large-scale networks (frontoparietal, cingulo-opercular, and default mode) in symptomatic unipolar and bipolar patients using resting state fMRI, a cognitively low demanding paradigm ideal to investigate patients. Networks were selected after independent component analysis and compared across 40 acutely depressed patients (20 unipolar, 20 bipolar) and 20 controls well-matched for age, gender, and education levels, and alterations were correlated to clinical parameters. Despite comparable symptoms, patient groups were robustly differentiated by large-scale network alterations. Differences were driven in bipolar patients by increased functional connectivity in the frontoparietal network, a central executive and externally-oriented network. Conversely, unipolar patients presented increased functional connectivity in the default mode network, an introspective and self-referential network, as much as reduced connectivity of the cingulo-opercular network to default mode regions, a network involved in detecting the need to switch between internally and externally oriented demands. These findings were mostly unaffected by current medication, comorbidity, and structural changes. Moreover, network alterations in unipolar patients were significantly correlated to the number of depressive episodes. Unipolar and bipolar groups displaying similar symptomatology could be clearly distinguished by characteristic changes in large-scale networks, encouraging further investigation of network fingerprints for clinical use. Hum Brain Mapp 37:808-818, 2016. © 2015 Wiley Periodicals, Inc.
Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties
NASA Astrophysics Data System (ADS)
Li, Yongzhe; Vorobyov, Sergiy A.
2018-03-01
In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted-correlation properties, which are highly desired in radar and communication systems. The waveform design is based on minimization of the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. As the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue turns out to be the development of fast large-scale optimization techniques. A further difficulty is that the corresponding optimization problems are non-convex, while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by means of the majorization-minimization technique, which is one of the basic techniques for addressing large-scale and/or non-convex optimization problems. In designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form which allows the quartic-quadratic transformation to be applied. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are proved to have lower or comparable computational burden (analyzed theoretically) and faster convergence (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms demonstrate better correlation properties than their counterparts.
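To make the ISL objective concrete, the snippet below evaluates it directly from the aperiodic autocorrelation of a unimodular sequence, comparing a classical P4 polyphase code against a constant-phase baseline. This only illustrates the quantity being minimized; the P4 code and length are textbook examples, not the waveforms designed by the paper's algorithms.

```python
import cmath

def acorr(s, k):
    """Aperiodic autocorrelation at lag k: r_k = sum_n s[n] * conj(s[n+k])."""
    return sum(s[n] * s[n + k].conjugate() for n in range(len(s) - k))

def isl(s):
    """Integrated sidelobe level: total energy in the nonzero correlation lags
    (lags k and -k contribute equally, hence the factor 2)."""
    return 2 * sum(abs(acorr(s, k)) ** 2 for k in range(1, len(s)))

N = 64
# P4 polyphase code: unit-modulus samples of a quadratic (chirp-like) phase.
p4 = [cmath.exp(1j * cmath.pi * n * (n - N) / N) for n in range(N)]
# Constant-phase baseline: also unimodular, but with very poor sidelobes.
flat = [1.0 + 0j] * N
```

Both sequences are unimodular with the same mainlobe energy (r_0 = N), yet the chirp-like phase pushes the sidelobe energy orders of magnitude lower, which is exactly what ISL minimization formalizes.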
2013-01-01
The Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial is a large-scale research effort conducted by the National Cancer Institute. PLCO offers an example of coordinated research by both the extramural and intramural communities of the National Institutes of Health. The purpose of this article is to describe the PLCO research resource and how it is managed and to assess the productivity and the costs associated with this resource. Such an in-depth analysis of a single large-scale project can shed light on questions such as how large-scale projects should be managed, what metrics should be used to assess productivity, and how costs can be compared with productivity metrics. A comprehensive publication analysis identified 335 primary research publications resulting from research using PLCO data and biospecimens from 2000 to 2012. By the end of 2012, a total of 9679 citations (excluding self-citations) had resulted from this body of research publications, with an average of 29.7 citations per article, and an h index of 45, which is comparable with other large-scale studies, such as the Nurses’ Health Study. In terms of impact on public health, PLCO trial results have been used by the US Preventive Services Task Force in making recommendations concerning prostate and ovarian cancer screening. The overall cost of PLCO was $454 million over 20 years, adjusted to 2011 dollars, with approximately $37 million for the collection, processing, and storage of biospecimens, including blood samples, buccal cells, and pathology tissues. PMID:24115361
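The h index reported above is a simple rank statistic. A minimal sketch of its computation, with made-up citation counts for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank          # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with at least 4 citations each)
```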
Allometry indicates giant eyes of giant squid are not exceptional.
Schmitz, Lars; Motani, Ryosuke; Oufiero, Christopher E; Martin, Christopher H; McGee, Matthew D; Gamarra, Ashlee R; Lee, Johanna J; Wainwright, Peter C
2013-02-18
The eyes of giant and colossal squid are among the largest eyes in the history of life. It was recently proposed that sperm whale predation is the main driver of eye size evolution in giant squid, on the basis of an optical model that suggested optimal performance in detecting large luminous visual targets such as whales in the deep sea. However, it is poorly understood how the eye size of giant and colossal squid compares to that of other aquatic organisms when scaling effects are considered. We performed a large-scale comparative study that included 87 squid species and 237 species of acanthomorph fish. While squid have larger eyes than most acanthomorphs, a comparison of relative eye size among squid suggests that giant and colossal squid do not have unusually large eyes. After revising constants used in a previous model we found that large eyes perform equally well in detecting point targets and large luminous targets in the deep sea. The eyes of giant and colossal squid do not appear exceptionally large when allometric effects are considered. It is probable that the giant eyes of giant squid result from a phylogenetically conserved developmental pattern manifested in very large animals. Whatever the cause of large eyes, they appear to have several advantages for vision in the reduced light of the deep mesopelagic zone.
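The allometric argument above, that squid eyes are large because squid are large, comes down to residuals from a log-log regression. A minimal sketch on synthetic data (not the authors' phylogenetically corrected analysis; the power law, units, and scatter are assumptions):

```python
import numpy as np

def allometric_fit(body_mass, eye_diameter):
    """OLS fit of log10(eye) = a + b*log10(mass); a positive residual marks
    an eye that is large *for the animal's size*, not merely large."""
    X, Y = np.log10(body_mass), np.log10(eye_diameter)
    b, a = np.polyfit(X, Y, 1)          # slope b, intercept a
    residuals = Y - (a + b * X)
    return a, b, residuals

# Synthetic data obeying a one-third power law with mild scatter
rng = np.random.default_rng(0)
mass = 10 ** rng.uniform(0, 8, 200)                     # toy mass range
eye = 2.0 * mass ** (1 / 3) * 10 ** rng.normal(0, 0.02, 200)
a, b, res = allometric_fit(mass, eye)
print(round(b, 2))  # → 0.33: big animals have big eyes from scaling alone
```

On such a fit, a giant squid lying on the regression line has an unexceptional eye despite its absolute size, which is the paper's conclusion.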
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC. Commission on Behavioral and Social Sciences and Education.
Since its inception in 1988, the Board on International Comparative Studies in Education (BICSE) has monitored U.S. participation in those cross-national comparative studies in education that are funded by its sponsors, the National Science Foundation and the National Center for Education Statistics. This set of international study descriptions…
Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio
2017-10-24
High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner that allows the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for high-throughput screening applications.
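The four constraints named above (GC content, homopolymer length, pairwise Hamming distance, blacklisted subsequences) can be encoded directly. A sketch of the naïve rejection-sampling baseline that such frameworks speed up; the specific thresholds and the EcoRI-site blacklist are illustrative assumptions, not the paper's parameters:

```python
import itertools
import random

def max_homopolymer(seq):
    """Length of the longest run of one repeated base."""
    return max(len(list(run)) for _, run in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def build_library(n, length=12, gc=(0.4, 0.6), max_homo=2,
                  min_dist=3, blacklist=("GAATTC",), seed=1):
    """Rejection sampling against all four constraints. The pairwise
    Hamming scan makes each acceptance O(len(lib)) -- the bottleneck
    that large-scale generation frameworks are designed to avoid."""
    rng = random.Random(seed)
    lib = []
    while len(lib) < n:
        cand = "".join(rng.choice("ACGT") for _ in range(length))
        frac = sum(b in "GC" for b in cand) / length
        if not gc[0] <= frac <= gc[1]:
            continue
        if max_homopolymer(cand) > max_homo:
            continue
        if any(bad in cand for bad in blacklist):
            continue
        if any(hamming(cand, kept) < min_dist for kept in lib):
            continue
        lib.append(cand)
    return lib

lib = build_library(50)
print(len(lib))  # → 50
```

At a million or a billion barcodes, the quadratic distance check dominates, which is why the reported framework's speedup matters.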
Large-scale correlations in gas traced by Mg II absorbers around low-mass galaxies
NASA Astrophysics Data System (ADS)
Kauffmann, Guinevere
2018-03-01
The physical origin of the large-scale conformity in the colours and specific star formation rates of isolated low-mass central galaxies and their neighbours on scales in excess of 1 Mpc is still under debate. One possible scenario is that gas is heated over large scales by feedback from active galactic nuclei (AGNs), leading to coherent modulation of cooling and star formation between well-separated galaxies. In this Letter, the metal line absorption catalogue of Zhu & Ménard is used to probe gas out to large projected radii around a sample of a million galaxies with stellar masses ~10^10 M⊙ and photometric redshifts in the range 0.4 < z < 0.8 selected from Sloan Digital Sky Survey imaging data. This galaxy sample covers an effective volume of 2.2 Gpc^3. A statistically significant excess of Mg II absorbers is present around the red low-mass galaxies compared to their blue counterparts out to projected radii of 10 Mpc. In addition, the equivalent width distribution function of Mg II absorbers around low-mass galaxies is shown to be strongly affected by the presence of a nearby (Rp < 2 Mpc) radio-loud AGN out to projected radii of 5 Mpc.
Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances
Parker, V. Thomas
2015-01-01
Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances should also play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by asking whether the outcomes of a mutualism depend on disturbance. In this study, a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and require fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a depth sufficient to survive the killing heat pulse that a fire can drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process into a plant fire-adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.
1998-01-01
Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces the large-scale motion of the magnetospheric boundary very well. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary during periods of northward IMF with strong Bx and By components. The second comparison combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Wu, Di; Lau, K.- M.; Tao, Wei-Kuo
2016-01-01
The effects of large-scale forcing and land-atmosphere interactions on precipitation are investigated with NASA-Unified WRF (NU-WRF) simulations during fast transitions of ENSO phases from spring to early summer of 2010 and 2011. The model is found to capture major precipitation episodes in the 3-month simulations without resorting to nudging. However, the mean intensity of the simulated precipitation is underestimated by 46% and 57% compared with the observations in dry and wet regions in the southwestern and south-central United States, respectively. Sensitivity studies show that large-scale atmospheric forcing plays a major role in producing regional precipitation. A methodology to account for moisture contributions to individual precipitation events, as well as total precipitation, is presented under the same moisture budget framework. The analysis shows that the relative contributions of local evaporation and large-scale moisture convergence depend on the dry/wet regions and are a function of temporal and spatial scales. While the ratio of local to large-scale moisture contributions varies with domain size and weather system, evaporation provides a major moisture source in the dry region and during light rain events, which leads to greater sensitivity to soil moisture in the dry region and during light rain events. The feedback of land surface processes to large-scale forcing is well simulated, as indicated by changes in atmospheric circulation and moisture convergence. Overall, the results reveal an asymmetrical response of precipitation events to soil moisture, with higher sensitivity under dry than wet conditions. Drier soil moisture tends to further suppress existing below-normal precipitation conditions via a positive soil moisture-land surface flux feedback that could worsen drought conditions in the southwestern United States.
Malucelli, Emil; Procopio, Alessandra; Fratini, Michela; Gianoncelli, Alessandra; Notargiacomo, Andrea; Merolle, Lucia; Sargenti, Azzurra; Castiglioni, Sara; Cappadone, Concettina; Farruggia, Giovanna; Lombardo, Marco; Lagomarsino, Stefano; Maier, Jeanette A; Iotti, Stefano
2018-01-01
The quantification of elemental concentrations in cells is usually performed by analytical assays on large populations, which miss rare but important atypical cells. This article compares elemental quantification in single cells and in cell populations for three different cell types, using a new approach for single-cell elemental analysis performed at sub-micrometer scale that combines X-ray fluorescence microscopy and atomic force microscopy. Attention is focused on the light element Mg, exploiting the opportunity to compare single-cell quantification with cell-population analysis carried out with a highly Mg-selective fluorescent chemosensor. The results show that single-cell analysis reveals the same Mg differences found in large populations of the different cell strains studied. However, in one of the cell strains, single-cell analysis reveals two cells with an exceptionally high intracellular Mg content compared with the other cells of the same strain. Single-cell analysis allows mapping Mg and other light elements in whole cells at sub-micrometer scale. A detailed intensity correlation analysis of the two cells with the highest Mg content reveals that Mg subcellular localization correlates with oxygen in a different fashion with respect to the other sister cells of the same strain. Graphical abstract: single cell or large population analysis, that is the question!
Spectral analysis of the Forel-Ule Ocean colour comparator scale
NASA Astrophysics Data System (ADS)
Wernand, M. R.; van der Woerd, H. J.
2010-04-01
François Alphonse Forel (1890) and Willi Ule (1892) composed a colour comparator scale, with tints varying from indigo-blue to coca-cola brown, to quantify the colour of natural waters such as seas, lakes and rivers. For each measurement, the observer compares the colour of the water above a submersed white disc (Secchi disc) with the hand-held scale of pre-defined colours. The scale can be reproduced well from a simple recipe for twenty-one coloured chemical solutions and, because of the ease of its use, the Forel-Ule (FU) scale has been applied globally and intensively by oceanographers and limnologists since the year 1890. Indeed, the archived FU data belong to the oldest oceanographic data sets and contain information on the changes in geobiophysical properties of natural waters during the last century. In this article we describe the optical properties of the FU scale and its ability to cover the colours of natural waters as observed by the human eye. The recipe of the scale and its reproduction are described. The spectral transmission of the tubes, with the corresponding chromaticity coordinates, is presented. The FU scale, in all its simplicity, is found to be an adequate ocean colour comparator scale. The scale is well characterized, is stable, and observations are reproducible. This supports the idea that the large historic database of FU measurements is coherent and well calibrated. Moreover, the scale can be coupled to contemporary multi-spectral observations with hand-held and satellite-based spectrometers.
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
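The LPT trajectories that COLA uses for the large-scale dynamics reduce, at first order, to the Zel'dovich approximation. As an illustration of that ingredient only (a 1D toy, not the COLA time-stepper itself), the displacement field can be obtained from a density contrast in Fourier space:

```python
import numpy as np

def zeldovich_1d(delta, boxsize):
    """First-order LPT in 1D: displacement psi with delta = -d(psi)/dq,
    solved spectrally as psi_k = i * delta_k / k."""
    N = len(delta)
    k = 2 * np.pi * np.fft.fftfreq(N, d=boxsize / N)   # angular wavenumbers
    delta_k = np.fft.fft(delta)
    psi_k = np.zeros_like(delta_k)
    nonzero = k != 0
    psi_k[nonzero] = 1j * delta_k[nonzero] / k[nonzero]
    return np.real(np.fft.ifft(psi_k))

# Single-mode check: delta = cos(k0 q) implies psi = -sin(k0 q) / k0
N, L = 256, 100.0
q = np.linspace(0, L, N, endpoint=False)
k0 = 2 * np.pi * 4 / L
psi = zeldovich_1d(np.cos(k0 * q), L)
print(np.allclose(psi, -np.sin(k0 * q) / k0))  # → True
```

In COLA the N-body solver then only integrates the residual motion about these analytically known trajectories, which is why so few timesteps suffice at large scales.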
Effects of multiple-scale driving on turbulence statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Hyunju; Cho, Jungyeon, E-mail: hyunju527@gmail.com, E-mail: jcho@cnu.ac.kr
2014-01-01
Turbulence is ubiquitous in astrophysical fluids such as the interstellar medium and the intracluster medium. In turbulence studies, it is customary to assume that fluid is driven on a single scale. However, in astrophysical fluids, there can be many different driving mechanisms that act on different scales. If there are multiple energy-injection scales, the process of energy cascade and turbulence dynamo will be different compared with the case of a single energy-injection scale. In this work, we perform three-dimensional incompressible/compressible magnetohydrodynamic turbulence simulations. We drive turbulence in Fourier space in two wavenumber ranges, 2 ≤ k ≤ √12 (large scale) and 15 ≲ k ≲ 26 (small scale). We inject different amounts of energy in each range by changing the amplitude of the forcing in that range. We present the time evolution of the kinetic and magnetic energy densities and discuss the turbulence dynamo in the presence of energy injections at two scales. We show how kinetic, magnetic, and density spectra are affected by the two-scale energy injections and we discuss the observational implications. In the case ε_L < ε_S, where ε_L and ε_S are the energy-injection rates at the large and small scales, respectively, our results show that even a tiny amount of large-scale energy injection can significantly change the properties of turbulence. On the other hand, when ε_L ≳ ε_S, the small-scale driving does not influence the turbulence statistics much unless ε_L ∼ ε_S.
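Driving in two wavenumber shells, as described above, amounts to restricting the forcing's Fourier support to those shells. A 2D scalar toy of the construction (the paper's runs are 3D MHD with vector forcing; the amplitudes below are illustrative stand-ins for the two injection rates):

```python
import numpy as np

def shell_field(N, kmin, kmax, amp, rng):
    """Random real field whose spectral power lies only in kmin <= |k| <= kmax."""
    k1d = np.fft.fftfreq(N) * N                 # integer wavenumbers
    kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
    kmag = np.hypot(kx, ky)
    shell = (kmag >= kmin) & (kmag <= kmax)
    # Random phases inside the shell; taking the real part of the inverse
    # transform Hermitian-symmetrizes the modes without leaving the shell.
    fk = np.where(shell, amp * np.exp(2j * np.pi * rng.random((N, N))), 0.0)
    f = np.real(np.fft.ifft2(fk))
    return f - f.mean()

rng = np.random.default_rng(0)
N = 128
force = (shell_field(N, 2, 12**0.5, amp=1.0, rng=rng)    # large-scale shell
         + shell_field(N, 15, 26, amp=0.1, rng=rng))     # small-scale shell, weaker
```

Verifying the spectrum of `force` confirms that all power sits in the two shells 2 ≤ |k| ≤ √12 and 15 ≤ |k| ≤ 26, matching the driving setup of the simulations.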
NASA Astrophysics Data System (ADS)
Martin, J.; Laughlin, M. M.; Olson, E.
2017-12-01
Canopy processes can be viewed at many scales and through many lenses. Fundamentally, we may wish to start by treating each canopy as a unique surface, an ecosystem unto itself. By doing so, we can make some important observations that greatly influence our ability to scale canopies to landscape, regional and global scales. This work summarizes an ongoing endeavor to quantify various canopy-level processes on individual old and large Eastern white pine trees (Pinus strobus). Our work shows that these canopies contain complex structures that vary with height and as the tree ages. This phenomenon complicates the allometric scaling of these large trees using standard methods, but detailed measurements from within the canopy provided a method to constrain scaling equations. We also quantified how these canopies change and respond to canopy disturbance, and documented disproportionate variation of growth compared to the lower stem as the trees develop. Additionally, the complex shape and surface area allow these canopies to act like ecosystems themselves, despite being relatively young and commonplace compared to the more celebrated canopies of the tropics and the US Pacific Northwest. The white pines of these relatively simple, near-boreal forests appear to house various species including many lichens. The lichen species can cover significant portions of the canopy surface area (which may be only 25 to 50 years old) and are a sizable source of potential nitrogen additions to the soils below, as well as a modulator of hydrologic cycles by holding significant amounts of precipitation. Lastly, the combined complex surface area and focused verticality offer important habitat to numerous animal species, some of which are quite surprising.
Channel optimization of high-intensity laser beams in millimeter-scale plasmas
Ceurvorst, L.; Savin, A.; Ratan, N.; ...
2018-04-20
Channeling experiments were performed at the OMEGA EP facility using relativistic-intensity (> 10^18 W/cm^2) kilojoule laser pulses through large density-scale-length (~ 390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity, as well as the plasma's temperature, on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
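Wavefront reconstruction of the kind D-SABRE distributes is, at heart, a least-squares integration of sensor slope measurements. A 1D zonal baseline (a global solve, not D-SABRE's partitioned multivariate splines; the piston-pinning row is a standard device, included here as an assumption):

```python
import numpy as np

def reconstruct_phase(slopes, dx):
    """Least-squares fit of phase samples phi[0..m] to measured slopes
    (phi[i+1] - phi[i]) / dx, with a zero-mean row removing the
    unobservable piston mode."""
    m = len(slopes)
    A = np.zeros((m + 1, m + 1))
    for i in range(m):
        A[i, i], A[i, i + 1] = -1.0 / dx, 1.0 / dx   # finite-difference rows
    A[m, :] = 1.0 / (m + 1)                           # pin mean(phi) = 0
    b = np.append(slopes, 0.0)
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi

x = np.linspace(0, 2 * np.pi, 65)
true_phi = np.sin(x)
true_phi -= true_phi.mean()
slopes = np.diff(true_phi) / (x[1] - x[0])   # ideal noise-free sensor
rec = reconstruct_phase(slopes, x[1] - x[0])
print(np.allclose(rec, true_phi))  # → True
```

Distributed methods such as D-SABRE and CuRe-D split this global solve across partitions, because the dense least-squares problem scales poorly to extremely large sensor arrays.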
Musical expertise is related to altered functional connectivity during audiovisual integration
Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo
2015-01-01
The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual cues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305
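The connectivity measure named above, mutual information between estimated source activities, has a simple plug-in estimator. A sketch using histogram binning (the bin count and the synthetic signals are assumptions; the study's estimator details may differ):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate in bits from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                                  # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)         # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] *
                        np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
s = rng.normal(size=5000)                  # stand-in for one source time course
coupled = s + 0.3 * rng.normal(size=5000)  # source sharing structure with s
independent = rng.normal(size=5000)        # unrelated source
print(mutual_information(s, coupled) > mutual_information(s, independent))  # → True
```

Repeating this over all source pairs yields the weighted connectivity graph that the study then compared statistically between the two groups.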
Comparing two ground-cover measurement methodologies for semiarid rangelands
USDA-ARS's Scientific Manuscript database
The limited field-of-view (FOV) associated with single-resolution very-large scale aerial (VLSA) imagery requires users to balance FOV and resolution needs. This balance varies by the specific questions being asked of the data. Here, we tested a FOV-resolution question by comparing ground-cover meas...
Developing a Drosophila Model of Schwannomatosis
2012-08-01
the entire Drosophila melanogaster genome and compared...et al., 2009; Hanahan and Weinberg, 2011). Over the last decade, the fruit fly Drosophila melanogaster has become an important model system for cancer...studies. Reduced redundancy in the Drosophila genome compared with that of humans, coupled with the ability to conduct large-scale genetic screens
Flip This Classroom: A Comparative Study
ERIC Educational Resources Information Center
Unruh, Tiffany; Peters, Michelle L.; Willis, Jana
2016-01-01
The purpose of this research was to compare the beliefs and attitudes of teachers using the flipped versus the traditional class model. Survey and interview data were collected from a matched sample of in-service teachers representing both models from a large suburban southeastern Texas school district. The Attitude Towards Technology Scale, the…
Mutoh, Hiroki; Mishina, Yukiko; Gallero-Salas, Yasir; Knöpfel, Thomas
2015-01-01
Traditional small molecule voltage sensitive dye indicators have been a powerful tool for monitoring large scale dynamics of neuronal activities but have several limitations including the lack of cell class specific targeting, invasiveness and difficulties in conducting longitudinal studies. Recent advances in the development of genetically-encoded voltage indicators have successfully overcome these limitations. Genetically-encoded voltage indicators (GEVIs) provide sufficient sensitivity to map cortical representations of sensory information and spontaneous network activities across cortical areas and different brain states. In this study, we directly compared the performance of a prototypic GEVI, VSFP2.3, with that of a widely used small molecule voltage sensitive dye (VSD), RH1691, in terms of their ability to resolve mesoscopic scale cortical population responses. We used three synchronized CCD cameras to simultaneously record the dual emission ratiometric fluorescence signal from VSFP2.3 and RH1691 fluorescence. The results show that VSFP2.3 offers more stable and less invasive recording conditions, while the signal-to-noise level and the response dynamics to sensory inputs are comparable to RH1691 recordings. PMID:25964738
A Sensible Approach to Wireless Networking.
ERIC Educational Resources Information Center
Ahmed, S. Faruq
2002-01-01
Discusses radio frequency (R.F.) wireless technology, including industry standards, range (coverage) and throughput (data rate), wireless compared to wired networks, and considerations before embarking on a large-scale wireless project. (EV)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
De Vilmorin, Philippe; Slocum, Ashley; Jaber, Tareq; Schaefer, Oliver; Ruppach, Horst; Genest, Paul
2015-01-01
This article describes a four virus panel validation of EMD Millipore's (Bedford, MA) small virus-retentive filter, Viresolve® Pro, using TrueSpike(TM) viruses for a Biogen Idec process intermediate. The study was performed at Charles River Labs in King of Prussia, PA. Greater than 900 L/m^2 filter throughput was achieved with the approximately 8 g/L monoclonal antibody feed. No viruses were detected in any filtrate samples. All virus log reduction values were between ≥3.66 and ≥5.60. The use of TrueSpike(TM) at Charles River Labs allowed Biogen Idec to achieve a more representative scaled-down model and potentially reduce the cost of its virus filtration step and the overall cost of goods. The body of data presented here is an example of the benefits of following the guidance from the PDA Technical Report 47, The Preparation of Virus Spikes Used for Viral Clearance Studies. The safety of biopharmaceuticals is assured through the use of multiple steps in the purification process that are capable of virus clearance, including filtration with virus-retentive filters. The amount of virus present at the downstream stages in the process is expected to be and is typically low. The viral clearance capability of the filtration step is assessed in a validation study. The study utilizes a small version of the larger manufacturing size filter, and a large, known amount of virus is added to the feed prior to filtration. Viral assay before and after filtration allows the virus log reduction value to be quantified. The representativeness of the small-scale model is supported by comparing large-scale filter performance to small-scale filter performance. The large-scale and small-scale filtration runs are performed using the same operating conditions. If the filter performance at both scales is comparable, it supports the applicability of the virus log reduction value obtained with the small-scale filter to the large-scale manufacturing process.
However, the virus preparation used to spike the feed material often contains impurities that contribute adversely to virus filter performance in the small-scale model. The added impurities from the virus spike, which are not present at manufacturing scale, compromise the scale-down model and put into question the direct applicability of the virus clearance results. Another consequence of decreased filter performance due to virus spike impurities is the unnecessary over-sizing of the manufacturing system to match the low filter capacity observed in the scale-down model. This article describes how improvements in mammalian virus spike purity ensure the validity of the log reduction value obtained with the scale-down model and support economically optimized filter usage. © PDA, Inc. 2015.
Large-scale deformation associated with ridge subduction
Geist, E.L.; Fisher, M.A.; Scholl, D. W.
1993-01-01
Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) are required to explain the observed deformation. -from Authors
Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force
NASA Astrophysics Data System (ADS)
Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.
2016-01-01
The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.
Iris indexing based on local intensity order pattern
NASA Astrophysics Data System (ADS)
Emerich, Simina; Malutan, Raul; Crisan, Septimiu; Lefkovits, Laszlo
2017-03-01
In recent years, iris biometric systems have increased in popularity and have proven capable of handling large-scale databases. The main advantages of these systems are accuracy and reliability. Proper classification of iris patterns is expected to reduce the matching time in huge databases. This paper presents an iris indexing technique based on the Local Intensity Order Pattern. The performance of the present approach is evaluated on the UPOL database and compared with other recent systems designed for iris indexing. The results illustrate the potential of the proposed method for large-scale iris identification.
Environmental aspects of large-scale wind-power systems in the UK
NASA Astrophysics Data System (ADS)
Robson, A.
1984-11-01
Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the UK are discussed. Noise, television interference, hazards to bird life, and visual effects are considered. Areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first UK machines to be introduced in a safe and environmentally acceptable manner. Research to establish siting criteria more clearly and to significantly increase the potential wind-energy resource is mentioned. Studies of the comparative risk of energy systems are shown to be overly pessimistic for UK wind turbines.
NASA Astrophysics Data System (ADS)
Watts, Duncan; CLASS Collaboration
2018-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.
NASA Astrophysics Data System (ADS)
Lam, Simon K. H.
2017-09-01
A promising direction for improving the sensitivity of a SQUID is to increase its junction's normal resistance, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step toward developing a highly sensitive single-layer SQUID, submicron-scale YBCO grain-boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and transport properties were measured from 4.3 to 77 K. The FIB-induced deposition layer proved effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value for the same type of junction at micron scale. The submicron junction Rn value is in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high-field-sensitivity SQUID applications at medium-low temperatures, e.g. at 40-60 K.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
A spatial picture of the synthetic large-scale motion from dynamic roughness
NASA Astrophysics Data System (ADS)
Huynh, David; McKeon, Beverley
2017-11-01
Jacobi and McKeon (2011) set up a dynamic roughness apparatus to excite a synthetic, travelling wave-like disturbance in a wind-tunnel boundary layer study. In the present work, this dynamic roughness has been adapted for a flat-plate turbulent boundary layer experiment in a water tunnel. A key advantage of operating in water as opposed to air is the longer flow timescales. This makes accessible higher non-dimensional actuation frequencies and correspondingly shorter synthetic length scales, and is thus more amenable to particle image velocimetry. As a result, this experiment provides a novel spatial picture of the synthetic mode, the coupled small scales, and their streamwise development. It is demonstrated that varying the roughness actuation frequency allows for significant tuning of the streamwise wavelength of the synthetic mode, with a range of 3δ-13δ being achieved. Employing a phase-locked decomposition, spatial snapshots are constructed of the synthetic large scale and used to analyze its streamwise behavior. Direct spatial filtering is used to separate the synthetic large scale and the related small scales, and the results are compared to those obtained by temporal filtering that invokes Taylor's hypothesis. The support of AFOSR (Grant # FA9550-16-1-0361) is gratefully acknowledged.
Multi-scale virtual view on the precessing jet SS433
NASA Astrophysics Data System (ADS)
Monceau-Baroux, R.; Porth, O.; Meliani, Z.; Keppens, R.
2014-07-01
Observations of SS433 show how an X-ray binary gives rise to a corkscrew-patterned relativistic jet. The X-ray binary SS433 is well known over a large range of scales, for which we realize 3D simulations and radio mappings. For our study we use relativistic hydrodynamics in special relativity with a relativistic effective polytropic index. We use parameters extracted from observations to impose the thermodynamic conditions of the ISM and jet, and we follow the kinetic and thermal energy content of the various ISM and jet regions. Our simulation simultaneously follows the evolution of the population of electrons accelerated by the jet. The evolving spectrum of these electrons, together with an assumed equipartition between dynamic and magnetic pressure, provides input for estimating the radio emission from our simulation. Ray tracing along a given line of sight then produces radio maps of our data. Single snapshots are realised for comparison with VLA observations as in Roberts et al. 2008, and a radio movie is realised for comparison with the 41-day movie made with the VLBA instrument. Finally, a larger-scale simulation explores the discrepancy in opening angle, between 10 and 20 degrees, between large-scale observations of SS433 and close-in observations.
Bardoczi, Laszlo; Rhodes, Terry L.; Navarro, Alejandro Banon; ...
2017-03-03
We present the first localized measurements of long- and intermediate-wavelength turbulent density fluctuations ($\tilde{n}$) and long-wavelength turbulent electron temperature fluctuations ($\tilde{T}_e$) modified by m/n = 2/1 Neoclassical Tearing Mode (NTM) islands (m and n are the poloidal and toroidal mode numbers, respectively). These long and intermediate wavelengths correspond to the expected Ion Temperature Gradient and Trapped Electron Mode scales, respectively. Two regimes have been observed when tracking $\tilde{n}$ during NTM evolution: (1) small islands are characterized by a steep $T_e$ radial profile and turbulence levels comparable to those of the background; (2) large islands have a flat $T_e$ profile and reduced turbulence level at the O-point. Radially outside the large island, the $T_e$ profile is steeper and the turbulence level increased compared to the no or small island case. Reduced turbulence at the O-point compared to the X-point leads to a 15% modulation of $\tilde{n}^2$ across the island that is nearly in phase with the $T_e$ modulation. Qualitative comparisons to the GENE non-linear gyrokinetic code are promising, with GENE replicating the observed scaling of turbulence modification with island size. Furthermore, these results are significant as they allow the validation of gyrokinetic simulations modeling the interaction of these multi-scale phenomena.
Sweeten, Sara E.; Ford, W. Mark
2016-01-01
Large-scale coal mining practices, particularly surface coal extraction and associated valley fills, as well as residential wastewater discharge, are of ecological concern for aquatic systems in central Appalachia. Identifying and quantifying alterations to ecosystems along a gradient of spatial scales is a necessary first step to aid in mitigation of negative consequences to aquatic biota. In central Appalachian headwater streams, apart from fish, salamanders are the most abundant vertebrate predators, providing a significant intermediate trophic role linking aquatic and terrestrial food webs. Stream salamander species are considered sensitive to aquatic stressors and environmental alterations, as past research has shown linkages among microhabitat parameters, large-scale land use such as urbanization and logging, and salamander abundances. However, there is little information examining these relationships between environmental conditions and salamander occupancy in the coalfields of central Appalachia. In the summer of 2013, 70 sites (sampled two to three times each) in the southwest Virginia coalfields were visited to collect salamanders and quantify stream and riparian microhabitat parameters. Using an information-theoretic framework, effects of microhabitat and large-scale land use on stream salamander occupancy were compared. The findings indicate that Desmognathus spp. occupancy rates are more correlated with microhabitat parameters such as canopy cover than with large-scale land uses. However, Eurycea spp. occupancy rates had a strong association with large-scale land uses, particularly recent mining and forest cover within the watershed. These findings suggest that protection of riparian habitats is an important consideration for maintaining aquatic systems in central Appalachia. If this is not possible, restoration of riparian areas should follow guidelines using quick-growing tree species that are native to Appalachian riparian areas.
These types of trees would rapidly establish a canopy cover, stabilize the soil, and impede invasive plant species which would, in turn, provide high-quality refuges for stream salamanders.
Tarescavage, Anthony M; Corey, David M; Gupton, Herbert M; Ben-Porath, Yossef S
2015-01-01
Minnesota Multiphasic Personality Inventory-2-Restructured Form scores for 145 male police officer candidates were compared with supervisor ratings of field performance and problem behaviors during their initial probationary period. Results indicated that the officers produced meaningfully lower and less variable substantive scale scores compared to the general population. After applying a statistical correction for range restriction, substantive scale scores from all domains assessed by the inventory demonstrated moderate to large correlations with performance criteria. The practical significance of these results was assessed with relative risk ratio analyses that examined the utility of specific cutoffs on scales demonstrating associations with performance criteria.
Wang, Yupeng; Ficklin, Stephen P; Wang, Xiyin; Feltus, F Alex; Paterson, Andrew H
2016-01-01
Different modes of gene duplication including whole-genome duplication (WGD), and tandem, proximal and dispersed duplications are widespread in angiosperm genomes. Small-scale, stochastic gene relocations and transposed gene duplications are widely accepted to be the primary mechanisms for the creation of dispersed duplicates. However, here we show that most surviving ancient dispersed duplicates in core eudicots originated from large-scale gene relocations within a narrow window of time following a genome triplication (γ) event that occurred in the stem lineage of core eudicots. We name these surviving ancient dispersed duplicates as relocated γ duplicates. In Arabidopsis thaliana, relocated γ, WGD and single-gene duplicates have distinct features with regard to gene functions, essentiality, and protein interactions. Relative to γ duplicates, relocated γ duplicates have higher non-synonymous substitution rates, but comparable levels of expression and regulation divergence. Thus, relocated γ duplicates should be distinguished from WGD and single-gene duplicates for evolutionary investigations. Our results suggest large-scale gene relocations following the γ event were associated with the diversification of core eudicots.
Driving terrestrial ecosystem models from space
NASA Technical Reports Server (NTRS)
Waring, R. H.
1993-01-01
Regional air pollution, land-use conversion, and projected climate change all affect ecosystem processes at large scales. Changes in vegetation cover and growth dynamics can impact the functioning of ecosystems, carbon fluxes, and climate. As a result, there is a need to assess and monitor vegetation structure and function comprehensively at regional to global scales. To provide a test of our present understanding of how ecosystems operate at large scales we can compare model predictions of CO2, O2, and methane exchange with the atmosphere against regional measurements of interannual variation in the atmospheric concentration of these gases. Recent advances in remote sensing of the Earth's surface are beginning to provide methods for estimating important ecosystem variables at large scales. Ecologists attempting to generalize across landscapes have made extensive use of models and remote sensing technology. The success of such ventures is dependent on merging insights and expertise from two distinct fields. Ecologists must provide the understanding of how well models emulate important biological variables and their interactions; experts in remote sensing must provide the biophysical interpretation of complex optical reflectance and radar backscatter data.
Scale growth of structures in the turbulent boundary layer with a rod-roughened wall
NASA Astrophysics Data System (ADS)
Lee, Jin; Kim, Jung Hoon; Lee, Jae Hwa
2016-01-01
Direct numerical simulation of a turbulent boundary layer over a rod-roughened wall is performed with a long streamwise domain to examine the streamwise-scale growth mechanism of streamwise velocity fluctuating structures in the presence of two-dimensional (2-D) surface roughness. An instantaneous analysis shows that there is a slightly larger population of long structures with a small helix angle (spanwise inclinations relative to streamwise) and a large spanwise width over the rough-wall compared to that over a smooth-wall. Further inspection of time-evolving instantaneous fields clearly exhibits that adjacent long structures combine to form a longer structure through a spanwise merging process over the rough-wall; moreover, spanwise merging for streamwise scale growth is expected to occur frequently over the rough-wall due to the large spanwise scales generated by the 2-D roughness. Finally, we examine the influence of a large width and a small helix angle of the structures over the rough-wall with regard to spatial two-point correlation. The results show that these factors can increase the streamwise coherence of the structures in a statistical sense.
Li, Zhijin; Vogelmann, Andrew M.; Feng, Sha; ...
2015-01-20
We produce fine-resolution, three-dimensional fields of meteorological and other variables for the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) Southern Great Plains site. The Community Gridpoint Statistical Interpolation system is implemented in a multiscale data assimilation (MS-DA) framework that is used within the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. The MS-DA algorithm uses existing reanalysis products and constrains fine-scale atmospheric properties by assimilating high-resolution observations. A set of experiments show that the data assimilation analysis realistically reproduces the intensity, structure, and time evolution of clouds and precipitation associated with a mesoscale convective system. Evaluations also show that the large-scale forcing derived from the fine-resolution analysis has an overall accuracy comparable to the existing ARM operational product. For enhanced applications, the fine-resolution fields are used to characterize the contribution of subgrid variability to the large-scale forcing and to derive hydrometeor forcing, which are presented in companion papers.
Using real options to evaluate the flexibility in the deployment of SMR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Locatelli, G.; Mancini, M.; Ruiz, F.
2012-07-01
According to recent estimates, the financial gap between Large Reactors (LRs) and Small Medium Reactors (SMRs) seems not as large as the economy of scale would suggest, so SMRs are set to be important players in the worldwide nuclear renaissance. POLIMI's INCAS model has been developed to compare investment in SMRs with respect to LRs. It provides the value of the IRR (Internal Rate of Return), NPV (Net Present Value), LUEC (Levelized Unitary Electricity Cost), up-front investment, etc. The aim of this research is to integrate the current INCAS model, based on discounted cash flows, with real option theory to measure the flexibility of the investor to expand, defer, or abandon a nuclear project under future uncertainties. The work compares the investment in a large nuclear power plant with a series of smaller, modular nuclear power plants on the same site. It thus compares the benefits of the large power plant, coming from the economy of scale, with the benefits of the modular project (flexibility), concluding that managerial flexibility can be measured and used by an investor to face investment risks. (authors)
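The flexibility valuation described above can be made concrete with a toy discounted-cash-flow comparison plus a one-step binomial value for the option to defer the next module. All figures below (costs, revenues, probabilities, discount rate) are hypothetical illustrations, not INCAS inputs or the study's results:

```python
# Toy comparison: one large plant vs. staged modular units where the
# second unit carries a deferral option. All numbers are invented.

def npv(cashflows, rate):
    """Net present value of cashflows[t] received at end of year t+1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

def deferral_option_value(v_up, v_down, p_up, rate):
    """One-step binomial: next year, invest only if the state is favourable."""
    return (p_up * max(v_up, 0.0) + (1 - p_up) * max(v_down, 0.0)) / (1 + rate)

rate = 0.08
# Large plant: big up-front cost, one revenue stream (economy of scale).
large = -4000 + npv([450] * 20, rate)
# Modular: first unit now; the second unit's NPV is uncertain next year,
# but the bad outcome can be avoided by not investing (the option value).
module1 = -1200 + npv([150] * 20, rate)
flexible = module1 + deferral_option_value(v_up=600, v_down=-200, p_up=0.5, rate=rate)
print(round(large, 1), round(module1, 1), round(flexible, 1))
```

The point of the sketch is the asymmetry in `deferral_option_value`: the down branch is truncated at zero because a flexible investor simply declines to build, which is the managerial flexibility the abstract says can be measured.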
Effects of high sound speed confiners on ANFO detonations
NASA Astrophysics Data System (ADS)
Kiyanda, Charles; Jackson, Scott; Short, Mark
2011-06-01
The interaction between high explosive (HE) detonations and high sound speed confiners, where the confiner sound speed exceeds the HE's detonation speed, has not been thoroughly studied. The subsonic nature of the flow in the confiner allows stress waves to travel ahead of the main detonation front and influence the upstream HE state. The interaction between the detonation wave and the confiner is also no longer a local interaction, so that the confiner thickness now plays a significant role in the detonation dynamics. We report here on larger scale experiments in which a mixture of ammonium nitrate and fuel oil (ANFO) is detonated in aluminium confiners with varying charge diameter and confiner thickness. The results of these large-scale experiments are compared with previous large-scale ANFO experiments in cardboard, as well as smaller-scale aluminium confined ANFO experiments, to characterize the effects of confiner thickness.
Gannotti, Mary E; Law, Mary; Bailes, Amy F; O'Neil, Margaret E; Williams, Uzma; DiRezze, Briano
2016-01-01
A step toward advancing research about rehabilitation service associated with positive outcomes for children with cerebral palsy is consensus about a conceptual framework and measures. A Delphi process was used to establish consensus among clinicians and researchers in North America. Directors of large pediatric rehabilitation centers, clinicians from large hospitals, and researchers with expertise in outcomes participated (N = 18). Andersen's model of health care utilization framed outcomes: consumer satisfaction, activity, participation, quality of life, and pain. Measures agreed upon included Participation and Environment Measure for Children and Youth, Measure of Processes of Care, PEDI-CAT, KIDSCREEN-10, PROMIS Pediatric Pain Interference Scale, Visual Analog Scale for pain intensity, PROMIS Global Health Short Form, Family Environment Scale, Family Support Scale, and functional classification levels for gross motor, manual ability, and communication. Universal forms for documenting service use are needed. Findings inform clinicians and researchers concerned with outcome assessment.
Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar
2017-09-01
Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of whole brain into constituent nodes and compare network metrics between SCI and control subjects, fifteen subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
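The pipeline the abstract describes (ROI correlation matrix, binarization at incremental cost thresholds, then efficiency metrics) can be sketched compactly. This is a generic stand-in on random data, not the authors' 116-ROI preprocessing; "cost" is the fraction of strongest connections kept, and global efficiency is the mean inverse shortest-path length:

```python
# Threshold a correlation matrix at a given cost, then compute global
# efficiency over the resulting binary graph. Pure-Python sketch on
# random stand-in data, not the study's fMRI pipeline.
import random
from collections import deque

def shortest_path_lengths(adj, src):
    """BFS hop distances from src over an adjacency list."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i,j) over ordered node pairs; unreachable pairs count 0."""
    n = len(adj)
    total = 0.0
    for i in range(n):
        dist = shortest_path_lengths(adj, i)
        total += sum(1.0 / d for j, d in dist.items() if j != i)
    return total / (n * (n - 1))

def threshold_network(corr, cost):
    """Keep the strongest `cost` fraction of off-diagonal |correlations|."""
    n = len(corr)
    pairs = sorted(((abs(corr[i][j]), i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    adj = [set() for _ in range(n)]
    for _, i, j in pairs[: int(round(cost * len(pairs)))]:
        adj[i].add(j)
        adj[j].add(i)
    return adj

random.seed(0)
n = 30                                  # small stand-in for 116 ROIs
corr = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
for cost in (0.1, 0.2, 0.3):            # incremental cost thresholds
    print(cost, round(global_efficiency(threshold_network(corr, cost)), 3))
```

Local efficiency, the metric the study found decreased in SCI subjects, is the same computation averaged over each node's neighborhood subgraph rather than the whole graph.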
Haack-Sørensen, Mandana; Juhl, Morten; Follin, Bjarke; Harary Søndergaard, Rebekka; Kirchhoff, Maria; Kastrup, Jens; Ekblond, Annette
2018-04-17
In vitro expanded adipose-derived stromal cells (ASCs) are a useful resource for tissue regeneration. Translation of small-scale autologous cell production into a large-scale, allogeneic production process for clinical applications necessitates well-chosen raw materials and a cell culture platform. We compare the use of clinical-grade human platelet lysate (hPL) and fetal bovine serum (FBS) as growth supplements for ASC expansion in the automated, closed hollow-fibre Quantum cell expansion system (bioreactor). Stromal vascular fractions were isolated from human subcutaneous abdominal fat. On average, 95 × 10⁶ cells were suspended in 10% FBS or 5% hPL medium and loaded into a bioreactor coated with cryoprecipitate. ASCs (P0) were harvested, and 30 × 10⁶ ASCs were reloaded for continued expansion (P1). Feeding rate and time of harvest were guided by metabolic monitoring. Viability, sterility, purity, differentiation capacity, and genomic stability of ASCs at P1 were determined. Cultivation of SVF in hPL medium for on average nine days yielded 546 × 10⁶ ASCs, compared to 111 × 10⁶ ASCs after 17 days in FBS medium. ASC yields at P1 averaged 605 × 10⁶ ASCs (PD [population doublings]: 4.65) after six days in hPL medium, compared to 119 × 10⁶ ASCs (PD: 2.45) in FBS medium after 21 days. ASCs fulfilled ISCT criteria and demonstrated genomic stability and sterility. The use of hPL as a growth supplement for ASC expansion in the Quantum cell expansion system provides a more efficient expansion process than the use of FBS, while maintaining cell quality appropriate for clinical use. The described process is an obvious choice for manufacturing of large-scale allogeneic ASC products.
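The population-doubling (PD) figures quoted above follow the standard formula PD = log2(N_harvest / N_seed). A quick check with the abstract's cell counts (in units of 10⁶) gives values slightly below the reported PDs of 4.65 and 2.45, presumably because the effective adherent seed count is lower than the 30 × 10⁶ cells loaded; the formula itself is the standard one:

```python
# Population doublings from seed and harvest counts: PD = log2(harvest/seed).
from math import log2

def population_doublings(n_seed, n_harvest):
    return log2(n_harvest / n_seed)

print(round(population_doublings(30, 605), 2))   # hPL P1 -> 4.33
print(round(population_doublings(30, 119), 2))   # FBS P1 -> 1.99
```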
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploring full usage of the waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second-derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested in five different land cover types (deciduous trees, coniferous trees, shrub, grass, and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
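A loose sketch of the two ingredients the abstract names, multi-scale second-derivative filtering followed by k-means clustering of candidate peaks, is given below. The synthetic waveform, filter choices, and two-cluster setup are simplified stand-ins for illustration, not the published FICA algorithm:

```python
# Candidate waveform peaks from second-derivative minima at several
# smoothing scales, then 1-D k-means to group canopy vs. ground returns.
# Synthetic data and parameters are illustrative only.
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def smooth(signal, width):
    """Moving-average smoothing with half-width `width` (in bins)."""
    n = len(signal)
    return [sum(signal[max(0, i - width): i + width + 1])
            / (min(n, i + width + 1) - max(0, i - width)) for i in range(n)]

def second_derivative_minima(signal):
    """Indices where the discrete second derivative has a strict local
    minimum below zero, i.e. candidate peaks of the waveform."""
    d2 = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
          for i in range(1, len(signal) - 1)]
    return [i + 1 for i in range(1, len(d2) - 1)
            if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < 0]

def kmeans_1d(points, k=2, iters=20):
    """Plain 1-D k-means with evenly spaced initial centers."""
    lo, span = min(points), max(points) - min(points)
    centers = [lo + span * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: abs(p - centers[c]))].append(p)
        centers = [sum(g) / len(g) if g else centers[c]
                   for c, g in enumerate(groups)]
    return centers

# Synthetic large-footprint return: canopy peak at bin 40, ground at bin 120.
wave = [2.0 * gaussian(i, 40, 8) + 1.2 * gaussian(i, 120, 5) for i in range(160)]
candidates = []
for width in (2, 4, 8):                          # multi-scale filtering
    candidates += second_derivative_minima(smooth(wave, width))
canopy, ground = sorted(kmeans_1d(candidates, k=2))
print(round(canopy), round(ground))
```

The multi-scale pass is what gives robustness: a spurious bump at one smoothing scale rarely survives all scales, which is the false-ground-peak suppression the abstract attributes to FICA.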
Interactions between Antarctic sea ice and large-scale atmospheric modes in CMIP5 models
NASA Astrophysics Data System (ADS)
Schroeter, Serena; Hobbs, Will; Bindoff, Nathaniel L.
2017-03-01
The response of Antarctic sea ice to large-scale patterns of atmospheric variability varies according to sea ice sector and season. In this study, interannual atmosphere-sea ice interactions were explored using observations and reanalysis data, and compared with simulated interactions by models in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Simulated relationships between atmospheric variability and sea ice variability generally reproduced the observed relationships, though more closely during the season of sea ice advance than the season of sea ice retreat. Atmospheric influence on sea ice is known to be strongest during advance, and it appears that models are able to capture the dominance of the atmosphere during advance. Simulations of ocean-atmosphere-sea ice interactions during retreat, however, require further investigation. A large proportion of model ensemble members overestimated the relative importance of the Southern Annular Mode (SAM) compared with other modes of high southern latitude climate, while the influence of tropical forcing was underestimated. This result emerged particularly strongly during the season of sea ice retreat. The zonal patterns of the SAM in many models and its exaggerated influence on sea ice overwhelm the comparatively underestimated meridional influence, suggesting that simulated sea ice variability would become more zonally symmetric as a result. Across the seasons of sea ice advance and retreat, three of the five sectors did not reveal a strong relationship with a pattern of large-scale atmospheric variability in one or both seasons, indicating that sea ice in these sectors may be influenced more strongly by atmospheric variability unexplained by the major atmospheric modes, or by heat exchange in the ocean.
Analytic prediction of baryonic effects from the EFT of large scale structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu
2015-05-01
The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as ∝ k² P(k), with P(k) being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as k ≅ 0.3 h Mpc⁻¹ or k ≅ 0.6 h Mpc⁻¹, depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.
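The leading baryonic contribution described above, ∝ k² P(k), can be applied numerically once the counterterm amplitude is fixed against simulations. A hedged sketch; the amplitude value and function name are purely illustrative:

```python
def baryon_corrected_power(k_values, p_lin, alpha=1.5):
    """Leading EFT-style baryonic correction to the linear power spectrum:
    P(k) = P_lin(k) + alpha * k**2 * P_lin(k).
    alpha (units of (Mpc/h)**2) is a free counterterm to be calibrated
    against simulations; the default here is illustrative only."""
    return [p * (1.0 + alpha * k * k) for k, p in zip(k_values, p_lin)]
```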
Scale and modeling issues in water resources planning
Lins, H.F.; Wolock, D.M.; McCabe, G.J.
1997-01-01
Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
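One of the simplest disaggregation techniques alluded to above is spatial interpolation of the coarse model grid onto the planner's finer resolution. A minimal bilinear sketch (real statistical downscaling adds bias correction and local covariates; the function name is a hypothetical illustration):

```python
def bilinear_downscale(coarse, factor):
    """Disaggregate a coarse 2-D grid (list of rows, at least 2x2) to a
    finer grid by bilinear interpolation between the coarse cell values."""
    nr, nc = len(coarse), len(coarse[0])
    out_r, out_c = (nr - 1) * factor + 1, (nc - 1) * factor + 1
    fine = []
    for i in range(out_r):
        y = i / factor
        y0 = min(int(y), nr - 2)      # lower coarse row
        fy = y - y0                   # fractional offset within the cell
        row = []
        for j in range(out_c):
            x = j / factor
            x0 = min(int(x), nc - 2)  # left coarse column
            fx = x - x0
            v = (coarse[y0][x0] * (1 - fy) * (1 - fx)
                 + coarse[y0][x0 + 1] * (1 - fy) * fx
                 + coarse[y0 + 1][x0] * fy * (1 - fx)
                 + coarse[y0 + 1][x0 + 1] * fy * fx)
            row.append(v)
        fine.append(row)
    return fine
```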
Polarization of the prompt gamma-ray emission from the gamma-ray burst of 6 December 2002.
Coburn, Wayne; Boggs, Steven E
2003-05-22
Observations of the afterglows of gamma-ray bursts (GRBs) have revealed that they lie at cosmological distances, and so correspond to the release of an enormous amount of energy. The nature of the central engine that powers these events and the prompt gamma-ray emission mechanism itself remain enigmatic because, once a relativistic fireball is created, the physics of the afterglow is insensitive to the nature of the progenitor. Here we report the discovery of linear polarization in the prompt gamma-ray emission from GRB021206, which indicates that it is synchrotron emission from relativistic electrons in a strong magnetic field. The polarization is at the theoretical maximum, which requires a uniform, large-scale magnetic field over the gamma-ray emission region. A large-scale magnetic field constrains possible progenitors to those either having or producing organized fields. We suggest that the large magnetic energy densities in the progenitor environment (comparable to the kinetic energy densities of the fireball), combined with the large-scale structure of the field, indicate that magnetic fields drive the GRB explosion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yang; Chen, Yuming, E-mail: yumingchen@fudan.edu.cn; Engineering Research Center of Advanced Lighting Technology, Ministry of Education, 220 Handan Road, Shanghai 00433
2016-03-14
Large scale graphene oxide (GO) is directly synthesized on copper (Cu) foil by a plasma enhanced chemical vapor deposition method at 500 °C and even lower temperatures. Compared to the modified Hummers' method, the GO sheet obtained here is large, and it is scalable according to the Cu foil size. The oxygen-containing groups in the GO are introduced through the residual gas of methane (99.9% purity). To prevent the Cu surface from the bombardment of the ions in the plasma, we use a low intensity discharge. Our experiment reveals that growth temperature has an important influence on the carbon to oxygen ratio (C/O ratio) in the GO, and it also affects the amount of π-π* bonds between carbon atoms. Preliminary experiments on a 6 mm × 12 mm GO based humidity sensor prove that the synthesized GO responds well to humidity changes. Our GO synthesis method may provide another channel for obtaining large scale GO for gas sensing or other applications.
Energy Spectral Behaviors of Communication Networks of Open-Source Communities
Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun
2015-01-01
Large-scale online collaborative production activities in open-source communities must be accompanied by large-scale communication activities. The production activities of open-source communities, and especially their communication activities, have attracted increasing attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of 12 periods of the community's communication structure based on real data; it then discusses the basic concepts of quantum mappings of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks in the way quantum mechanics studies the structures of large molecules; finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, and concludes that there are multiple self-similarities and criticality in the communication structures of the community. In addition, this paper discusses the insights offered by, and the application conditions of, different quantum mappings in revealing the characteristics of the structures. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331
Velpuri, Naga M.; Senay, Gabriel B.; Singh, Ramesh K.; Bohms, Stefanie; Verdin, James P.
2013-01-01
Remote sensing datasets are increasingly being used to provide spatially explicit large scale evapotranspiration (ET) estimates. Extensive evaluation of such large scale estimates is necessary before they can be used in various applications. In this study, two monthly MODIS 1 km ET products, MODIS global ET (MOD16) and Operational Simplified Surface Energy Balance (SSEBop) ET, are validated over the conterminous United States at both point and basin scales. Point scale validation was performed using eddy covariance FLUXNET ET (FLET) data (2001–2007) aggregated by year, land cover, elevation and climate zone. Basin scale validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various hydrologic unit code (HUC) levels. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products showed overall comparable annual accuracies. For most land cover types, both ET products showed comparable results. However, SSEBop showed higher performance for Grassland and Forest classes; MOD16 showed improved performance in the Woody Savanna class. Accuracy of both the ET products was also found to be comparable over different climate zones. However, SSEBop data showed higher skill score across the climate zones covering the western United States. Validation results at different HUC levels over 2000–2011 using GFET as a reference indicate higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000–2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at different HUC levels. Our results indicate that both MODIS ET products effectively reproduced basin scale ET response (up to 25% uncertainty) compared to CONUS-wide point-based ET response (up to 50–60% uncertainty) illustrating the reliability of MODIS ET products for basin-scale ET estimation. 
Results from this research would guide the additional parameter refinement required for the MOD16 and SSEBop algorithms in order to further improve their accuracy and performance for agro-hydrologic applications.
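The point- and basin-scale validations above reduce to a few standard error statistics. A minimal sketch of RMSE, mean bias, and percent uncertainty of an ET estimate against a reference series (the function name and units are illustrative, not from the study):

```python
import math

def validation_stats(estimated, reference):
    """RMSE, mean bias, and percent error of a remote-sensing ET estimate
    against a reference (e.g. flux-tower or water-balance ET), both given
    as equal-length sequences in the same units (e.g. mm/month)."""
    n = len(reference)
    errors = [e - r for e, r in zip(estimated, reference)]
    rmse = math.sqrt(sum(d * d for d in errors) / n)
    bias = sum(errors) / n
    mean_ref = sum(reference) / n
    pct = 100.0 * rmse / mean_ref  # RMSE as a percent of the reference mean
    return rmse, bias, pct
```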
Characterization of laser-induced plasmas as a complement to high-explosive large-scale detonations
Kimblin, Clare; Trainham, Rusty; Capelle, Gene A.; ...
2017-09-12
Experimental investigations into the characteristics of laser-induced plasmas indicate that LIBS provides a relatively inexpensive and easily replicable laboratory technique to isolate and measure reactions germane to understanding aspects of high-explosive detonations under controlled conditions. Furthermore, we examine spectral signatures and derived physical parameters following laser ablation of aluminum, graphite and laser-sparked air as they relate to those observed following detonation of high explosives and as they relate to shocked air. Laser-induced breakdown spectroscopy (LIBS) reliably correlates reactions involving atomic Al and aluminum monoxide (AlO) with respect to both emission spectra and temperatures, as compared to small- and large-scale high-explosive detonations. Atomic Al and AlO resulting from laser ablation and from a cited small-scale study decay within ~10⁻⁵ s, roughly 100 times faster than the Al and AlO decay rates (~10⁻³ s) observed following the large-scale detonation of an Al-encased explosive. Temperatures and species produced in laser-sparked air are compared to those produced with laser-ablated graphite in air. With graphite present, CN is dominant relative to N₂⁺. Thus, in studies where the height of the ablating laser's focus was altered relative to the surface of the graphite substrate, CN concentration was found to decrease with laser focus below the graphite surface, indicating that laser intensity is a critical factor in the production of CN via reactive nitrogen.
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site response, and instrument response. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking techniques to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, I compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
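The core of the spectral-ratio EGF approach can be sketched with the omega-square source model, under which path and site terms cancel in the ratio and the larger event's corner frequency can be recovered by grid search. All names and values below are illustrative, not the study's code:

```python
def brune_ratio(f, moment_ratio, fc_large, fc_small):
    """Theoretical spectral ratio of two co-located earthquakes under the
    omega-square (Brune-type) source model; path/site terms cancel."""
    num = moment_ratio * (1.0 + (f / fc_small) ** 2)
    return num / (1.0 + (f / fc_large) ** 2)

def fit_corner_frequency(freqs, observed_ratio, moment_ratio, fc_small, grid):
    """Grid-search the large event's corner frequency by least squares
    against an observed spectral ratio."""
    def misfit(fc):
        return sum((brune_ratio(f, moment_ratio, fc, fc_small) - o) ** 2
                   for f, o in zip(freqs, observed_ratio))
    return min(grid, key=misfit)
```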
Negrete, Alejandro; Kotin, Robert M.
2007-01-01
The conventional methods for producing recombinant adeno-associated virus (rAAV) rely on transient transfection of adherent mammalian cells. To gain acceptance and achieve current good manufacturing practice (cGMP) compliance, a clinical grade rAAV production process should have the following qualities: simplicity, consistency, cost effectiveness, and scalability. Currently, the only viable method for producing rAAV at large scale, e.g. ≥10¹⁶ particles per production run, utilizes Baculovirus Expression Vectors (BEVs) and insect cell suspension cultures. The previously described rAAV production in 40 L culture using a stirred tank bioreactor requires special conditions for implementation and operation not available in all laboratories. Alternatives to producing rAAV in stirred-tank bioreactors are single-use, disposable bioreactors, e.g. Wave™. The disposable bags are purchased pre-sterilized, thereby eliminating the need for end-user sterilization and also avoiding cleaning steps between production runs, thus facilitating the production process. In this study, rAAV production in stirred tank and Wave™ bioreactors was compared. The working volumes were 10 L and 40 L for the stirred tank bioreactors and 5 L and 20 L for the Wave™ bioreactors. Comparable yields of rAAV, ~2×10¹³ particles per liter of cell culture, were obtained in all volumes and configurations. These results demonstrate that producing rAAV at large scale using BEVs is reproducible, scalable, and independent of bioreactor configuration. Keywords: adeno-associated vectors; large-scale production; stirred tank bioreactor; wave bioreactor; gene therapy. PMID:17606302
Udsen, Flemming Witt; Lilholt, Pernille Heyckendorff; Hejlesen, Ole; Ehlers, Lars Holger
2014-05-21
Several feasibility studies show promising results of telehealthcare on health outcomes and health-related quality of life for patients suffering from chronic obstructive pulmonary disease, and some of these studies show that telehealthcare may even lower healthcare costs. However, the only large-scale trial we have so far, the Whole System Demonstrator Project in England, has raised doubts about these results, since it concluded that telehealthcare as a supplement to usual care is not likely to be cost-effective compared with usual care alone. The present study is known as 'TeleCare North' in Denmark. It seeks to address these doubts by implementing a large-scale, pragmatic, cluster-randomized trial with nested economic evaluation. The purpose of the study is to assess the effectiveness and the cost-effectiveness of a telehealth solution for patients suffering from chronic obstructive pulmonary disease compared to usual practice. General practitioners will be responsible for recruiting eligible participants (1,200 participants are expected) for the trial in the geographical area of the North Denmark Region. Twenty-six municipality districts in the region define the randomization clusters. The primary outcomes are changes in health-related quality of life and the incremental cost-effectiveness ratio measured from baseline to follow-up at 12 months. Secondary outcomes are changes in mortality and physiological indicators (diastolic and systolic blood pressure, pulse, oxygen saturation, and weight). There has been a call for large-scale clinical trials with rigorous cost-effectiveness assessments in telehealthcare research. This study is meant to improve the international evidence base for the effectiveness and cost-effectiveness of telehealthcare for patients suffering from chronic obstructive pulmonary disease by implementing a large-scale pragmatic cluster-randomized clinical trial. ClinicalTrials.gov NCT01984840, November 14, 2013.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
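The DVP construction described above amounts to mining frequently co-occurring visual-word pairs across images. A minimal sketch; the paper's actual selection additionally weighs how descriptive each word or pair is for a given object/scene category, which is omitted here:

```python
from collections import Counter
from itertools import combinations

def descriptive_visual_phrases(images, min_count=2):
    """Collect frequently co-occurring visual-word pairs ('visual phrases').
    Each image is given as a set of visual-word ids; a pair counts once
    per image in which both words appear."""
    pair_counts = Counter()
    for words in images:
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1
    return {pair: c for pair, c in pair_counts.items() if c >= min_count}
```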
Planck data versus large scale structure: Methods to quantify discordance
NASA Astrophysics Data System (ADS)
Charnock, Tom; Battye, Richard A.; Moss, Adam
2017-06-01
Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allows for an update in the quantification of any tension between large and small scales. This paper is intended, primarily, as a discussion on the quantifications of discordance when comparing the parameter constraints of a model when given two different data sets. We consider Kullback-Leibler divergence, comparison of Bayesian evidences and other statistics which are sensitive to the mean, variance and shape of the distributions. However, as a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement and any tension only persists to a minor degree. In particular, we find the parameter constraints from the combination of LSS probes which are most discrepant with the Planck 2015+Pol+BAO parameter distributions can be quantified at a ~2.55σ tension using the method introduced in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015), 10.1103/PhysRevD.91.103508]. If instead we use the distributions constrained by the combination of LSS probes which are in greatest agreement with those from Planck 2015+Pol+BAO, this tension is only 0.76σ.
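One of the discordance statistics the paper considers, the Kullback-Leibler divergence, has a closed form when the two parameter posteriors are approximated as Gaussians. A one-dimensional sketch of that formula:

```python
import math

def kl_gaussian(mu_p, sig_p, mu_q, sig_q):
    """KL divergence D(p || q) between two 1-D Gaussian parameter
    posteriors p = N(mu_p, sig_p^2) and q = N(mu_q, sig_q^2):
    D = ln(sig_q/sig_p) + (sig_p^2 + (mu_p - mu_q)^2) / (2 sig_q^2) - 1/2."""
    return (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sig_q ** 2)
            - 0.5)
```

The divergence vanishes only when the two posteriors coincide, so a larger value signals stronger discordance between the data sets.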
Lerman, Caryn; Gu, Hong; Loughead, James; Ruparel, Kosha; Yang, Yihong; Stein, Elliot A
2014-05-01
Interactions of large-scale brain networks may underlie cognitive dysfunctions in psychiatric and addictive disorders. To test the hypothesis that the strength of coupling among 3 large-scale brain networks--salience, executive control, and default mode--will reflect the state of nicotine withdrawal (vs smoking satiety) and will predict abstinence-induced craving and cognitive deficits and to develop a resource allocation index (RAI) that reflects the combined strength of interactions among the 3 large-scale networks. A within-subject functional magnetic resonance imaging study in an academic medical center compared resting-state functional connectivity coherence strength after 24 hours of abstinence and after smoking satiety. We examined the relationship of abstinence-induced changes in the RAI with alterations in subjective, behavioral, and neural functions. We included 37 healthy smoking volunteers, aged 19 to 61 years, for analyses. Twenty-four hours of abstinence vs smoking satiety. Inter-network connectivity strength (primary) and the relationship with subjective, behavioral, and neural measures of nicotine withdrawal during abstinence vs smoking satiety states (secondary). The RAI was significantly lower in the abstinent compared with the smoking satiety states (left RAI, P = .002; right RAI, P = .04), suggesting weaker inhibition between the default mode and salience networks. Weaker inter-network connectivity (reduced RAI) predicted abstinence-induced cravings to smoke (r = -0.59; P = .007) and less suppression of default mode activity during performance of a subsequent working memory task (ventromedial prefrontal cortex, r = -0.66, P = .003; posterior cingulate cortex, r = -0.65, P = .001). Alterations in coupling of the salience and default mode networks and the inability to disengage from the default mode network may be critical in cognitive/affective alterations that underlie nicotine dependence.
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
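For reference alongside the solver comparison above, the Krylov family that PCG2 belongs to can be sketched as an unpreconditioned conjugate gradient on a sparse symmetric positive-definite system (MODFLOW's PCG2 additionally applies a modified incomplete Cholesky preconditioner, omitted here; names and the dict-of-rows sparse format are illustrative):

```python
import math

def matvec(rows, x):
    """Sparse matrix-vector product; rows[i] maps column index -> value."""
    return [sum(v * x[j] for j, v in row.items()) for row in rows]

def conjugate_gradient(rows, b, tol=1e-10, max_iter=200):
    """Unpreconditioned conjugate gradient for a sparse SPD system A x = b."""
    n = len(b)
    x = [0.0] * n
    r = list(b)            # residual b - A x, with x = 0
    p = list(r)            # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        ap = matvec(rows, p)
        alpha = rs / sum(pi * a for pi, a in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * a for ri, a in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```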
Contribution of aboveground plant respiration to carbon cycling in a Bornean tropical rainforest
NASA Astrophysics Data System (ADS)
Katayama, Ayumi; Tanaka, Kenzo; Ichie, Tomoaki; Kume, Tomonori; Matsumoto, Kazuho; Ohashi, Mizue; Kumagai, Tomo'omi
2014-05-01
Bornean tropical rainforests differ from Amazonian tropical rainforests in having larger aboveground biomass, caused by a higher stand density of large trees. Larger biomass may cause different carbon cycling and allocation patterns. However, compared to Amazonian forests, there are fewer studies on carbon allocation and its components in Bornean tropical rainforests, especially aboveground plant respiration. In this study, we measured woody tissue respiration and leaf respiration, and estimated both at the ecosystem scale in a Bornean tropical rainforest. We then examined carbon allocation using the soil respiration and aboveground net primary production data obtained from our previous studies. Woody tissue respiration rate was positively correlated with diameter at breast height (dbh) and stem growth rate. Using these relationships and biomass data, we estimated ecosystem-scale woody tissue respiration, though different scaling methods yielded different estimates (4.52 - 9.33 MgC ha-1 yr-1). Woody tissue respiration based on surface area (8.88 MgC ha-1 yr-1) was larger than that in Amazonian forests because of the large aboveground biomass (563.0 Mg ha-1). Leaf respiration rate was positively correlated with height. Using this relationship and leaf area density data at each 5-m height interval, ecosystem-scale leaf respiration was estimated (9.46 MgC ha-1 yr-1), which was similar to Amazonian values because of comparable LAI (5.8 m2 m-2). Gross primary production estimated from biometric measurements (44.81 MgC ha-1 yr-1) was much higher than in Amazonian forests, and more carbon was allocated to woody tissue respiration and total belowground carbon flux. Large trees with dbh > 60 cm accounted for about half of the aboveground biomass and aboveground biomass increment. Soil respiration was also related to the position of large trees, resulting in a high soil respiration rate at this study site.
Photosynthetic ability of the top canopy of large trees was high, and leaves of the large trees accounted for 30% of the total, which can lead to high GPP. These results suggest that large trees play a considerable role in carbon cycling and produce a distinctive carbon allocation pattern in the Bornean tropical rainforest.
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. 
However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. 
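The sparse-map idea underlying the linear scaling described above can be caricatured as threshold-screened index lists: each index keeps only the partners whose coupling survives a cutoff, so later loops touch O(1) partners per index instead of O(N). A toy sketch of the data structure, not the actual ORCA implementation:

```python
def build_sparse_map(couplings, threshold=1e-4):
    """A 'sparse map' in the DLPNO sense: for each index i, keep only the
    partner indices j whose coupling magnitude survives a screening
    threshold. `couplings` is a dict {(i, j): value}; entries below the
    threshold are dropped, making downstream loops effectively linear."""
    sparse = {}
    for (i, j), v in couplings.items():
        if abs(v) >= threshold:
            sparse.setdefault(i, []).append(j)
    return sparse
```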
The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
NASA Astrophysics Data System (ADS)
De, S.; Agarwal, N. K.; Hazra, Anupam; Chaudhari, Hemantkumar S.; Sahai, A. K.
2018-04-01
The interaction between cloud and large-scale circulation is a much less explored area in climate science. Unfolding the mechanism of coupling between these two parameters is imperative for improved simulation of the Indian summer monsoon (ISM) and for reducing imprecision in the climate sensitivity of global climate models. This work explores this mechanism with CFSv2 climate model experiments in which cloud has been modified by changing the critical relative humidity (CRH) profile of the model during the ISM. The study reveals that the variable CRH in CFSv2 improves the nonlinear interactions between high- and low-frequency oscillations in the wind field (revealed as internal dynamics of the monsoon) and realistically modulates the spatial distribution of interactions over the Indian landmass during contrasting monsoon seasons, compared to the existing CRH profile of CFSv2. The lower-tropospheric wind error energy in the variable-CRH simulation of CFSv2 appears to be a minimum, due to the reduced nonlinear convergence of error into the planetary-scale range from the long and synoptic scales (another facet of internal dynamics), compared with the other CRH experiments in normal and deficient monsoons. Hence, the interplay between cloud and large-scale circulation through CRH may be manifested as a change in the internal dynamics of the ISM, revealed in scale-interactive quasi-linear and nonlinear kinetic energy exchanges in the frequency as well as the wavenumber domain during the monsoon period, which eventually modify the internal variance of the CFSv2 model. Conversely, the reduced wind bias and proper modulation of the spatial distribution of scale interaction between the synoptic and low-frequency oscillations improve the eastward and northward extent of water vapour flux over the Indian landmass, which in turn feeds back into realistic simulation of cloud condensates, improving ISM rainfall in CFSv2.
The Use of Illustrations in Large-Scale Science Assessment: A Comparative Study
ERIC Educational Resources Information Center
Wang, Chao
2012-01-01
This dissertation addresses the complexity of test illustrations design across cultures. More specifically, it examines how the characteristics of illustrations used in science test items vary across content areas, assessment programs, and cultural origins. It compares a total of 416 Grade 8 illustrated items from the areas of earth science, life…
USDA-ARS?s Scientific Manuscript database
Quality and processing attributes of sweet sorghum (Sorghum bicolor L. Moench) biomass are critical to the development of a large-scale industry for the manufacture of bioproducts. Two commercial sweet sorghum hybrids 105 and 106, later and earlier maturing, respectively, were compared to inbred, l...
ERIC Educational Resources Information Center
Geller, Cornelia; Neumann, Knut; Boone, William J.; Fischer, Hans E.
2014-01-01
This manuscript details our efforts to assess and compare students' learning about electricity in three countries. As our world is increasingly driven by technological advancements, the education of future citizens in science becomes one important resource for economic productivity. Not surprisingly international large-scale assessments are viewed…
ERIC Educational Resources Information Center
Rutkowski, Leslie; Rutkowski, David
2018-01-01
Over time international large-scale assessments have grown in terms of number of studies, cycles, and participating countries, many of which are a heterogeneous mix of economies, languages, cultures, and geography. This heterogeneity has meaningful consequences for comparably measuring both achievement and non-achievement constructs, such as…
Regional climate model sensitivity to domain size
NASA Astrophysics Data System (ADS)
Leduc, Martin; Laprise, René
2009-05-01
Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
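The degradation step of the big-brother protocol — filtering the high-resolution reference to emulate coarse-resolution lateral boundary conditions — can be sketched with a simple spectral low-pass filter. This is an illustrative sketch only: the function name and the FFT cutoff convention are assumptions, not the study's exact filter.

```python
import numpy as np

def lowpass_2d(field, cutoff):
    """Remove small-scale features by zeroing Fourier modes whose
    normalised wavenumber magnitude exceeds `cutoff` (in cycles per
    grid spacing, 0 < cutoff <= 0.5), emulating coarse-resolution
    driving data (FBB) from a high-resolution field (BB)."""
    ky = np.fft.fftfreq(field.shape[0])[:, None]
    kx = np.fft.fftfreq(field.shape[1])[None, :]
    keep = np.hypot(kx, ky) <= cutoff  # True for retained large-scale modes
    return np.real(np.fft.ifft2(np.fft.fft2(field) * keep))

# A pure small-scale wave (6 cycles per 16 points = 0.375 cycles/sample)
# is removed by a 0.2 cutoff, while the mean (k = 0) survives.
x = np.arange(16)
field = 3.0 + np.cos(2 * np.pi * 6 * x / 16)[None, :] * np.ones((16, 1))
coarse = lowpass_2d(field, cutoff=0.2)
```

Driving a nested simulation with such filtered fields is what allows the little-brother runs to be compared against the big brother on equal footing.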
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship at the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better way to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle-number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
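The two routes to the Kirkwood-Buff integrals compared in the abstract can be sketched numerically. The function names and the consistency checks below are illustrative, not from the paper; this is a minimal sketch assuming a tabulated radial distribution function for the traditional route and sampled sub-volume particle counts for the fluctuation route.

```python
import numpy as np

def kb_integral_rdf(r, g):
    """Traditional route: running integral G = 4*pi * Int (g(r) - 1) r^2 dr,
    truncated at the largest sampled r (requires a large-system RDF)."""
    integrand = (g - 1.0) * r**2
    # trapezoidal rule over the sampled radii
    return 4.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

def kb_integral_fluctuations(n_i, n_j, volume, same_species=False):
    """Fluctuation route: G_ij = V * cov(N_i, N_j)/(<N_i><N_j>) - delta_ij * V/<N_j>,
    from particle counts in an open sub-volume embedded in a reservoir."""
    ni, nj = np.mean(n_i), np.mean(n_j)
    cov = np.mean(np.asarray(n_i) * np.asarray(n_j)) - ni * nj
    g_ij = volume * cov / (ni * nj)
    if same_species:
        g_ij -= volume / nj  # self-correlation (delta_ij) term
    return g_ij

# Consistency check: for an ideal gas, g(r) = 1 everywhere, so G must vanish.
r = np.linspace(0.01, 5.0, 500)
print(kb_integral_rdf(r, np.ones_like(r)))  # → 0.0
```

The fluctuation route is what makes the finite-size-scaling approach attractive: counting particles in sub-volumes needs a much smaller simulation cell than converging a long-range radial distribution function.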
Solving large scale structure in ten easy steps with COLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
NASA Astrophysics Data System (ADS)
Velten, Andreas
2017-05-01
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering, and of methods of imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.
NASA Astrophysics Data System (ADS)
Kim, Byung-Ho; Hyuck Kim, Yoon; Lee, Young Jin; Lee, Mi Jai; Kim, Jin-Ho; Hwang, Jonghee; Jeon, Dae-Woo
2018-01-01
We have developed a facile single-step synthesis of silver nanocomposite using a conventional spray dryer. We investigated the synthetic conditions by controlling the concentrations of the chemical reactants. Further, we confirmed the effect of the molecular weight of polyvinylpyrrolidones, and revealed that the molecular weight significantly affected the properties of the resultant silver nanocomposites. The long-term stability of the silver nanocomposites was tested, and little change was observed, even after storage for three months. Most of all, the simple commercial implementation, in combination with large-scale synthesis, possesses a variety of advantages, compared to conventional complicated and costly dry-process synthesis methods. Thus, our method presents opportunities for further investigation, for both lab-scale studies and large-scale industrial applications.
Impact of spatially correlated pore-scale heterogeneity on drying porous media
NASA Astrophysics Data System (ADS)
Borgman, Oshri; Fantinel, Paolo; Lühder, Wieland; Goehring, Lucas; Holtzman, Ran
2017-07-01
We study the effect of spatially-correlated heterogeneity on isothermal drying of porous media. We combine a minimal pore-scale model with microfluidic experiments with the same pore geometry. Our simulated drying behavior compares favorably with experiments, considering the large sensitivity of the emergent behavior to the uncertainty associated with even small manufacturing errors. We show that increasing the correlation length in particle sizes promotes preferential drying of clusters of large pores, prolonging liquid connectivity and surface wetness and thus higher drying rates for longer periods. Our findings improve our quantitative understanding of how pore-scale heterogeneity impacts drying, which plays a role in a wide range of processes ranging from fuel cells to curing of paints and cements to global budgets of energy, water and solutes in soils.
NASA Astrophysics Data System (ADS)
Menge, B. A.; Gouhier, T.; Chan, F.; Hacker, S.; Menge, D.; Nielsen, K. J.
2016-02-01
Ecology focuses increasingly on the issue of matching spatial and temporal scales responsible for ecosystem pattern and dynamics. Benthic coastal communities traditionally were studied at local scales using mostly short-term research, while environmental (oceanographic, climatic) drivers were investigated at large scales (e.g., regional to oceanic, mostly offshore) using combined snapshot and monitoring (time series) research. The comparative-experimental approach combines local-scale studies at multiple sites spanning large-scale environmental gradients in combination with monitoring of inner shelf oceanographic conditions including upwelling/downwelling wind forcing and their consequences (e.g., temperature), and inputs of subsidies (larvae, phytoplankton, detritus). Temporal scale varies depending on the questions, but can extend from years to decades. We discuss two examples of rocky intertidal ecosystem dynamics, one at a regional scale (California Current System, CCS) and one at an interhemispheric scale. In the upwelling-dominated CCS, 52% and 32% of the variance in local community structure (functional group abundances at 13 sites across 725 km) was explained by external factors (ecological subsidies, oceanographic conditions, geographic location), and species interactions, respectively. The interhemispheric study tested the intermittent upwelling hypothesis (IUH), which predicts that key ecological processes will vary unimodally along a persistent downwelling to persistent upwelling gradient. Using 14-22 sites, unimodal relationships between ecological subsidies (phytoplankton, prey recruitment), prey responses (barnacle colonization, mussel growth) and species interactions (competition rate, predation rate and effect) and the Bakun upwelling index calculated at each site accounted for 50% of the variance. Hence, external factors can account for about half of locally-expressed community structure and dynamics.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
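The subspace-via-sampling idea behind AKCL can be illustrated with a Nyström-style approximation: map the data into a low-dimensional feature space built from a few sampled landmarks, then run ordinary winner-take-all competitive learning there, so the full kernel matrix is never formed. This is a generic sketch of the approach, not the paper's algorithm; all names, the RBF kernel, and the farthest-point initialisation are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, landmarks, gamma=1.0, eps=1e-8):
    """Approximate kernel feature map Phi ≈ K_nm @ K_mm^(-1/2) from m landmarks,
    so only an n x m slice of the kernel is ever computed."""
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    K_nm = rbf_kernel(X, landmarks, gamma)
    w, V = np.linalg.eigh(K_mm)
    w = np.maximum(w, eps)  # guard against near-singular landmark kernels
    return K_nm @ V @ np.diag(w ** -0.5) @ V.T

def competitive_learning(Phi, k, lr=0.1, epochs=20, seed=0):
    """Online winner-take-all clustering in the approximate feature space."""
    rng = np.random.default_rng(seed)
    # greedy farthest-point initialisation keeps the sketch deterministic
    centers = [Phi[0]]
    for _ in range(k - 1):
        d = np.min([((Phi - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Phi[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(epochs):
        for i in rng.permutation(len(Phi)):
            j = np.argmin(((centers - Phi[i]) ** 2).sum(1))  # winner
            centers[j] += lr * (Phi[i] - centers[j])          # only winner moves
    return np.argmin(((Phi[:, None] - centers[None]) ** 2).sum(-1), axis=1)

# Demo: two well-separated blobs should receive two distinct labels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels = competitive_learning(nystrom_features(X, X[::5], gamma=0.5), k=2)
```

With m landmarks the dominant cost drops from O(n²) kernel entries to O(nm), which is the kind of reduction the abstract attributes to AKCL.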
Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A
2017-12-21
Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. 
Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3 year mortality in patients with COPD are ADO and eBODE.
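A cohort-level discrimination contrast of the kind fed into the network meta-analysis can be sketched as follows. The c-statistic implementation and the score names in the demo are illustrative stand-ins, not the study's data or code.

```python
import numpy as np

def c_statistic(scores, outcome):
    """Discrimination (AUC / c-statistic) via the Mann-Whitney relation:
    the fraction of (event, non-event) pairs the score ranks correctly,
    with ties counted as half."""
    pos = scores[outcome == 1]
    neg = scores[outcome == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def contrasts_vs_comparator(score_table, outcome, comparator):
    """Cohort-level aggregate: each score's discrimination minus that of a
    common comparator — the per-cohort input to a network meta-analysis."""
    ref = c_statistic(score_table[comparator], outcome)
    return {name: c_statistic(s, outcome) - ref
            for name, s in score_table.items() if name != comparator}

# Demo with hypothetical scores: "A" ranks the outcome perfectly,
# "B" is uninformative (constant), so its contrast vs A is negative.
outcome = np.array([0, 0, 1, 1])
table = {"A": np.array([0.1, 0.2, 0.8, 0.9]),
         "B": np.array([0.5, 0.5, 0.5, 0.5])}
contr = contrasts_vs_comparator(table, outcome, "A")
```

Comparing every score to one common comparator within each cohort is what lets standard network meta-analysis machinery be applied afterwards, with the cohort-level correlations between scores handled at that stage.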
Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model
NASA Astrophysics Data System (ADS)
Baraka, Suleiman
2016-06-01
In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s⁻¹ at 15 R_E, and 63 km s⁻¹ at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it retains the macrostructure of planetary magnetospheres in a very short time, so it can be used for pedagogical test purposes. It is also complementary with MHD and can deepen our understanding of the large scale magnetosphere.
Safafar, Hamed; Hass, Michael Z.; Møller, Per; Holdt, Susan L.; Jacobsen, Charlotte
2016-01-01
Nannochloropsis salina was grown on a mixture of standard growth media and pre-gasified industrial process water representing effluent from a local biogas plant. The study aimed to investigate the effects of enriched growth media and cultivation time on nutritional composition of Nannochloropsis salina biomass, with a focus on eicosapentaenoic acid (EPA). Variations in fatty acid composition, lipids, protein, amino acids, tocopherols and pigments were studied and results compared to algae cultivated on F/2 media as reference. Mixed growth media and process water enhanced the nutritional quality of Nannochloropsis salina in laboratory scale when compared to algae cultivated in standard F/2 medium. Data from laboratory scale translated to the large scale using a 4000 L flat panel photo-bioreactor system. The algae growth rate in winter conditions in Denmark was slow, but results revealed that large-scale cultivation of Nannochloropsis salina at these conditions could improve the nutritional properties such as EPA, tocopherol, protein and carotenoids compared to laboratory-scale cultivated microalgae. EPA reached 44.2% ± 2.30% of total fatty acids, and α-tocopherol reached 431 ± 28 µg/g of biomass dry weight after 21 days of cultivation. Variations in chemical compositions of Nannochloropsis salina were studied during the course of cultivation. Nannochloropsis salina can be presented as a good candidate for winter time cultivation in Denmark. The resulting biomass is a rich source of EPA and also a good source of protein (amino acids), tocopherols and carotenoids for potential use in aquaculture feed industry. PMID:27483291
Natalie A. Griffiths; Paul J. Hanson; Daniel M. Ricciuto; Colleen M. Iversen; Anna M. Jensen; Avni Malhotra; Karis J. McFarlane; Richard J. Norby; Khachik Sargsyan; Stephen D. Sebestyen; Xiaoying Shi; Anthony P. Walker; Eric J. Ward; Jeffrey M. Warren; David J. Weston
2017-01-01
We are conducting a large-scale, long-term climate change response experiment in an ombrotrophic peat bog in Minnesota to evaluate the effects of warming and elevated CO2 on ecosystem processes using empirical and modeling approaches. To better frame future assessments of peatland responses to climate change, we characterized and compared spatial...
ERIC Educational Resources Information Center
Patz, Richard J.; Junker, Brian W.; Johnson, Matthew S.; Mariano, Louis T.
2002-01-01
Discusses the hierarchical rater model (HRM) of R. Patz (1996) and shows how it can be used to scale examinees and items, model aspects of consensus among raters, and model individual rater severity and consistency effects. Also shows how the HRM fits into the generalizability theory framework. Compares the HRM to the conventional item response…
NASA Astrophysics Data System (ADS)
Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing
2018-04-01
Dust particles can remain suspended in the atmospheric boundary layer, motions of which are primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organizations of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that large- and very-large-scale motions in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high Reynolds number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer based on an Eulerian modeling approach and large-eddy simulation technique are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of concentration structures are determined, compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. Through the quadrant analysis, it is indicated that the vertical dust transport is closely related to the large-scale roll modes, and ejections in high-concentration regions are the major mechanisms for the upward motions of dust particles.
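The quadrant analysis mentioned above can be sketched as a conditional decomposition of the vertical turbulent dust flux w'c' by the signs of the fluctuations. The function name and the synthetic series are illustrative; the quadrant labelling follows the usual convention.

```python
import numpy as np

def quadrant_fractions(w, c):
    """Fractional contribution of each (w', c') sign quadrant to the mean
    vertical dust flux <w'c'>. Q1 (w' > 0, c' > 0) is the upwash of
    high-concentration fluid; Q3 is the downwash of low-concentration fluid."""
    wp, cp = w - w.mean(), c - c.mean()
    flux = wp * cp
    masks = {"Q1": (wp > 0) & (cp > 0), "Q2": (wp > 0) & (cp < 0),
             "Q3": (wp < 0) & (cp < 0), "Q4": (wp < 0) & (cp > 0)}
    total = flux.sum()
    return {q: flux[m].sum() / total for q, m in masks.items()}

# Synthetic check: positively correlated w and c put all the flux in Q1/Q3,
# i.e. upward transport is carried by ejections of high concentration.
w = np.array([1.0, -1.0, 2.0, -2.0])
c = np.array([2.0, -1.0, 1.0, -2.0])
frac = quadrant_fractions(w, c)
```

Applied to simulation fields, the Q1/Q3 fractions quantify how much of the upward dust transport is tied to the large-scale roll modes the study describes.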
Dahling, Daniel R
2002-01-01
Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.
The spatial and temporal domains of modern ecology.
Estes, Lyndon; Elsen, Paul R; Treuer, Timothy; Ahmed, Labeeb; Caylor, Kelly; Chang, Jason; Choi, Jonathan J; Ellis, Erle C
2018-05-01
To understand ecological phenomena, it is necessary to observe their behaviour across multiple spatial and temporal scales. Since this need was first highlighted in the 1980s, technology has opened previously inaccessible scales to observation. To help to determine whether there have been corresponding changes in the scales observed by modern ecologists, we analysed the resolution, extent, interval and duration of observations (excluding experiments) in 348 studies that have been published between 2004 and 2014. We found that observational scales were generally narrow, because ecologists still primarily use conventional field techniques. In the spatial domain, most observations had resolutions ≤1 m² and extents ≤10,000 ha. In the temporal domain, most observations were either unreplicated or infrequently repeated (>1 month interval) and ≤1 year in duration. Compared with studies conducted before 2004, observational durations and resolutions appear largely unchanged, but intervals have become finer and extents larger. We also found a large gulf between the scales at which phenomena are actually observed and the scales those observations ostensibly represent, raising concerns about observational comprehensiveness. Furthermore, most studies did not clearly report scale, suggesting that it remains a minor concern. Ecologists can better understand the scales represented by observations by incorporating autocorrelation measures, while journals can promote attentiveness to scale by implementing scale-reporting standards.
What are the low-Q and large-x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, Q ~ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable x_bj is large, x_bj ≳ 0.5, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-Q and large-x_bj physics.
Dwarshuis, Nate J; Parratt, Kirsten; Santiago-Miranda, Adriana; Roy, Krishnendu
2017-05-15
Therapeutic cells hold tremendous promise in treating currently incurable, chronic diseases since they perform multiple, integrated, complex functions in vivo compared to traditional small-molecule drugs or biologics. However, they also pose significant challenges as therapeutic products because (a) their complex mechanisms of actions are difficult to understand and (b) low-cost bioprocesses for large-scale, reproducible manufacturing of cells have yet to be developed. Immunotherapies using T cells and dendritic cells (DCs) have already shown great promise in treating several types of cancers, and human mesenchymal stromal cells (hMSCs) are now extensively being evaluated in clinical trials as immune-modulatory cells. Despite these exciting developments, the full potential of cell-based therapeutics cannot be realized unless new engineering technologies enable cost-effective, consistent manufacturing of high-quality therapeutic cells at large-scale. Here we review cell-based immunotherapy concepts focused on the state-of-the-art in manufacturing processes including cell sourcing, isolation, expansion, modification, quality control (QC), and culture media requirements. We also offer insights into how current technologies could be significantly improved and augmented by new technologies, and how disciplines must converge to meet the long-term needs for large-scale production of cell-based immunotherapies. Copyright © 2017 Elsevier B.V. All rights reserved.
Learning Short Binary Codes for Large-scale Image Retrieval.
Liu, Li; Yu, Mengyang; Shao, Ling
2017-03-01
Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limited computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only the top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
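The bit-selection idea in MCR can be sketched as follows: binarize each dimension independently, score each candidate bit with a cost function, and keep only the cheapest bits. This is a minimal illustration assuming a median threshold per dimension and a simple quantization-error cost proxy; the paper's actual cost function is not reproduced here.

```python
import numpy as np

def short_binary_codes(X, n_bits):
    """Sketch of per-dimension binarization with cost-based bit selection.

    One candidate bit per dimension (threshold at the median), then keep
    the n_bits dimensions with the lowest cost. The cost used here is a
    simple quantization-error proxy, NOT the paper's cost function.
    """
    thresholds = np.median(X, axis=0)           # one threshold per dimension
    bits = (X > thresholds).astype(np.uint8)    # one candidate bit per dimension
    # Cost proxy: mean squared distance of values to their side's centroid
    costs = np.empty(X.shape[1])
    for d in range(X.shape[1]):
        hi, lo = X[bits[:, d] == 1, d], X[bits[:, d] == 0, d]
        costs[d] = sum(((v - v.mean()) ** 2).mean() for v in (hi, lo) if v.size)
    keep = np.argsort(costs)[:n_bits]           # top-ranked (minimum-cost) bits
    return bits[:, keep], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                  # 200 descriptors, 64 dimensions
codes, selected = short_binary_codes(X, n_bits=16)
```

The resulting `codes` matrix holds one 16-bit code per image descriptor, which can be compared by Hamming distance at retrieval time.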
Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Zhou, Ye
1999-01-01
Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
ERIC Educational Resources Information Center
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael
2016-01-01
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
The temperature of large dust grains in molecular clouds
NASA Technical Reports Server (NTRS)
Clark, F. O.; Laureijs, R. J.; Prusti, T.
1991-01-01
The temperature of the large dust grains is calculated for three molecular clouds ranging in visual extinction from 2.5 to 8 mag, by comparing maps of either extinction derived from star counts or gas column density derived from molecular observations with I(100). Both techniques show the dust temperature declining into the clouds. The two techniques do not agree in absolute scale.
Michael R. Saunders; Justin E. Arseneault
2013-01-01
In long-term, large-scale forest management studies, documentation of pre-treatment differences among and variability within experimental units is critical for drawing the proper inferences from imposed treatments. We compared pre-treatment overstory and large shrub communities (diameters at breast height >1.5 cm) for the 9 research cores within the Hardwood Ecosystem...
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
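The homogenization result described above (slow-scale behavior governed by a constant-coefficient diffusion equation whose coefficient is computed from fast-scale parameters) has a classical one-dimensional analogue that is easy to state: for pure diffusion in a periodic medium, the homogenized coefficient is the harmonic mean of the fast-scale diffusivity. The sketch below shows only this advection-free special case; the paper's formula additionally involves the fast-scale advection velocity.

```python
import numpy as np

def homogenized_diffusivity(D_period):
    """Classical 1D periodic homogenization (pure diffusion, no advection):
    the slow-scale diffusion coefficient is the harmonic mean of the
    fast-scale diffusivity D(x) over one period.
    """
    D_period = np.asarray(D_period, float)
    return 1.0 / np.mean(1.0 / D_period)

# A medium alternating between D = 1 and D = 4 over one period:
D_fast = np.array([1.0, 4.0])
D_slow = homogenized_diffusivity(D_fast)   # 2 / (1/1 + 1/4) = 1.6
```

Note that the harmonic mean (1.6 here) is smaller than the arithmetic mean (2.5): the slow-diffusivity patches of the medium dominate the large-scale transport.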
Improved actions and asymptotic scaling in lattice Yang-Mills theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langfeld, Kurt
2007-11-01
Improved actions in SU(2) and SU(3) lattice gauge theories are investigated with an emphasis on asymptotic scaling. A new scheme for tadpole improvement is proposed. The standard but heuristic tadpole improvement emerges as a mean field approximation of the new approach. Scaling is investigated by means of the large-distance static quark potential. Both the generic and the new tadpole scheme yield significant improvements in asymptotic scaling when compared with loop-improved actions. A study of the rotational symmetry breaking terms, however, reveals that only the new improvement scheme efficiently eliminates the leading irrelevant term from the action.
Thermal non-equilibrium effect of small-scale structures in compressible turbulence
NASA Astrophysics Data System (ADS)
Li, Shi-Yi; Li, Qi-Bing
2018-05-01
The thermal non-equilibrium effect of the small-scale structures in canonical two-dimensional turbulence is studied. Comparative studies of the Unified Gas Kinetic Scheme (UGKS) and GKS-Navier-Stokes (NS) for Taylor-Green flow with initial Ma = 1, Kn = 0.01 and for decaying isotropic turbulence with initial Ma_t = 1, Re_λ = 20 show that discrepancies exist at both small and large scales, extending beyond the dissipation range to 10η and reaching 8% in the SGS energy transfer of the decaying isotropic turbulence. This illustrates the necessity of resolving the kinetic scales even at a moderate Re_λ = 20.
NASA Astrophysics Data System (ADS)
Kinsman, L.; Gerhard, J.; Torero, J.; Scholes, G.; Murray, C.
2013-12-01
Self-sustaining Treatment for Active Remediation (STAR) is a relatively new remediation approach for soil contaminated with organic industrial liquids. This technology uses smouldering combustion, a controlled, self-sustaining burning reaction, to destroy nonaqueous phase liquids (NAPLs) and thereby render soil clean. While STAR has been proven at the bench scale, success at industrial scales requires the process to be scaled-up significantly. The objective of this study was to conduct an experimental investigation into how liquid smouldering combustion phenomena scale. A suite of detailed forward smouldering experiments were conducted in short (16 cm dia. x 22 cm high), intermediate (16 cm dia. x 127 cm high), and large (97 cm dia. x 300 cm high; a prototype ex-situ reactor) columns; this represents scaling of up to 530 times based on the volume treated. A range of fuels were investigated, with the majority of experiments conducted using crude oil sludge as well as canola oil as a non-toxic surrogate for hazardous contaminants. To provide directly comparable data sets and to isolate changes in the smouldering reaction which occurred solely due to scaling effects, sand grain size, contaminant type, contaminant concentration and air injection rates were controlled between the experimental scales. Several processes could not be controlled and were identified to be susceptible to changes in scale, including: mobility of the contaminant, heat losses, and buoyant flow effects. For each experiment, the propagation of the smouldering front was recorded using thermocouples and analyzed by way of temperature-time and temperature-distance plots. In combination with the measurement of continuous mass loss and gaseous emissions, these results were used to evaluate the fundamental differences in the way the reaction front propagates through the mixture of sand and fuel across the various scales. 
Key governing parameters were compared between the small, intermediate, and large scale experiments, including: peak temperatures, velocities and thicknesses of the smouldering front, rates of mass destruction of the contaminant, and rates of gaseous emissions during combustion. Additionally, upward and downward smouldering experiments were compared at the column scale to assess the significance of buoyant flow effects. An understanding of these scaling relationships will provide important information to aid in the design of field-scale applications of STAR.
ERIC Educational Resources Information Center
Puhan, Gautam; Boughton, Keith A.; Kim, Sooyeon
2005-01-01
The study evaluated the comparability of two versions of a teacher certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). Standardized mean difference (SMD) and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that effect sizes…
NASA Astrophysics Data System (ADS)
Tang, G.; Bartlein, P. J.
2012-01-01
Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first introduced the development of the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of dynamically simulating them. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variation of monthly actual ET (R2 = 0.61, p < 0.01) in the Everglades of Florida over the years 1996-2001. The modeled monthly soil moisture for Illinois agrees well (R2 = 0.79, p < 0.01) with observations over the years 1984-2001. The modeled monthly stream flow for most of the 12 major rivers in the US is consistent (R2 > 0.46, p < 0.01; Nash-Sutcliffe coefficients > 0.52) with observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH better simulates monthly stream flow in winter and early spring by incorporating the effects of solar radiation on snowmelt. Overall, this study proves the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. 
LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
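The Nash-Sutcliffe coefficient quoted in the stream-flow comparison above is a standard hydrological goodness-of-fit measure and can be computed directly:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency (NSE) between observed and simulated series.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NSE = 1 is a perfect fit; NSE <= 0 means the simulation performs no
    better than simply predicting the observed mean.
    """
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative monthly flows (arbitrary units, not the study's data):
obs = np.array([3.0, 5.0, 9.0, 4.0, 6.0])
nse_perfect = nash_sutcliffe(obs, obs)                    # 1.0
nse_mean = nash_sutcliffe(obs, np.full(5, obs.mean()))    # 0.0
```

A threshold such as NSE > 0.52, as reported above, therefore indicates the model explains substantially more variance than the climatological mean.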
Oil Slick Observation at Low Incidence Angles in Ku-Band
NASA Astrophysics Data System (ADS)
Panfilova, M. A.; Karaev, V. Y.; Guo, Jie
2018-03-01
On 20 April 2010 the oil platform Deepwater Horizon in the Gulf of Mexico suffered an explosion during the final phases of drilling an exploratory well. As a result, an oil film covered a sea surface area of several thousand square kilometers. In the present paper the data of the Ku-band Precipitation Radar, which operates at low incidence angles, were used to explore the oil spill event. The two-scale model of the scattering surface was used to describe radar backscatter from the sea surface. An algorithm was developed for the Precipitation Radar swath to retrieve the normalized radar cross section at nadir and the total slope variance of waves that are large-scale compared with the electromagnetic wavelength (22 mm). It is shown that measurements at low incidence angles can be used for oil spill detection. This is the first time that the dependence of the mean square slope of large-scale waves on wind speed has been obtained for oil slicks from Ku-band data and compared to the mean square slope obtained by Cox and Munk from optical data.
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Ali, Naseem; Cal, Raúl
2016-11-01
Hot-wire anemometry measurements have been performed on a 3 x 3 wind turbine array to study the multifractality of the turbulent kinetic energy dissipation. A multifractal spectrum and Hurst exponents are determined at nine locations downstream of the hub height and the bottom and top tips. Higher multifractality is found at 0.5D and 1D downstream of the bottom tip and hub height. The second order of the Hurst exponent and the combination factor show an ability to predict the flow state in terms of its development. Snapshot proper orthogonal decomposition (POD) is used to identify the coherent and incoherent structures and to reconstruct the stochastic velocity using a specific number of the POD eigenfunctions. The accumulation of turbulent kinetic energy at the top tip location exhibits fast convergence compared to the bottom tip and hub height locations. The dissipation of the large and small scales is determined using the reconstructed stochastic velocities. Higher multifractality is shown in the large-scale dissipation compared with the small-scale dissipation, consistent with the behavior of the original signals.
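Snapshot POD, used above to separate coherent structures and reconstruct low-order velocity fields, is commonly implemented via the singular value decomposition of the mean-subtracted data matrix. A minimal sketch (synthetic data, not the experiment's measurements):

```python
import numpy as np

def snapshot_pod(U):
    """Snapshot POD of a velocity data matrix via the SVD.

    U: (n_points, n_snapshots) array, one flattened velocity field per column.
    Returns spatial modes, singular values, temporal coefficients, and the
    fraction of fluctuating kinetic energy captured by each mode.
    """
    fluct = U - U.mean(axis=1, keepdims=True)        # remove the mean field
    phi, s, vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return phi, s, vt, energy

def reconstruct(phi, s, vt, n_modes):
    """Low-order reconstruction of the fluctuating field from leading modes."""
    return phi[:, :n_modes] * s[:n_modes] @ vt[:n_modes]

rng = np.random.default_rng(1)
U = rng.normal(size=(50, 20))                        # 50 points, 20 snapshots
phi, s, vt, energy = snapshot_pod(U)
full = reconstruct(phi, s, vt, n_modes=len(s))       # all modes: exact
```

Truncating `n_modes` to the leading energetic modes yields the coherent part of the field; the remainder is treated as the incoherent (stochastic) contribution.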
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on instruments commonly employed in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
A Comparison of Obsessive-Compulsive Personality Disorder Scales
Samuel, Douglas B.; Widiger, Thomas A.
2010-01-01
The current study utilized a large undergraduate sample (n = 536), oversampled for DSM-IV-TR obsessive-compulsive personality disorder (OCPD) pathology, to compare eight self-report measures of OCPD. No prior study has compared more than three measures and the results indicated that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to two external reference points: Their relationships with a well established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison the results suggested important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, while some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder. PMID:20408023
NASA Astrophysics Data System (ADS)
Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng
2018-02-01
De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses
NASA Astrophysics Data System (ADS)
Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.
2014-12-01
Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates extreme weather will increase in global change scenarios, extremes are often related to the large-scale atmospheric circulation and also occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with more than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations which are obscured by turbulence, making it impossible to identify them. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records at each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time than other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
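The core of the ETF idea (non-causal low-pass filtering of each point's velocity record) can be illustrated with the simplest non-causal filter, a centered moving average. This is a sketch with synthetic data, not the paper's filter kernel or measurements:

```python
import numpy as np

def eulerian_time_filter(u, window):
    """Non-causal (centered) moving-average filter of the velocity record
    at one grid point. The window length must fall between the turbulence
    time scales and the period of the large-scale oscillation, i.e. the
    two phenomena must have well-separated spectral content.
    """
    kernel = np.ones(window) / window
    return np.convolve(u, kernel, mode="same")

# Synthetic record: a slow 0.5 Hz oscillation buried in broadband noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)                   # 200 Hz sampling
slow = np.sin(2 * np.pi * 0.5 * t)                 # large-scale motion
u = slow + 0.8 * rng.normal(size=t.size)           # "turbulence" added
u_filtered = eulerian_time_filter(u, window=101)   # ~0.5 s window
corr = np.corrcoef(u_filtered, slow)[0, 1]
```

Applying such a filter independently at every point of a time-resolved PIV field recovers the large-scale motion without requiring a phase reference, which is what makes ETF cheaper than phase averaging.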
Effects on aquatic and human health due to large scale bioenergy crop expansion.
Love, Bradley J; Einheuser, Matthew D; Nejadhashemi, A Pouyan
2011-08-01
In this study, the environmental impacts of large scale bioenergy crops were evaluated using the Soil and Water Assessment Tool (SWAT). Daily pesticide concentration data for a study area consisting of four large watersheds located in Michigan (totaling 53,358 km²) were estimated over a six year period (2000-2005). Model outputs for atrazine, bromoxynil, glyphosate, metolachlor, pendimethalin, sethoxydim, trifluralin, and 2,4-D were used to predict the possible long-term implications that large-scale bioenergy crop expansion may have on the bluegill (Lepomis macrochirus) and humans. Threshold toxicity levels were obtained for the bluegill and for human consumption for all pesticides being evaluated through an extensive literature review. Model output was compared to each toxicity level for the suggested exposure time (96-hour for bluegill and 24-hour for humans). The results suggest that traditional intensive row crops such as canola, corn and sorghum may negatively impact aquatic life, and in most cases reduce safe drinking water availability. The continuous corn rotation, the most representative rotation for current agricultural practices in a starch-based ethanol economy, delivers the highest concentrations of glyphosate to the stream. In addition, continuous canola contributed a concentration of 1.11 ppm of trifluralin, a highly toxic herbicide, which is 8.7 times the 96-hour ecotoxicity level for bluegills and 21 times the safe drinking water level. Also during the period of study, continuous corn resulted in the impairment of 541,152 km of stream. However, there is promise with second-generation lignocellulosic bioenergy crops such as switchgrass, which resulted in a 171,667 km reduction in the total stream length that exceeds the human threshold criteria, as compared to the base scenario. 
Results of this study may be useful in determining the suitability of bioenergy crop rotations and aid in decision making regarding the adaptation of large-scale bioenergy cropping systems. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Wang, Ke; Testi, Leonardo; Burkert, Andreas; Walmsley, C. Malcolm; Beuther, Henrik; Henning, Thomas
2016-09-01
Large-scale gaseous filaments with lengths up to the order of 100 pc are on the upper end of the filamentary hierarchy of the Galactic interstellar medium (ISM). Their association with the Galactic structure and their role in Galactic star formation are of great interest from both an observational and a theoretical point of view. Previous "by-eye" searches, combined together, have started to uncover the Galactic distribution of large filaments, yet inherent bias and small sample size prevent conclusive statistical results from being drawn. Here, we present (1) a new, automated method for identifying large-scale velocity-coherent dense filaments, and (2) the first statistics and the Galactic distribution of these filaments. We use a customized minimum spanning tree algorithm to identify filaments by connecting voxels in the position-position-velocity space, using the Bolocam Galactic Plane Survey spectroscopic catalog. In the range 7.5° ≤ l ≤ 194°, we have identified 54 large-scale filaments and derived their mass (~10^3-10^5 M⊙), length (10-276 pc), linear mass density (54-8625 M⊙ pc^-1), aspect ratio, linearity, velocity gradient, temperature, fragmentation, Galactic location, and orientation angle. The filaments concentrate along major spiral arms. They are widely distributed across the Galactic disk, with 50% located within ±20 pc of the Galactic mid-plane and 27% running in the centers of spiral arms. On the order of 1% of the molecular ISM is confined in large filaments. Massive star formation is more favorable in large filaments compared to elsewhere. This is the first comprehensive catalog of large filaments that can be useful for a quantitative comparison with spiral structures and numerical simulations.
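The minimum-spanning-tree step described above can be sketched with Prim's algorithm over catalog sources in (l, b, v) space. The velocity weighting and the edge-pruning threshold below are illustrative assumptions, not the paper's exact metric:

```python
import numpy as np

def ppv_minimum_spanning_tree(coords, v_weight=0.5):
    """Prim's algorithm over catalog sources in position-position-velocity
    space. coords: (n, 3) array of (l [deg], b [deg], v [km/s]); v_weight
    converts velocity into an effective angular distance so position and
    velocity are comparable (an assumed weighting). Returns the n-1 MST
    edges as (i, j, length) tuples; cutting edges longer than a chosen
    threshold then splits the tree into candidate velocity-coherent
    filaments.
    """
    X = np.asarray(coords, float).copy()
    X[:, 2] *= v_weight
    n = len(X)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    dist = np.linalg.norm(X - X[0], axis=1)   # distance of each node to tree
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, dist)))
        edges.append((int(parent[j]), j, float(dist[j])))
        in_tree[j] = True
        d = np.linalg.norm(X - X[j], axis=1)
        closer = (d < dist) & ~in_tree
        dist[closer] = d[closer]
        parent[closer] = j
    return edges

# Four sources along a line in (l, b, v): the MST connects them in order.
pts = np.array([[10.0, 0.0, 0.0], [11.0, 0.0, 0.0],
                [12.0, 0.0, 0.0], [14.0, 0.0, 0.0]])
edges = ppv_minimum_spanning_tree(pts)
total = sum(e[2] for e in edges)   # 1 + 1 + 2 = 4 degrees
```

Requiring velocity coherence in the metric prevents chance projections of unrelated clouds from being linked into a single filament.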
Cloud/climate sensitivity experiments
NASA Technical Reports Server (NTRS)
Roads, J. O.; Vallis, G. K.; Remer, L.
1982-01-01
A study of the relationships between large-scale cloud fields and large-scale circulation patterns is presented. The basic tool is a multi-level numerical model comprising conservation equations for temperature, water vapor and cloud water and appropriate parameterizations for evaporation, condensation, precipitation and radiative feedbacks. Incorporating an equation for cloud water in a large-scale model is somewhat novel and allows the formation and advection of clouds to be treated explicitly. The model is run on a two-dimensional, vertical-horizontal grid with constant winds. It is shown that cloud cover increases with decreased eddy vertical velocity, decreased horizontal advection, decreased atmospheric temperature, increased surface temperature, and decreased precipitation efficiency. The cloud field is found to be well correlated with the relative humidity field except at the highest levels. When radiative feedbacks are incorporated and the temperature is increased by increasing the CO2 content, cloud amounts decrease at upper levels or, equivalently, cloud-top height falls. This reduces the temperature response, especially at upper levels, compared with an experiment in which cloud cover is fixed.
Do large-scale assessments measure students' ability to integrate scientific knowledge?
NASA Astrophysics Data System (ADS)
Lee, Hee-Sun
2010-03-01
Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Mathematics and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of students' open-ended responses do open-ended items become a valid and reliable tool to assess students' knowledge integration ability.
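The Rasch Partial Credit Model used in the analysis above assigns each polytomous item a set of step difficulties; the probability of each score category follows from cumulative sums of (ability - step difficulty). A minimal sketch with illustrative parameter values (not the study's estimates):

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Rasch Partial Credit Model: probabilities of score categories
    0..m for one polytomous item, given ability theta and step
    difficulties deltas (length m).
    """
    steps = theta - np.asarray(deltas, float)
    exponents = np.concatenate(([0.0], np.cumsum(steps)))
    w = np.exp(exponents - exponents.max())    # subtract max for stability
    return w / w.sum()

# A 3-category item (two steps). A more able student (theta = 2) places
# more probability on the top category than a less able one (theta = -1).
deltas = [0.0, 1.0]
p_low = pcm_category_probs(-1.0, deltas)
p_high = pcm_category_probs(2.0, deltas)
```

Rubrics with more, finer-grained categories add step parameters to this model, which is how a redesigned rubric can capture the nuances of knowledge integration that dichotomous scoring discards.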
He, Xueqin; Chen, Longjian; Han, Lujia; Liu, Ning; Cui, Ruxiu; Yin, Hongjie; Huang, Guangqun
2017-12-01
This study investigated the effects of biochar powder on oxygen supply efficiency and global warming potential (GWP) in a large-scale aerobic composting pattern which includes cyclical forced turning with aeration at the bottom of composting tanks in China. A 55-day large-scale aerobic composting experiment was conducted in two different groups, without and with 10% biochar powder addition (by weight). The results show that biochar powder improves the oxygen-holding ability, with a duration time (O2 > 5%) of around 80%. The composting process with the above pattern significantly reduces CH4 and N2O emissions compared to the static or turning-only styles. Considering that the average GWP of the BC group was 19.82% lower than that of the CK group, it is suggested that rational addition of biochar powder has the potential to reduce the energy consumption of turning, improve the effectiveness of the oxygen supply, and reduce comprehensive greenhouse effects. Copyright © 2017. Published by Elsevier Ltd.
de Fabritus, Lauriane; Nougairède, Antoine; Aubry, Fabien; Gould, Ernest A; de Lamballerie, Xavier
2016-01-01
Large-scale codon re-encoding is a new method of attenuating RNA viruses. However, the use of infectious clones to generate attenuated viruses has inherent technical problems. We previously developed a bacterium-free reverse genetics protocol, designated ISA, and have now combined it with a large-scale random codon re-encoding method to produce attenuated tick-borne encephalitis virus (TBEV), a pathogenic flavivirus which causes febrile illness and encephalitis in humans. We produced wild-type (WT) and two re-encoded TBEVs, containing 273 or 273+284 synonymous mutations in the NS5 and NS5+NS3 coding regions, respectively. Both re-encoded viruses were attenuated compared with the WT virus in a laboratory mouse model, and the relative level of attenuation increased with the degree of re-encoding. Moreover, all infected animals produced neutralizing antibodies. This novel, rapid and efficient approach to engineering attenuated viruses could potentially expedite the development of safe and effective new-generation live attenuated vaccines.
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
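The sufficient condition above involves the Laplacian eigenvalues of the nominal interconnection, and for nearest-neighbour networks the abstract reports an optimal number of neighbours. A hedged numerical sketch of that ingredient, using the algebraic connectivity of a ring network as a stand-in for the paper's synchronization margin (the actual margin also depends on the node nonlinearity and the uncertainty statistics, which are not reproduced here):

```python
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a ring of n nodes, each connected to its
    k nearest neighbours on each side (a circulant topology)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Algebraic connectivity (second-smallest Laplacian eigenvalue) grows
# with the number of neighbours, strengthening synchronizability.
lams = {k: np.sort(np.linalg.eigvalsh(ring_laplacian(20, k)))[1] for k in (1, 2, 4)}
```

In the paper's setting, adding neighbours also adds stochastic links, so the margin is not monotone in k; this sketch only shows the deterministic connectivity side of that trade-off.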
Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches
Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand
2018-01-01
Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
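The first of the three approaches above, difference-in-differences, nets out shared time trends by subtracting the control group's before/after change from the treated group's. A toy sketch (the estimator form is standard; the outcome means are hypothetical):

```python
import numpy as np

def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Difference-in-differences: the change in the policy-affected group
    minus the change in the comparison group, removing any time trend
    common to both and any fixed pre-existing difference between them."""
    return ((np.mean(y_treat_post) - np.mean(y_treat_pre))
            - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre)))

# Hypothetical health-outcome measurements before/after a policy change
effect = did_estimate([10, 11, 9], [14, 15, 13], [10, 10, 11], [12, 11, 12])
```

The key identifying assumption, not checkable from these four means alone, is that the two groups would have followed parallel trends absent the policy.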
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication at arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
Fitting a Point Cloud to a 3d Polyhedral Surface
NASA Astrophysics Data System (ADS)
Popov, E. V.; Rotkov, S. I.
2017-05-01
The ability to measure parameters of large-scale objects in a contactless fashion has a tremendous potential in a number of industrial applications. However, this problem is usually associated with an ambiguous task to compare two data sets specified in two different co-ordinate systems. This paper deals with the study of fitting a set of unorganized points to a polyhedral surface. The developed approach uses Principal Component Analysis (PCA) and Stretched grid method (SGM) to substitute a non-linear problem solution with several linear steps. The squared distance (SD) is a general criterion to control the process of convergence of a set of points to a target surface. The described numerical experiment concerns the remote measurement of a large-scale aerial in the form of a frame with a parabolic shape. The experiment shows that the fitting process of a point cloud to a target surface converges in several linear steps. The method is applicable to the geometry remote measurement of large-scale objects in a contactless fashion.
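The PCA step mentioned above supplies a coarse alignment between the point cloud's coordinate system and the target surface before any fine fitting. A minimal sketch of that step (the function and the synthetic cloud are illustrative; the authors' full method also uses the Stretched grid method and squared-distance minimization, which are not reproduced here):

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes of an unorganized point cloud via
    eigendecomposition of its covariance matrix, the PCA step used to
    coarsely align the cloud with a target surface."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return centroid, eigvecs[:, ::-1]        # axes ordered by decreasing variance

rng = np.random.default_rng(0)
# Synthetic noisy points near the plane z = 0, elongated along x
cloud = rng.normal(size=(500, 3)) * [5.0, 2.0, 0.1]
centroid, axes = principal_axes(cloud)
```

Expressing both the measured cloud and the polyhedral model in their respective principal frames reduces the two-coordinate-system ambiguity the abstract describes to a residual fit.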
SPIN ALIGNMENTS OF SPIRAL GALAXIES WITHIN THE LARGE-SCALE STRUCTURE FROM SDSS DR7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Youcai; Yang, Xiaohu; Luo, Wentao
Using a sample of spiral galaxies selected from the Sloan Digital Sky Survey Data Release 7 and Galaxy Zoo 2, we investigate the alignment of spin axes of spiral galaxies with their surrounding large-scale structure, which is characterized by the large-scale tidal field reconstructed from the data using galaxy groups above a certain mass threshold. We find that the spin axes have only weak tendencies to be aligned with (or perpendicular to) the intermediate (or minor) axis of the local tidal tensor. The signal is strongest in a cluster environment where all three eigenvalues of the local tidal tensor are positive. Compared to the alignments between halo spins and the local tidal field obtained in N-body simulations, the above observational results are in best agreement with those for the spins of the inner regions of halos, suggesting that the disk material traces the angular momentum of dark matter halos in the inner regions.
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement over the standard large deviation function estimators.
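The extrapolation idea above can be illustrated with a toy fit. This sketch assumes a leading 1/N correction in the population size, which is a deliberate simplification of the finite-time and finite-size scalings analysed in the paper; the data are synthetic:

```python
import numpy as np

def extrapolate_infinite_size(sizes, estimates):
    """Extrapolate population-dynamics estimates of a large deviation
    function to infinite population size, assuming a leading 1/N
    correction: fit estimate ~ a + b/N and return the intercept a."""
    slope, intercept = np.polyfit(1.0 / np.asarray(sizes, float), estimates, 1)
    return intercept

# Synthetic estimates with a known 1/N bias around the true value 2.0
sizes = np.array([100, 200, 400, 800])
estimates = 2.0 + 5.0 / sizes
limit = extrapolate_infinite_size(sizes, estimates)
```

In practice one would run the cloning algorithm at several population sizes (and durations) and apply a fit of this kind to each estimator, rather than trusting the largest single run.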
Annear, Michael J; Eccleston, Claire E; McInerney, Frances J; Elliott, Kate-Ellen J; Toye, Christine M; Tranter, Bruce K; Robinson, Andrew L
2016-06-01
To compare the psychometric performance of the Dementia Knowledge Assessment Scale (DKAS) and the Alzheimer's Disease Knowledge Scale (ADKS) when administered to a large international cohort before and after online dementia education. Comparative psychometric analysis with pre- and posteducation scale responses. The setting for this research encompassed 7,909 individuals from 124 countries who completed the 9-week Understanding Dementia Massive Open Online Course (MOOC). Volunteer respondents who completed the DKAS and ADKS before (n = 3,649) and after (n = 878) completion of the Understanding Dementia MOOC. Assessment and comparison of the DKAS and ADKS included evaluation of scale development procedures, interscale correlations, response distribution, internal consistency, and construct validity. The DKAS had superior internal consistency, wider response distribution with less ceiling effect, and better discrimination between pre- and posteducation scores and occupational cohorts than the ADKS. The 27-item DKAS is a reliable and preliminarily valid measure of dementia knowledge that is psychometrically and conceptually sound, overcomes limitations of existing instruments, and can be administered to diverse cohorts to measure baseline understanding and knowledge change. © 2016, Copyright the Authors Journal compilation © 2016, The American Geriatrics Society.
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments, and statistical information is extracted from the computer-generated data.
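The Gaussian filtering used above to define the large-scale field can be sketched for a one-dimensional periodic signal. The transfer function exp(-k²Δ²/24) is the standard large-eddy-simulation Gaussian filter; the demonstration field and filter width are arbitrary, and the real computation is three-dimensional:

```python
import numpy as np

def gaussian_filter_1d(field, delta):
    """Spectral Gaussian filter of width delta on a periodic field,
    separating out the resolved large-scale part; the remainder is the
    residual (subgrid) part modelled by an eddy viscosity in LES."""
    n = len(field)
    k = 2.0 * np.pi * np.fft.fftfreq(n)              # wavenumbers
    kernel = np.exp(-(k * delta) ** 2 / 24.0)        # Gaussian transfer function
    return np.real(np.fft.ifft(np.fft.fft(field) * kernel))

rng = np.random.default_rng(1)
u = rng.normal(size=256)                 # stand-in for one velocity component
u_large = gaussian_filter_1d(u, delta=8.0)
u_residual = u - u_large                 # modelled, not resolved, in LES
```

Because the kernel equals 1 at k = 0, the filter preserves the mean while damping small scales, which is exactly the split between resolved and residual stresses in the abstract.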
Distortion of the cosmic background radiation by superconducting strings
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Thompson, C.
1987-01-01
Superconducting cosmic strings can be significant energy sources, keeping the universe ionized past the commonly assumed epoch of recombination. As a result, the spectrum of the cosmic background radiation is distorted in the presence of heated primordial gas via the Sunyaev-Zel'dovich effect. This distortion can be relatively large: the Compton y parameter attains a maximum in the range 0.001-0.005, with these values depending on the mass scale of the string. A significant contribution to y comes from loops decaying at high redshift when the universe is optically thick to Thomson scattering. Moreover, the isotropic spectral distortion is large compared to fluctuations at all angular scales.
QCD-motivated description of very high energy particle interactions
NASA Technical Reports Server (NTRS)
Gaisser, T. K.; Halzen, F.
1985-01-01
Cross sections for the production of secondaries with large transverse momentum can become comparable to the total cross section in the TeV energy range. It is argued that the onset of this effect is observed at sub-TeV energies via an increase of the rapidity distribution near y = 0, an increase of p_T with energy and, most directly, via a correlation between p_T and multiplicity. If indeed scaling violations are associated with the hard scattering of partons, then scaling violations are largely confined to the central region and have little effect on cosmic ray data, which are sensitive to the forward fragmentation region.
Adcox, K; Adler, S S; Ajitanand, N N; Akiba, Y; Alexander, J; Aphecetche, L; Arai, Y; Aronson, S H; Averbeck, R; Awes, T C; Barish, K N; Barnes, P D; Barrette, J; Bassalleck, B; Bathe, S; Baublis, V; Bazilevsky, A; Belikov, S; Bellaiche, F G; Belyaev, S T; Bennett, M J; Berdnikov, Y; Botelho, S; Brooks, M L; Brown, D S; Bruner, N; Bucher, D; Buesching, H; Bumazhnov, V; Bunce, G; Burward-Hoy, J; Butsyk, S; Carey, T A; Chand, P; Chang, J; Chang, W C; Chavez, L L; Chernichenko, S; Chi, C Y; Chiba, J; Chiu, M; Choudhury, R K; Christ, T; Chujo, T; Chung, M S; Chung, P; Cianciolo, V; Cole, B A; D'Enterria, D G; David, G; Delagrange, H; Denisov, A; Deshpande, A; Desmond, E J; Dietzsch, O; Dinesh, B V; Drees, A; Durum, A; Dutta, D; Ebisu, K; Efremenko, Y V; El Chenawi, K; En'yo, H; Esumi, S; Ewell, L; Ferdousi, T; Fields, D E; Fokin, S L; Fraenkel, Z; Franz, A; Frawley, A D; Fung, S-Y; Garpman, S; Ghosh, T K; Glenn, A; Godoi, A L; Goto, Y; Greene, S V; Grosse Perdekamp, M; Gupta, S K; Guryn, W; Gustafsson, H-A; Haggerty, J S; Hamagaki, H; Hansen, A G; Hara, H; Hartouni, E P; Hayano, R; Hayashi, N; He, X; Hemmick, T K; Heuser, J M; Hibino, M; Hill, J C; Ho, D S; Homma, K; Hong, B; Hoover, A; Ichihara, T; Imai, K; Ippolitov, M S; Ishihara, M; Jacak, B V; Jang, W Y; Jia, J; Johnson, B M; Johnson, S C; Joo, K S; Kametani, S; Kang, J H; Kann, M; Kapoor, S S; Kelly, S; Khachaturov, B; Khanzadeev, A; Kikuchi, J; Kim, D J; Kim, H J; Kim, S Y; Kim, Y G; Kinnison, W W; Kistenev, E; Kiyomichi, A; Klein-Boesing, C; Klinksiek, S; Kochenda, L; Kochetkov, V; Koehler, D; Kohama, T; Kotchetkov, D; Kozlov, A; Kroon, P J; Kurita, K; Kweon, M J; Kwon, Y; Kyle, G S; Lacey, R; Lajoie, J G; Lauret, J; Lebedev, A; Lee, D M; Leitch, M J; Li, X H; Li, Z; Lim, D J; Liu, M X; Liu, X; Liu, Z; Maguire, C F; Mahon, J; Makdisi, Y I; Manko, V I; Mao, Y; Mark, S K; Markacs, S; Martinez, G; Marx, M D; Masaike, A; Matathias, F; Matsumoto, T; McGaughey, P L; Melnikov, E; Merschmeyer, M; Messer, F; Messer, M; 
Miake, Y; Miller, T E; Milov, A; Mioduszewski, S; Mischke, R E; Mishra, G C; Mitchell, J T; Mohanty, A K; Morrison, D P; Moss, J M; Mühlbacher, F; Muniruzzaman, M; Murata, J; Nagamiya, S; Nagasaka, Y; Nagle, J L; Nakada, Y; Nandi, B K; Newby, J; Nikkinen, L; Nilsson, P; Nishimura, S; Nyanin, A S; Nystrand, J; O'Brien, E; Ogilvie, C A; Ohnishi, H; Ojha, I D; Ono, M; Onuchin, V; Oskarsson, A; Osterman, L; Otterlund, I; Oyama, K; Paffrath, L; Palounek, A P T; Pantuev, V S; Papavassiliou, V; Pate, S F; Peitzmann, T; Petridis, A N; Pinkenburg, C; Pisani, R P; Pitukhin, P; Plasil, F; Pollack, M; Pope, K; Purschke, M L; Ravinovich, I; Read, K F; Reygers, K; Riabov, V; Riabov, Y; Rosati, M; Rose, A A; Ryu, S S; Saito, N; Sakaguchi, A; Sakaguchi, T; Sako, H; Sakuma, T; Samsonov, V; Sangster, T C; Santo, R; Sato, H D; Sato, S; Sawada, S; Schlei, B R; Schutz, Y; Semenov, V; Seto, R; Shea, T K; Shein, I; Shibata, T-A; Shigaki, K; Shiina, T; Shin, Y H; Sibiriak, I G; Silvermyr, D; Sim, K S; Simon-Gillo, J; Singh, C P; Singh, V; Sivertz, M; Soldatov, A; Soltz, R A; Sorensen, S; Stankus, P W; Starinsky, N; Steinberg, P; Stenlund, E; Ster, A; Stoll, S P; Sugioka, M; Sugitate, T; Sullivan, J P; Sumi, Y; Sun, Z; Suzuki, M; Takagui, E M; Taketani, A; Tamai, M; Tanaka, K H; Tanaka, Y; Taniguchi, E; Tannenbaum, M J; Thomas, J; Thomas, J H; Thomas, T L; Tian, W; Tojo, J; Torii, H; Towell, R S; Tserruya, I; Tsuruoka, H; Tsvetkov, A A; Tuli, S K; Tydesjö, H; Tyurin, N; Ushiroda, T; van Hecke, H W; Velissaris, C; Velkovska, J; Velkovsky, M; Vinogradov, A A; Volkov, M A; Vorobyov, A; Vznuzdaev, E; Wang, H; Watanabe, Y; White, S N; Witzig, C; Wohn, F K; Woody, C L; Xie, W; Yagi, K; Yokkaichi, S; Young, G R; Yushmanov, I E; Zajc, W A; Zhang, Z; Zhou, S
2002-01-14
Transverse momentum spectra for charged hadrons and for neutral pions in the range 1 GeV/c
Bottiglione, F; Carbone, G
2015-01-14
The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, making it possible to treat the problem through a mean-field-like approach. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is the Young contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (sinusoidal surface), and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.
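The two limiting wetting states discussed above follow textbook relations (Wenzel: cos θ* = r_W cos θ_Y; Cassie-Baxter: cos θ* = φ(1 + cos θ_Y) - 1), which a short sketch can evaluate. The parameter values below are illustrative and not taken from the paper, whose mean-field treatment interpolates between these limits for self-affine roughness:

```python
import numpy as np

def wenzel_angle(theta_y_deg, r_w):
    """Apparent contact angle in the Wenzel (full wetting) state:
    cos(theta*) = r_W * cos(theta_Y), clipped to the physical range."""
    c = np.clip(r_w * np.cos(np.radians(theta_y_deg)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

def cassie_angle(theta_y_deg, phi):
    """Apparent angle in the Cassie-Baxter (partial wetting) state for
    wetted solid fraction phi: cos(theta*) = phi*(1 + cos(theta_Y)) - 1."""
    c = phi * (1.0 + np.cos(np.radians(theta_y_deg))) - 1.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# A hydrophobic surface (theta_Y = 110 deg): roughness amplifies the angle
theta_w = wenzel_angle(110.0, r_w=2.0)
theta_c = cassie_angle(110.0, phi=0.2)
```

For θ_Y > 90°, both roughness (larger r_W) and a smaller wetted fraction φ push the apparent angle toward the 180° non-wetting limit reported in the abstract.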
Spectral sum rules for confining large-N theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherman, Aleksey; McGady, David A.; Yamazaki, Masahito
2016-06-17
We consider asymptotically free four-dimensional large-N gauge theories with massive fermionic and bosonic adjoint matter fields, compactified on squashed three-spheres, and examine their regularized large-N confined-phase spectral sums. The analysis is done in the limit of vanishing 't Hooft coupling, which is justified by taking the size of the compactification manifold to be small compared to the inverse strong scale Λ⁻¹. Our results motivate us to conjecture universal spectral sum rules for these large-N gauge theories.
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
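The filter-then-detect pipeline described above can be illustrated with a generic despiking filter for a measurement stream. The rolling-median/MAD rule and the simulated frequency readings below are assumptions for illustration, not the patented method's actual filter:

```python
import numpy as np

def despike(series, window=5, n_mad=5.0):
    """Flag gross outliers in a measurement stream by comparing each
    sample to a rolling median; deviations beyond n_mad robust scale
    units (1.4826 * MAD approximates one standard deviation) are marked
    as erroneous data to be removed before event detection."""
    x = np.asarray(series, float)
    pad = window // 2
    padded = np.pad(x, pad, mode='edge')
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    resid = x - med
    mad = np.median(np.abs(resid)) + 1e-12   # guard against zero MAD
    return np.abs(resid) > n_mad * 1.4826 * mad

rng = np.random.default_rng(2)
freq = 60.0 + 0.005 * rng.normal(size=200)   # simulated grid frequency (Hz)
freq[50] = 61.5                              # one spurious spike
bad = despike(freq)
```

Cleaning first matters because a sensor glitch of this size would otherwise look exactly like the grid events the downstream detector is meant to find.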
Clumpy filaments of the Chamaeleon I cloud: C18O mapping with the SEST
NASA Astrophysics Data System (ADS)
Haikala, L. K.; Harju, J.; Mattila, K.; Toriseva, M.
2005-02-01
The Chamaeleon I dark cloud (Cha I) has been mapped in C18O with an angular resolution of 1 arcmin using the SEST telescope. The large-scale structures previously observed with lower spatial resolution in the cloud turn into a network of clumpy filaments. The automatic Clumpfind routine developed by Williams et al. (1994) is used to identify individual clumps in a consistent way. Altogether 71 clumps were found, and the total mass of these clumps is 230 M⊙. The dense "cores" detected with the NANTEN telescope (Mizuno et al. 1999) and the very cold cores detected in the ISOPHOT serendipity survey (Tóth et al. 2000) form parts of these filaments but decompose into numerous "clumps". The filaments are preferentially oriented at right angles to the large-scale magnetic field in the region. We discuss the cloud structure, the physical characteristics of the clumps and the distribution of young stars. The observed clump mass spectrum is compared with the predictions of the turbulent fragmentation model of Padoan & Nordlund (2002). Agreement is found if fragmentation has been driven by very large-scale hypersonic turbulence, and if by now it has had time to dissipate into modestly supersonic turbulence in the interclump gas. According to numerical simulations, large-scale turbulence should have resulted in filamentary structures as seen in Cha I. The well-oriented magnetic field does not, however, support this picture, but suggests magnetically steered large-scale collapse. The origin of filaments and clumps in Cha I is thus controversial. A possible solution is that the characterization of the driving turbulence fails and that in fact different processes have been effective on small and large scales in this cloud. Based on observations collected at the European Southern Observatory, La Silla, Chile. FITS files are only available in electronic form at http://www.edpsciences.org
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and their analysis and control design become increasingly computationally demanding as the network size and the complexity of node systems and interactions grow. It is therefore a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus improving the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. 
Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
Emissions of nitrous oxide from biomass burning
NASA Technical Reports Server (NTRS)
Winstead, Edward L.; Cofer, Wesley R., III; Levine, Joel S.
1991-01-01
A study has been conducted comparing N2O results obtained over large prescribed fires or wildfires using 'grab sampling' with storage against N2O measurements made in near-real time. CO2-normalized emission ratios obtained initially from the laboratory fires are substantially lower than those obtained over large-scale biomass fires. Combustion may not be the only source of N2O in large fire smoke plumes; physical, chemical, and biochemical processes in the soil may be altered by large biomass fires, leading to large N2O releases.
Towards precision constraints on gravity with the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Bose, Benjamin; Koyama, Kazuya; Lewandowski, Matthew; Vernizzi, Filippo; Winther, Hans A.
2018-04-01
We compare analytical computations with numerical simulations for dark-matter clustering, in general relativity and in the normal branch of DGP gravity (nDGP). Our analytical framework is the Effective Field Theory of Large-Scale Structure (EFTofLSS), which we use to compute the one-loop dark-matter power spectrum, including the resummation of infrared bulk displacement effects. We compare this to a set of 20 COLA simulations at redshifts z = 0, z = 0.5, and z = 1, and fit the free parameter of the EFTofLSS, called the speed of sound, in both ΛCDM and nDGP at each redshift. At one loop at z = 0, the reach of the EFTofLSS is k_reach ≈ 0.14 Mpc⁻¹ for both ΛCDM and nDGP. Along the way, we compare two different infrared resummation schemes and two different treatments of the time dependence of the perturbative expansion, concluding that they agree to approximately 1% over the scales of interest. Finally, we use the ratio of the COLA power spectra to make a precision measurement of the difference between the speeds of sound in ΛCDM and nDGP, and verify that this is proportional to the modification of the linear coupling constant of the Poisson equation.
DEMNUni: massive neutrinos and the bispectrum of large scale structures
NASA Astrophysics Data System (ADS)
Ruggeri, Rossana; Castorina, Emanuele; Carbone, Carmelita; Sefusatti, Emiliano
2018-03-01
The main effect of massive neutrinos on the large-scale structure is a few percent suppression of matter perturbations on all scales below their free-streaming scale. This effect is of particular importance as it allows one to constrain the value of the sum of neutrino masses from measurements of the galaxy power spectrum. In this work, we present the first measurements of the next higher-order correlation function, the bispectrum, from N-body simulations that include massive neutrinos as particles. This is the simplest statistic characterising the non-Gaussian properties of the matter and dark-matter-halo distributions. We investigate, in the first place, the suppression due to massive neutrinos on the matter bispectrum, comparing our measurements with the simplest perturbation theory predictions, and find that treating neutrino perturbations at quadratic order in perturbation theory provides a good fit to the measurements in the simulations. On the other hand, as expected, a linear approximation for neutrino perturbations would lead to O(f_ν) errors on the total matter bispectrum at large scales. We then attempt an extension of previous results on the universality of linear halo bias in neutrino cosmologies to non-linear and non-local corrections, finding results consistent with the power spectrum analysis.
The impact of stellar feedback on the density and velocity structure of the interstellar medium
NASA Astrophysics Data System (ADS)
Grisdale, Kearn; Agertz, Oscar; Romeo, Alessandro B.; Renaud, Florent; Read, Justin I.
2017-04-01
We study the impact of stellar feedback in shaping the density and velocity structure of neutral hydrogen (H I) in disc galaxies. For our analysis, we carry out ˜4.6 pc resolution N-body+adaptive mesh refinement hydrodynamic simulations of isolated galaxies, set up to mimic a Milky Way and a Large and Small Magellanic Cloud. We quantify the density and velocity structure of the interstellar medium using power spectra and compare the simulated galaxies to observed H I in local spiral galaxies from THINGS (The H I Nearby Galaxy Survey). Our models with stellar feedback give an excellent match to the observed THINGS H I density power spectra. We find that kinetic energy power spectra in feedback-regulated galaxies, regardless of galaxy mass and size, show scalings in excellent agreement with supersonic turbulence (E(k) ∝ k-2) on scales below the thickness of the H I layer. We show that feedback influences the gas density field, and drives gas turbulence, up to large (kpc) scales. This is in stark contrast to density fields generated by large-scale gravity-only driven turbulence. We conclude that the neutral gas content of galaxies carries signatures of stellar feedback on all scales.
Vibrational Spectroscopic Studies of Reduced-Sensitivity RDX under Static Compression
NASA Astrophysics Data System (ADS)
Wong, Chak P.; Gump, Jared C.
2006-07-01
Explosive formulations with reduced-sensitivity RDX showed reduced shock sensitivity in the Naval Ordnance Laboratory (NOL) Large Scale Gap Test, compared with similar formulations using standard RDX. The molecular processes responsible for the reduced sensitivity are unknown, yet understanding them is crucial for formulation development. Vibrational spectroscopy at static high pressure may shed light on the mechanisms responsible for the reduced shock sensitivity shown by the NOL Large Scale Gap Test. I-RDX®, a form of reduced-sensitivity RDX, was subjected to static compression at ambient temperature in a Merrill-Bassett sapphire cell from ambient pressure to about 6 GPa. The spectroscopic techniques used were Raman and Fourier-transform IR (FTIR). The pressure dependence of the Raman mode frequencies of I-RDX® was determined and compared with that of standard RDX. The behavior of I-RDX® near the pressure at which standard RDX, at ambient temperature, undergoes a phase transition from the α to the γ polymorph is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yagnik, Gargey B.; Hansen, Rebecca L.; Korte, Andrew R.; ...
2016-08-30
Nanoparticles (NPs) have been suggested as efficient matrixes for small molecule profiling and imaging by laser-desorption ionization mass spectrometry (LDI-MS), but so far there has been no systematic study comparing different NPs in the analysis of various classes of small molecules. Here, we present a large scale screening of 13 NPs for the analysis of two dozen small metabolite molecules. Many NPs showed much higher LDI efficiency than organic matrixes in positive mode and some NPs showed comparable efficiencies for selected analytes in negative mode. Our results suggest that a thermally driven desorption process is a key factor for metal oxide NPs, but chemical interactions are also very important, especially for other NPs. Furthermore, the screening results provide a useful guideline for the selection of NPs in the LDI-MS analysis of small molecules.
How does the foraging behavior of large herbivores cause different associational plant defenses?
Huang, Yue; Wang, Ling; Wang, Deli; Zeng, De-Hui; Liu, Chen
2016-01-01
The attractant-decoy hypothesis predicts that focal plants can defend against herbivory by neighboring with preferred plant species when herbivores make decisions at the plant species scale. The repellent-plant hypothesis assumes that focal plants will gain protection by associating with nonpreferred neighbors when herbivores are selective at the patch scale. However, herbivores usually make foraging decisions at these scales simultaneously. The net outcome for focal plant vulnerability could depend on the spatial scale at which the magnitude of herbivore selectivity is stronger. We quantified and compared the within- and between-patch overall selectivity index (OSI) of sheep to examine the relationships between associational plant effects and herbivore foraging selectivity. We found that the sheep OSI was stronger at the within-patch than the between-patch scale, but focal plant vulnerability followed both hypotheses. Focal plants were defended against herbivory by preferred neighbors when the OSI difference between the two scales was large, and gained protection from nonpreferred neighbors when the OSI difference narrowed. Therefore, the difference in herbivore selectivity between the relevant scales results in different associational plant defenses. Our study suggests important implications for understanding plant-herbivore interactions and grassland management. PMID:26847834
Comparative Approaches to Genetic Discrimination: Chasing Shadows?
Joly, Yann; Feze, Ida Ngueng; Song, Lingqiao; Knoppers, Bartha M
2017-05-01
Genetic discrimination (GD) is one of the most pervasive issues associated with genetic research and its large-scale implementation. An increasing number of countries have adopted public policies to address this issue. Our research presents a worldwide comparative review and typology of these approaches. We conclude with suggestions for public policy development. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Chou, Yueh-Ching; Lin, Li-Chan; Pu, Cheng-Yun; Lee, Wan-Ping; Chang, Shu-Chuan
2008-01-01
Background: The disability policy in Taiwan has traditionally emphasized residential care in large institutions and, more recently, medium-sized group homes. This paper compares the relative costs, services provided and outcomes between the traditional institutions, medium-sized group homes and new small-scale community living units that were…
ERIC Educational Resources Information Center
Pinskaya, M. A.; Lenskaya, E. A.; Ponomareva, A. A.; Brun, I. V.; Kosaretsky, S. G.; Savelyeva, M. B.
2016-01-01
The Teaching and Learning International Survey (TALIS) is a large-scale and authoritative international study of teachers. It is conducted by the Organization for Economic Cooperation and Development (OECD) to collect and compare information about teachers and principals in different countries in such key areas as the training and professional…
ERIC Educational Resources Information Center
Yanagida, Takuya; Gradinger, Petra; Strohmeier, Dagmar; Solomontos-Kountouri, Olga; Trip, Simona; Bora, Carmen
2016-01-01
Many large-scale cross-national studies rely on a single-item measurement when comparing prevalence rates of traditional bullying, traditional victimization, cyberbullying, and cyber-victimization between countries. However, the reliability and validity of single-item measurement approaches are highly problematic and might be biased. Data from…
ERIC Educational Resources Information Center
Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D.
2012-01-01
In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…
LSI logic for phase-control rectifiers
NASA Technical Reports Server (NTRS)
Dolland, C.
1980-01-01
Signals for controlling a phase-controlled rectifier circuit are generated by combinatorial logic that can be implemented in large-scale integration (LSI). The LSI circuit saves space, weight, and assembly time compared to previous controls that employ one-shot multivibrators, latches, and capacitors. The LSI logic functions by sensing the three phases of the ac power source and by comparing actual currents with intended currents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš, E-mail: useljak@berkeley.edu
On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity, for a general nonlinear transformation. These biases can be expressed entirely in terms of the one-point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and the primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. The velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of −0.1, compared to the observed value of −0.13 ± 0.03. The bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations.
Matching to the observed values, we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.
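The claim that these biases follow from the one-point PDF alone can be illustrated numerically: for a Gaussian field δ and a transformation f, the large-scale bias response d⟨f(δ+Δ)⟩/dΔ equals ⟨f'(δ)⟩, and both are computable from the PDF. A sketch with a toy exponential transformation (not the paper's Lyman-α flux mapping):

```python
import numpy as np

def gauss_mean(f, sigma, npts=80):
    """<f(delta)> for delta ~ N(0, sigma^2), via Gauss-Hermite quadrature
    (probabilists' convention, weight exp(-x^2/2))."""
    x, w = np.polynomial.hermite_e.hermegauss(npts)
    return np.sum(w * f(sigma * x)) / np.sqrt(2*np.pi)

def response_bias(f, sigma, eps=1e-5):
    """d<f(delta + Delta)>/dDelta at Delta=0: the large-scale bias of the
    transformed field, obtained from the one-point PDF alone."""
    return (gauss_mean(lambda d: f(d + eps), sigma)
            - gauss_mean(lambda d: f(d - eps), sigma)) / (2*eps)

# Toy transformation: f = exp, for which <f'> = <f> = exp(sigma^2/2)
sigma = 0.5
b_response = response_bias(np.exp, sigma)
b_from_derivative = gauss_mean(np.exp, sigma)  # <f'(delta)> since f' = f
```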
Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow
Suzuki, Kazuya; Miyazaki, Makito; Takagi, Jun; Itabashi, Takeshi; Ishiwata, Shin’ichi
2017-01-01
Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow for which the length scale and timescale depend on the existence of geometrical constraints. At the lower activity of dynein, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows on the length scale and timescale that were 10- to 100-fold longer than the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow. PMID:28265076
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability for the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) were calculated on a small scale using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
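The constant-Froude scale-up rule used above can be sketched. Assuming the common blending convention Fr = n²r/g (definitions vary by author, so this is an illustrative choice), matching Fr across scales fixes the large-scale rotation speed:

```python
import math

def scaled_rotation_speed(n_small, r_small, r_large):
    """Rotation speed at the large scale that keeps the Froude number
    Fr = n^2 r / g constant (one common blending convention; g cancels)."""
    return n_small * math.sqrt(r_small / r_large)

# Quadrupling the blender radius halves the required rotation speed
n_large = scaled_rotation_speed(30.0, 0.1, 0.4)  # rev/s, m, m
```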
Lix, Lisa M; Wu, Xiuyun; Hopman, Wilma; Mayo, Nancy; Sajobi, Tolulope T; Liu, Juxin; Prior, Jerilynn C; Papaioannou, Alexandra; Josse, Robert G; Towheed, Tanveer E; Davison, K Shawn; Sawatzky, Richard
2016-01-01
Self-reported health status measures, like the Short Form 36-item Health Survey (SF-36), can provide rich information about the overall health of a population and its components, such as physical, mental, and social health. However, differential item functioning (DIF), which arises when population sub-groups with the same underlying (i.e., latent) level of health have different measured item response probabilities, may compromise the comparability of these measures. The purpose of this study was to test for DIF on the SF-36 physical functioning (PF) and mental health (MH) sub-scale items in a Canadian population-based sample. Study data were from the prospective Canadian Multicentre Osteoporosis Study (CaMos), which collected baseline data in 1996-1997. DIF was tested using a multiple indicators multiple causes (MIMIC) method. Confirmatory factor analysis defined the latent variable measurement model for the item responses and latent variable regression with demographic and health status covariates (i.e., sex, age group, body weight, self-perceived general health) produced estimates of the magnitude of DIF effects. The CaMos cohort consisted of 9423 respondents; 69.4% were female and 51.7% were less than 65 years. Eight of 10 items on the PF sub-scale and four of five items on the MH sub-scale exhibited DIF. Large DIF effects were observed on PF sub-scale items about vigorous and moderate activities, lifting and carrying groceries, walking one block, and bathing or dressing. On the MH sub-scale items, all DIF effects were small or moderate in size. SF-36 PF and MH sub-scale scores were not comparable across population sub-groups defined by demographic and health status variables due to the effects of DIF, although the magnitude of this bias was not large for most items. We recommend testing and adjusting for DIF to ensure comparability of the SF-36 in population-based investigations.
Azhar, Badrul; Saadun, Norzanalia; Prideaux, Margi; Lindenmayer, David B
2017-12-01
Most palm oil currently available in global markets is sourced from certified large-scale plantations. Comparatively little is sourced from (typically uncertified) smallholders. We argue that sourcing sustainable palm oil should not be determined by commercial certification alone and that the certification process should be revisited. There are so-far unrecognized benefits of sourcing palm oil from smallholders that should be considered if genuine biodiversity conservation is to be a foundation of 'environmentally sustainable' palm oil production. Despite a lack of certification, smallholder production is often more biodiversity-friendly than certified production from large-scale plantations. Sourcing palm oil from smallholders also alleviates poverty among rural farmers, promoting better conservation outcomes. Yet, certification schemes - the current measure of 'sustainability' - are financially accessible only for large-scale plantations that operate as profit-driven monocultures. Industrial palm oil is expanding rapidly in regions with weak environmental laws and enforcement. This warrants the development of an alternative certification scheme for smallholders. Greater attention should be directed to deforestation-free palm oil production in smallholdings, where production is less likely to cause large scale biodiversity loss. These small-scale farmlands in which palm oil is mixed with other crops should be considered by retailers and consumers who are interested in promoting sustainable palm oil production. Simultaneously, plantation companies should be required to make their existing production landscapes more compatible with enhanced biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars
NASA Astrophysics Data System (ADS)
Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco
2016-04-01
Since the first jet X-ray detections sixteen years ago, the adopted paradigm for the X-ray emission has been the IC/CMB model, which requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc scale jets. I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the IC/CMB model. Our work constrains the jet Lorentz factors to less than ~a few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, it will constitute a paradigm shift in our understanding of powerful large scale jets and their role in the universe. Two important findings emerging from our work will also be discussed: (i) the solid angle-integrated luminosity of the large scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet, and (ii) the large scale jets are the main source of TeV photons in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.
NASA Astrophysics Data System (ADS)
Huchtmeier, W. K.; Richter, O. G.; Materne, J.
1981-09-01
The large-scale structure of the universe is dominated by clustering. Most galaxies seem to be members of pairs, groups, clusters, and superclusters. To that degree we are able to recognize a hierarchical structure of the universe. Our local group of galaxies (LG) is centred on two large spiral galaxies: the Andromeda nebula and our own galaxy. Three smaller galaxies - like M 33 - and at least 23 dwarf galaxies (Kraan-Korteweg and Tammann, 1979, Astronomische Nachrichten, 300, 181) can be found in the environment of these two large galaxies. Neighbouring groups have comparable sizes (about 1 Mpc in extent) and comparable numbers of bright members. Small dwarf galaxies cannot at present be observed at great distances.
The Panchromatic Comparative Exoplanetary Treasury Program
NASA Astrophysics Data System (ADS)
Sing, David
2016-10-01
HST has played the definitive role in the characterization of exoplanets, and from the first planets available we have learned that their atmospheres are incredibly diverse. The large number of transiting planets now available has prompted a new era of atmospheric studies, where wide-scale comparative planetology is now possible. The atmospheric chemistry of cloud/haze formation and atmospheric mass loss are major outstanding issues in the field of exoplanets, and we seek to make progress by gaining insight into their underlying physical processes through comparative studies. Here we propose to use Hubble's full spectroscopic capabilities to produce the first large-scale, simultaneous UVOIR comparative study of exoplanets. With full wavelength coverage, an entire planet's atmosphere can be probed simultaneously, and with sufficient numbers of planets we can statistically compare their features with physical parameters for the first time. This panchromatic program will build a lasting HST legacy, providing the UV and blue-optical spectra unavailable to JWST. From these observations, chemistry over a wide range of physical environments will be probed, from the hottest condensates to much cooler planets where photochemical hazes could be present. Constraints on aerosol size and composition will help unlock our understanding of clouds and how they are suspended at such high altitudes. Notably, there have been no large transiting UV HST programs, and this panchromatic program will provide a fundamental legacy contribution to the study of atmospheric escape from small exoplanets, where mass loss can be significant and have a major impact on the evolution of the planet itself.
ERIC Educational Resources Information Center
Wilkin, John P.
2017-01-01
The 1961 Copyright Office study on renewals, authored by Barbara Ringer, has cast an outsized influence on discussions of the U.S. 1923-1963 public domain. As more concrete data emerge from initiatives such as the large-scale determination process in the Copyright Review Management System (CRMS) project, questions are raised about the reliability…
Sensitivity simulations of superparameterised convection in a general circulation model
NASA Astrophysics Data System (ADS)
Rybka, Harald; Tost, Holger
2015-04-01
Cloud Resolving Models (CRMs), covering horizontal grid spacings from a few hundred meters up to a few kilometers, have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically representing cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach has been developed to connect these two types of models, which act on different scales, in order to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration within each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainty regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and type is examined while changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we evaluate the radiation balance with the new model configuration and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle.
Furthermore, the subgrid variability (individual CRM cell output) is analysed to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.
Experimental investigation of an ejector-powered free-jet facility
NASA Technical Reports Server (NTRS)
Long, Mary Jo
1992-01-01
NASA Lewis Research Center's (LeRC) newly developed Nozzle Acoustic Test Rig (NATR) is a large free-jet test facility powered by an ejector system. In order to assess the pumping performance of this ejector concept and determine its sensitivity to various design parameters, a 1/5-scale model of the NATR was built and tested prior to the operation of the actual facility. This paper discusses the results of the 1/5-scale model tests and compares them with the findings from the full-scale tests.
Solving large scale unit dilemma in electricity system by applying commutative law
NASA Astrophysics Data System (ADS)
Legino, Supriadi; Arianto, Rakhmat
2018-03-01
The conventional system, pooling resources with large centralized power plants interconnected as a network, provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and additional problems have emerged that hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ ℝ, the problems associated with the conventional system depicted above can be reduced. The idea of having small units but many power plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that could be capitalized on under proper assumptions. This study compares the costs and benefits of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can be applied to solve various problems associated with a large-scale conventional system. The simulation shows that LK provides more value if its plants operate for less than 11 hours as peaker or load-follower power plants to improve the load-curve balance of the power system. The simulation also indicates that the investment cost of an LK plant should be optimized in order to minimize the plant investment cost. This study indicates that the benefit of the economies-of-scale principle does not always apply to every condition, particularly if the portion of intangible costs and benefits is relatively high.
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
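The φ-function machinery underlying EPIRK-type methods can be illustrated with the simplest exponential integrator, exponential Euler. This dense small-matrix sketch is not the EPIC implementation, which uses Krylov and adaptive techniques for large systems; it only shows why such methods handle stiffness well.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi1(M):
    """phi_1(M) = M^{-1} (e^M - I), the first phi-function used by
    exponential integrators (dense, small-matrix version)."""
    return solve(M, expm(M) - np.eye(M.shape[0]))

def exponential_euler_step(u, h, A, g):
    """One step of exponential Euler for u' = A u + g(u):
    u_{n+1} = u_n + h * phi1(h A) (A u_n + g(u_n))."""
    return u + h * phi1(h * A) @ (A @ u + g(u))

# Stiff linear test problem u' = A u: exponential Euler is exact here for
# any step size, unlike explicit Runge-Kutta methods, which would need
# h ~ 1/1000 for stability.
A = np.array([[-1000.0, 1.0], [0.0, -0.5]])
u0 = np.array([1.0, 1.0])
h = 0.1
u1 = exponential_euler_step(u0, h, A, lambda u: np.zeros_like(u))
```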
Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience
NASA Technical Reports Server (NTRS)
Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander
2018-01-01
This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
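The thermodynamic-stability screen described above reduces, per compound, to a formation-energy calculation relative to elemental references. A sketch with purely illustrative energies (not values from the study):

```python
def formation_energy_per_atom(e_compound, n_atoms, references):
    """Formation energy per atom: (E_total - sum_i n_i * mu_i) / N,
    where `references` maps element -> (atom count in compound,
    reference energy per atom mu_i). Negative values indicate stability
    against decomposition into the elemental references."""
    e_ref = sum(count * mu for count, mu in references.values())
    return (e_compound - e_ref) / n_atoms

# Illustrative numbers only (eV), chosen for the example
dH = formation_energy_per_atom(
    e_compound=-42.0, n_atoms=12,
    references={"Li": (4, -1.9), "Mg": (4, -1.5), "Al": (4, -3.7)})
```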
Experimental study of detonation of large-scale powder-droplet-vapor mixtures
NASA Astrophysics Data System (ADS)
Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.
2018-05-01
Large-scale experiments were carried out to investigate the detonation performance of a 1600-m3 ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernably increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.
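The scaled distance quoted above (m/kg^{1/3}) is the standard Hopkinson-Cranz blast-similarity variable, which can be sketched directly:

```python
def scaled_distance(r_m, w_kg):
    """Hopkinson-Cranz scaled distance Z = R / W^(1/3) in m/kg^(1/3):
    blast waves from different charge masses produce similar overpressure
    at equal Z."""
    return r_m / w_kg ** (1.0 / 3.0)

# 6 m from a 1 kg charge and 60 m from a 1000 kg charge share Z = 6
z_small = scaled_distance(6.0, 1.0)
z_large = scaled_distance(60.0, 1000.0)
```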
Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor
Sheu, Jonathan; Beltzer, Jim; Fury, Brian; Wilczek, Katarzyna; Tobin, Steve; Falconer, Danny; Nolta, Jan; Bauer, Gerhard
2015-01-01
Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm2 flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation. PMID:26151065
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Shipeng; Wang, Minghuai; Ghan, Steven J.
Aerosol–cloud interactions continue to constitute a major source of uncertainty for the estimate of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbation within dynamic regimes is found to exhibit a large spread among these GCMs. It is in regimes of strong large-scale ascent (ω500 < −25 hPa day^-1) and low clouds (stratocumulus and trade wind cumulus) where the models differ most. Shortwave aerosol indirect forcing is also found to differ significantly among different regimes. Shortwave aerosol indirect forcing in ascending regimes is close to that in subsidence regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rate (> 0.1 mm day^-1) contributes the most to the total aerosol indirect forcing (from 64 to nearly 100 %). Results show that the uncertainty in AIE is even larger within specific dynamical regimes compared to the uncertainty in its global mean values, pointing to the need to reduce the uncertainty in AIE in different dynamical regimes.
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
Vickers, D.; Thomas, C.
2014-05-13
Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.
A plea for a global natural history collection - online
USDA-ARS?s Scientific Manuscript database
Species are the currency of comparative biology: scientists from many biological disciplines, including community ecology, conservation biology, pest management, and biological control rely on scientifically sound, objective species data. However, large-scale species identifications are often not fe...
Compactified cosmological simulations of the infinite universe
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-06-01
We present a novel N-body simulation method that compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to follow the evolution of the large-scale structure. Our approach eliminates the need for periodic boundary conditions, a mere numerical convenience which is not supported by observation and which modifies the law of force on large scales in an unrealistic fashion. We demonstrate that our method outclasses standard simulations executed on workstation-scale hardware in dynamic range; it is balanced in following a comparable number of high and low k modes, and its fundamental geometry and topology match observations. Our approach is also capable of simulating an expanding, infinite universe in static coordinates with Newtonian dynamics. The price of these achievements is that most of the simulated volume has smoothly varying mass and spatial resolution, an approximation that carries different systematics than periodic simulations. Our initial implementation of the method is called StePS, which stands for Stereographically projected cosmological simulations. It uses stereographic projection for space compactification and a naive O(N^2) force calculation, which nevertheless arrives at a correlation function of the same quality faster than any standard (tree or P3M) algorithm with similar spatial and mass resolution. The O(N^2) force calculation is easy to adapt to modern graphics cards, hence our code can function as a high-speed prediction tool for modern large-scale surveys. To learn about the limits of the respective methods, we compare StePS with GADGET-2 running matching initial conditions.
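The compactification idea rests on the inverse stereographic projection, which maps all of R^3 onto a sphere of radius R with spatial infinity collapsed to a single pole. This is the textbook formula, not the StePS code itself:

```python
# Sketch: inverse stereographic projection of a point in R^3 onto a
# 3-sphere of radius R (the standard formula; StePS-style compactification).

def inverse_stereographic(x, y, z, R=1.0):
    r2 = x * x + y * y + z * z
    d = r2 + R * R
    s = 2.0 * R * R / d
    # The fourth coordinate places the image on the sphere; as r -> infinity
    # the image approaches the single pole (0, 0, 0, R).
    return (s * x, s * y, s * z, R * (r2 - R * R) / d)

p = inverse_stereographic(3.0, 4.0, 12.0, R=1.0)
# The image always satisfies x1^2 + x2^2 + x3^2 + x4^2 = R^2.
```

Because distant regions are squeezed toward the pole, resolution necessarily varies smoothly across the compactified volume, which is the systematic trade-off noted in the abstract.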
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A genetic algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
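The GA baseline in the comparison above works on exactly this kind of problem: pick a treatment level per subwatershed to minimize cost subject to a load target. The toy below illustrates the mechanics only; the levels, costs, loads, and target are all made up, and this is neither the study's GA configuration nor NIMS.

```python
import random

# Sketch: a minimal genetic algorithm for BMP level selection, of the kind
# the study benchmarks NIMS against. All numbers are invented for illustration.

random.seed(1)

N = 8                                    # toy number of subwatersheds
LEVELS = [0, 1, 2, 3]                    # discrete BMP treatment levels
COST = [1.0, 3.0, 6.0, 10.0]             # cost per level (arbitrary units)
REMOVED = [0.0, 2.0, 3.5, 4.5]           # pollutant load removed per level
TARGET = 18.0                            # total load that must be removed

def fitness(ind):
    removed = sum(REMOVED[g] for g in ind)
    penalty = 1e3 * max(0.0, TARGET - removed)   # infeasibility penalty
    return sum(COST[g] for g in ind) + penalty

def evolve(pop_size=40, gens=200):
    pop = [[random.choice(LEVELS) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # mutation
                child[random.randrange(N)] = random.choice(LEVELS)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Each fitness evaluation here is trivial; in the study each one is a full continuous watershed simulation, which is why the GA runs took weeks while NIMS, by mapping the response surface with interval coefficients, finished in minutes.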
The effects of streamwise concave curvature on turbulent boundary layer structure
NASA Astrophysics Data System (ADS)
Jeans, A. H.; Johnston, J. P.
1982-06-01
Concave curvature has a relatively large, unpredictable effect on turbulent boundary layers. Some, but not all previous studies suggest that a large-scale, stationary array of counter-rotating vortices exists within the turbulent boundary layer on a concave wall. The objective of the present study was to obtain a qualitative model of the flow field in order to increase our understanding of the underlying physics. A large free-surface water channel was constructed in order to perform a visual study of the flow. Streamwise components of mean velocity and turbulence intensity were measured using a hot film anemometer. The upstream boundary layer was spanwise uniform with a momentum-thickness-to-radius-of-curvature ratio of 0.05. Compared to flat wall flow, large-scale, randomly distributed sweeps and ejections were seen in the boundary layer on the concave wall. The sweeps appear to suppress the normal mechanism for turbulence production near the wall by inhibiting the bursting process. The ejections appear to enhance turbulence production in the outer layers as the low speed fluid convected from regions near the wall interacts with the higher speed fluid farther out. The large-scale structures did not occur at fixed spanwise locations, and could not be called roll cells or vortices.
NASA Astrophysics Data System (ADS)
Madaria, Anuj R.; Kumar, Akshay; Zhou, Chongwu
2011-06-01
The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large scale high quality patterned films of silver nanowires. The transparency and conductivity of the films were controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σDC/σOp, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared with other candidate alternatives to doped metal oxide films. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows a high performance using another commonly used figure-of-merit, ΦTE. Also, Ag nanowire film/PET shows good mechanical flexibility and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.
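The σDC/σOp figure of merit can be estimated from the measured transparency T and sheet resistance Rs via the commonly used thin-film relation T = (1 + Z0 σOp / (2 Rs σDC))^-2, with Z0 the impedance of free space. This is the standard textbook form; the paper may use a variant definition, so the sketch below is only indicative:

```python
# Sketch: estimating sigma_dc/sigma_op from transparency and sheet resistance
# using the standard thin-film relation (a common convention, assumed here).

Z0 = 376.73  # impedance of free space, ohm

def figure_of_merit(T, Rs):
    """Invert T = (1 + Z0/(2*Rs) * sigma_op/sigma_dc)**-2 for sigma_dc/sigma_op."""
    return Z0 / (2.0 * Rs * (T ** -0.5 - 1.0))

# The flexible PET film quoted above: T = 85%, Rs = 33 ohm/sq.
fom = figure_of_merit(0.85, 33.0)
```

For the PET film this gives a value near 70, i.e. in the same range as the films discussed in the abstract; higher is better, since it means more DC conduction per unit of optical absorption.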
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to a poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.
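The sub-graph centric idea can be illustrated with connected components: each partition first resolves its own components using ordinary shared-memory computation, and only the boundary edges feed the cross-partition merge. The toy below is a single-process sketch of that pattern, not the GoFFish API:

```python
# Sketch: sub-graph centric connected components (toy single-process model).
# Each partition labels its own components locally; only boundary edges
# between partitions need the second, merge, step.

def local_components(edges, vertices):
    """Union-find over one partition -> {vertex: component label}."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return {v: find(v) for v in vertices}

def connected_components(partitions, boundary_edges):
    """partitions: list of (vertices, intra_edges); boundary_edges cross them."""
    labels = {}
    for vertices, edges in partitions:        # sub-graph centric local step
        labels.update(local_components(edges, vertices))
    # Merge step: treat each local component label as a super-vertex.
    super_vertices = set(labels.values())
    super_edges = [(labels[a], labels[b]) for a, b in boundary_edges]
    merged = local_components(super_edges, super_vertices)
    return {v: merged[c] for v, c in labels.items()}

parts = [({1, 2, 3}, [(1, 2)]), ({4, 5}, [(4, 5)])]
cc = connected_components(parts, boundary_edges=[(3, 4)])
# Vertex 3 joins the 4-5 component across the partition boundary.
```

Because the merge operates on component labels rather than individual vertices, the iterative cross-machine phase touches far fewer items than a vertex centric superstep would, which is the compute-to-communication advantage the abstract describes.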
Sacchet, Matthew D; Ho, Tiffany C; Connolly, Colm G; Tymofiyeva, Olga; Lewinn, Kaja Z; Han, Laura Km; Blom, Eva H; Tapert, Susan F; Max, Jeffrey E; Frank, Guido Kw; Paulus, Martin P; Simmons, Alan N; Gotlib, Ian H; Yang, Tony T
2016-11-01
Major depressive disorder (MDD) often emerges during adolescence, a critical period of brain development. Recent resting-state fMRI studies of adults suggest that MDD is associated with abnormalities within and between resting-state networks (RSNs). Here we tested whether adolescent MDD is characterized by abnormalities in interactions among RSNs. Participants were 55 unmedicated adolescents diagnosed with MDD and 56 matched healthy controls. Functional connectivity was mapped using resting-state fMRI. We used the network-based statistic (NBS) to compare large-scale connectivity between groups and also compared the groups on graph metrics. We further assessed whether group differences identified using nodes defined from functionally defined RSNs were also evident when using anatomically defined nodes. In addition, we examined relations between network abnormalities and depression severity and duration. Finally, we compared intranetwork connectivity between groups and assessed the replication of previously reported MDD-related abnormalities in connectivity. The NBS indicated that, compared with controls, depressed adolescents exhibited reduced connectivity (p<0.024, corrected) between a specific set of RSNs, including components of the attention, central executive, salience, and default mode networks. The NBS did not identify group differences in network connectivity when using anatomically defined nodes. Longer duration of depression was significantly correlated with reduced connectivity in this set of network interactions (p=0.020, corrected), specifically with reduced connectivity between components of the dorsal attention network. The dorsal attention network was also characterized by reduced intranetwork connectivity in the MDD group. Finally, we replicated previously reported abnormal connectivity in individuals with MDD. In summary, adolescents with MDD show hypoconnectivity between large-scale brain networks compared with healthy controls. 
Given that connectivity among these networks typically increases during adolescent neurodevelopment, these results suggest that adolescent depression is associated with abnormalities in neural systems that are still developing during this critical period.
Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy
2012-11-01
Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. 
Copyright © 2012 Elsevier B.V. All rights reserved.
Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests
NASA Technical Reports Server (NTRS)
Potter, C. S.; Peterson, David L. (Technical Monitor)
1997-01-01
Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets and effects of long-term climate variability on biosphere-atmosphere exchange.
Macroecological patterns of phytoplankton in the northwestern North Atlantic Ocean.
Li, W K W
2002-09-12
Many issues in biological oceanography are regional or global in scope; however, there are not many data sets of extensive areal coverage for marine plankton. In microbial ecology, a fruitful approach to large-scale questions is comparative analysis wherein statistical data patterns are sought from different ecosystems, frequently assembled from unrelated studies. A more recent approach termed macroecology characterizes phenomena emerging from large numbers of biological units by emphasizing the shapes and boundaries of statistical distributions, because these reflect the constraints on variation. Here, I use a set of flow cytometric measurements to provide macroecological perspectives on North Atlantic phytoplankton communities. Distinct trends of abundance in picophytoplankton and both small and large nanophytoplankton underlaid two patterns. First, total abundance of the three groups was related to assemblage mean-cell size according to the 3/4 power law of allometric scaling in biology. Second, cytometric diversity (an ataxonomic measure of assemblage entropy) was maximal at intermediate levels of water column stratification. Here, intermediate disturbance shapes diversity through an equitable distribution of cells in size classes, from which arises a high overall biomass. By subsuming local fluctuations, macroecology reveals meaningful patterns of phytoplankton at large scales.
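The 3/4 power law mentioned above is a statement about the slope of abundance against mean cell size on log-log axes. A minimal sketch of checking such a slope by least squares; the data points are synthetic, generated to follow an exact 3/4 law, and are not measurements from the study.

```python
import math

# Sketch: recovering an allometric exponent by ordinary least squares on
# log-transformed data. Synthetic points obeying N = c * M**(-3/4) exactly.

def loglog_slope(sizes, abundances):
    xs = [math.log(s) for s in sizes]
    ys = [math.log(a) for a in abundances]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

sizes = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]    # assemblage mean cell size (a.u.)
abund = [1e6 * m ** -0.75 for m in sizes]   # exact 3/4-law abundances
slope = loglog_slope(sizes, abund)          # -> -0.75
```

In real flow cytometric data the points scatter around the line, and it is the fitted slope, not any single sample, that is compared against the allometric prediction.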
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan Vee; Delgado-Frias, Jose
Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three to five year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor networks and should configure the network in a way such that these higher level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher level functions with significant inherent benefits, such as but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.
Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions
NASA Astrophysics Data System (ADS)
O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffied, Justin
2017-04-01
As hydraulic modelling moves to increasingly large spatial domains, it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have been including reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoir operations in hydrological models are not undertaken at run time. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin (~400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. 
The results show that we are able to obtain variation in the Lake Volta water levels and that including the dam operations and Lake Volta has significant impacts on the water levels across the domain.
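Reservoir operating rules of the kind bundled into the module above are, at their simplest, a storage mass balance with a release policy. The sketch below is a generic illustration of that structure; it is not the LISFLOOD-FP reservoir module nor the Akosombo rule curve, and all volumes are invented.

```python
# Sketch: one-step reservoir mass balance with a simple demand-based rule
# (a generic illustration; real operating rules are considerably richer).

def step_reservoir(storage, inflow, demand, capacity, dead_storage):
    """Meet demand if live storage allows; spill anything above capacity.
    All volumes in consistent (arbitrary) units."""
    storage += inflow
    release = min(demand, max(0.0, storage - dead_storage))
    storage -= release
    spill = max(0.0, storage - capacity)
    storage -= spill
    return storage, release + spill

s = 50.0
outflows = []
for q_in in [10.0, 120.0, 5.0, 0.0]:      # assumed inflow series
    s, q_out = step_reservoir(s, q_in, demand=20.0,
                              capacity=100.0, dead_storage=10.0)
    outflows.append(q_out)
# Spill occurs only in the high-inflow step; releases otherwise track demand.
```

Embedding such a rule in the hydraulic model, rather than in the off-line hydrological model, is what lets the simulated lake extent and shoreline respond dynamically to the operations.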
NASA Astrophysics Data System (ADS)
Mesinger, F.
The traditional views hold that high-resolution limited area models (LAMs) downscale large-scale lateral boundary information, and that the predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited area Eta Model at NCEP has an additional handicap of being driven by LBCs from the previous Avn global model run, which at 0000 and 1200 UTC is estimated to amount to about an 8 h loss in accuracy. This should make its skill relative to that of the Avn deteriorate even faster. These views are challenged by various Eta results, including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.
Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J
2014-01-01
Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments that score animals on parameters ranked along a narrow scale of severity. Automated open-field analysis using a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open-field analysis showed significant differences in several parameters. Furthermore, large-cohort analysis also demonstrated increased sensitivity with automated open-field analysis versus the Bederson and Garcia scales. These early data indicate that use of automated open-field analysis software may provide a more sensitive assessment than the traditional Bederson and Garcia scales.
Resolving the substructure of molecular clouds in the LMC
NASA Astrophysics Data System (ADS)
Wong, Tony; Hughes, Annie; Tokuda, Kazuki; Indebetouw, Remy; Wojciechowski, Evan; Bandurski, Jeffrey; MC3 Collaboration
2018-01-01
We present recent wide-field CO and 13CO mapping of giant molecular clouds in the Large Magellanic Cloud with ALMA. Our sample exhibits diverse star-formation properties, and reveals comparably diverse molecular cloud properties including surface density and velocity dispersion at a given scale. We first present the results of a recent study comparing two GMCs at the extreme ends of the star formation activity spectrum. Our quiescent cloud exhibits 10 times lower surface density and 5 times lower velocity dispersion than the active 30 Doradus cloud, yet in both clouds we find a wide range of line widths at the smallest resolved scales, spanning nearly the full range of line widths seen at all scales. This suggests an important role for feedback on sub-parsec scales, while the energetics on larger scales are dominated by clump-to-clump relative velocities. We then extend our analysis to four additional clouds that exhibit intermediate levels of star formation activity.
NASA Astrophysics Data System (ADS)
Kleeorin, N.
2018-06-01
We discuss a mean-field theory of the generation of large-scale vorticity in a rotating density stratified developed turbulence with inhomogeneous kinetic helicity. We show that the large-scale non-uniform flow is produced due to either a combined action of a density stratified rotating turbulence and uniform kinetic helicity or a combined effect of a rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear, and in turn its interaction with the small-scale turbulence causes an excitation of the large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and Reynolds stress-induced generation of the mean vorticity. The latter is due to the effect of large-scale shear on the Reynolds stress. A fast rotation suppresses this large-scale instability.
IslandFAST: A Semi-numerical Tool for Simulating the Late Epoch of Reionization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yidong; Chen, Xuelei; Yue, Bin
2017-08-01
We present the algorithm and main results of our semi-numerical simulation, islandFAST, which was developed from 21cmFAST and designed for the late stage of reionization. The islandFAST simulation predicts the evolution and size distribution of the large-scale underdense neutral regions (neutral islands), and we find that the late Epoch of Reionization proceeds very fast, showing a characteristic scale of the neutral islands at each redshift. Using islandFAST, we compare the impact of two types of absorption systems, i.e., the large-scale underdense neutral islands versus small-scale overdense absorbers, in regulating the reionization process. The neutral islands dominate the morphology of the ionization field, while the small-scale absorbers dominate the mean free path of ionizing photons, and also delay and prolong the reionization process. With our semi-numerical simulation, the evolution of the ionizing background can be derived self-consistently given a model for the small absorbers. The hydrogen ionization rate of the ionizing background is reduced by an order of magnitude in the presence of dense absorbers.
NASA Technical Reports Server (NTRS)
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white-noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white-noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
NASA Technical Reports Server (NTRS)
Dittmar, J. H.
1985-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.
Self-organizing Large-scale Structures in Earth's Foreshock Waves
NASA Astrophysics Data System (ADS)
Ganse, U.; Pfau-Kempf, Y.; Turc, L.; Hoilijoki, S.; von Alfthan, S.; Vainio, R. O.; Palmroth, M.
2017-12-01
Earth's foreshock is populated by plasma waves in the ULF regime, assumed to be caused by wave instabilities of shock-reflected particle beams. While in-situ observation of these waves has provided plentiful data of their amplitudes, frequencies, obliquities and relation to local plasma conditions, global-scale structures are hard to grasp from observation data alone. The hybrid-Vlasov simulation system Vlasiator, designed for kinetic modeling of the Earth's magnetosphere, has been employed to study foreshock formation under radial and near-radial IMF conditions on global scales. Structures arising in the foreshock can be comprehensively studied and directly compared to observation results. Our modeling results show that foreshock waves present emergent large-scale structures, in which regions of waves with similar phase exist. At the interfaces of these regions ("spines") we observe high wave obliquity, higher beam densities and lower beam velocities than inside them. We characterize these apparently self-organizing structures through the interplay between wave- and beam properties and present the microphysical mechanisms involved in their creation.
Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A
NASA Technical Reports Server (NTRS)
Dittmar, James H.; Stang, David B.
1987-01-01
Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8- by 6-Foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out, at high helical tip Mach numbers points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.
Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2016-06-01
Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. 
The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.
NASA Astrophysics Data System (ADS)
Burov, E.; Guillou-Frottier, L.
2005-05-01
Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by elastic-brittle-ductile deformation, by a free upper surface and by the layered structure of the lithosphere. We make a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model in large-scale plume-lithosphere interaction. This model includes (i) a natural free-surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of the continental lithosphere. The numerical experiments demonstrate a number of important differences from the predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scales (300-400, 200-300 and 50-100 km). Mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporally spaced events observed in the East African rift system, as well as over the French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without invoking complex asthenospheric upwelling.
NASA Technical Reports Server (NTRS)
Jeong, Su-Jong; Schimel, David; Frankenberg, Christian; Drewry, Darren T.; Fisher, Joshua B.; Verma, Manish; Berry, Joseph A.; Lee, Jung-Eun; Joiner, Joanna
2016-01-01
This study evaluates the large-scale seasonal phenology and physiology of vegetation over northern high-latitude forests (40 deg - 55 deg N) during spring and fall by using remote sensing of solar-induced chlorophyll fluorescence (SIF), normalized difference vegetation index (NDVI) and an observation-based estimate of gross primary productivity (GPP) from 2009 to 2011. Based on phenology estimated from GPP, the growing season determined by the SIF time series is shorter in length than the growing season determined solely using NDVI. This is mainly due to the extended period of high NDVI values, as compared to SIF, by about 46 days (+/-11 days), indicating a large-scale seasonal decoupling of physiological activity and changes in greenness in the fall. In addition to phenological timing, mean seasonal NDVI and SIF have different responses to temperature changes throughout the growing season. We observed that both NDVI and SIF linearly increased with temperature increases throughout the spring. However, in the fall, although NDVI linearly responded to temperature increases, SIF and GPP did not, implying a seasonal hysteresis of SIF and GPP in response to temperature changes across boreal ecosystems throughout their growing season. Seasonal hysteresis of vegetation at large scales is consistent with the known phenomenon that light limits boreal forest ecosystem productivity in the fall. Our results suggest that continuing measurements from satellite remote sensing of both SIF and NDVI can help to understand the differences between, and the information carried by, seasonal variations in vegetation structure and greenness and in physiology at large scales across the critical boreal regions.
Comparing NICU teamwork and safety climate across two commonly used survey instruments
Profit, Jochen; Lee, Henry C; Sharek, Paul J; Kan, Peggy; Nisbet, Courtney C; Thomas, Eric J; Etchegaray, Jason M; Sexton, Bryan
2016-01-01
Background and objectives Measurement and our understanding of safety culture are still evolving. The objectives of this study were to assess variation in safety and teamwork climate in the neonatal intensive care unit (NICU) setting, and to compare measurement of safety culture scales using two different instruments (Safety Attitudes Questionnaire (SAQ) and Hospital Survey on Patient Safety Culture (HSOPSC)). Methods Cross-sectional survey study of a voluntary sample of 2073 (response rate 62.9%) health professionals in 44 NICUs. To compare survey instruments, we used Spearman's rank correlation coefficients. We also compared similar scales and items across the instruments using t tests and changes in quartile-level performance. Results We found significant variation across NICUs in the safety and teamwork climate scales of SAQ and HSOPSC (p<0.001). Safety scales (safety climate and overall perception of safety) and teamwork scales (teamwork climate and teamwork within units) of the two instruments correlated strongly (safety r=0.72, p<0.001; teamwork r=0.67, p<0.001). However, the means and per cent agreements for all scale scores and even seemingly similar item scores were significantly different. In addition, comparisons of scale score quartiles between the two instruments revealed that half of the NICUs fell into different quartiles when translating between the instruments. Conclusions Large variation and opportunities for improvement in patient safety culture exist across NICUs. Important systematic differences exist between SAQ and HSOPSC such that these instruments should not be used interchangeably. PMID:26700545
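The instrument-comparison step described above hinges on Spearman's rank correlation between unit-level scale scores. A minimal, self-contained sketch of that statistic is given below; the data and function names are illustrative assumptions, not the study's NICU scores or code.

```python
# Spearman's rank correlation: the Pearson correlation of rank-transformed
# data. Tied values receive the average of the ranks they span.
def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical scale scores for five units measured by two instruments:
rho = spearman([72.1, 65.3, 80.0, 58.4, 69.9], [70.0, 66.2, 78.5, 60.1, 71.3])
```

For tie-free data this matches the textbook formula 1 - 6 Σd² / (n(n² - 1)), where d is the per-unit rank difference.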
Nightside Detection of a Large-Scale Thermospheric Wave Generated by a Solar Eclipse
NASA Astrophysics Data System (ADS)
Harding, B. J.; Drob, D. P.; Buriti, R. A.; Makela, J. J.
2018-04-01
The generation of a large-scale wave in the upper atmosphere caused by a solar eclipse was first predicted in the 1970s, but the experimental evidence remains sparse and comprises mostly indirect observations. This study presents observations of the wind component of a large-scale thermospheric wave generated by the 21 August 2017 total solar eclipse. In contrast with previous studies, the observations are made on the nightside, after the eclipse ended. A ground-based interferometer located in northeastern Brazil is used to monitor the Doppler shift of the 630.0-nm airglow emission, providing direct measurements of the wind and temperature in the thermosphere, where eclipse effects are expected to be the largest. A disturbance is seen in the zonal and meridional wind which is at or above the 90% significance level based on the measured 30-day variability. These observations are compared with a first principles numerical model calculation from the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model, which predicted the propagation of a large-scale wave well into the nightside. The modeled disturbance matches well the difference between the wind measurements and the 30-day median, though the measured perturbation (~60 m/s) is larger than the prediction (38 m/s) for the meridional wind. No clear evidence for the wave is seen in the temperature data, however.
paraGSEA: a scalable approach for large-scale gene expression profiling
Peng, Shaoliang; Yang, Shunyun
2017-01-01
Abstract More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the significance-estimation and multiple-hypothesis-testing steps, its computational scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters, with high scalability and performance on large-scale datasets. The analysis time of the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster on Tianhe-2, or to within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
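The per-gene-set statistic whose cost paraGSEA optimizes can be sketched as the classic weighted Kolmogorov-Smirnov running sum. The sketch below is a generic GSEA illustration under assumed names and weighting (p = 1), not paraGSEA's actual implementation.

```python
# GSEA-style enrichment score: walk down the ranked gene list, stepping up
# (weighted by |correlation|) at genes inside the set and down at genes
# outside it; the score is the running sum's maximal deviation from zero.
def enrichment_score(ranked_genes, correlations, gene_set, p=1.0):
    hits = [g in gene_set for g in ranked_genes]
    hit_weight = sum(abs(c) ** p for g, c in zip(ranked_genes, correlations)
                     if g in gene_set)
    n_miss = len(ranked_genes) - sum(hits)
    running, best = 0.0, 0.0
    for hit, c in zip(hits, correlations):
        if hit:
            running += abs(c) ** p / hit_weight   # step up on a hit
        else:
            running -= 1.0 / n_miss               # step down on a miss
        if abs(running) > abs(best):
            best = running
    return best

# Toy example: the set {"a", "b"} sits at the very top of the ranking,
# so the running sum reaches its theoretical maximum of 1.0.
es = enrichment_score(["a", "b", "c", "d"], [2.0, 1.5, 1.0, 0.5], {"a", "b"})
```

The quadratic cost the abstract refers to arises when this scan is repeated over many gene sets and permutations, which is what the O(m+n) reformulation and parallelization address.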
NASA Astrophysics Data System (ADS)
Klingbeil, Knut; Lemarié, Florian; Debreu, Laurent; Burchard, Hans
2018-05-01
The state of the art of the numerics of hydrostatic structured-grid coastal ocean models is reviewed here. First, some fundamental differences in the hydrodynamics of the coastal ocean, such as the large surface elevation variation compared to the mean water depth, are contrasted against large scale ocean dynamics. Then the hydrodynamic equations as they are used in coastal ocean models as well as in large scale ocean models are presented, including parameterisations for turbulent transports. As steps towards discretisation, coordinate transformations and spatial discretisations based on a finite-volume approach are discussed with focus on the specific requirements for coastal ocean models. As in large scale ocean models, splitting of internal and external modes is essential also for coastal ocean models, but specific care is needed when drying & flooding of intertidal flats is included. As one obvious characteristic of coastal ocean models, open boundaries occur and need to be treated in a way that correct model forcing from outside is transmitted to the model domain without reflecting waves from the inside. Here, also new developments in two-way nesting are presented. Single processes such as internal inertia-gravity waves, advection and turbulence closure models are discussed with focus on the coastal scales. Some overview on existing hydrostatic structured-grid coastal ocean models is given, including their extensions towards non-hydrostatic models. Finally, an outlook on future perspectives is made.
A density spike on astrophysical scales from an N-field waterfall transition
NASA Astrophysics Data System (ADS)
Halpern, Illan F.; Hertzberg, Mark P.; Joss, Matthew A.; Sfakianakis, Evangelos I.
2015-09-01
Hybrid inflation models are especially interesting as they lead to a spike in the density power spectrum on small scales, compared to the CMB, while also satisfying current bounds on tensor modes. Here we study hybrid inflation with N waterfall fields sharing a global SO(N) symmetry. The inclusion of many waterfall fields has the obvious advantage of avoiding topologically stable defects for N > 3. We find that it also has another advantage: it is easier to engineer models that can simultaneously (i) be compatible with constraints on the primordial spectral index, which tends to otherwise disfavor hybrid models, and (ii) produce a spike on astrophysically large length scales. The latter may have significant consequences, possibly seeding the formation of astrophysically large black holes. We calculate correlation functions of the time-delay, a measure of density perturbations, produced by the waterfall fields, as a convergent power series in both 1/N and the field's correlation function Δ(x). We show that for large N, the two-point function is ⟨δt(x) δt(0)⟩ ∝ Δ²(|x|)/N and the three-point function is ⟨δt(x) δt(y) δt(0)⟩ ∝ Δ(|x − y|) Δ(|x|) Δ(|y|)/N². In accordance with the central limit theorem, the density perturbations on the scale of the spike are Gaussian for large N and non-Gaussian for small N.
NASA Astrophysics Data System (ADS)
Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi
2017-09-01
A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation areas. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10⁻³, where actual operation corresponds to Ca = 10⁻⁵-10⁻⁸. The limit is also investigated experimentally using an enlarged scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
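For reference, the Capillary number quoted above is the ratio of viscous to surface-tension forces, Ca = μU/σ. The sketch below uses assumed, order-of-magnitude property values for liquid water (not figures from the study) purely to illustrate the computation.

```python
# Capillary number Ca = mu * U / sigma (dimensionless).
def capillary_number(mu, velocity, sigma):
    """mu: dynamic viscosity (Pa s); velocity (m/s); sigma: surface tension (N/m)."""
    return mu * velocity / sigma

# Assumed illustrative values: water viscosity ~1e-3 Pa s, air-water surface
# tension ~0.072 N/m; the velocity is chosen only to land in a low-Ca regime.
ca = capillary_number(mu=1.0e-3, velocity=7.2e-3, sigma=0.072)
```

Because Ca scales linearly with velocity, artificially raising Ca (as the study does, up to 3.0 × 10⁻³) shortens the simulated time needed for the same displacement pattern.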
Large-scale tomographic particle image velocimetry using helium-filled soap bubbles
NASA Astrophysics Data System (ADS)
Kühn, Matthias; Ehrenfried, Klaus; Bosbach, Johannes; Wagner, Claus
2011-04-01
To measure large-scale flow structures in air, a tomographic particle image velocimetry (tomographic PIV) system for measurement volumes of the order of one cubic metre is developed, which employs helium-filled soap bubbles (HFSBs) as tracer particles. The technique has several specific characteristics compared to most conventional tomographic PIV systems, which are usually applied to small measurement volumes. One of them is the light spots on the HFSB tracers, which slightly change their position when the direction of observation is altered. Further issues are the large particle-to-voxel ratio and the short focal length of the camera lenses used, which result in a noticeable variation of the magnification factor in the volume depth direction. Taking the specific characteristics of the HFSBs into account, the feasibility of our large-scale tomographic PIV system is demonstrated by showing that the calibration errors can be reduced down to 0.1 pixels as required. Further, an accurate and fast implementation of the multiplicative algebraic reconstruction technique, which calculates the weighting coefficients when needed instead of storing them, is discussed. The tomographic PIV system is applied to measure forced convection in a convection cell at a Reynolds number of 530 based on the inlet channel height and the mean inlet velocity. The size of the measurement volume and the interrogation volumes amount to 750 mm × 450 mm × 165 mm and 48 mm × 48 mm × 24 mm, respectively. Validation of the tomographic PIV technique employing HFSBs is further provided by comparing profiles of the mean velocity and of the root mean square velocity fluctuations to respective planar PIV data.
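The on-the-fly weighting idea mentioned above can be sketched as a single MART sweep; the weight function, relaxation factor and data layout here are simplifying assumptions for illustration, not the authors' implementation.

```python
# One MART (multiplicative algebraic reconstruction technique) sweep:
# each camera pixel's measured intensity corrects every voxel it sees
# multiplicatively; weights are recomputed when needed rather than stored,
# trading compute for the memory a full weight matrix would require.
def mart_sweep(pixel_intensity, voxels, weight, mu=1.0):
    """pixel_intensity: dict pixel_id -> measured value.
    voxels: dict voxel_id -> current estimate (updated in place).
    weight(pixel_id, voxel_id): geometric weight, zero for unseen voxels.
    mu: relaxation factor (assumed 1.0 here)."""
    for pix, measured in pixel_intensity.items():
        # forward-project the current estimate onto this pixel
        projected = sum(weight(pix, v) * e for v, e in voxels.items())
        if projected <= 0.0:
            continue
        for v in voxels:
            w = weight(pix, v)
            if w > 0.0:
                voxels[v] *= (measured / projected) ** (mu * w)
    return voxels

# Degenerate example: one pixel seeing one voxel converges in a single sweep.
result = mart_sweep({0: 2.0}, {0: 1.0}, lambda p, v: 1.0)
```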
NASA Astrophysics Data System (ADS)
Walz, M. A.; Donat, M.; Leckebusch, G. C.
2017-12-01
As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. >95th percentile) in the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare to the predictability based on a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns within the two models lead to substantially different drivers associated with the occurrence of extreme winds in the respective model. While the ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only very few grid cells in the European domain have significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not seem to capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements of dynamical prediction skill by improving the simulation of large-scale atmospheric dynamics.
On large-scale dynamo action at high magnetic Reynolds number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk
2014-07-01
We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.
Inverse Interscale Transport of the Reynolds Shear Stress in Plane Couette Turbulence
NASA Astrophysics Data System (ADS)
Kawata, Takuya; Alfredsson, P. Henrik
2018-06-01
Interscale interaction between small-scale structures near the wall and large-scale structures away from the wall plays an increasingly important role with increasing Reynolds number in wall-bounded turbulence. While the top-down influence from the large- to small-scale structures is well known, it has been unclear whether the small scales near the wall also affect the large scales away from the wall. In this Letter we show that the small-scale near-wall structures indeed play a role to maintain the large-scale structures away from the wall, by showing that the Reynolds shear stress is transferred from small to large scales throughout the channel. This is in contrast to the turbulent kinetic energy transport which is from large to small scales. Such an "inverse" interscale transport of the Reynolds shear stress eventually supports the turbulent energy production at large scales.
ERIC Educational Resources Information Center
Ker, Hsiang-Wei
2017-01-01
Motivational constructs and students' engagements have great impacts on students' mathematics achievements, yet they have not been theoretically investigated using international large-scale assessment data. This study utilized the mathematics data of the Trends in International Mathematics and Science Study 2011 to conduct a comparative and…
Venus analogues on the Earth's ocean floor(?): Volcanic terrains seen by SeaMARC 2 side scan sonar
NASA Technical Reports Server (NTRS)
Mouginis-Mark, P. J.; Fryer, P.; Hussong, D.; Zisk, S. H.
1984-01-01
The geology of Venus is discussed. The approximate age of the surface and the relative importance of large-scale volcanic, tectonic and sedimentary processes are not known. Venus plays a very important role in comparative planetology. The investigation of environments comparable to Venus to test ideas of landform development on that planet is proposed.
A Comparative Study of Handicap-Free Life Expectancy of China in 1987 and 2006
ERIC Educational Resources Information Center
Lai, Dejian
2009-01-01
After the first large scale national sampling survey on handicapped persons in 1987, China conducted its second national sampling survey in 2006. Using the data from these two surveys and the national life tables, we computed and compared the expected years of life free of handicapped condition by the Sullivan method. The expected years of life…
A Comparative Study of Geometry in Elementary School Mathematics Textbooks from Five Countries
ERIC Educational Resources Information Center
Wang, Tzu-Ling; Yang, Der-Ching
2016-01-01
The purposes of this study were to compare the differences in the use of geometry in elementary school mathematics textbooks among Finland, Mainland China, Singapore, Taiwan, and the USA and to investigate the relationships between the design of the textbooks and students' performance on large-scale tests such as TIMSS-4 geometry, TIMSS-8…
A Scalable Framework For Segmenting Magnetic Resonance Images
Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar
2009-01-01
A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well, allowing fast segmentation of fine-resolution images. It is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications that create incremental versions of fuzzy c-means are discussed. They are much faster than fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data, yet they are comparable in quality to applying fuzzy c-means to all of the data. The clustering algorithms, coupled with inhomogeneity correction and smoothing, are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
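The incremental clustering idea described above (processing successive subsets, then clustering the weighted chunk summaries) can be sketched as follows. This is a minimal illustration of single-pass weighted fuzzy c-means, not the authors' exact modifications; the function names `fcm` and `incremental_fcm` and all parameter choices are hypothetical.

```python
import numpy as np

def fcm(X, c, m=2.0, w=None, iters=50, seed=0):
    """Weighted fuzzy c-means; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.ones(n) if w is None else np.asarray(w, float)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = (U ** m) * w[:, None]                 # weighted fuzzified memberships
        centers = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def incremental_fcm(X, c, chunk=1000, m=2.0):
    """Single-pass variant: cluster successive subsets, then cluster
    the chunk centroids weighted by their fuzzy cardinalities."""
    summaries, weights = [], []
    for start in range(0, len(X), chunk):
        centers, U = fcm(X[start:start + chunk], c, m)
        summaries.append(centers)
        weights.append(U.sum(axis=0))              # fuzzy cardinality per center
    centers, _ = fcm(np.vstack(summaries), c, m, w=np.concatenate(weights))
    return centers
```

Because each pass touches only one subset plus a small set of weighted summaries, memory use stays bounded regardless of the total data size, which is what makes the approach scale to large volumes.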
2015-12-02
simplification of the equations but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations
Streamline curvature in supersonic shear layers
NASA Technical Reports Server (NTRS)
Kibens, V.
1992-01-01
Results of an experimental investigation in which a curved shear layer was generated between supersonic flow from a rectangular converging/diverging nozzle and the freestream in a series of open channels with varying radii of curvature are reported. The shear layers exhibit unsteady large-scale activity at supersonic pressure ratios, indicating increased mixing efficiency. This effect contrasts with supersonic flow in a straight channel, for which no large-scale vortical structure development occurs. Curvature must exceed a minimum level before it begins to affect the dynamics of the supersonic shear layer appreciably. The curved channel flows are compared with reference flows consisting of a free jet, a straight channel, and wall jets without sidewalls on a flat and a curved plate.
Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell
NASA Astrophysics Data System (ADS)
Lengden, Michael; Cunningham, Robert; Johnstone, Walter
2011-10-01
A new in-line, real time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to prove the fast response and accuracy of the technology as compared to a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon-dioxide and carbon-monoxide concentrations.
Environmental aspects of large-scale wind-power systems in the UK
NASA Astrophysics Data System (ADS)
Robson, A.
1983-12-01
Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the U.K. are discussed. Areas of interest include noise, television interference, hazards to bird life and visual effects. A number of areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first U.K. machines to be introduced in a safe and environmentally acceptable manner. Research currently under way will serve to establish siting criteria more clearly, and could significantly increase the potential wind-energy resource. Certain studies of the comparative risk of energy systems are shown to be overly pessimistic for U.K. wind turbines.
ENCAPSULATING WASTE DISPOSAL METHODS - PHASE I
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.
However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Schreier, Amy L; Grove, Matt
2014-05-01
The benefits of spatial memory for foraging animals can be assessed on two distinct spatial scales: small-scale space (travel within patches) and large-scale space (travel between patches). While the patches themselves may be distributed at low density, within patches resources are likely densely distributed. We propose, therefore, that spatial memory for recalling the particular locations of previously visited feeding sites will be more advantageous during between-patch movement, where it may reduce the distances traveled by animals that possess this ability compared to those that must rely on random search. We address this hypothesis by employing descriptive statistics and spectral analyses to characterize the daily foraging routes of a band of wild hamadryas baboons in Filoha, Ethiopia. The baboons slept on two main cliffs--the Filoha cliff and the Wasaro cliff--and daily travel began and ended on a cliff; thus four daily travel routes exist: Filoha-Filoha, Filoha-Wasaro, Wasaro-Wasaro, Wasaro-Filoha. We use newly developed partial sum methods and distribution-fitting analyses to distinguish periods of area-restricted search from more extensive movements. The results indicate a single peak in travel activity in the Filoha-Filoha and Wasaro-Filoha routes, three peaks of travel activity in the Filoha-Wasaro routes, and two peaks in the Wasaro-Wasaro routes, and are consistent with on-the-ground observations of the foraging and ranging behavior of the baboons. In each of the four daily travel routes the "tipping points" identified by the partial sum analyses indicate transitions between travel in small- versus large-scale space. The correspondence between the quantitative analyses and the field observations suggests great utility for using these types of analyses to examine primate travel patterns, especially in distinguishing between movement in small- versus large-scale space.
Only the distribution-fitting analyses are inconsistent with the field observations, which may be due to the scale at which these analyses were conducted. © 2013 Wiley Periodicals, Inc.
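The partial-sum idea for locating "tipping points" between area-restricted search and extensive travel can be illustrated with a short sketch. This is a generic cumulative-deviation changepoint heuristic on assumed step-length input, not the authors' exact procedure; the name `tipping_points` and the extremum-ranking rule are hypothetical.

```python
import numpy as np

def tipping_points(steps, k=1):
    """Partial-sum analysis of a sequence of step lengths.

    The cumulative sum of deviations from the mean falls during runs of
    short steps (area-restricted search) and rises during runs of long
    steps (extensive travel); its local extrema mark the switches.
    Returns (indices of the k largest extrema, the partial-sum curve).
    """
    s = np.asarray(steps, float)
    csum = np.cumsum(s - s.mean())
    # local extrema: slope of the partial-sum curve changes sign
    ext = [i for i in range(1, len(csum) - 1)
           if (csum[i] - csum[i - 1]) * (csum[i + 1] - csum[i]) < 0]
    ext.sort(key=lambda i: abs(csum[i]), reverse=True)
    return sorted(ext[:k]), csum
```

On a route consisting of a patch-foraging bout, a long between-patch leg, and a second bout, the two extrema recover the boundaries of the travel leg.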
NASA Astrophysics Data System (ADS)
Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca
2018-06-01
We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress tensor in large eddy simulations (LES). Following previous studies for the Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlations between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there is room to improve the SGS modelling to further extend the inertial range properties at any fixed LES resolution.
Varni, James W; Shulman, Robert J; Self, Mariella M; Nurko, Samuel; Saps, Miguel; Saeed, Shehzad A; Bendo, Cristiane B; Patel, Ashish S; Dark, Chelsea Vaughan; Zacur, George M; Pohl, John F
2015-09-01
Patient-reported outcome (PRO) measures of gastrointestinal symptoms are recommended to determine treatment effects for irritable bowel syndrome (IBS) and functional abdominal pain (FAP). Study objectives were to compare the symptom profiles of pediatric patients with IBS or FAP with healthy controls and with each other using the PedsQL Gastrointestinal Symptoms and Gastrointestinal Worry Scales, and to establish clinical interpretability of PRO scale scores through identification of minimal important difference (MID) scores. Gastrointestinal Symptoms and Worry Scales were completed in a 9-site study by 154 pediatric patients and 161 parents (162 families; IBS n = 46, FAP n = 119). Gastrointestinal Symptoms Scales measuring stomach pain, stomach discomfort when eating, food and drink limits, trouble swallowing, heartburn and reflux, nausea and vomiting, gas and bloating, constipation, blood in poop, and diarrhea were administered along with Gastrointestinal Worry Scales. A matched sample of 447 families with healthy children completed the scales. Gastrointestinal Symptoms and Worry Scales distinguished between patients with IBS or FAP compared with healthy controls (P < 0.001), with larger effect sizes (>1.50) for symptoms indicative of IBS or FAP, demonstrating a broad multidimensional gastrointestinal symptom profile and clinical interpretability with MID scores for individual PRO scales. Patients with IBS manifested more symptoms of constipation, gas and bloating, and diarrhea than patients with FAP. Patients with IBS or FAP manifested a broad gastrointestinal symptom profile compared with healthy controls with large differences, indicating the critical need for more effective interventions to bring patient functioning within the range of healthy functioning.
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. 
Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
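The core of spectral nudging (relaxing only the large-scale Fourier modes of the simulated field toward the driving field, while leaving the small scales free to evolve) can be sketched as follows. This is a minimal illustration assuming a doubly periodic 2-D field; the function name, cutoff wavenumber and relaxation parameters are hypothetical, not those of the operational system described above.

```python
import numpy as np

def spectral_nudge(field, driving, cutoff, dt, tau):
    """Relax wavenumbers below `cutoff` (cycles per grid spacing)
    toward the coarse driving field with relaxation time `tau`.

    field, driving: 2-D arrays on the same periodic grid.
    """
    fh = np.fft.fft2(field)
    dh = np.fft.fft2(driving)
    kx = np.fft.fftfreq(field.shape[0])[:, None]
    ky = np.fft.fftfreq(field.shape[1])[None, :]
    mask = np.sqrt(kx**2 + ky**2) < cutoff        # large scales only
    fh[mask] += (dt / tau) * (dh[mask] - fh[mask])  # Newtonian relaxation
    return np.real(np.fft.ifft2(fh))
```

With dt = tau the nudged scales are replaced outright by the driving field; in practice tau >> dt gives a gentle constraint, and tau can be made height-dependent to mimic the vertical nudging profiles discussed above.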
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) solves only the large-scale part of turbulent flows by using a scale separation based on a filtering operation. Solving the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performance, i.e., the model's ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS) and filtered DNS for classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvements in velocity and scalar statistics predictions.
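The gradient (Clark) model that the regularization procedure starts from approximates the SGS stress from resolved velocity gradients, tau_ij ≈ (Delta^2/12) (du_i/dx_k)(du_j/dx_k). The sketch below evaluates this base model for a 2-D velocity field on a uniform grid; it deliberately omits the dynamic regularization that defines the DRG model, and the function name is hypothetical.

```python
import numpy as np

def gradient_model_sgs(u, v, dx, delta):
    """Gradient (Clark) model for the SGS stress of a 2-D field:
    tau_ij ~ (delta^2 / 12) * du_i/dx_k * du_j/dx_k,
    with derivatives taken by finite differences on a uniform grid.
    Arrays are indexed [x, y]; delta is the filter width."""
    dudx, dudy = np.gradient(u, dx, dx)   # derivatives along axis 0, axis 1
    dvdx, dvdy = np.gradient(v, dx, dx)
    c = delta**2 / 12.0
    tau11 = c * (dudx**2 + dudy**2)
    tau12 = c * (dudx * dvdx + dudy * dvdy)
    tau22 = c * (dvdx**2 + dvdy**2)
    return tau11, tau12, tau22
```

The appeal of this structural model is that it locally tracks the true SGS stress well in a priori tests; its weakness, which motivates the regularization above, is that it does not guarantee enough net SGS dissipation, so LES with the bare model can blow up.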
LanzaTech- Capturing Carbon. Fueling Growth.
NONE
2018-01-16
LanzaTech will design a gas fermentation system that will significantly improve the rate at which methane gas is delivered to a biocatalyst. Current gas fermentation processes are not cost effective compared to other gas-to-liquid technologies because they are too slow for large-scale production. If successful, LanzaTech's system will process large amounts of methane at a high rate, reducing the energy inputs and costs associated with methane conversion.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
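A Summation-by-Parts first-derivative operator has the form D = H^{-1} Q, where H is symmetric positive definite (a quadrature) and Q + Q^T = B = diag(-1, 0, ..., 0, 1), which mimics integration by parts discretely and underpins stable interface coupling. The sketch below builds the classical second-order SBP operator, not the dispersion-optimised operators of the paper, and the function name is hypothetical.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Classical 2nd-order SBP first-derivative operator on n points
    with spacing h: D = H^{-1} Q, with Q + Q^T = diag(-1, 0, ..., 0, 1)."""
    H = h * np.eye(n)                 # diagonal norm (trapezoidal quadrature)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))   # central difference stencil
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                 # boundary closures
    D = np.linalg.solve(H, Q)
    return D, H, Q
```

The SBP property can be verified directly: Q + Q^T reduces to the boundary matrix B, and the operator differentiates linear functions exactly, including at the one-sided boundary rows.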
An integrated approach to reconstructing genome-scale transcriptional regulatory networks
Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...
2015-02-27
Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall were also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis.
Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.
Morphological Differences Between Seyfert Hosts and Normal Galaxies
NASA Astrophysics Data System (ADS)
Shlosman, Isaac
Using new sub-arcsecond resolution imaging, we compare the large-scale stellar bar fraction in the CfA sample of Seyferts and a closely matched control sample of normal galaxies. We find a difference between the samples at the 2.5σ level. We further compare the axial ratios of bars in all available samples quoted in the literature and find a deficiency of small axial ratio bars in Seyferts compared to normal galaxies.
Gomez-Velez, Jesus D.; Harvey, Judson
2014-01-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.
Andrade, Marcelo C; JÉgu, Michel; Gama, Cecile S
2018-04-03
A new species of Myloplus Gill is described from the Eastern Tumucumaque Mountain Range, in drainages of the Oyapock and Araguari rivers between Brazil and French Guiana. The new species is diagnosed by having comparatively large scales on the flanks, resulting in lower counts when compared with congeners, i.e., 59 to 70 total perforated scales on the lateral line, 31 to 35 longitudinal scales above the lateral line, 24 to 29 longitudinal scales below the lateral line, and 22 to 26 circumpeduncular scale rows. The new species most closely resembles Myloplus rubripinnis, sharing with that species a general rounded shape, a similar color pattern, and a high number of fin rays, i.e., 23 to 25 branched dorsal-fin rays and 35 to 38 branched anal-fin rays in the new species (vs. 24 to 25 and 32 to 40, respectively, in M. rubripinnis). After reviewing the available type specimens of all Myloplus species, M. rubripinnis is re-diagnosed as having higher counts of branched dorsal-fin rays and anal-fin rays combined with tiny scales on the flanks, i.e., 85 to 89 total perforated scales on the lateral line, 38 to 45 longitudinal scales above the lateral line, 33 to 42 longitudinal scales below the lateral line, and 30 to 39 circumpeduncular scale rows.
Regional climates in the GISS global circulation model - Synoptic-scale circulation
NASA Technical Reports Server (NTRS)
Hewitson, B.; Crane, R. G.
1992-01-01
A major weakness of current general circulation models (GCMs) is their perceived inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human-environmental response. For large areas of the extratropics, the local climate is controlled by the synoptic-scale atmospheric circulation, and it is the purpose of this paper to evaluate the synoptic-scale circulation of the Goddard Institute for Space Studies (GISS) GCM. A methodology for validating the daily synoptic circulation using Principal Component Analysis is described, and the methodology is then applied to the GCM simulation of sea level pressure over the continental United States (excluding Alaska). The analysis demonstrates that the GISS 4 x 5 deg GCM Model II effectively simulates the synoptic-scale atmospheric circulation over the United States. The modes of variance describing the atmospheric circulation of the model are comparable to those found in the observed data, and these modes explain similar amounts of variance in their respective datasets. The temporal behavior of these circulation modes in the synoptic time frame is also comparable.
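The validation idea (decomposing daily sea-level-pressure fields into principal modes and comparing the variance each mode explains in model versus observations) can be sketched as follows. This is a generic EOF/PCA computation via SVD under assumed array shapes, not the paper's exact methodology; `circulation_modes` is a hypothetical name.

```python
import numpy as np

def circulation_modes(slp, n_modes=3):
    """Principal components of daily sea-level-pressure maps.

    slp: array of shape (days, gridpoints), one flattened map per day.
    Returns the leading EOF spatial patterns and the fraction of total
    variance each mode explains.
    """
    anom = slp - slp.mean(axis=0)                    # daily anomalies
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)                   # explained variance
    return Vt[:n_modes], var_frac[:n_modes]
```

Running this separately on simulated and observed pressure fields allows the mode patterns and their explained-variance fractions to be compared directly, which is the style of comparison the abstract describes.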
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when only the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.
Vermeerbergen, Lander; Van Hootegem, Geert; Benders, Jos
2017-02-01
Ongoing shortages of care workers, together with an ageing population, make it of utmost importance to increase the quality of working life in nursing homes. Since the 1970s, normalised and small-scale nursing homes have been increasingly introduced to provide care in a family and homelike environment, potentially providing a richer work life for care workers as well as improved living conditions for residents. 'Normalised' refers to the opportunities given to residents to live in a manner as close as possible to the everyday life of persons not needing care. The study purpose is to provide a synthesis and overview of empirical research comparing the quality of working life - together with related work and health outcomes - of professional care workers in normalised small-scale nursing homes as compared to conventional large-scale ones. A systematic review of qualitative and quantitative studies. A systematic literature search (April 2015) was performed using the electronic databases Pubmed, Embase, PsycInfo, CINAHL and Web of Science. References and citations were tracked to identify additional, relevant studies. We identified 825 studies in the selected databases. After checking the inclusion and exclusion criteria, nine studies were selected for review. Two additional studies were selected after reference and citation tracking. Three studies were excluded after requesting more information on the research setting. The findings from the individual studies suggest that levels of job control and job demands (all but "time pressure") are higher in normalised small-scale homes than in conventional large-scale nursing homes. Additionally, some studies suggested that social support and work motivation are higher, while risks of burnout and mental strain are lower, in normalised small-scale nursing homes. Other studies found no differences or even opposing findings. 
The studies reviewed showed that these inconclusive findings can be attributed to care workers in some normalised small-scale homes experiencing isolation and too high job demands in their work roles. This systematic review suggests that normalised small-scale homes are a good starting point for creating a higher quality of working life in the nursing home sector. Higher job control enables care workers to manage higher job demands in normalised small-scale homes. However, some jobs would benefit from interventions to address care workers' perceptions of too low social support and of too high job demands. More research is needed to examine strategies to enhance these working life issues in normalised small-scale settings. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Guiquan, Xi; Lin, Cong; Xuehui, Jin
2018-05-01
As an important platform for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee for economic and social development. Research on the management of large-scale scientific facilities can play a key role in scientific research, sociology and key national strategy. This paper reviews the characteristics of large-scale scientific facilities and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.
NASA Astrophysics Data System (ADS)
Brasseur, James G.; Juneja, Anurag
1996-11-01
Previous DNS studies indicate that small-scale structure can be directly altered through ``distant'' dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r shortly after initiating a change in large-scale structure decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer---a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
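The perturbation described above, changing only the phases of the large-scale Fourier modes while leaving their amplitudes (and hence the energy spectrum) untouched, can be sketched in a few lines of numpy. This is a minimal 1D illustration, not the authors' DNS setup; the grid size and wavenumber cutoff are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
u = rng.standard_normal(N)          # stand-in for one velocity component

U = np.fft.rfft(u)
k = np.arange(U.size)
large_scale = (k > 0) & (k <= 8)    # "large scales": lowest nonzero wavenumbers

# Replace the phases of the large-scale modes, keeping their amplitudes (energy).
new_phase = rng.uniform(0, 2 * np.pi, large_scale.sum())
U_pert = U.copy()
U_pert[large_scale] = np.abs(U[large_scale]) * np.exp(1j * new_phase)

u_pert = np.fft.irfft(U_pert, n=N)

# The mode-by-mode energy |U(k)|^2 is unchanged ...
assert np.allclose(np.abs(U_pert), np.abs(U))
# ... but the physical-space structure differs.
assert not np.allclose(u_pert, u)
```

The two assertions capture the point of the experiment: the perturbed and unperturbed fields have identical energy spectra, so any difference between the two evolutions reflects structural information, not energy transfer.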
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. This assumption is testable, however, and thus no longer needs to be taken as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from the N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
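The counts-in-spheres test can be illustrated with a toy version, a sketch in which a uniform random point set stands in for a galaxy catalogue (the box size, sample size, and counting radius are arbitrary). For a homogeneous distribution the dispersion of the counts should match the Poisson expectation, and clustered data would exceed it.

```python
import numpy as np

rng = np.random.default_rng(1)
L, n_gal, R = 1000.0, 5000, 100.0       # box size, galaxy count, counting radius (illustrative)
gal = rng.uniform(0, L, size=(n_gal, 3))

# Counts-in-spheres N(<R) around random centres kept away from the box edges.
centres = rng.uniform(R, L - R, size=(200, 3))
d2 = ((gal[None, :, :] - centres[:, None, :]) ** 2).sum(axis=2)
counts = (d2 < R * R).sum(axis=1)

# For a homogeneous (Poisson) point set the mean count matches n * V_sphere / V_box
# and the variance of the counts equals the mean; clustered data exceed both.
expected = n_gal * (4.0 / 3.0) * np.pi * R**3 / L**3
print(counts.mean(), counts.var(), expected)
```

The LRG analysis in the abstract is the same comparison, with random catalogues playing the role of the Poisson baseline.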
A comparison of obsessive-compulsive personality disorder scales.
Samuel, Douglas B; Widiger, Thomas A
2010-05-01
In this study, we utilized a large undergraduate sample (N = 536), oversampled for the Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision [DSM-IV-TR]; American Psychiatric Association, 2000) obsessive-compulsive personality disorder (OCPD) pathology, to compare 8 self-report measures of OCPD. No prior study has compared more than 3 measures, and the results indicate that the scales had only moderate convergent validity. We also went beyond the existing literature to compare these scales to 2 external reference points: their relationships with a well-established measure of the five-factor model of personality (FFM) and clinicians' ratings of their coverage of the DSM-IV-TR criterion set. When the FFM was used as a point of comparison, the results suggest important differences among the measures with respect to their divergent representation of conscientiousness, neuroticism, and agreeableness. Additionally, an analysis of the construct coverage indicated that the measures also varied in terms of their representation of particular diagnostic criteria. For example, whereas some scales contained items distributed across the diagnostic criteria, others were concentrated more heavily on particular features of the DSM-IV-TR disorder.
NASA Astrophysics Data System (ADS)
Ibarra, Yadira; Corsetti, Frank A.
2016-04-01
The processes that govern the formation of stromatolites, structures that may represent a macroscopic manifestation of microbial processes and a clear target for astrobiological investigation, occur at various scales (local versus regional), yet determining their relative importance remains a challenge, particularly for ancient deposits and/or if similar deposits are discovered elsewhere in the Solar System. We build upon the traditional multiscale approach of investigation (micro-, meso-, macro-, mega-) by including a lateral comparative investigation of fine- to large-scale features to determine the relative significance of local and/or nonlocal controls on stromatolite morphology, and in the process, help constrain the dominant influences on microbialite formation. In one example of lateral comparative investigation, lacustrine microbialites from the Miocene Barstow Formation (California) display two main mesofabrics: (1) micritic bands that drastically change in thickness and cannot directly be traced between adjacent decimeter-scale subunits and (2) sparry fibrous layers that are strikingly consistent across subunits, suggesting that the formation of sparry fibrous layers was influenced by a process larger than the length scale between the subunits (likely lake chemistry). Microbialites from the uppermost Triassic Cotham Member, United Kingdom, occur as meter-scale mounds and contain a characteristic succession of laminated and dendrolitic mesofabrics. The same succession of laminated/dendrolitic couplets can be traced, not only from mound to mound, but over 100 km, indicating a regional-scale influence on very small structures (microns to centimeters) that would otherwise not be apparent without the lateral comparative approach, and demonstrating that the scale of the feature does not necessarily scale with the scope of the process. 
Thus, the combination of lateral comparative investigations and multiscale analyses can provide an effective approach for evaluating the dominant controls on stromatolite texture and morphology throughout the rock record and potentially on other planets via rover-scale analyses (e.g., Mars).
NASA Astrophysics Data System (ADS)
O'Neill, J. J.; Cai, X.; Kinnersley, R.
2015-12-01
Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles the effects of both forward and reverse energy transfer to/from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of a SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer towards its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both these modelled situations.
NASA Astrophysics Data System (ADS)
El-Ashram, Saeed; Suo, Xun
2017-02-01
Several methods have been proposed for separating eimerian oocysts and trichostrongylid eggs from extraneous debris; however, these methods remain inconvenient in terms of time and breadth of application. We describe herein an alternative approach using the combination of an electrical cream separator and vacuum filtration for harvesting and purifying eimerian oocysts and haemonchine eggs in large-scale applications, with approximately 81% and 92% recovery rates for oocysts and nematode eggs obtained from avian and ovine faeces, respectively. The sporulation percentages, as a measure of viability, of the harvested oocysts and eggs from dry faecal materials are nearly 68% and 74%, respectively, and 12 liters of faecal suspension can be processed in approximately 7.5 min. In terms of costs (i.e. simple laboratory equipment and comparably cheap reagents) and benefits, the reported procedure is an appropriate means to harvest and purify parasite oocysts and eggs on a large scale in the shortest duration from diverse volumes of environmental samples, compared to the modified traditional sucrose gradient, which can only be employed on a small scale.
Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena
2015-01-01
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
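The link between the effective radius and graph-theoretic measures can be sketched with a toy geometric network. This is an illustration under assumed 2D point-neuron positions, not the authors' neurite model: each pair within the effective radius is connected, and a standard clustering coefficient is computed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
pos = rng.uniform(0, 1, size=(n, 2))    # point-neuron positions in a unit square

def geometric_network(pos, r_eff):
    """Connect every pair of neurons closer than the effective radius r_eff."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    return (d < r_eff) & ~np.eye(len(pos), dtype=bool)

def clustering_coefficient(A):
    """Global clustering coefficient: 3 * triangles / connected triples."""
    A = A.astype(int)
    triangles = np.trace(A @ A @ A) / 6
    deg = A.sum(axis=1)
    triples = (deg * (deg - 1) / 2).sum()
    return 3 * triangles / triples if triples else 0.0

# A small effective radius yields a sparse network of few connections; a large one
# yields many local connections and high clustering relative to a density-matched
# uniform random (Erdos-Renyi) network.
A_small, A_large = geometric_network(pos, 0.05), geometric_network(pos, 0.2)
print(A_small.sum(axis=1).mean(), A_large.sum(axis=1).mean(),
      clustering_coefficient(A_large))
```

Varying `r_eff` relative to the spatial extent of the population reproduces the qualitative regimes described in the abstract, from random-like sparse connectivity to locally clustered networks.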
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
Large-scale structure perturbation theory without losing stream crossing
NASA Astrophysics Data System (ADS)
McDonald, Patrick; Vlah, Zvonimir
2018-01-01
We suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel'dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel'dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.
Energy Dissipation and Phase-Space Dynamics in Eulerian Vlasov-Maxwell Turbulence
NASA Astrophysics Data System (ADS)
Tenbarge, Jason; Juno, James; Hakim, Ammar
2017-10-01
Turbulence in a magnetized plasma is a primary mechanism responsible for transforming energy at large injection scales into small-scale motions, which are ultimately dissipated as heat in systems such as the solar corona, wind, and other astrophysical objects. At large scales, the turbulence is well described by fluid models of the plasma; however, understanding the processes responsible for heating a weakly collisional plasma such as the solar wind requires a kinetic description. We present a fully kinetic Eulerian Vlasov-Maxwell study of turbulence using the Gkeyll simulation framework, including studies of the cascade of energy in phase space and formation and dissipation of coherent structures. We also leverage the recently developed field-particle correlations to diagnose the dominant sources of dissipation and compare the results of the field-particle correlation to other dissipation measures. NSF SHINE AGS-1622306 and DOE DE-AC02-09CH11466.
Carbon nanotube circuit integration up to sub-20 nm channel lengths.
Shulaker, Max Marcel; Van Rethy, Jelle; Wu, Tony F; Liyanage, Luckshitha Suriyasena; Wei, Hai; Li, Zuanyi; Pop, Eric; Gielen, Georges; Wong, H-S Philip; Mitra, Subhasish
2014-04-22
Carbon nanotube (CNT) field-effect transistors (CNFETs) are a promising emerging technology projected to achieve over an order of magnitude improvement in energy-delay product, a metric of performance and energy efficiency, compared to silicon-based circuits. However, due to substantial imperfections inherent with CNTs, the promise of CNFETs has yet to be fully realized. Techniques to overcome these imperfections have yielded promising results, but thus far only at large technology nodes (1 μm device size). Here we demonstrate the first very large scale integration (VLSI)-compatible approach to realizing CNFET digital circuits at highly scaled technology nodes, with devices ranging from 90 nm to sub-20 nm channel lengths. We demonstrate inverters functioning at 1 MHz and a fully integrated CNFET infrared light sensor and interface circuit at 32 nm channel length. This demonstrates the feasibility of realizing more complex CNFET circuits at highly scaled technology nodes.
Volatility return intervals analysis of the Japanese market
NASA Astrophysics Data System (ADS)
Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.
2008-03-01
We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
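The return-interval construction can be sketched as follows. This is a minimal illustration on a synthetic iid volatility series, not the Japanese market data; an iid series shows no memory, whereas the abstract reports conditional dependence in the real data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in volatility series (in the paper: daily/intraday Japanese stock data).
vol = np.abs(rng.standard_normal(20000))
q = np.quantile(vol, 0.95)            # threshold: top 5% of volatilities

# Return intervals tau between successive exceedances of q.
t = np.flatnonzero(vol > q)
tau = np.diff(t)
tau_scaled = tau / tau.mean()         # the scaling variable tau / <tau>

# Memory check: mean interval following a short vs a long previous interval.
prev, nxt = tau[:-1], tau[1:]
med = np.median(prev)
print(nxt[prev <= med].mean(), nxt[prev > med].mean())
```

For this memoryless series the two conditional means are statistically indistinguishable; the memory effect reported in the abstract would appear as a systematic difference between them.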
NASA Astrophysics Data System (ADS)
Justham, T.; Jarvis, S.; Clarke, A.; Garner, C. P.; Hargrave, G. K.; Halliwell, N. A.
2006-07-01
Simultaneous intake and in-cylinder digital particle image velocimetry (DPIV) experimental data are presented for a motored spark ignition (SI) optical internal combustion (IC) engine. Two individual DPIV systems were employed to study the inter-relationship between the intake and in-cylinder flow fields at an engine speed of 1500 rpm. Results for the intake runner velocity field at the time of maximum intake valve lift are compared to in-cylinder velocity fields later in the same engine cycle. Relationships between flow structures within the runner and cylinder were seen to be strong during the intake stroke but less significant during compression. Cyclic variations within the intake runner were seen to affect the large-scale bulk flow motion. The subsequent decay of the large-scale motions into smaller-scale turbulent structures during the compression stroke appears to reduce the relationship with the intake flow variations.
An unbalanced spectra classification method based on entropy
NASA Astrophysics Data System (ADS)
Liu, Zhong-bao; Zhao, Wen-juan
2017-05-01
How to distinguish the minority spectra from the majority of spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is thus an important question for USCM. It can be shown mathematically that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on small- and medium-scale datasets and on large-scale datasets, respectively.
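The role of entropy in characterizing class imbalance can be illustrated with a small sketch. This is an assumption-laden stand-in: `class_entropy` and `balance_weights` are illustrative helpers showing how a skewed class distribution registers as low entropy and motivates reweighting, not the paper's USCM formulation.

```python
import numpy as np

def class_entropy(labels):
    """Shannon entropy of the class distribution (bits); low for unbalanced data."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def balance_weights(labels):
    """Per-class weights inversely proportional to class frequency."""
    classes, counts = np.unique(labels, return_counts=True)
    w = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), w.tolist()))

labels = np.array([0] * 950 + [1] * 50)   # 19:1 imbalance, like rare spectra
print(class_entropy(labels))              # well below the 1 bit of a balanced split
print(balance_weights(labels))            # minority class weighted up
```

A balanced two-class problem has entropy of exactly 1 bit; the further below 1 the entropy falls, the more a plain accuracy-maximizing classifier is tempted to ignore the minority spectra.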
Camphor-Enabled Transfer and Mechanical Testing of Centimeter-Scale Ultrathin Films.
Wang, Bin; Luo, Da; Li, Zhancheng; Kwon, Youngwoo; Wang, Meihui; Goo, Min; Jin, Sunghwan; Huang, Ming; Shen, Yongtao; Shi, Haofei; Ding, Feng; Ruoff, Rodney S
2018-05-21
Camphor is used to transfer centimeter-scale ultrathin films onto custom-designed substrates for mechanical (tensile) testing. Compared to traditional transfer methods using dissolving/peeling to remove the support-layers, camphor is sublimed away in air at low temperature, thereby avoiding additional stress on the as-transferred films. Large-area ultrathin films can be transferred onto hollow substrates without damage by this method. Tensile measurements are made on centimeter-scale 300 nm-thick graphene oxide film specimens, much thinner than the ≈2 μm minimum thickness of macroscale graphene-oxide films previously reported. Tensile tests were also done on two different types of large-area samples of adlayer free CVD-grown single-layer graphene supported by a ≈100 nm thick polycarbonate film; graphene stiffens this sample significantly, thus the intrinsic mechanical response of the graphene can be extracted. This is the first tensile measurement of centimeter-scale monolayer graphene films. The Young's modulus of polycrystalline graphene ranges from 637 to 793 GPa, while for near single-crystal graphene, it ranges from 728 to 908 GPa (folds parallel to the tensile loading direction) and from 683 to 775 GPa (folds orthogonal to the tensile loading direction), demonstrating the mechanical performance of large-area graphene in a size scale relevant to many applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
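One plausible way to back out the graphene modulus from a stack measurement is a thickness-weighted rule of mixtures. This is an assumption: the abstract does not state its extraction formula, and every number below is illustrative rather than taken from the paper.

```python
# Rule-of-mixtures sketch: the composite stiffness of a graphene/polycarbonate
# stack is modeled as the thickness-weighted average of the layer moduli, so
# the graphene modulus E_g can be solved for once the bare-PC film is known.

t_g, t_pc = 0.335e-9, 100e-9   # layer thicknesses in m (monolayer graphene, ~100 nm PC)
E_pc = 2.4e9                   # polycarbonate modulus, Pa (typical literature value)
E_comp = 5.0e9                 # measured stack modulus, Pa (illustrative number)

# E_comp * (t_g + t_pc) = E_g * t_g + E_pc * t_pc  =>  solve for E_g
E_g = (E_comp * (t_g + t_pc) - E_pc * t_pc) / t_g
print(E_g / 1e9, "GPa")
```

Because the graphene layer is ~300 times thinner than the polycarbonate yet hundreds of times stiffer, even a modest rise in the measured stack modulus implies a graphene modulus of several hundred GPa, consistent with the range reported in the abstract.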
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
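The first of the four correlation functions, the two-point probability function, can be sketched for a binary pore/solid image. This is a minimal illustration on an uncorrelated random medium, not the authors' multi-thread annealing code; in the reconstruction method, the annealing loop swaps voxels to drive such measured functions toward their target values.

```python
import numpy as np

rng = np.random.default_rng(4)
phi = 0.3                               # target porosity (pore fraction)
img = rng.random((128, 128)) < phi      # stand-in binary pore/solid image

def two_point_probability(img, max_r):
    """S2(r): probability that two points a distance r apart (along x) are both pores."""
    img = img.astype(float)
    # np.roll gives periodic boundaries, a common convention for synthetic media.
    return np.array([(img * np.roll(img, r, axis=1)).mean() for r in range(max_r)])

s2 = two_point_probability(img, 20)
# S2(0) equals the porosity; for an uncorrelated medium S2(r) decays to phi^2.
print(s2[0], s2[-1])
```

A real porous prototype would show structured decay in S2(r) between those two limits, and it is that structure the simulated-annealing reconstruction works to reproduce.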
Regional turbulence patterns driven by meso- and submesoscale processes in the Caribbean Sea
NASA Astrophysics Data System (ADS)
C. Pérez, Juan G.; R. Calil, Paulo H.
2017-09-01
The surface ocean circulation in the Caribbean Sea is characterized by the interaction between anticyclonic eddies and the Caribbean Upwelling System (CUS). These interactions lead to instabilities that modulate the transfer of kinetic energy up- or down-cascade. The interaction of North Brazil Current rings with the islands leads to the formation of submesoscale vorticity filaments leeward of the Lesser Antilles, thus transferring kinetic energy from large to small scales. Within the Caribbean, the upper-ocean dynamics range from large-scale currents to coastal upwelling filaments, allowing the vertical exchange of physical properties and supplying kinetic energy (KE) to larger scales. In this study, we use a regional model with different spatial resolutions (6, 3, and 1 km), focusing on the Guajira Peninsula and the Lesser Antilles in the Caribbean Sea, in order to evaluate the impact of submesoscale processes on the regional KE cascade. Ageostrophic velocities emerge as the Rossby number becomes O(1). As model resolution is increased, submesoscale motions become more energetic, as seen in the flatter KE spectra compared to the lower-resolution run. KE injection at the large scales is greater in the Guajira region than in the other regions and is more effectively transferred to smaller scales, showing that submesoscale dynamics is key in modulating eddy kinetic energy and the energy cascade within the Caribbean Sea.
NUMERICAL SIMULATIONS OF CORONAL HEATING THROUGH FOOTPOINT BRAIDING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansteen, V.; Pontieu, B. De; Carlsson, M.
2015-10-01
Advanced three-dimensional (3D) radiative MHD simulations now reproduce many properties of the outer solar atmosphere. When including a domain from the convection zone into the corona, a hot chromosphere and corona are self-consistently maintained. Here we study two realistic models, with different simulated areas, magnetic field strength and topology, and numerical resolution. These are compared in order to characterize the heating in the 3D-MHD simulations which self-consistently maintains the structure of the atmosphere. We analyze the heating at both large and small scales and find that heating is episodic and highly structured in space, but occurs along loop-shaped structures, and moves along with the magnetic field. On large scales we find that the heating per particle is maximal near the transition region and that widely distributed opposite-polarity field in the photosphere leads to a greater heating scale height in the corona. On smaller scales, heating is concentrated in current sheets, the thicknesses of which are set by the numerical resolution. Some current sheets fragment in time, this process occurring more readily in the higher-resolution model, leading to spatially highly intermittent heating. The large-scale heating structures are found to fade in less than about five minutes, while the smaller, local heating shows timescales of the order of two minutes in one model and one minute in the other, higher-resolution, model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schramm, D.N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the {Omega} = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally well or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
Using SQL Databases for Sequence Similarity Searching and Analysis.
Pearson, William R; Mackey, Aaron J
2017-09-13
Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
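The idea above, loading similarity-search hits into a relational database and then querying taxonomic subsets of significant homologs, can be sketched with Python's built-in sqlite3 module. The schema, accessions, and E-value cutoff below are invented for illustration; they are not the layout of the unit's seqdb_demo or search_demo databases.

```python
import sqlite3

# Hypothetical two-table schema: sequences plus similarity-search hits.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
CREATE TABLE hit (query_acc TEXT REFERENCES protein(acc),
                  subject_acc TEXT, evalue REAL);
""")
conn.executemany("INSERT INTO protein VALUES (?,?,?)", [
    ("P1", "E. coli", "MKT..."),
    ("P2", "E. coli", "MAD..."),
    ("H1", "H. sapiens", "MSL..."),
])
conn.executemany("INSERT INTO hit VALUES (?,?,?)", [
    ("P1", "H1", 1e-30),   # strong homolog
    ("P2", "H1", 0.5),     # insignificant hit
])

# Restrict to significant homologs of E. coli queries, the kind of
# taxonomic-subset query the unit uses for comparative analysis.
rows = conn.execute("""
    SELECT h.query_acc, h.subject_acc, h.evalue
    FROM hit h JOIN protein p ON p.acc = h.query_acc
    WHERE p.taxon = 'E. coli' AND h.evalue < 1e-5
""").fetchall()
print(rows)  # [('P1', 'H1', 1e-30)]
```

The join lets one summarize relationships across organisms in a single SQL statement instead of post-processing flat search output.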
Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M
2018-06-05
Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
El-Hady, Nabil M.
1993-01-01
The laminar-turbulent breakdown of a boundary-layer flow along a hollow cylinder at Mach 4.5 is investigated with large-eddy simulation. The subgrid scales are modeled dynamically, where the model coefficients are determined from the local resolved field. The behavior of the dynamic-model coefficients is investigated through both an a priori test with direct numerical simulation data for the same case and a complete large-eddy simulation. Both formulations proposed by Germano et al. and Lilly are used for the determination of unique coefficients for the dynamic model and their results are compared and assessed. The behavior and the energy cascade of the subgrid-scale field structure are investigated at various stages of the transition process. The investigations are able to duplicate a high-speed transition phenomenon observed in experiments and explained only recently by the direct numerical simulations of Pruett and Zang, which is the appearance of 'rope-like' waves. The nonlinear evolution and breakdown of the laminar boundary layer and the structure of the flow field during the transition process were also investigated.
An integrated network of Arabidopsis growth regulators and its use for gene prioritization.
Sabaghian, Ehsan; Drebert, Zuzanna; Inzé, Dirk; Saeys, Yvan
2015-12-01
Elucidating the molecular mechanisms that govern plant growth has been an important topic in plant research, and current advances in large-scale data generation call for computational tools that efficiently combine these different data sources to generate novel hypotheses. In this work, we present a novel, integrated network that combines multiple large-scale data sources to characterize growth regulatory genes in Arabidopsis, one of the main plant model organisms. The contributions of this work are twofold: first, we characterized a set of carefully selected growth regulators with respect to their connectivity patterns in the integrated network, and, subsequently, we explored to which extent these connectivity patterns can be used to suggest new growth regulators. Using a large-scale comparative study, we designed new supervised machine learning methods to prioritize growth regulators. Our results show that these methods significantly improve current state-of-the-art prioritization techniques, and are able to suggest meaningful new growth regulators. In addition, the integrated network is made available to the scientific community, providing a rich data source that will be useful for many biological processes, not necessarily restricted to plant growth.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
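The image-construction step described above, arranging speed observations on a two-dimensional time-space matrix before CNN feature extraction, can be sketched as follows. This is an illustrative sketch, not the authors' code; the array sizes and randomly generated speeds are invented.

```python
import numpy as np

# Rows index time steps, columns index road segments, as in the
# paper's "traffic as images" representation (assumed orientation).
n_steps, n_segments = 6, 4
rng = np.random.default_rng(0)
records = [(t, s, float(rng.uniform(20, 80)))   # (time, segment, speed km/h)
           for t in range(n_steps) for s in range(n_segments)]

image = np.zeros((n_steps, n_segments))
for t, s, speed in records:
    image[t, s] = speed

# Normalize to [0, 1], a typical preprocessing step before a CNN.
image = (image - image.min()) / (image.max() - image.min())
print(image.shape)  # (6, 4)
```

Each such matrix then becomes one training image for the two-step CNN pipeline (feature extraction, then network-wide speed prediction).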
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob; Gonder, Jeff
New technologies, such as connected and automated vehicles, have attracted increasing research attention for improving the energy efficiency and environmental impact of current transportation systems. The green routing strategy instructs a vehicle to select the most fuel-efficient route before it departs, offering the current transportation system a fuel-saving opportunity by identifying the greenest route. This paper introduces an evaluation framework for estimating the benefits of green routing based on large-scale, real-world travel data. The framework can quantify fuel savings by estimating the fuel consumption of actual routes and comparing it to routes procured by navigation systems. A route-based fuel consumption estimation model, considering road traffic conditions, functional class, and road grade, is proposed and used in the framework. An experiment using a large-scale data set from the California Household Travel Survey global positioning system trajectory database indicates that 31% of actual routes have fuel savings potential, with a cumulative estimated fuel savings of 12%.
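A minimal sketch of the framework's comparison step, under assumed numbers: estimate per-route fuel use and compute the saving from taking the navigation route instead of the actual route. The distance-times-rate model with a grade factor below is a placeholder, not the paper's route-based model, which also accounts for traffic conditions and functional class.

```python
def route_fuel(distance_km, rate_l_per_100km, grade_factor=1.0):
    # Illustrative estimate: base consumption scaled by a road-grade
    # factor (hypothetical stand-in for the paper's richer model).
    return distance_km * rate_l_per_100km / 100.0 * grade_factor

actual = route_fuel(12.0, 8.0, grade_factor=1.1)    # route actually driven
greenest = route_fuel(13.5, 8.0, grade_factor=0.9)  # navigation suggestion
saving = (actual - greenest) / actual
print(f"{saving:.1%}")  # → 8.0%
```

Note the greenest route can be longer yet use less fuel, which is exactly why the framework compares fuel rather than distance.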
NASA Astrophysics Data System (ADS)
Schramm, David N.
1992-07-01
The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
NASA Astrophysics Data System (ADS)
Schramm, D. N.
1992-03-01
The cosmological dark matter problem is reviewed. The Big Bang nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the omega = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between 'cold' and 'hot' non-baryonic candidates is shown to depend on the assumed 'seeds' that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages, and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.
A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds
NASA Technical Reports Server (NTRS)
Diebler, Corey; Smith, Mark
2002-01-01
A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers up to a maximum of 3 × 10^7, and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-01-01
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270
Modeling CMB lensing cross correlations with CLEFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu
2017-08-01
A new generation of surveys will soon map large fractions of the sky to ever greater depths, and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ_8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. A demonstration is presented at the end of this article; the results indicate that the proposed algorithm is capable of handling sophisticated cases.
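The similarity comparison at the heart of the matching step above can be illustrated with a toy attributed relational graph: settlements as nodes carrying one attribute (say, area) and adjacency relations as edges. The greedy matching and the weighted node/edge similarity below are an assumed, simplified stand-in for the authors' formulation.

```python
def attr_sim(a, b):
    # Similarity of two settlement attributes (e.g. areas): ratio in [0, 1].
    return min(a, b) / max(a, b)

def arg_similarity(g1, g2, node_w=0.6, edge_w=0.4):
    """Each graph: {'nodes': {id: area}, 'edges': set of frozenset pairs}."""
    # Greedy one-to-one node matching by attribute similarity.
    matches, used = {}, set()
    for n1, a1 in g1["nodes"].items():
        best = max((n2 for n2 in g2["nodes"] if n2 not in used),
                   key=lambda n2: attr_sim(a1, g2["nodes"][n2]), default=None)
        if best is not None:
            matches[n1] = best
            used.add(best)
    node_sim = sum(attr_sim(g1["nodes"][n1], g2["nodes"][n2])
                   for n1, n2 in matches.items()) / max(len(g1["nodes"]),
                                                        len(g2["nodes"]))
    # Structural similarity: Jaccard index of edge sets under the matching.
    mapped = {frozenset(matches[u] for u in e) for e in g1["edges"]
              if all(u in matches for u in e)}
    union = mapped | g2["edges"]
    edge_sim = len(mapped & g2["edges"]) / len(union) if union else 1.0
    return node_w * node_sim + edge_w * edge_sim

g_small = {"nodes": {"A": 10.0, "B": 4.0}, "edges": {frozenset(("A", "B"))}}
g_large = {"nodes": {"a": 9.0, "b": 5.0}, "edges": {frozenset(("a", "b"))}}
print(round(arg_similarity(g_small, g_large), 3))  # → 0.91
```

In the full algorithm such scores would be recomputed iteratively over candidate sets within each road-network block until the optimal pairs emerge.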
A High-Resolution WRF Tropical Channel Simulation Driven by a Global Reanalysis
NASA Astrophysics Data System (ADS)
Holland, G.; Leung, L.; Kuo, Y.; Hurrell, J.
2006-12-01
Since 2003, NCAR has invested in the development and application of the Nested Regional Climate Model (NRCM), based on the Weather Research and Forecasting (WRF) model and the Community Climate System Model, as a key component of the Prediction Across Scales Initiative. A prototype tropical channel model has been developed to investigate scale interactions and the influence of tropical convection on large-scale circulation and tropical modes. The model was developed based on the NCAR WRF model, configured as a tropical channel between 30°S and 45°N, wide enough to allow teleconnection effects over the mid-latitudes. Compared to the limited-area domains over which WRF is typically applied, the channel configuration alleviates issues with reflection of tropical modes that could result from imposing east/west boundaries. Using a large amount of available computing resources on a supercomputer (Blue Vista) during its bedding-in period, a simulation has been completed with the tropical channel at 36 km horizontal resolution for 5 years from 1996 to 2000, with large-scale circulation provided by the NCEP/NCAR global reanalysis at the north/south boundaries. Shorter simulations of 2 years and 6 months have also been performed to include two-way nests at 12 km and 4 km resolution, respectively, over the western Pacific warm pool, to explicitly resolve tropical convection over the Maritime Continent. The simulations realistically captured the large-scale circulation, including the trade winds over the tropical Pacific and Atlantic, the Australian and Asian monsoon circulations, and hurricane statistics. Preliminary analysis and evaluation of the simulations will be presented.
Thomson, William Murray; Malden, Penelope Elizabeth
2011-09-01
To examine the properties, validity and responsiveness of the Family Impact Scale in a consecutive clinical sample of patients undergoing dental treatment under general anaesthesia. A consecutive clinical sample of parents/caregivers of children receiving dental treatment under general anaesthesia provided data using the Family Impact Scale (FIS) component of the COHQOL(©) Questionnaire. The first questionnaire was completed before treatment, the follow-up questionnaire 1-4 weeks afterward. Treatment-associated changes in the FIS and its components were determined by comparing baseline and follow-up data. Baseline and follow-up data were obtained for 202 and 130 participants, respectively (64.4% follow-up). All FIS items showed large relative decreases in prevalence, the greatest seen in those relating to having sleep disrupted, blaming others, being upset, the child requiring more attention, financial difficulties and having to take time off work. Factor analysis largely confirmed the underlying factor structure, with three sub-scales (parental/family, parental emotions and family conflict) identified. The parental/family and parental emotions sub-scales showed the greatest treatment-associated improvement, with large effect sizes. There was a moderate improvement in scores on the family conflict sub-scale. The overall FIS showed a large improvement. Treating children with severe caries under general anaesthesia results in OHRQoL improvements for the family. Severe dental caries is not merely a restorative and preventive challenge for those who treat children; it has far-reaching effects on those who share the household and care for the affected child.
Kidane, A.; Hepelwa, A.; Tingum, E.; Hu, T.W.
2016-01-01
In this study an attempt is made to compare the efficiency of tobacco leaf production with that of three other crops – maize, groundnut and rice – commonly grown by Tanzanian small-scale farmers. The paper compares the prevalence of tobacco use in Africa with that of the developed world; while there has been a decline in the latter, there appears to be an increase in the former. The economic benefits and costs of tobacco production and consumption in Tanzania are also compared. Using nationally representative large-scale data, we observed that the modern agricultural inputs allotted to tobacco were much higher than those allotted to maize, groundnut and rice. Using a frontier production approach, the study shows that the efficiencies of tobacco, maize, groundnut and rice were 75.3%, 68.5%, 64.5% and 46.5%, respectively. Despite the massive agricultural inputs allotted to it, tobacco is still only 75.3% efficient: tobacco farmers could have produced the same amount by utilizing only 75.3% of the realized inputs. The relatively high efficiency of tobacco can only be explained by the large-scale allocation of modern agricultural inputs such as fertilizer, better seeds, credit facilities and easy access to markets. The situation is likely to be reversed if more inputs were directed to basic food crops such as maize, rice and groundnut. Tanzania's policy of food security and poverty alleviation can only be achieved by allocating more modern inputs to basic necessities such as maize and rice. PMID:28124032
Large scale filaments associated with Milky Way spiral arms
NASA Astrophysics Data System (ADS)
Wang, Ke; Testi, Leonardo; Ginsburg, Adam; Walmsley, Malcolm; Molinari, Sergio; Schisano, Eugenio
2015-08-01
The ubiquity of filamentary structure at various scales throughout the Galaxy has triggered a renewed interest in their formation, evolution, and role in star formation. The largest filaments can reach Galactic scale as part of the spiral arm structure. However, such large-scale filaments are hard to identify systematically owing to limitations in identification methodology (i.e., as extinction features). We present a new approach to directly search for the largest, coldest, and densest filaments in the Galaxy, making use of sensitive Herschel Hi-GAL data complemented by spectral line cubes. We present a sample of the 9 most prominent Herschel filaments from a pilot search field. These filaments measure 37-99 pc long and 0.6-3.0 pc wide, with masses of (0.5-8.3)×10^4 Msun and beam-averaged (28", or 0.4-0.7 pc) peak H2 column densities of (1.7-9.3)×10^22 cm^-2. The bulk of the filaments are relatively cold (17-21 K), while some local clumps have dust temperatures up to 25-47 K due to local star formation activity. All the filaments are located within <~60 pc of the Galactic mid-plane. Comparing the filaments to a recent spiral arm model incorporating the latest parallax measurements, we find that 7/9 of them reside within arms, but most are close to arm edges. These filaments are comparable in length to the Galactic scale height and therefore are not simply part of a grander turbulent cascade. These giant filaments, which often contain regularly spaced pc-scale clumps, are much larger than the filaments found in the Herschel Gould Belt Survey, and they form the upper end of the filamentary hierarchy. Fully operational, ALMA and NOEMA will be able to resolve and characterize similar filaments in nearby spiral galaxies, allowing us to compare star formation in a uniform context of spiral arms.
NASA Technical Reports Server (NTRS)
Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.
2017-01-01
A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for a thermal ice protection system. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber number based scaling methods resulted in smaller runback ice mass than the Reynolds number based scaling method, and the ice accretions from the Weber number based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber number based scaling methods, and the difference became greater as the speed was increased. This indicates that there may be Reynolds number effects that are not fully accounted for, which warrants further study.
GPS-based household interview survey for the Cincinnati, Ohio Region.
DOT National Transportation Integrated Search
2012-02-01
Methods for Conducting a Large-Scale GPS-Only Survey of Households: Past Household Travel Surveys (HTS) in the United States have only piloted small subsamples of Global Positioning Systems (GPS) completes compared with 1-2 day self-reported travel i...
Our Place in the Spongy Universe
ERIC Educational Resources Information Center
Bogner, Donna; Wentworth, Benning L.; Ristvey, John; Yanow, Gil; Wiens, Roger
2006-01-01
Physicist James Trefil once described the universe as "The Spongy Universe," comparing large-scale cosmic structures to the structure of a sponge. The NASA Genesis education module "Cosmic Chemistry: Cosmogony" features the "Spongy Universe" activity in which pairs of students observe a household sponge, making…
Characterization and Bioactivity of Hydrolysates produced from Aflatoxin Contaminated Peanut Meal
USDA-ARS?s Scientific Manuscript database
Justification: Interest in protein hydrolysates is increasing because of their improved functionality and health benefits, particularly angiotensin-converting enzyme (ACE) inhibition, compared to their parent proteins. Large-scale production of hydrolysates is expensive, and one way to minimize co...
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.
Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan
2013-06-27
Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. 
Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html.
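One of the Rainbow improvements listed above, splitting large sequence files for better load balance downstream, can be sketched as follows. This is not Rainbow's actual code; the chunking function, read contents, and chunk size are invented for illustration (FASTQ records are four lines per read).

```python
def split_fastq(lines, reads_per_chunk):
    """Yield chunks of FASTQ records, each chunk holding at most
    reads_per_chunk reads (4 lines per read)."""
    chunk = []
    for i in range(0, len(lines), 4):
        chunk.extend(lines[i:i + 4])
        if len(chunk) // 4 == reads_per_chunk:
            yield chunk
            chunk = []
    if chunk:  # final, possibly short, chunk
        yield chunk

# Five toy reads.
fastq = []
for n in range(5):
    fastq += [f"@read{n}", "ACGT", "+", "IIII"]

chunks = list(split_fastq(fastq, reads_per_chunk=2))
print([len(c) // 4 for c in chunks])  # → [2, 2, 1]
```

Each chunk could then be dispatched to a separate EC2 instance, which is the load-balancing idea the abstract describes.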
Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing
2013-01-01
Background Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources that are required for large-scale whole genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Results Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. The time includes the import and export of the data using Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log the running metrics in data processing and monitoring multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Conclusions Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. 
For human WGS data sequenced by either the Illumina HiSeq 2000 or HiSeq 2500 platforms, Rainbow can be used straight out of the box. Rainbow is available for third-party implementation and use, and can be downloaded from http://s3.amazonaws.com/jnj_rainbow/index.html. PMID:23802613
NASA Astrophysics Data System (ADS)
Folsom, C. P.; Bouvier, J.; Petit, P.; Lèbre, A.; Amard, L.; Palacios, A.; Morin, J.; Donati, J.-F.; Vidotto, A. A.
2018-03-01
There is a large change in surface rotation rates of sun-like stars on the pre-main sequence and early main sequence. Since these stars have dynamo-driven magnetic fields, this implies a strong evolution of their magnetic properties over this time period. The spin-down of these stars is controlled by interactions between stellar winds and magnetic fields, thus magnetic evolution in turn plays an important role in rotational evolution. We present here the second part of a study investigating the evolution of large-scale surface magnetic fields in this critical time period. We observed stars in open clusters and stellar associations with known ages between 120 and 650 Myr, and used spectropolarimetry and Zeeman Doppler Imaging to characterize their large-scale magnetic field strength and geometry. We report 15 stars with magnetic detections here. These stars have masses from 0.8 to 0.95 M⊙, rotation periods from 0.326 to 10.6 d, and we find large-scale magnetic field strengths from 8.5 to 195 G with a wide range of geometries. We find a clear trend towards decreasing magnetic field strength with age, and a power-law decrease in magnetic field strength with Rossby number. There is some tentative evidence for saturation of the large-scale magnetic field strength at Rossby numbers below 0.1, although the saturation point is not yet well defined. Comparing with younger classical T Tauri stars, we support the hypothesis that differences in internal structure produce large differences in observed magnetic fields; however, for weak-lined T Tauri stars this is less clear.
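The reported power-law decrease of field strength with Rossby number is the kind of relation usually fit as a straight line in log-log space. A minimal sketch follows; the (Rossby number, field strength) pairs here are invented placeholders for the unsaturated regime (Ro > 0.1), not the paper's measurements.

```python
import numpy as np

# Hypothetical (Rossby number, large-scale field strength in gauss) pairs;
# the real values come from the spectropolarimetric sample in the paper.
rossby = np.array([0.3, 0.5, 0.8, 1.2, 2.0])
b_field = np.array([120.0, 60.0, 30.0, 18.0, 9.0])

# Fit B = A * Ro**alpha, equivalently log B = log A + alpha * log Ro,
# so alpha is the slope of a linear fit in log-log space.
alpha, log_a = np.polyfit(np.log(rossby), np.log(b_field), 1)
print(f"power-law index alpha = {alpha:.2f}")  # negative: B falls with Ro
```

A saturated regime (Ro below ~0.1, as tentatively seen in the paper) would show up as points flattening away from this line and would need a broken power-law fit instead.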
Gray, B.R.; Shi, W.; Houser, J.N.; Rogala, J.T.; Guan, Z.; Cochran-Biederman, J. L.
2011-01-01
Ecological restoration efforts in large rivers generally aim to ameliorate ecological effects associated with large-scale modification of those rivers. This study examined whether the effects of restoration efforts, specifically those of island construction, within a largely open-water restoration area of the Upper Mississippi River (UMR) might be seen at the spatial scale of that 3476 ha area. The cumulative effects of island construction, when observed over multiple years, were postulated to have made the restoration area increasingly similar to a positive reference area (a proximate area comprising contiguous backwater areas) and increasingly different from two negative reference areas. The negative reference areas represented the Mississippi River main channel in an area proximate to the restoration area and an open-water area in a related Mississippi River reach that has seen relatively little restoration effort. Inferences on the effects of restoration were made by comparing constrained and unconstrained models of summer chlorophyll a (CHL), summer inorganic suspended solids (ISS) and counts of benthic mayfly larvae. Constrained models forced trends in means or in both means and sampling variances to become, over time, increasingly similar to those in the positive reference area and increasingly dissimilar to those in the negative reference areas. Trends were estimated over 12-year (mayflies) or 14-year sampling periods, and were evaluated using model information criteria. Based on these methods, restoration effects were observed for CHL and mayflies while evidence in favour of restoration effects on ISS was equivocal. These findings suggest that the cumulative effects of island building at relatively large spatial scales within large rivers may be estimated using data from large-scale surveillance monitoring programs. Published in 2010 by John Wiley & Sons, Ltd.
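Selecting between competing trend models by information criterion, as done here, can be illustrated with a generic sketch. The paper's constrained/unconstrained models are more elaborate (they couple trends to reference areas); the code below only shows the simpler, underlying idea of comparing a no-trend model against a linear-trend model via AIC, using simulated yearly means.

```python
import numpy as np

def aic(n, rss, k):
    """Gaussian AIC from a residual sum of squares: n*ln(rss/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Simulated 14 yearly means for a monitored variable (e.g. log CHL);
# the real data come from the UMR surveillance monitoring program.
years = np.arange(14)
y = 2.0 + 0.05 * years + np.random.default_rng(0).normal(0, 0.05, 14)

# No-trend model: intercept only (k = 2 parameters: mean, sigma)
rss0 = np.sum((y - y.mean()) ** 2)

# Linear-trend model (k = 3: intercept, slope, sigma)
coef = np.polyfit(years, y, 1)
rss1 = np.sum((y - np.polyval(coef, years)) ** 2)

print("AIC, no trend    :", aic(len(y), rss0, 2))
print("AIC, linear trend:", aic(len(y), rss1, 3))  # lower AIC is preferred
```

With an appreciable trend relative to the sampling noise, the trend model attains the lower AIC despite its extra parameter, which is the logic behind the paper's model-based inference on restoration effects.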
The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum
NASA Astrophysics Data System (ADS)
Frith, W. J.; Outram, P. J.; Shanks, T.
2005-06-01
We present new evidence for a large deficiency in the local galaxy distribution situated in the ˜4000 deg² APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K-bands indicate a deficiency in the local galaxy distribution of ˜30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the correlation function observed in the 2dFGRS or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l<30), ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l<100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z≈0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a `fair sample' of the Universe. 
Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.
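The underdensity quoted here is a fractional deficiency of observed number counts relative to a homogeneous n(z)-based prediction, which reduces to a one-line calculation once both sets of counts are in hand. The per-magnitude-bin counts below are invented for illustration only; the real APM and 2MASS counts are in the paper.

```python
import numpy as np

# Hypothetical observed vs model-predicted galaxy number counts per
# magnitude bin (illustrative numbers, not the APM/2MASS data).
model_counts = np.array([1200.0, 3100.0, 7800.0, 19000.0])
obs_counts = np.array([840.0, 2200.0, 5500.0, 13500.0])

# Fractional deficiency relative to the homogeneous n(z) prediction;
# these placeholder numbers give roughly a 30% underdensity.
deficiency = 1.0 - obs_counts.sum() / model_counts.sum()
print(f"deficiency = {deficiency:.0%}")
```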
Harada, Sei; Hirayama, Akiyoshi; Chan, Queenie; Kurihara, Ayako; Fukai, Kota; Iida, Miho; Kato, Suzuka; Sugiyama, Daisuke; Kuwabara, Kazuyo; Takeuchi, Ayano; Akiyama, Miki; Okamura, Tomonori; Ebbels, Timothy M D; Elliott, Paul; Tomita, Masaru; Sato, Asako; Suzuki, Chizuru; Sugimoto, Masahiro; Soga, Tomoyoshi; Takebayashi, Toru
2018-01-01
Cohort studies with metabolomics data are becoming more widespread; however, large-scale studies involving tens of thousands of participants are still limited, especially in Asian populations. Therefore, we started the Tsuruoka Metabolomics Cohort Study, enrolling 11,002 community-dwelling adults in Japan and using capillary electrophoresis-mass spectrometry (CE-MS) and liquid chromatography-mass spectrometry. The CE-MS method is highly amenable to absolute quantification of polar metabolites; however, its reliability for large-scale measurement is unclear. The aim of this study was to examine the reproducibility and validity of large-scale CE-MS measurements. In addition, the study presents absolute concentrations of polar metabolites in human plasma, which can be used in future as reference ranges in a Japanese population. Metabolomic profiling of 8,413 fasting plasma samples was completed using CE-MS, and 94 polar metabolites were structurally identified and quantified. Quality control (QC) samples were injected every ten samples and assessed throughout the analysis. Inter- and intra-batch coefficients of variation of QC and participant samples, and technical intraclass correlation coefficients were estimated. Passing-Bablok regression of plasma concentrations by CE-MS on serum concentrations by standard clinical chemistry assays was conducted for creatinine and uric acid. In QC samples, the coefficient of variation was less than 20% for 64 metabolites, and less than 30% for 80 metabolites, out of the 94 metabolites. Inter-batch coefficient of variation was less than 20% for 81 metabolites. The estimated technical intraclass correlation coefficient was above 0.75 for 67 metabolites. The slope of the Passing-Bablok regression was estimated as 0.97 (95% confidence interval: 0.95, 0.98) for creatinine and 0.95 (0.92, 0.96) for uric acid. 
Compared to published data from other large cohort measurement platforms, reproducibility of metabolites common to the platforms was similar to or better than in the other studies. These results show that our CE-MS platform is suitable for conducting large-scale epidemiological studies.
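The QC metric at the centre of this evaluation, the coefficient of variation of repeated QC-sample injections, is straightforward to compute per metabolite. The sketch below uses three invented metabolites with made-up concentrations; the study's real panel has 94 structurally identified metabolites, and its thresholds (20% and 30% CV) are applied here for illustration.

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation in percent: 100 * sample SD / mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical QC-sample concentrations (arbitrary units) across repeated
# injections; real QC samples were injected every ten study samples.
qc = {
    "alanine":    [312, 305, 298, 321, 309],
    "creatinine": [71, 74, 69, 73, 70],
    "uric_acid":  [288, 301, 279, 310, 295],
}

cvs = {m: cv_percent(x) for m, x in qc.items()}
n_under_20 = sum(cv < 20 for cv in cvs.values())
print(cvs)
print(f"{n_under_20}/{len(qc)} metabolites under 20% CV")
```

Applied across all metabolites and batches, tallies like `n_under_20` give exactly the kind of summary reported in the abstract (64 of 94 metabolites under 20% CV, 80 under 30%).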