A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
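For orientation, the local-type parameterization commonly used to quantify such non-Gaussianity (a standard convention, not taken from this abstract) writes the Bardeen potential as a quadratic correction to a Gaussian field \(\varphi\):

```latex
\Phi = \varphi + f_{\mathrm{NL}}\left(\varphi^{2} - \langle\varphi^{2}\rangle\right)
```

The nonlinear GR constraint generates quadratic terms of exactly this local form even when \(\varphi\) itself is Gaussian, which is the sense in which the density field acquires an effective non-Gaussianity.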
Polymer Physics of the Large-Scale Structure of Chromatin.
Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2016-01-01
We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.
Lessons from a Large-Scale Assessment: Results from Conceptual Inventories
ERIC Educational Resources Information Center
Thacker, Beth; Dulli, Hani; Pattillo, Dave; West, Keith
2014-01-01
We report conceptual inventory results of a large-scale assessment project at a large university. We studied the introduction of materials and instructional methods informed by physics education research (PER-informed materials) into a department where most instruction has previously been traditional and a significant…
Large Scale Underground Detectors in Europe
NASA Astrophysics Data System (ADS)
Katsanevas, S. K.
2006-07-01
The physics potential and the complementarity of the large-scale underground European detectors, Water Cherenkov (MEMPHYS), Liquid Argon TPC (GLACIER) and Liquid Scintillator (LENA), are presented with emphasis on the major physics opportunities, namely proton decay, supernova detection and neutrino parameter determination using accelerator beams.
Stability of knotted vortices in wave chaos
NASA Astrophysics Data System (ADS)
Taylor, Alexander; Dennis, Mark
Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Cohort Profile of the Goals Study: A Large-Scale Research of Physical Activity in Dutch Students
ERIC Educational Resources Information Center
de Groot, Renate H. M.; van Dijk, Martin L.; Kirschner, Paul A.
2015-01-01
The GOALS study (Grootschalig Onderzoek naar Activiteiten van Limburgse Scholieren [Large-scale Research of Activities in Dutch Students]) was set up to investigate possible associations between different forms of physical activity and inactivity with cognitive performance, academic achievement and mental well-being. It was conducted at a…
Dewetting and Hydrophobic Interaction in Physical and Biological Systems
Berne, Bruce J.; Weeks, John D.; Zhou, Ruhong
2013-01-01
Hydrophobicity manifests itself differently on large and small length scales. This review focuses on large length scale hydrophobicity, particularly on dewetting at single hydrophobic surfaces and drying in regions bounded on two or more sides by hydrophobic surfaces. We review applicable theories, simulations and experiments pertaining to large scale hydrophobicity in physical and biomolecular systems and clarify some of the critical issues pertaining to this subject. Given space constraints, we could not review all of the significant and interesting work in this very active field. PMID:18928403
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2017-05-01
GenASiS Basics provides Fortran 2003 classes furnishing extensible object-oriented utilitarian functionality for large-scale physics simulations on distributed memory supercomputers. This functionality includes physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. This revision, Version 2 of Basics, makes mostly minor additions to functionality and includes some simplifying name changes.
Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation
2016-08-02
Tamer M. Wasfy, Paramsothy Jayakumar, Dave...
Presentation outline: NRMM; Objectives; Soft Soils; Review of Physics-Based Soil Models; MBD/DEM Modeling Formulation (Joint & Contact Constraints, DEM Cohesive Soil Model); Cone Penetrometer Experiment; Vehicle-Soil Model; Vehicle Mobility DOE Procedure; Simulation Results; Concluding Remarks.
Enabling large-scale viscoelastic calculations via neural network acceleration
NASA Astrophysics Data System (ADS)
Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.
2017-12-01
One of the most significant challenges in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
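As a toy illustration of the surrogate idea (not the authors' actual deep ANN, which is trained on output from viscoelastic codes), one can fit a small network to a stand-in postseismic relaxation curve; all names and values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: normalized postseismic relaxation ~ 1 - exp(-t/tau)
# (illustrative stand-in for expensive viscoelastic code output)
t = np.linspace(0.0, 1.0, 200)[:, None]
y = 1.0 - np.exp(-t / 0.3)

# One-hidden-layer network; the paper's ANNs are deeper, this is a sketch
W1, b1 = rng.normal(0.0, 1.0, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)

def forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def loss(x, target, W1, b1, W2, b2):
    pred, _ = forward(x, W1, b1, W2, b2)
    return float(np.mean((pred - target) ** 2))

loss0 = loss(t, y, W1, b1, W2, b2)
lr = 0.02
for _ in range(2000):
    pred, h = forward(t, W1, b1, W2, b2)
    err = (pred - y) / len(t)                 # scaled MSE gradient w.r.t. pred
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1.0 - h**2)          # backprop through tanh
    gW1, gb1 = t.T @ dh, dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss1 = loss(t, y, W1, b1, W2, b2)
```

Once trained, evaluating the network is orders of magnitude cheaper than re-running the physics code, which is the source of the quoted speed-up.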
Physical activity correlates with neurological impairment and disability in multiple sclerosis.
Motl, Robert W; Snook, Erin M; Wynn, Daniel R; Vollmer, Timothy
2008-06-01
This study examined the correlation of physical activity with neurological impairment and disability in persons with multiple sclerosis (MS). Eighty individuals with MS wore an accelerometer for 7 days and completed the Symptom Inventory (SI), Performance Scales (PS), and Expanded Disability Status Scale. There were large negative correlations between the accelerometer and SI (r = -0.56; rho = -0.58) and Expanded Disability Status Scale (r = -0.60; rho = -0.69) and a moderate negative correlation between the accelerometer and PS (r = -0.39; rho = -0.48) indicating that physical activity was associated with reduced neurological impairment and disability. Such findings provide a preliminary basis for using an accelerometer and the SI and PS as outcome measures in large-scale prospective and experimental examinations of the effect of physical activity behavior on disability and dependence in MS.
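The reported statistics are ordinary Pearson (r) and Spearman (rho) correlations; a minimal sketch with synthetic stand-in data (the variable names and numbers are illustrative, not the study's data):

```python
import numpy as np

def pearson(x, y):
    """Pearson product-moment correlation coefficient r."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman rho: Pearson correlation of the ranks (assumes no ties)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))

rng = np.random.default_rng(1)
activity = rng.uniform(100, 400, size=80)                    # synthetic accelerometer counts
disability = 10 - 0.02 * activity + rng.normal(0, 0.5, 80)   # noisy negative relation

r, rho = pearson(activity, disability), spearman(activity, disability)
```

With a strong monotone negative relation, both coefficients come out close to -1, matching the sign convention used in the abstract.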
NASA Astrophysics Data System (ADS)
Black, R. X.
2017-12-01
We summarize results from a project focusing on regional temperature and precipitation extremes over the continental United States. Our project introduces a new framework for evaluating these extremes emphasizing their (a) large-scale organization, (b) underlying physical sources (including remote-excitation and scale-interaction) and (c) representation in climate models. Results to be reported include the synoptic-dynamic behavior, seasonality and secular variability of cold waves, dry spells and heavy rainfall events in the observational record. We also study how the characteristics of such extremes are systematically related to Northern Hemisphere planetary wave structures and thus planetary- and hemispheric-scale forcing (e.g., those associated with major El Nino events and Arctic sea ice change). The underlying physics of event onset are diagnostically quantified for different categories of events. Finally, the representation of these extremes in historical coupled climate model simulations is studied and the origins of model biases are traced using new metrics designed to assess the large-scale atmospheric forcing of local extremes.
ERIC Educational Resources Information Center
Hampden-Thompson, Gillian; Lubben, Fred; Bennett, Judith
2011-01-01
Quantitative secondary analysis of large-scale data can be combined with in-depth qualitative methods. In this paper, we discuss the role of this combined methods approach in examining the uptake of physics and chemistry in post compulsory schooling for students in England. The secondary data analysis of the National Pupil Database (NPD) served…
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
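A minimal sketch of the key ingredient, the spatial autocorrelation function of a subgrid field (1-D here for brevity; the paper's application to cloud-radiation interaction is more involved):

```python
import numpy as np

def spatial_autocorrelation(field):
    """Normalized spatial autocorrelation of a 1-D field via FFT (Wiener-Khinchin)."""
    f = field - field.mean()
    n = f.size
    # Zero-pad to 2n so the circular correlation equals the linear one
    spec = np.fft.rfft(f, 2 * n)
    acf = np.fft.irfft(spec * np.conj(spec))[:n]
    acf /= np.arange(n, 0, -1)      # unbiased estimate: divide by overlap count
    return acf / acf[0]             # normalize so that lag 0 equals 1
```

The normalized curve encodes how far spatial structure persists; a parameterization can then carry this one function instead of the full 3D field.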
Extracting Primordial Non-Gaussianity from Large Scale Structure in the Post-Planck Era
NASA Astrophysics Data System (ADS)
Dore, Olivier
Astronomical observations have become a unique tool to probe fundamental physics. Cosmology, in particular, has emerged as a data-driven science whose phenomenological modeling has achieved great success: in the post-Planck era, key cosmological parameters are measured to percent precision. A single model reproduces a wealth of astronomical observations involving very distinct physical processes at different times. This success leads to fundamental physical questions. One of the most salient is the origin of the primordial perturbations that grew to form the large-scale structures we now observe. More and more cosmological observables point to inflationary physics as the origin of the structure observed in the universe. Inflationary physics predicts the statistical properties of the primordial perturbations, which are thought to be slightly non-Gaussian. The detection of this small deviation from Gaussianity represents the next frontier in early-Universe physics. Measuring it would provide direct, unique and quantitative insights into the physics at play when the Universe was only a fraction of a second old, thus probing energies untouchable otherwise. On par with the well-known relic gravitational wave radiation -- the famous ``B-modes'' -- it is one of the few probes of inflation. This departure from Gaussianity leads to very specific signatures in the large-scale clustering of galaxies. By observing large-scale structure, we can thus establish a direct connection with fundamental theories of the early universe. In the post-Planck era, large-scale structures are our most promising pathway to measuring this primordial signal. Current estimates suggest that the next generation of space- or ground-based large-scale structure surveys (e.g. the ESA EUCLID or NASA WFIRST missions) might enable a detection of this signal. This potentially huge payoff requires us to solidify the theoretical predictions supporting these measurements.
Even if the exact signal we are looking for is of unknown amplitude, it is clear that we must measure it as well as these ground-breaking data sets will permit. We propose to develop the supporting theoretical work to the point where the complete non-Gaussian signature can be extracted from these data sets. We will do so by developing three complementary directions: - We will develop the appropriate formalism to measure and model galaxy clustering on the largest scales. - We will study the impact of non-Gaussianity on higher-order statistics, the most promising statistics for our purpose. - We will make explicit the connection between these observables and the microphysics of a large class of inflation models, and also identify fundamental limitations to this interpretation.
Potential for geophysical experiments in large scale tests.
Dieterich, J.H.
1981-01-01
Potential research applications for large-specimen geophysical experiments include measurements of the scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special-purpose low-stress (100 MPa)…
Multiscale solvers and systematic upscaling in computational physics
NASA Astrophysics Data System (ADS)
Brandt, A.
2005-07-01
Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).
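As a concrete miniature of the multigrid idea the abstract mentions first (a textbook two-grid cycle for the 1-D Poisson equation, not any of the advanced applications listed), consider:

```python
import numpy as np

def poisson(n, h):
    """Standard second-order finite-difference Laplacian (Dirichlet BCs)."""
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, b, sweeps=3, w=2/3):
    """Weighted Jacobi smoothing: damps high-frequency error components."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + w * (b - A @ u) / d
    return u

def two_grid_cycle(A, Ac, b, u):
    """Pre-smooth, solve the residual equation on the coarse grid, correct, post-smooth."""
    u = jacobi(A, u, b)
    r = b - A @ u
    rc = 0.25 * (r[:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    ec = np.linalg.solve(Ac, rc)                        # exact coarse-grid solve
    e = np.zeros_like(u)                                # linear interpolation back
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return jacobi(A, u + e, b)

nf, nc = 63, 31                     # fine/coarse interior points, nf = 2*nc + 1
hf, hc = 1 / (nf + 1), 1 / (nc + 1)
A, Ac = poisson(nf, hf), poisson(nc, hc)
b = np.ones(nf)

u = np.zeros(nf)
res0 = np.linalg.norm(b - A @ u)
for _ in range(10):
    u = two_grid_cycle(A, Ac, b, u)
res = np.linalg.norm(b - A @ u)
```

Replacing the exact coarse solve with a recursive call on a still-coarser grid gives the multigrid V-cycle proper, whose cost stays linear in the number of unknowns.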
Physics implications of the diphoton excess from the perspective of renormalization group flow
Gu, Jiayin; Liu, Zhen
2016-04-06
A very plausible explanation for the recently observed diphoton excess at the 13 TeV LHC is a (pseudo)scalar with mass around 750 GeV, which couples to a gluon pair and to a photon pair through loops involving vector-like quarks (VLQs). To accommodate the observed rate, the required Yukawa couplings tend to be large. A large Yukawa coupling would rapidly run up with the scale and quickly reach the perturbativity bound, indicating that new physics, possibly with a strong dynamics origin, is nearby. The case becomes stronger especially if the ATLAS observation of a large width persists. In this paper we study the implication on the scale of new physics from the 750 GeV diphoton excess using the method of renormalization group running with careful treatment of different contributions and the perturbativity criterion. Our results suggest that the scale of new physics is generically not much larger than the TeV scale, in particular if the width of the hinted (pseudo)scalar is large. Introducing multiple copies of VLQs, lowering the VLQ masses and enlarging VLQ electric charges help reduce the required Yukawa couplings and can push the cutoff scale to higher values. Nevertheless, if the width of the 750 GeV resonance turns out to be larger than about 1 GeV, it is very hard to increase the cutoff scale beyond a few TeV. This is a strong hint that new particles in addition to the 750 GeV resonance and the vector-like quarks should be around the TeV scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus the subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
Magnetic fields are observed to exist on all scales in many astrophysical sources such as stars, galaxies, and accretion discs. Understanding the origin of large-scale magnetic fields, whereby the field emerges on spatial scales large compared to the fluctuations, has been a particularly long-standing challenge. Our physics objectives are: (1) What are the minimum ingredients for large-scale dynamo growth? (2) Could a large-scale magnetic field grow out of turbulence and be sustained despite the presence of dissipation? These questions are fundamental for understanding the large-scale dynamo in both laboratory and astrophysical plasmas. Here, we report major new findings in the area of large-scale dynamo (magnetic field generation).
What Will the Neighbors Think? Building Large-Scale Science Projects Around the World
Jones, Craig; Mrotzek, Christian; Toge, Nobu; Sarno, Doug
2017-12-22
Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.
ERIC Educational Resources Information Center
Burgin, Rick A.
2012-01-01
Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and are often not addressed with…
Physical habitat monitoring strategy (PHAMS) for reach-scale restoration effectiveness monitoring
Jones, Krista L.; O'Daniel, Scott J.; Beechie, Tim J.; Zakrajsek, John; Webster, John G.
2015-04-14
Habitat restoration efforts by the Confederated Tribes of the Umatilla Indian Reservation (CTUIR) have shifted from the site scale (1-10 meters) to the reach scale (100-1,000 meters). This shift was in response to the growing scientific emphasis on process-based restoration and to support from the 2007 Accords Agreement with the Bonneville Power Administration. With the increased size of restoration projects, the CTUIR and other agencies need applicable monitoring methods for assessing large-scale changes in river and floodplain habitats following restoration. The goal of the Physical Habitat Monitoring Strategy is to outline methods that are useful for capturing reach-scale changes in surface and groundwater hydrology, geomorphology, hydrologic connectivity, and riparian vegetation at restoration projects. The Physical Habitat Monitoring Strategy aims to avoid duplication with existing regional effectiveness monitoring protocols by identifying complementary reach-scale metrics and methods that may improve the ability of CTUIR and others to detect instream and riparian changes at large restoration projects.
A Novel Architecture of Large-Scale Communication in IOT
NASA Astrophysics Data System (ADS)
Ma, Wubin; Deng, Su; Huang, Hongbin
2018-03-01
In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have described the large-scale communication architecture of the IOT in detail. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.
ERIC Educational Resources Information Center
Saul, Jeffery M.; Deardorff, Duane L.; Abbott, David S.; Allain, Rhett J.; Beichner, Robert J.
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project at North Carolina State University (NCSU) is developing a curriculum to promote learning through in-class group activities in introductory physics classes up to 100 students. The authors are currently in Phase II of the project using a specially designed…
Use of Analogy in Learning Physics: The Role of Representations
ERIC Educational Resources Information Center
Podolefsky, Noah S.; Finkelstein, Noah D.
2006-01-01
Previous studies have demonstrated that analogies can promote student learning in physics and can be productively taught to students to support their learning, under certain conditions. We build on these studies to explore the use of analogy by students in a large introductory college physics course. In the first large-scale study of its kind, we…
The physics behind the larger scale organization of DNA in eukaryotes.
Emanuel, Marc; Radja, Nima Hamedani; Henriksson, Andreas; Schiessel, Helmut
2009-07-01
In this paper, we discuss in detail the organization of chromatin during a cell cycle at several levels. We show that current experimental data on large-scale chromatin organization have not yet reached the level of precision to allow for detailed modeling. We speculate in some detail about the possible physics underlying the larger scale chromatin organization.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Spectral enstrophy budget in a shear-less flow with turbulent/non-turbulent interface
NASA Astrophysics Data System (ADS)
Cimarelli, Andrea; Cocconi, Giacomo; Frohnapfel, Bettina; De Angelis, Elisabetta
2015-12-01
A numerical analysis of the interaction between decaying shear-free turbulence and quiescent fluid is performed by means of global statistical budgets of enstrophy, both at the single-point and two-point levels. The single-point enstrophy budget allows us to recognize three physically relevant layers: a bulk turbulent region, an inhomogeneous turbulent layer, and an interfacial layer. Within these layers, enstrophy is produced, transferred, and finally destroyed while leading to a propagation of the turbulent front. These processes do not only depend on the position in the flow field but are also strongly scale dependent. In order to tackle this multi-dimensional behaviour of enstrophy in the space of scales and in physical space, we analyse the spectral enstrophy budget equation. The picture consists of an inviscid spatial cascade of enstrophy from large to small scales parallel to the interface, moving towards the interface. At the interface, this phenomenon breaks down, giving way to an anisotropic cascade in which large-scale structures exhibit a cascade process only normal to the interface, thus reducing their thickness while retaining their lengths parallel to the interface. The observed behaviour could be relevant for both theoretical and modelling approaches to flows with interacting turbulent/non-turbulent regions. The scale properties of the turbulent propagation mechanisms highlight that the inviscid turbulent transport is a large-scale phenomenon. On the contrary, the viscous diffusion, commonly associated with small-scale mechanisms, exhibits a much richer physics involving small lengths normal to the interface but, at the same time, large scales parallel to the interface.
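For a flavour of what a spectral enstrophy decomposition involves, here is a hypothetical 1-D periodic sketch (the paper works with full two-point budgets in an inhomogeneous flow, which is substantially richer):

```python
import numpy as np

def enstrophy_spectrum(omega):
    """Enstrophy and its spectral distribution for a 1-D periodic vorticity field.

    Returns (Z, spec) where Z = <omega^2>/2 and spec sums to Z (Parseval).
    """
    n = omega.size
    what = np.fft.rfft(omega) / n
    spec = np.abs(what) ** 2
    spec[1:] *= 2            # account for the conjugate negative-frequency modes
    if n % 2 == 0:
        spec[-1] /= 2        # the Nyquist mode has no conjugate partner
    spec *= 0.5
    Z = 0.5 * np.mean(omega ** 2)
    return Z, spec
```

Summing `spec` over wavenumbers recovers the total enstrophy, so budget terms can be attributed scale by scale, which is the kind of bookkeeping the spectral budget equation formalizes.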
Large-scale physical activity data reveal worldwide activity inequality
Althoff, Tim; Sosič, Rok; Hicks, Jennifer L.; King, Abby C.; Delp, Scott L.; Leskovec, Jure
2018-01-01
Understanding the basic principles that govern physical activity is needed to curb the global pandemic of physical inactivity1–7 and the 5.3 million deaths per year associated with inactivity2. Our knowledge, however, remains limited owing to the lack of large-scale measurements of physical activity patterns across free-living populations worldwide1, 6. Here, we leverage the wide usage of smartphones with built-in accelerometry to measure physical activity at planetary scale. We study a dataset consisting of 68 million days of physical activity for 717,527 people, giving us a window into activity in 111 countries across the globe. We find inequality in how activity is distributed within countries and that this inequality is a better predictor of obesity prevalence in the population than average activity volume. Reduced activity in females contributes to a large portion of the observed activity inequality. Aspects of the built environment, such as the walkability of a city, were associated with a smaller gender gap in activity and lower activity inequality. In more walkable cities, activity is greater throughout the day and throughout the week, across age, gender, and body mass index (BMI) groups, with the greatest increases in activity for females. Our findings have implications for global public health policy and urban planning and highlight the role of activity inequality and the built environment in improving physical activity and health. PMID:28693034
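The "activity inequality" studied here is a Gini-type measure over the population's activity distribution; a minimal sketch (the function and example values are illustrative, not the paper's exact estimator):

```python
import numpy as np

def gini(x):
    """Gini coefficient: 0 for perfect equality, (n-1)/n when one person has everything."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0
    i = np.arange(1, n + 1)         # ranks 1..n of the sorted values
    return float(2 * (i * x).sum() / (n * total) - (n + 1) / n)

equal_steps = np.full(100, 6000.0)  # everyone walks the same amount
skewed_steps = np.concatenate([np.full(90, 2000.0), np.full(10, 15000.0)])
```

`gini(equal_steps)` is 0, while the skewed distribution gives a substantially larger value; the paper's finding is that this kind of dispersion predicts obesity prevalence better than the mean does.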
Scale-Up: Improving Large Enrollment Physics Courses
NASA Astrophysics Data System (ADS)
Beichner, Robert
1999-11-01
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.
ERIC Educational Resources Information Center
Behrman, Joanna
2017-01-01
Technologies such as electrical appliances entered American households on a large scale only after many decades of promotion to the public. The genre of "household physics" textbooks was one such form of promotion that was directed towards assumed white, female and largely middle-class home economics students. Published from the 1910s to…
Statewide Physical Fitness Testing: A BIG Waist or a BIG Waste?
ERIC Educational Resources Information Center
Morrow, James R., Jr.; Ede, Alison
2009-01-01
Statewide physical fitness testing is gaining popularity in the United States because of increased childhood obesity levels, the relations between physical fitness and academic performance, and the hypothesized relations between adult characteristics and childhood physical activity, physical fitness, and health behaviors. Large-scale physical…
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
USDA-ARS?s Scientific Manuscript database
Soil hydraulic properties can be retrieved from physical sampling of soil, via surveys, but this is time consuming and only as accurate as the scale of the sample. Remote sensing provides an opportunity to get pertinent soil properties at large scales, which is very useful for large scale modeling....
ERIC Educational Resources Information Center
Gaffney, Jon D. H.; Richards, Evan; Kustusch, Mary Bridget; Ding, Lin; Beichner, Robert J.
2008-01-01
The SCALE-UP (Student-Centered Activities for Large Enrollment for Undergraduate Programs) project was developed to implement reforms designed for small classes into large physics classes. Over 50 schools across the country, ranging from Wake Technical Community College to Massachusetts Institute of Technology (MIT), have adopted it for classes of…
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Rare b-hadron decays as probe of new physics
NASA Astrophysics Data System (ADS)
Lanfranchi, Gaia
2018-05-01
The unexpected absence of unambiguous signals of New Physics (NP) at the TeV scale at the Large Hadron Collider (LHC) today puts flavor physics at the forefront. In particular, rare decays of b-hadrons represent a unique probe to challenge the Standard Model (SM) paradigm and test models of NP at a scale much higher than that accessible by direct searches. This article reviews the status of the field.
Effect of small scale transport processes on phytoplankton distribution in coastal seas.
Hernández-Carrasco, Ismael; Orfila, Alejandro; Rossi, Vincent; Garçon, Veronique
2018-06-05
Coastal ocean ecosystems are major contributors to the global biogeochemical cycles and biological productivity. Physical factors induced by the turbulent flow play a crucial role in regulating marine ecosystems. However, while large-scale open-ocean dynamics is well described by geostrophy, the role of multiscale transport processes in coastal regions is still poorly understood due to the lack of continuous high-resolution observations. Here, the influence of small-scale dynamics (O(3.5-25) km, i.e. spanning upper submesoscale and mesoscale processes) on surface phytoplankton derived from satellite chlorophyll-a (Chl-a) is studied using Lagrangian metrics computed from High-Frequency Radar currents. The combination of complementary Lagrangian diagnostics, including the Lagrangian divergence along fluid trajectories, provides an improved description of the 3D flow geometry which facilitates the interpretation of two non-exclusive physical mechanisms affecting phytoplankton dynamics and patchiness. Attracting small-scale fronts, unveiled by backwards Lagrangian Coherent Structures, are associated to negative divergence where particles and Chl-a standing stocks cluster. Filaments of positive divergence, representing large accumulated upward vertical velocities and suggesting accrued injection of subsurface nutrients, match areas with large Chl-a concentrations. Our findings demonstrate that an accurate characterization of small-scale transport processes is necessary to comprehend bio-physical interactions in coastal seas.
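The Lagrangian divergence diagnostic used above, div(u) accumulated along fluid trajectories, can be sketched numerically. The uniformly convergent velocity field below is purely illustrative (the study itself uses High-Frequency Radar surface currents); negative accumulated divergence marks the clustering regions where particles and Chl-a standing stocks collect.

```python
# Sketch of the Lagrangian (along-trajectory) divergence diagnostic.
# The toy velocity field is an assumption for illustration; the real
# diagnostic is computed from HF-radar surface currents.

def velocity(x, y):
    # Uniformly convergent flow: div(u) = -1 everywhere.
    return (-0.5 * x, -0.5 * y)

def divergence(x, y, h=1e-4):
    # Central finite differences of the velocity components.
    u1, _ = velocity(x + h, y)
    u0, _ = velocity(x - h, y)
    _, v1 = velocity(x, y + h)
    _, v0 = velocity(x, y - h)
    return (u1 - u0) / (2 * h) + (v1 - v0) / (2 * h)

def lagrangian_divergence(x, y, dt=0.01, steps=100):
    # Integrate div(u) along the forward trajectory (Euler stepping).
    total = 0.0
    for _ in range(steps):
        total += divergence(x, y) * dt
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
    return total
```

For this convergent field the accumulated divergence over unit time is -1, flagging a particle-clustering region.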
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel super computers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.
NASA Technical Reports Server (NTRS)
Liu, J. T. C.
1986-01-01
Advances in the mechanics of boundary layer flow are reported. The physical problem of large-scale coherent structures in real, developing free turbulent shear flows is addressed from the nonlinear aspects of hydrodynamic stability. The problem, whether fine-grained turbulence is present or absent, lacks a small parameter. It is formulated on the basis of conservation principles, the dynamics of the problem, directed towards extracting the most physical information; however, it is emphasized that approximations must also be involved.
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that subgrid-scale models should be consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new eddy-viscosity model based on the vortex stretching magnitude is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
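The vortex-stretching ingredient can be illustrated numerically. The sketch below extracts the strain-rate tensor and vorticity vector from a velocity-gradient tensor and forms an eddy viscosity from the vortex-stretching magnitude |S·ω|. The closure form, constant, and exponent are assumptions for illustration only, not the paper's calibrated model.

```python
import math

def eddy_viscosity_vortex_stretching(G, delta=0.1, C=0.5):
    # G[i][j] = du_i/dx_j, the local velocity-gradient tensor (3x3 lists).
    # Strain-rate tensor: symmetric part of G.
    S = [[0.5 * (G[i][j] + G[j][i]) for j in range(3)] for i in range(3)]
    # Vorticity vector from the antisymmetric part: omega = curl(u).
    omega = [G[2][1] - G[1][2], G[0][2] - G[2][0], G[1][0] - G[0][1]]
    # Vortex-stretching vector S . omega and its magnitude.
    Sw = [sum(S[i][j] * omega[j] for j in range(3)) for i in range(3)]
    stretching = math.sqrt(sum(c * c for c in Sw))
    # Illustrative closure: nu_e ~ (C * delta)^2 * sqrt(|S.omega|).
    # The constant C and the exponent are assumptions, not the paper's.
    return (C * delta) ** 2 * math.sqrt(stretching)
```

A useful sanity check: for two-dimensional flows (e.g. pure planar shear) the vortex-stretching vector vanishes, so such a model correctly switches off, one of the desirable near-wall properties discussed above.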
Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations
NASA Technical Reports Server (NTRS)
Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.
2015-01-01
Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce FLASH's preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
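The heart of a JFNK scheme is that the Krylov solver never needs the Jacobian matrix itself, only its action on a vector, which one extra residual evaluation can approximate. A minimal sketch of that matrix-free product follows; the toy residual F stands in for a discretized stiff PDE residual and is purely illustrative.

```python
def jacobian_free_matvec(F, u, v, eps=1e-7):
    # Core JFNK ingredient: approximate J(u) @ v without forming J,
    # using a first-order finite difference of the residual:
    #   J(u) v ~ (F(u + eps*v) - F(u)) / eps
    u_pert = [ui + eps * vi for ui, vi in zip(u, v)]
    Fu = F(u)
    Fp = F(u_pert)
    return [(fp - f) / eps for fp, f in zip(Fp, Fu)]

def F(u):
    # Toy nonlinear residual; in FLASH this would be the discretized
    # stiff system of differential equations.
    return [u[0] ** 2 - 2.0, u[0] * u[1] - 1.0]
```

At u = (1, 1) the analytic Jacobian is [[2, 0], [1, 1]], so the matvec with v = (1, 0) should return approximately (2, 1). A Krylov method such as GMRES consumes exactly these products, which is what makes the approach attractive on large-scale parallel architectures: the Jacobian is never stored.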
Activity-Based Introductory Physics Reform *
NASA Astrophysics Data System (ADS)
Thornton, Ronald
2004-05-01
Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to those of good traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). RealTime Physics promotes interaction among students in a laboratory setting and makes use of powerful real-time data logging tools to teach concepts as well as quantitative relationships. An active learning environment is often difficult to achieve in large lecture sessions, and Workshop Physics and Scale-Up largely eliminate lectures in favor of collaborative student activities. Peer Instruction, Just in Time Teaching, and Interactive Lecture Demonstrations (ILDs) make lectures more interactive in complementary ways. This presentation will introduce these reforms and use Interactive Lecture Demonstrations (ILDs) with the audience to illustrate the types of curricula and tools used in the curricula above. ILDs make use of real experiments, real-time data logging tools and student interaction to create an active learning environment in large lecture classes. A short video of students involved in interactive lecture demonstrations will be shown. The results of research studies at various institutions to measure the effectiveness of these methods will be presented.
Lee, I-Min; Shiroma, Eric J
2013-01-01
Background Current guidelines for aerobic activity require that adults carry out ≥150 minutes/week of moderate-intensity physical activity, with a large body of epidemiologic evidence showing this level of activity to decrease the incidence of many chronic diseases. Less is known about whether light-intensity activities also have such benefits, and whether sedentary behavior is an independent predictor of increased risks of these chronic diseases, as imprecise assessments of these behaviours and cross-sectional study designs have limited knowledge to date. Methods Recent technological advances in assessment methods have made the use of movement sensors, such as the accelerometer, feasible for use in longitudinal, large-scale epidemiologic studies. Several such studies are collecting sensor-assessed, objective measures of physical activity with the aim of relating these to the development of clinical endpoints. This is a relatively new area of research; thus, in this paper, we use the Women’s Health Study (WHS) as a case study to illustrate challenges related to data collection, data processing, and analyses of the vast amount of data collected. Results The WHS plans to collect 7 days of accelerometer-assessed physical activity and sedentary behavior in ~18,000 women aged ≥62 years. Several logistical challenges exist in collecting data; nonetheless as of 31 August 2013, 11,590 women have already provided some data. Additionally, the WHS experience on data reduction and data analyses can help inform other similar large-scale epidemiologic studies. Conclusions Important data on the health effects of light-intensity activity and sedentary behaviour will emerge from large-scale epidemiologic studies collecting objective assessments of these behaviours. PMID:24297837
Phase-relationships between scales in the perturbed turbulent boundary layer
NASA Astrophysics Data System (ADS)
Jacobi, I.; McKeon, B. J.
2017-12-01
The phase-relationship between large-scale motions and small-scale fluctuations in a non-equilibrium turbulent boundary layer was investigated. A zero-pressure-gradient flat plate turbulent boundary layer was perturbed by a short array of two-dimensional roughness elements, both statically, and under dynamic actuation. Within the compound, dynamic perturbation, the forcing generated a synthetic very-large-scale motion (VLSM) within the flow. The flow was decomposed by phase-locking the flow measurements to the roughness forcing, and the phase-relationship between the synthetic VLSM and remaining fluctuating scales was explored by correlation techniques. The general relationship between large- and small-scale motions in the perturbed flow, without phase-locking, was also examined. The synthetic large scale cohered with smaller scales in the flow via a phase-relationship that is similar to that of natural large scales in an unperturbed flow, but with a much stronger organizing effect. Cospectral techniques were employed to describe the physical implications of the perturbation on the relative orientation of large- and small-scale structures in the flow. The correlation and cospectral techniques provide tools for designing more efficient control strategies that can indirectly control small-scale motions via the large scales.
"Large"- vs small-scale friction control in turbulent channel flow
NASA Astrophysics Data System (ADS)
Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp
2017-11-01
We reconsider the "large-scale" control scheme proposed by Hussain and co-workers (Phys. Fluids 10, 1049-1051, 1998 and Phys. Rev. Fluids 2, 62601, 2017), using new direct numerical simulations (DNS). The DNS are performed in a turbulent channel at friction Reynolds number Reτ of up to 550 in order to eliminate low-Reynolds-number effects. The purpose of the present contribution is to re-assess this control method in the light of more modern developments in the field, in particular also related to the discovery of (very) large-scale motions. The goals of the paper are as follows: First, we want to better characterise the physics of the control, and assess which external contributions (vortices, forcing, wall motion) are actually needed. Then, we investigate the optimal parameters and, finally, determine which aspects of this control technique actually scale in outer units and can therefore be of use in practical applications. In addition to discussing the mentioned drag-reduction effects, the present contribution will also address the potential effect of the naturally occurring large-scale motions on frictional drag, and give indications on the physical processes for potential drag reduction possible at all Reynolds numbers.
Optical mapping and its potential for large-scale sequencing projects.
Aston, C; Mishra, B; Schwartz, D C
1999-07-01
Physical mapping has been rediscovered as an important component of large-scale sequencing projects. Restriction maps provide landmark sequences at defined intervals, and high-resolution restriction maps can be assembled from ensembles of single molecules by optical means. Such optical maps can be constructed from both large-insert clones and genomic DNA, and are used as a scaffold for accurately aligning sequence contigs generated by shotgun sequencing.
Density dependence, spatial scale and patterning in sessile biota.
Gascoigne, Joanna C; Beadman, Helen A; Saurel, Camille; Kaiser, Michel J
2005-09-01
Sessile biota can compete with or facilitate each other, and the interaction of facilitation and competition at different spatial scales is key to developing spatial patchiness and patterning. We examined density and scale dependence in a patterned, soft sediment mussel bed. We followed mussel growth and density at two spatial scales separated by four orders of magnitude. In summer, competition was important at both scales. In winter, there was net facilitation at the small scale with no evidence of density dependence at the large scale. The mechanism for facilitation is probably density dependent protection from wave dislodgement. Intraspecific interactions in soft sediment mussel beds thus vary both temporally and spatially. Our data support the idea that pattern formation in ecological systems arises from competition at large scales and facilitation at smaller scales, so far only shown in vegetation systems. The data, and a simple, heuristic model, also suggest that facilitative interactions in sessile biota are mediated by physical stress, and that interactions change in strength and sign along a spatial or temporal gradient of physical stress.
Skin Friction Reduction Through Large-Scale Forcing
NASA Astrophysics Data System (ADS)
Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer
2017-11-01
Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.
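The triple decomposition mentioned above splits a phase-locked signal into a time mean, a phase-averaged coherent part, and a residual fine-scale part, u = U + ũ + u′. A minimal sketch on a synthetic scalar signal (the experiments decompose full velocity fields, phase-locked to the roughness forcing):

```python
def triple_decompose(signal, period):
    # signal: samples containing a whole number of forcing cycles,
    # each `period` samples long (an assumption of this sketch).
    n = len(signal)
    mean = sum(signal) / n                       # time mean U
    cycles = n // period
    # Coherent part: phase average minus the mean.
    phase_avg = [
        sum(signal[c * period + p] for c in range(cycles)) / cycles - mean
        for p in range(period)
    ]
    # Residual turbulence: what neither the mean nor the coherent
    # (phase-locked) part explains.
    residual = [signal[i] - mean - phase_avg[i % period] for i in range(n)]
    return mean, phase_avg, residual
```

On a perfectly periodic signal the residual vanishes; in the experiment, the organization of the residual with respect to the phase-averaged large scale is exactly what reveals the non-linear scale interaction.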
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing, in virtual reality, hazard scenarios in large-scale industrial plants. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution, and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distance, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refining industry are presented.
Towards physics responsible for large-scale Lyman-α forest bias parameters
Agnieszka M. Cieplak; Slosar, Anze
2016-03-08
Using a series of carefully constructed numerical experiments based on hydrodynamic cosmological SPH simulations, we attempt to build an intuition for the relevant physics behind the large-scale density (b_δ) and velocity gradient (b_η) biases of the Lyman-α forest. Starting with the fluctuating Gunn-Peterson approximation applied to the smoothed total density field in real-space, and progressing through redshift-space with no thermal broadening, redshift-space with thermal broadening and hydrodynamically simulated baryon fields, we investigate how approximations found in the literature fare. We find that Seljak's 2012 analytical formulae for these bias parameters work surprisingly well in the limit of no thermal broadening and linear redshift-space distortions. We also show that his b_η formula is exact in the limit of no thermal broadening. Since introduction of thermal broadening significantly affects its value, we speculate that a combination of large-scale measurements of b_η and the small scale flux PDF might be a sensitive probe of the thermal state of the IGM. Lastly, we find that large-scale biases derived from the smoothed total matter field are within 10-20% of those based on hydrodynamical quantities, in line with other measurements in the literature.
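In the fluctuating Gunn-Peterson approximation the transmitted flux responds to the overdensity as F = exp[-A(1+δ)^β], so the large-scale density bias b_δ = d ln<F>/dδ_L can be measured by perturbing the field with a small uniform overdensity and watching the mean flux respond. A sketch in the simplest limit discussed above (real space, no thermal broadening); the values of A and β are illustrative, not fitted:

```python
import math

def fgpa_flux(delta, A=0.3, beta=1.6):
    # Fluctuating Gunn-Peterson approximation: optical depth
    # tau = A * (1 + delta)**beta, transmitted flux F = exp(-tau).
    return math.exp(-A * (1.0 + delta) ** beta)

def density_bias(deltas, dL=1e-3, A=0.3, beta=1.6):
    # b_delta = d ln<F> / d delta_L, via a central finite difference:
    # apply a small uniform large-scale overdensity +/- dL and compare
    # the mean flux. (Simplest limit: real space, no thermal broadening.)
    def mean_flux(shift):
        return sum(fgpa_flux((1.0 + d) * (1.0 + shift) - 1.0, A, beta)
                   for d in deltas) / len(deltas)
    return (math.log(mean_flux(dL)) - math.log(mean_flux(-dL))) / (2 * dL)
```

For a uniform field at mean density the analytic answer is b_δ = -Aβ: denser regions absorb more, so overdense large-scale modes suppress the flux, giving a negative bias.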
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are an order of magnitude larger than in present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences on deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment based on market forces. We will present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.
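Computing on the spot market means any worker can disappear mid-job, so the data system has to treat preemption as routine and re-queue the work rather than fail the whole campaign. A minimal sketch of that retry pattern; the function and job names are hypothetical illustrations, not the HySDS API:

```python
def run_with_retries(jobs, attempt_job, max_retries=3):
    # Spot instances can be preempted at any time; re-queue failed
    # work items instead of failing the processing campaign.
    # (Names are illustrative, not the HySDS API.)
    done, failed = {}, []
    for job in jobs:
        for _attempt in range(max_retries):
            try:
                done[job] = attempt_job(job)
                break
            except RuntimeError:  # stand-in for a spot preemption
                continue
        else:
            failed.append(job)  # exhausted retries; report, don't crash
    return done, failed
```

Idempotent, re-runnable job units are what make this pattern (and the ~10X spot savings) viable: a preempted job simply runs again elsewhere.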
NASA Astrophysics Data System (ADS)
Kaki, K.; Okiharu, F.; Tajima, S.; Takayama, H.; Watanabe, M. O.
2013-03-01
The results of a 2007 large-scale survey of gender equality in scientific and technological professions in Japan are reported. The activities of two Japanese physics societies in the three years since the 3rd IUPAP International Conference on Women in Physics was held in 2008 are reported.
Advanced Computing Tools and Models for Accelerator Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryne, Robert; Ryne, Robert D.
2008-06-11
This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.
Statewide Physical Fitness Testing: Perspectives from the Gym
ERIC Educational Resources Information Center
Martin, Scott B.; Ede, Alison; Morrow, James R., Jr.; Jackson, Allen W.
2010-01-01
This paper provides observations of physical fitness testing in Texas schools and physical education teachers' insights about large-scale testing using the FITNESSGRAM[R] assessment (Cooper Institute, 2007) as mandated by Texas Senate Bill 530. In the first study, undergraduate and graduate students who were trained to observe and assess student…
Emergency response health physics.
Mena, Rajah; Pemberton, Wendy; Beal, William
2012-05-01
Health physics is an important discipline with regard to understanding the effects of radiation on human health. This paper aims to illustrate the unique challenges presented to the health physicist or analyst of radiological data in a large-scale emergency.
An Illustrative Guide to the Minerva Framework
NASA Astrophysics Data System (ADS)
Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration
2017-10-01
Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
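The version-checked node behaviour described above, recomputing a node only when a dependency has changed since the last evaluation, can be sketched as a small dependency graph. All class and method names here are hypothetical illustrations of the idea, not the Minerva API:

```python
class Node:
    # Sketch of a versioned data-source node: caches its value and
    # recomputes only when a dependency's version has changed since
    # the last evaluation. (Illustrative, not the Minerva API.)
    def __init__(self, compute=None, deps=()):
        self.compute = compute      # None for leaf (data-source) nodes
        self.deps = list(deps)
        self.version = 0
        self.value = None
        self._seen = None           # dependency versions at last recompute
        self.recomputes = 0

    def set(self, value):
        # Leaf update, e.g. new engineering parameters or shot data.
        self.value, self.version = value, self.version + 1

    def get(self):
        if self.compute is None:
            return self.value
        # Pull dependencies first (they may recompute and bump versions).
        dep_values = [d.get() for d in self.deps]
        dep_versions = [d.version for d in self.deps]
        if dep_versions != self._seen:   # stale: recompute and re-version
            self.value = self.compute(*dep_values)
            self._seen = dep_versions
            self.version += 1
            self.recomputes += 1
        return self.value
```

Because each recompute bumps the node's own version, staleness propagates through arbitrarily deep model graphs while unchanged branches return cached, version-specific data.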
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
Jimenez-Pardo, J; Holmes, J D; Jenkins, M E; Johnson, A M
2015-07-01
Physical activity is generally thought to be beneficial to individuals with Parkinson's disease (PD). There is, however, limited information regarding current rates of physical activity among individuals with PD, possibly due to a lack of well-validated measurement tools. In the current study we sampled 63 individuals (31 women) living with PD between the ages of 52 and 87 (M = 70.97 years, SD = 7.53), and evaluated the amount of physical activity in which they engaged over a 7-day period using a modified form of the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD). The PASIPD was demonstrated to be a reliable measure within this population, with three theoretically defensible factors: (1) housework and home-based outdoor activities; (2) recreational and fitness activities; and (3) occupational activities. These results suggest that the PASIPD may be useful for monitoring physical activity involvement among individuals with PD, particularly within large-scale questionnaire-based studies.
Inner space/outer space - The interface between cosmology and particle physics
NASA Astrophysics Data System (ADS)
Kolb, Edward W.; Turner, Michael S.; Lindley, David; Olive, Keith; Seckel, David
A collection of papers covering the synthesis between particle physics and cosmology is presented. The general topics addressed include: standard models of particle physics and cosmology; microwave background radiation; origin and evolution of large-scale structure; inflation; massive magnetic monopoles; supersymmetry, supergravity, and quantum gravity; cosmological constraints on particle physics; Kaluza-Klein cosmology; and future directions and connections in particle physics and cosmology.
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
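The multilevel Monte Carlo idea named above can be sketched with a synthetic stand-in for the pore-scale solver (the `sample` function, its bias model, and all numbers are assumptions for illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample(level, n, w=None):
    """Hypothetical pore-scale estimator at mesh refinement `level`.
    The 0.5**level term mimics a discretization bias that halves with
    each refinement; w is the sample-to-sample scatter (the 'random
    geometry', e.g. a random sphere packing)."""
    if w is None:
        w = rng.normal(0.0, 0.2, n)
    return 1.0 + 0.5 ** level + w

def mlmc(L, n_coarse, n_fine):
    """Multilevel Monte Carlo telescoping sum
        E[Q_L] = E[Q_0] + sum_{l=1..L} E[Q_l - Q_{l-1}].
    Coarse levels are sampled heavily (they are cheap); each level
    difference reuses the same random geometry, so its variance is
    small and only a few expensive fine-level samples are needed."""
    estimate = sample(0, n_coarse).mean()
    for level in range(1, L + 1):
        w = rng.normal(0.0, 0.2, n_fine)   # shared randomness per pair
        estimate += (sample(level, n_fine, w)
                     - sample(level - 1, n_fine, w)).mean()
    return estimate

est = mlmc(L=5, n_coarse=4000, n_fine=10)
print(est)   # close to the level-5 mean, 1 + 0.5**5 = 1.03125
```

The cost saving comes from the correlation within each coarse/fine pair: in this sketch the level differences are nearly deterministic, so almost all sampling effort goes to the cheapest level.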
Spin determination at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Yavin, Itay
The quantum field theory describing the Electroweak sector demands some new physics at the TeV scale in order to unitarize the scattering of longitudinal W bosons. If this new physics takes the form of a scalar Higgs boson then it is hard to understand the huge hierarchy of scales between the Electroweak scale ˜ TeV and the Planck scale ˜ 1019 GeV. This is known as the Naturalness problem. Normally, in order to solve this problem, new particles, in addition to the Higgs boson, are required to be present in the spectrum below a few TeV. If such particles are indeed discovered at the Large Hadron Collider, it will become important to determine their spin. Several classes of models for physics beyond the Electroweak scale exist. Determining the spin of any such newly discovered particle could prove to be the only means of distinguishing between these different models. In the first part of this thesis, we present a thorough discussion regarding such a measurement. We survey the different potentially useful channels for spin determination, and a detailed analysis of the most promising channel is performed. The Littlest Higgs model offers a way to solve the Hierarchy problem by introducing heavy partners to Standard Model particles with the same spin and quantum numbers. However, this model is only good up to ˜ 10 TeV. In the second part of this thesis we present an extension of this model into a strongly coupled theory above ˜ 10 TeV. We use the celebrated AdS/CFT correspondence to calculate properties of the low-energy physics in terms of high-energy parameters. We comment on some of the tensions inherent to such a construction involving a large-N CFT (or equivalently, an AdS space).
NASA Astrophysics Data System (ADS)
Tréguer, Paul; Goberville, Eric; Barrier, Nicolas; L'Helguen, Stéphane; Morin, Pascal; Bozec, Yann; Rimmelin-Maury, Peggy; Czamanski, Marie; Grossteffan, Emilie; Cariou, Thierry; Répécaud, Michel; Quéméner, Loic
2014-11-01
There is now a strong scientific consensus that coastal marine systems of Western Europe are highly sensitive to the combined effects of natural climate variability and anthropogenic climate change. However, it still remains challenging to assess the spatial and temporal scales at which climate influence operates. While large-scale hydro-climatic indices, such as the North Atlantic Oscillation (NAO) or the East Atlantic Pattern (EAP) and the weather regimes such as the Atlantic Ridge (AR), are known to be relevant predictors of physical processes, changes in coastal waters can also be related to local hydro-meteorological and geochemical forcing. Here, we study the temporal variability of physical and chemical characteristics of coastal waters located at about 48°N over the period 1998-2013 using (1) sea surface temperature, (2) sea surface salinity and (3) nutrient concentration observations for two coastal sites located at the outlet of the Bay of Brest and off Roscoff, (4) river discharges of the major tributaries close to these two sites and (5) regional and local precipitation data over the region of interest. Focusing on the winter months, we characterize the physical and chemical variability of these coastal waters and document changes in both precipitation and river runoffs. Our study reveals that variability in coastal waters is connected to the large-scale North Atlantic atmospheric circulation but is also partly explained by local river influences. Indeed, while the NAO is strongly related to changes in sea surface temperature at the Brest and Roscoff sites, the EAP and the AR have a major influence on precipitations, which in turn modulate river discharges that impact sea surface salinity at the scale of the two coastal stations.
Coronal hole evolution by sudden large scale changes
NASA Technical Reports Server (NTRS)
Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.
1978-01-01
Sudden shifts in coronal-hole boundaries observed by the S-054 X-ray telescope on Skylab between May and November, 1973, within 1 day of CMP of the holes, at latitudes not exceeding 40 deg, are compared with the long-term evolution of coronal-hole area. It is found that large-scale shifts in boundary locations can account for most if not all of the evolution of coronal holes. The temporal and spatial scales of these large-scale changes imply that they are the results of a physical process occurring in the corona. It is concluded that coronal holes evolve by magnetic-field lines' opening when the holes are growing, and by fields' closing as the holes shrink.
Saunders, Ruth P.; McIver, Kerry L.; Dowda, Marsha; Pate, Russell R.
2013-01-01
Objective: Scales used to measure selected social-cognitive beliefs and motives for physical activity were tested among boys and girls. Methods: Covariance modeling was applied to responses obtained from large multi-ethnic samples of students in the fifth and sixth grades. Results: Theoretically and statistically sound models were developed, supporting the factorial validity of the scales in all groups. Multi-group longitudinal invariance was confirmed between boys and girls, overweight and normal weight students, and non-Hispanic black and white children. The construct validity of the scales was supported by hypothesized convergent and discriminant relationships within a measurement model that included correlations with physical activity (MET • min/day) measured by an accelerometer. Conclusions: Scores from the scales provide valid assessments of selected beliefs and motives that are putative mediators of change in physical activity among boys and girls, as they begin the understudied transition from the fifth grade into middle school, when physical activity naturally declines. PMID:23459310
Dishman, Rod K; Saunders, Ruth P; McIver, Kerry L; Dowda, Marsha; Pate, Russell R
2013-06-01
Scales used to measure selected social-cognitive beliefs and motives for physical activity were tested among boys and girls. Covariance modeling was applied to responses obtained from large multi-ethnic samples of students in the fifth and sixth grades. Theoretically and statistically sound models were developed, supporting the factorial validity of the scales in all groups. Multi-group longitudinal invariance was confirmed between boys and girls, overweight and normal weight students, and non-Hispanic black and white children. The construct validity of the scales was supported by hypothesized convergent and discriminant relationships within a measurement model that included correlations with physical activity (MET • min/day) measured by an accelerometer. Scores from the scales provide valid assessments of selected beliefs and motives that are putative mediators of change in physical activity among boys and girls, as they begin the understudied transition from the fifth grade into middle school, when physical activity naturally declines.
ERIC Educational Resources Information Center
Robinson, Daniel B.; Randall, Lynn
2016-01-01
This article summarizes results from a recently completed study that focused upon the current state and possible future of physical education within Canada's four Atlantic provinces. Data from both large-scale surveys and eight follow-up focus group interviews are shared as they relate to the state and future of physical education, possible…
Scales and scaling in turbulent ocean sciences; physics-biology coupling
NASA Astrophysics Data System (ADS)
Schmitt, Francois
2015-04-01
Geophysical fields possess huge fluctuations over many spatial and temporal scales. In the ocean, this property at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields from large scales down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 105 times larger than the smallest turbulence scales (Kolmogorov scale). Such a scale is fine for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, due to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...) In this framework, we will discuss the scale problem in turbulence modeling in the ocean, and the relation of Kolmogorov's and Batchelor's scales of turbulence in the ocean with the size of marine animals. We will also consider scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
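The two dissipation scales named above follow from simple formulas; a back-of-the-envelope check with illustrative ocean values (the numbers are assumed order-of-magnitude estimates, not taken from the abstract):

```python
# Kolmogorov length: eta = (nu^3 / epsilon)^(1/4); for a scalar with
# Schmidt number Sc, the Batchelor length is eta_B = eta / sqrt(Sc).
nu = 1.0e-6      # kinematic viscosity of seawater, m^2 s^-1
eps = 1.0e-8     # turbulent dissipation rate, W kg^-1 (quiet open ocean)
Sc = 700.0       # Schmidt number for salt (order of magnitude)

eta = (nu ** 3 / eps) ** 0.25
eta_B = eta / Sc ** 0.5
print(f"Kolmogorov scale: {eta * 1e3:.1f} mm; Batchelor scale: {eta_B * 1e3:.2f} mm")
```

With these values the Kolmogorov scale comes out at a few millimeters, copepod-sized, while the Batchelor scale is more than an order of magnitude smaller, consistent with the text.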
Identifying Predictors of Physics Item Difficulty: A Linear Regression Approach
ERIC Educational Resources Information Center
Mesic, Vanes; Muratovic, Hasnija
2011-01-01
Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display an acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinary…
Physical Education and Sport at School in Europe
ERIC Educational Resources Information Center
Kerpanova, Viera; Borodankova, Olga
2013-01-01
"Physical Education and Sport at School in Europe" maps the state of play of physical education and sport activities at school in 30 European countries. The report covers primary and lower secondary education and provides an insight into the following topics: national strategies and large-scale initiatives where they exist, the status of…
Nuclear Lessons for Cyber Security
2011-01-01
major kinetic violence. In the physical world, governments have a near monopoly on large-scale use of force, the defender has an intimate knowledge of...with this transformative technology. Until now, the issue of cyber security has largely been the domain of computer experts and specialists. When the...with increasing economic returns to scale and political practices that make jurisdictional control difficult. Attacks from the informational realm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@auecc.aichi-edu.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
We report the first successful simulation of spontaneous formation of surface magnetic structures from a large-scale dynamo by strongly stratified thermal convection in Cartesian geometry. The large-scale dynamo observed in our strongly stratified model has physical properties similar to those in earlier weakly stratified convective dynamo simulations, indicating that the α²-type mechanism is responsible for the dynamo. In addition to the large-scale dynamo, we find that large-scale structures of the vertical magnetic field are spontaneously formed in the convection zone (CZ) surface only in cases with a strongly stratified atmosphere. The organization of the vertical magnetic field proceeds in the upper CZ within tens of convective turnover times and band-like bipolar structures recurrently appear in the dynamo-saturated stage. We consider several candidates that could possibly be the origin of the surface magnetic structure formation, and then suggest the existence of an as-yet-unknown mechanism for the self-organization of the large-scale magnetic structure, which should be inherent in the strongly stratified convective atmosphere.
A simulation study demonstrating the importance of large-scale trailing vortices in wake steering
Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...
2018-05-14
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Application of Small-Scale Systems: Evaluation of Alternatives
John Wilhoit; Robert Rummer
1999-01-01
Large-scale mechanized systems are not well-suited for harvesting smaller tracts of privately owned forest land. New alternative small-scale harvesting systems are needed which utilize mechanized felling, have a low capital investment requirement, are small in physical size, and are based primarily on adaptations of current harvesting technology. This paper presents...
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. 
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate-based UQ approach is developed and compared to the performance of the KL approach and a brute-force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated using the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
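The subspace-construction step that such reduced-order UQ/DA methods rely on can be sketched generically with a truncated SVD of snapshot data (synthetic data and dimensions below are assumptions for illustration, not the dissertation's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def dominant_subspace(snapshots, rank):
    """Build a reduced basis for the model response from snapshot data via
    a truncated SVD: the leading left singular vectors span the
    influential degrees of freedom."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank], s

# Synthetic responses of a 'high-dimensional' model (200 DoF) that secretly
# live on a 3-dimensional subspace, plus small noise.
n_dof, n_samples, true_rank = 200, 50, 3
basis = rng.normal(size=(n_dof, true_rank))
coeffs = rng.normal(size=(true_rank, n_samples))
snapshots = basis @ coeffs + 1e-6 * rng.normal(size=(n_dof, n_samples))

U, s = dominant_subspace(snapshots, true_rank)
energy = (s[:true_rank] ** 2).sum() / (s ** 2).sum()
print(f"energy captured by {true_rank} modes: {energy:.6f}")
```

Once such a basis is in hand, forward UQ sampling and inverse (DA/TAA) searches can be carried out over the few retained coefficients instead of the full state.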
Cluster galaxy dynamics and the effects of large-scale environment
NASA Astrophysics Data System (ADS)
White, Martin; Cohn, J. D.; Smit, Renske
2010-11-01
Advances in observational capabilities have ushered in a new era of multi-wavelength, multi-physics probes of galaxy clusters and ambitious surveys are compiling large samples of cluster candidates selected in different ways. We use a high-resolution N-body simulation to study how the influence of large-scale structure in and around clusters causes correlated signals in different physical probes and discuss some implications this has for multi-physics probes of clusters (e.g. richness, lensing, Compton distortion and velocity dispersion). We pay particular attention to velocity dispersions, matching galaxies to subhaloes which are explicitly tracked in the simulation. We find that not only do haloes persist as subhaloes when they fall into a larger host, but groups of subhaloes retain their identity for long periods within larger host haloes. The highly anisotropic nature of infall into massive clusters, and their triaxiality, translates into an anisotropic velocity ellipsoid: line-of-sight galaxy velocity dispersions for any individual halo show large variance depending on viewing angle. The orientation of the velocity ellipsoid is correlated with the large-scale structure, and thus velocity outliers correlate with outliers caused by projection in other probes. We quantify this orientation uncertainty and give illustrative examples. Such a large variance suggests that velocity dispersion estimators will work better in an ensemble sense than for any individual cluster, which may inform strategies for obtaining redshifts of cluster members. We similarly find that the ability of substructure indicators to find kinematic substructures is highly viewing angle dependent. While groups of subhaloes which merge with a larger host halo can retain their identity for many Gyr, they are only sporadically picked up by substructure indicators. 
We discuss the effects of correlated scatter on scaling relations estimated through stacking, both analytically and in the simulations, showing that the strong correlation of measures with mass and the large scatter in mass at fixed observable mitigate line-of-sight projections.
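The viewing-angle dependence of the velocity dispersion described above reduces to a simple projection of the velocity ellipsoid (a minimal sketch of the geometry; the dispersion values are made up for illustration):

```python
import numpy as np

def los_dispersion(sigma2, n_hat):
    """Line-of-sight velocity dispersion for viewing direction n_hat:
    sigma_los^2 = n^T Sigma n, with Sigma the velocity-dispersion
    tensor of the cluster galaxies."""
    n = np.asarray(n_hat, dtype=float)
    n /= np.linalg.norm(n)
    return float(np.sqrt(n @ sigma2 @ n))

# A triaxial cluster with its long axis (largest dispersion) along x,
# e.g. aligned with an infall filament; entries are (km/s)^2.
Sigma = np.diag([1200.0 ** 2, 800.0 ** 2, 800.0 ** 2])
print(los_dispersion(Sigma, [1, 0, 0]))   # sighting along the long axis
print(los_dispersion(Sigma, [0, 0, 1]))   # sighting perpendicular to it
```

Because the ellipsoid's orientation is correlated with the surrounding large-scale structure, sightlines that inflate the dispersion tend to be the same ones that inflate projection-affected probes such as richness or lensing.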
Propulsion simulator for magnetically-suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.
1991-01-01
The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquified carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically-tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of this data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.
On the contributions of astroparticle physics to cosmology
NASA Astrophysics Data System (ADS)
Falkenburg, Brigitte
2014-05-01
Studying astroparticle physics sheds new light on scientific explanation and on the ways in which cosmology is empirically underdetermined or not. Astroparticle physics extends the empirical domain of cosmology from purely astronomical data to "multi-messenger astrophysics", i.e., measurements of all kinds of cosmic rays including very high-energy gamma rays, neutrinos, and charged particles. My paper investigates the ways in which these measurements contribute to cosmology and compares them with philosophical views about scientific explanation, the relation between theory and data, and scientific realism. The "standard models" of cosmology and particle physics lack unified foundations. Both are "piecemeal physics" in Cartwright's sense, but contrary to her metaphysics of a "dappled world" the work in both fields of research aims at unification. Cosmology proceeds "top-down", from models to data and from large-scale to small-scale structures of the universe. Astroparticle physics proceeds "bottom-up", from data taking to models and from subatomic particles to large-scale structures of the universe. In order to reconstruct the causal stories of cosmic rays and the nature of their sources, several pragmatic unifying strategies are employed. Standard views about scientific explanation and scientific realism do not cope with these "bottom-up" strategies and the way in which they contribute to cosmology. In addition it has to be noted that the shift to "multi-messenger astrophysics" transforms the relation between cosmological theory and astrophysical data in a mutually holistic way.
Turbulent pipe flow at extreme Reynolds numbers.
Hultmark, M; Vallikivi, M; Bailey, S C C; Smits, A J
2012-03-02
Both the inherent intractability and complex beauty of turbulence reside in its large range of physical and temporal scales. This range of scales is captured by the Reynolds number, which in nature and in many engineering applications can be as large as 10^5-10^6. Here, we report turbulence measurements over an unprecedented range of Reynolds numbers using a unique combination of a high-pressure air facility and a new nanoscale anemometry probe. The results reveal previously unknown universal scaling behavior for the turbulent velocity fluctuations, which is remarkably similar to the well-known scaling behavior of the mean velocity distribution.
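How the Reynolds number captures the range of scales can be made concrete with standard Kolmogorov scaling (a textbook estimate, not a result from this paper):

```python
# Kolmogorov scaling: the ratio of the integral scale L to the smallest
# (Kolmogorov) scale eta grows as L/eta ~ Re^(3/4), so resolving every
# scale of a 3-D flow needs on the order of Re^(9/4) grid points.
for Re in (1.0e4, 1.0e5, 1.0e6):
    ratio = Re ** 0.75
    print(f"Re = {Re:.0e}: L/eta ~ {ratio:,.0f}, 3-D points ~ {ratio ** 3:.1e}")
```

At Re ~ 10^6 the scale separation exceeds four orders of magnitude, which is why measurements at such Reynolds numbers demand probes as small as the nanoscale anemometer described here.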
RENEB - Running the European Network of biological dosimetry and physical retrospective dosimetry.
Kulka, Ulrike; Abend, Michael; Ainsbury, Elizabeth; Badie, Christophe; Barquinero, Joan Francesc; Barrios, Lleonard; Beinke, Christina; Bortolin, Emanuela; Cucu, Alexandra; De Amicis, Andrea; Domínguez, Inmaculada; Fattibene, Paola; Frøvig, Anne Marie; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Jaworska, Alicja; Kriehuber, Ralf; Lindholm, Carita; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Meschini, Roberta; Mörtl, Simone; Della Monaca, Sara; Monteiro Gil, Octávia; Montoro, Alegria; Moquet, Jayne; Moreno, Mercedes; Oestreicher, Ursula; Palitti, Fabrizio; Pantelias, Gabriel; Patrono, Clarice; Piqueret-Stephan, Laure; Port, Matthias; Prieto, María Jesus; Quintens, Roel; Ricoul, Michelle; Romm, Horst; Roy, Laurence; Sáfrány, Géza; Sabatier, Laure; Sebastià, Natividad; Sommer, Sylwester; Terzoudi, Georgia; Testa, Antonella; Thierens, Hubert; Turai, Istvan; Trompier, François; Valente, Marco; Vaz, Pedro; Voisin, Philippe; Vral, Anne; Woda, Clemens; Zafiropoulos, Demetre; Wojcik, Andrzej
2017-01-01
A European network was initiated in 2012 by 23 partners from 16 European countries with the aim to significantly increase individualized dose reconstruction in case of large-scale radiological emergency scenarios. The network was built on three complementary pillars: (1) an operational basis with seven biological and physical dosimetric assays in ready-to-use mode, (2) a basis for education, training and quality assurance, and (3) a basis for further network development regarding new techniques and members. Techniques for individual dose estimation based on biological samples and/or inert personalized devices as mobile phones or smart phones were optimized to support rapid categorization of many potential victims according to the received dose to the blood or personal devices. Communication and cross-border collaboration were also standardized. To assure long-term sustainability of the network, cooperation with national and international emergency preparedness organizations was initiated and links to radiation protection and research platforms have been developed. A legal framework, based on a Memorandum of Understanding, was established and signed by 27 organizations by the end of 2015. RENEB is a European Network of biological and physical-retrospective dosimetry, with the capacity and capability to perform large-scale rapid individualized dose estimation. Specialized to handle large numbers of samples, RENEB is able to contribute to radiological emergency preparedness and wider large-scale research projects.
ERIC Educational Resources Information Center
Kirkup, Les; Pizzica, Jenny; Waite, Katrina; Srinivasan, Lakshmi
2010-01-01
Physics experiments for students not majoring in physics may have little meaning for those students and appear to them to be unconnected to their majors. This affects student engagement and influences the extent to which they regard their experiences in the physics laboratory as positive. We apply a framework for the development and…
2016-03-31
Sources and Physics of F10.7 ... INTRODUCTION: The Sun’s strong photospheric magnetic field plays a key role in the plasma physics of the solar atmosphere and thus significantly influences ... coronal and solar wind physics; it is also the sole large-scale physical observable readily measured from Earth or spacecraft. The photospheric magnetic ...
Impact of new physics on the EW vacuum stability in a curved spacetime background
NASA Astrophysics Data System (ADS)
Bentivegna, E.; Branchina, V.; Contino, F.; Zappalà, D.
2017-12-01
It has been recently shown that, contrary to an intuitive decoupling argument, the presence of new physics at very large energy scales (say around the Planck scale) can have a strong impact on the electroweak vacuum lifetime. In particular, the vacuum could be totally destabilized. This study was performed in a flat spacetime background, and it is important to extend the analysis to curved spacetime since these are Planckian-physics effects. It is generally expected that under these extreme conditions gravity should totally quench the formation of true vacuum bubbles, thus washing out the destabilizing effect of new physics. In this work we extend the analysis to curved spacetime and show that, although gravity pushes toward stabilization, the destabilizing effect of new physics is still (by far) the dominating one. In order to get model independent results, high energy new physics is parametrized in two different independent ways: as higher order operators in the Higgs field, or introducing new particles with very large masses. The destabilizing effect is observed in both cases, hinting at a general mechanism that does not depend on the parametrization details for new physics, thus maintaining the results obtained from the analysis performed in flat spacetime.
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
Where the Wild Things Are: Observational Constraints on Black Holes' Growth
NASA Astrophysics Data System (ADS)
Merloni, Andrea
2009-12-01
The physical and evolutionary relation between growing supermassive black holes (AGN) and host galaxies is currently the subject of intense research activity. Nevertheless, a deep theoretical understanding of such a relation is hampered by the unique multi-scale nature of the combined AGN-galaxy system, which defies any purely numerical or semi-analytic approach. Various physical processes active on different physical scales have signatures in different parts of the electromagnetic spectrum; thus, observations at different wavelengths and theoretical ideas can all contribute towards a ``large dynamic range'' view of the AGN phenomenon, capable of conceptually ``resolving'' the many scales involved. As an example, I will focus in this review on two major recent observational results on the cosmic evolution of supermassive black holes, focusing on the novel contribution given to the field by the COSMOS survey. First, I will discuss the evidence for the so-called ``downsizing'' in the AGN population as derived from large X-ray surveys. I will then present new constraints on the evolution of the black hole-galaxy scaling relation at 1
Galaxy formation and physical bias
NASA Technical Reports Server (NTRS)
Cen, Renyue; Ostriker, Jeremiah P.
1992-01-01
We have supplemented our code, which computes the evolution of the physical state of a representative piece of the universe, to include not only the dynamics of dark matter (with a standard PM code) and the hydrodynamics of the gaseous component (including detailed collisional and radiative processes), but also galaxy formation on a heuristic but plausible basis. If, within a cell, the gas is Jeans unstable, collapsing, and cooling rapidly, it is transformed into galaxy subunits, which are then followed with a collisionless code. After grouping them into galaxies, we estimate the relative distributions of galaxies and dark matter and the relative velocities of galaxies and dark matter. In a large-scale CDM run of 80/h Mpc size with 8 x 10^6 cells and dark matter particles, we find that the physical bias b on the 8/h Mpc scale is about 1.6 and increases towards smaller scales, and that the velocity bias is about 0.8 on the same scale. The comparable HDM simulation is highly biased with b = 2.7 on the 8/h Mpc scale. Implications of these results are discussed in the light of the COBE observations, which provide an accurate normalization for the initial power spectrum. CDM can be ruled out on the basis of too large a predicted small-scale velocity dispersion at greater than 95 percent confidence level.
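The bias factor quoted in this abstract is simply the ratio of fluctuation amplitudes in the galaxy and dark-matter overdensity fields smoothed on a given scale (here 8/h Mpc). A minimal sketch of that ratio, using randomly generated placeholder fields rather than actual simulation output (the seed, sample size, and field values are hypothetical):

```python
import numpy as np

# Hedged sketch: linear bias b = sigma_gal / sigma_dm, the ratio of the
# rms galaxy to rms dark-matter overdensity on a fixed smoothing scale.
# The fields below are mock data, not output of the simulation described.

rng = np.random.default_rng(0)

def linear_bias(delta_gal, delta_dm):
    """Bias as the ratio of standard deviations of two overdensity fields."""
    return np.std(delta_gal) / np.std(delta_dm)

delta_dm = rng.normal(0.0, 1.0, 10_000)   # mock dark-matter overdensities
delta_gal = 1.6 * delta_dm                # mock galaxy field, biased by 1.6

print(round(linear_bias(delta_gal, delta_dm), 2))  # -> 1.6
```

By construction the mock galaxy field is a linearly amplified copy of the dark-matter field, so the estimator recovers the input bias exactly; with real simulation data the two fields differ by more than a constant factor and b becomes scale-dependent, as the abstract notes.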
Probing high scale physics with top quarks at the Large Hadron Collider
NASA Astrophysics Data System (ADS)
Dong, Zhe
With the Large Hadron Collider (LHC) running at the TeV scale, we expect to find deviations from the Standard Model in the experiments and to understand the origin of these deviations. Being the heaviest elementary particle observed so far, with a mass at the electroweak scale, the top quark is a powerful probe of new phenomena of high-scale physics at the LHC. We therefore concentrate on studying high-scale physics phenomena with top-quark pair production or decay at the LHC. In this thesis, we study the discovery potential of string resonances decaying to the t/tbar final state, and examine the possibility of observing baryon-number-violating top-quark production or decay, at the LHC. We point out that string resonances with a string scale below 4 TeV can be detected via the t/tbar channel, by reconstructing the center-of-mass frame kinematics of the resonances from either the t/tbar semi-leptonic decay or recent techniques for identifying highly boosted tops. For the study of baryon-number-violating processes, using a model-independent effective approach and focusing on operators with minimal mass dimension, we find that the corresponding effective coefficients could be directly probed at the LHC already with an integrated luminosity of 1 inverse femtobarn at 7 TeV, and further constrained with 30 (100) inverse femtobarns at 7 (14) TeV.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating-system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
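The co-iteration mentioned above can be pictured as a fixed-point exchange between two coupled simulators at each time step: each federate consumes the other's interface values and the pair iterates until those values stop changing. A minimal sketch of that idea (a toy fixed-point loop, not the actual HELICS API; the federate models, tolerance, and iteration limit are hypothetical):

```python
# Hedged sketch: co-iteration between two federates at one time step.
# Federate A and federate B are stand-ins for, e.g., a power-flow solver
# and a communication-network simulator exchanging boundary values.

def co_iterate(step_a, step_b, x0, tol=1e-9, max_iter=100):
    """Iterate the A->B->A exchange until the interface value converges."""
    x = x0
    for _ in range(max_iter):
        y = step_a(x)        # federate A consumes the current interface value
        x_new = step_b(y)    # federate B feeds its result back to A
        if abs(x_new - x) < tol:
            return x_new     # converged: both models agree at this time step
        x = x_new
    raise RuntimeError("interface values did not converge")

# Toy federates whose coupled fixed point is x = 2 (i.e. x = 0.5*x + 1):
value = co_iterate(lambda x: 0.5 * x, lambda y: y + 1.0, x0=0.0)
print(round(value, 6))  # -> 2.0
```

In a real co-simulation each `step_*` call advances a full external simulator, and the convergence check runs over vectors of coupled quantities (voltages, packet delays) rather than a single scalar.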
Dar A. Robertsa; Michael Keller; Joao Vianei Soares
2003-01-01
We summarize early research on land-cover, land-use, and biophysical properties of vegetation from the Large Scale Biosphere-Atmosphere (LBA) experiment in Amazônia. LBA is an international research program developed to evaluate regional function and to determine how land-use and climate modify biological, chemical and physical processes there. Remote sensing has...
2011-11-01
...fusion energy-production processes of the particular type of reactor using a lithium (Li) blanket or related alloys such as the Pb-17Li eutectic. As such, tritium breeding is intimately connected with energy production, thermal management, radioactivity management, materials properties, and mechanical structures of any plausible future large-scale fusion power reactor. JASON is asked to examine the current state of scientific knowledge and engineering practice on the physical and chemical bases for large-scale tritium
What are the low- Q and large- x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, Q ~ 1-2 GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable x_bj is large, x_bj >~ 0.5, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-Q and large-x_bj physics.
Inflation physics from the cosmic microwave background and large scale structure
NASA Astrophysics Data System (ADS)
Abazajian, K. N.; Arnold, K.; Austermann, J.; Benson, B. A.; Bischoff, C.; Bock, J.; Bond, J. R.; Borrill, J.; Buder, I.; Burke, D. L.; Calabrese, E.; Carlstrom, J. E.; Carvalho, C. S.; Chang, C. L.; Chiang, H. C.; Church, S.; Cooray, A.; Crawford, T. M.; Crill, B. P.; Dawson, K. S.; Das, S.; Devlin, M. J.; Dobbs, M.; Dodelson, S.; Doré, O.; Dunkley, J.; Feng, J. L.; Fraisse, A.; Gallicchio, J.; Giddings, S. B.; Green, D.; Halverson, N. W.; Hanany, S.; Hanson, D.; Hildebrandt, S. R.; Hincks, A.; Hlozek, R.; Holder, G.; Holzapfel, W. L.; Honscheid, K.; Horowitz, G.; Hu, W.; Hubmayr, J.; Irwin, K.; Jackson, M.; Jones, W. C.; Kallosh, R.; Kamionkowski, M.; Keating, B.; Keisler, R.; Kinney, W.; Knox, L.; Komatsu, E.; Kovac, J.; Kuo, C.-L.; Kusaka, A.; Lawrence, C.; Lee, A. T.; Leitch, E.; Linde, A.; Linder, E.; Lubin, P.; Maldacena, J.; Martinec, E.; McMahon, J.; Miller, A.; Mukhanov, V.; Newburgh, L.; Niemack, M. D.; Nguyen, H.; Nguyen, H. T.; Page, L.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sehgal, N.; Seljak, U.; Senatore, L.; Sievers, J.; Silverstein, E.; Slosar, A.; Smith, K. M.; Spergel, D.; Staggs, S. T.; Stark, A.; Stompor, R.; Vieregg, A. G.; Wang, G.; Watson, S.; Wollack, E. J.; Wu, W. L. K.; Yoon, K. W.; Zahn, O.; Zaldarriaga, M.
2015-03-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments-the theory of cosmic inflation-and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5 σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation Physics from the Cosmic Microwave Background and Large Scale Structure
NASA Technical Reports Server (NTRS)
Abazajian, K.N.; Arnold,K.; Austermann, J.; Benson, B.A.; Bischoff, C.; Bock, J.; Bond, J.R.; Borrill, J.; Buder, I.; Burke, D.L.;
2013-01-01
Fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments---the theory of cosmic inflation---and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5-sigma measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Inflation physics from the cosmic microwave background and large scale structure
Abazajian, K. N.; Arnold, K.; Austermann, J.; ...
2014-06-26
Here, fluctuations in the intensity and polarization of the cosmic microwave background (CMB) and the large-scale distribution of matter in the universe each contain clues about the nature of the earliest moments of time. The next generation of CMB and large-scale structure (LSS) experiments are poised to test the leading paradigm for these earliest moments—the theory of cosmic inflation—and to detect the imprints of the inflationary epoch, thereby dramatically increasing our understanding of fundamental physics and the early universe. A future CMB experiment with sufficient angular resolution and frequency coverage that surveys at least 1% of the sky to a depth of 1 uK-arcmin can deliver a constraint on the tensor-to-scalar ratio that will either result in a 5σ measurement of the energy scale of inflation or rule out all large-field inflation models, even in the presence of foregrounds and the gravitational lensing B-mode signal. LSS experiments, particularly spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, will complement the CMB effort by improving current constraints on running of the spectral index by up to a factor of four, improving constraints on curvature by a factor of ten, and providing non-Gaussianity constraints that are competitive with the current CMB bounds.
Hao, Shijie; Cui, Lishan; Wang, Hua; ...
2016-02-10
Crystals held at ultrahigh elastic strains and stresses may exhibit exceptional physical and chemical properties. Individual metallic nanowires can sustain ultra-large elastic strains of 4-7%. However, retaining elastic strains of such magnitude in kilogram-scale nanowires is challenging. Here, we find that under active load, ~5.6% elastic strain can be achieved in Nb nanowires in a composite material. Moreover, large tensile (2.8%) and compressive (-2.4%) elastic strains can be retained in kilogram-scale Nb nanowires when the composite is unloaded to a free-standing condition. It is then demonstrated that the retained tensile elastic strains of Nb nanowires significantly increase their superconducting transition temperature and critical magnetic fields, corroborating ab initio calculations based on BCS theory. This free-standing nanocomposite design paradigm opens new avenues for retaining ultra-large elastic strains in great quantities of nanowires and elastic-strain engineering at industrial scale.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley
2017-04-01
Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation corresponding to 14 % per degree. Our hypothesis - as supported by a simple physical argument presented here - is that this 2CC behaviour arises from the physics of convective clouds. So, we think that this response is due to local feedbacks related to the convective activity, while other large scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for the data in The Netherlands. Events are precipitation measurements clustered in time and space derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity on convective activity. Almost no differences in relative humidity and the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. 
This increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.
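The scaling relation in this abstract amounts to a simple compound-growth law: intensity grows by roughly 7 % per kelvin of dew point rise under the Clausius-Clapeyron (CC) relation, and by about 14 % per kelvin under the 2CC dependence reported for The Netherlands. A minimal illustrative sketch (the base intensity and temperature increments are hypothetical values, not data from the study):

```python
# Hedged sketch: CC (~7 %/K) vs 2CC (~14 %/K) scaling of hourly
# precipitation extremes with surface dew point temperature.
# The reference intensity of 10 mm/h is an illustrative placeholder.

CC_RATE = 0.07  # fractional CC increase per kelvin of dew point rise

def scaled_intensity(i_ref, d_dewpoint, cc_multiple=2.0):
    """Peak intensity after a dew point rise of d_dewpoint kelvin,
    assuming a response of cc_multiple times the CC relation."""
    return i_ref * (1.0 + cc_multiple * CC_RATE) ** d_dewpoint

# A 2 K rise in dew point under 2CC scaling (14 %/K, compounded):
print(round(scaled_intensity(10.0, 2.0), 2))  # -> 13.0
```

Comparing `cc_multiple=1.0` and `cc_multiple=2.0` for the same dew point rise shows how quickly the two responses diverge, which is why distinguishing CC from 2CC behaviour matters for projecting extremes under warming.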
Astroparticle physics and cosmology.
Mitton, Simon
2006-05-20
Astroparticle physics is an interdisciplinary field that explores the connections between the physics of elementary particles and the large-scale properties of the universe. Particle physicists have developed a standard model to describe the properties of matter in the quantum world. This model explains the bewildering array of particles in terms of constructs made from two or three quarks. Quarks, leptons, and three of the fundamental forces of physics are the main components of this standard model. Cosmologists have also developed a standard model to describe the bulk properties of the universe. In this new framework, ordinary matter, such as stars and galaxies, makes up only around 4% of the material universe. The bulk of the universe is dark matter (roughly 23%) and dark energy (about 73%). This dark energy drives an acceleration that means that the expanding universe will grow ever larger. String theory, in which the universe has several invisible dimensions, might offer an opportunity to unite the quantum description of the particle world with the gravitational properties of the large-scale universe.
Scale-dependent geomorphic responses to active restoration and implications for cutthroat trout
NASA Astrophysics Data System (ADS)
Salant, N.; Miller, S. W.
2009-12-01
The predominant goal of instream habitat restoration is to increase the diversity, density and/or biomass of aquatic organisms through enhanced physical heterogeneity and increased food availability. In physically homogenized systems, habitat restoration is most commonly achieved at the reach-scale through the addition of structures or channel reconfiguration. Despite the completion of over 6,000 restoration projects in the United States, studies of fish responses to habitat restoration have largely produced equivocal results. Paradoxically, restoration monitoring overwhelmingly focuses on fish response without understanding how these responses link to the physical variables being altered and the scale at which geomorphic changes occur. Our study investigates whether instream habitat restoration affects geomorphic conditions at spatial scales relevant to the organism of interest (i.e. the spatial scale of the variables limiting to that organism). We measure the effects of active restoration on geomorphic metrics at three spatial scales (local, unit, and reach) using a before-after-control-impact design in a historically disturbed and heavily managed cutthroat trout stream. Observed trout habitat preferences (for spawning and juvenile/adult residence) are used to identify the limiting physical variables and are compared to the scale of spatially explicit geomorphic responses. Four reaches representing three different stages of restoration (before, one month and one year after) are surveyed for local-scale physical conditions, unit- and reach-scale morphology, resident fish use, and redd locations. Local-scale physical metrics include depth, nearbed and average velocity, overhead cover, particle size, and water quality metrics. Point measurements stratified by morphological unit are used to determine physical variability among unit types. Habitat complexity and availability are assessed at the reach-scale from topographic surveys and unit maps. 
Our multi-scale, process-based approach evaluates whether a commonly used restoration strategy creates geomorphic heterogeneity at scales relevant to fish diversity and microhabitat utilization, an understanding that will improve the efficiency and success of future restoration projects.
Full-color large-scaled computer-generated holograms for physical and non-physical objects
NASA Astrophysics Data System (ADS)
Matsushima, Kyoji; Tsuchiyama, Yasuhiro; Sonobe, Noriaki; Masuji, Shoya; Yamaguchi, Masahiro; Sakamoto, Yuji
2017-05-01
Several full-color high-definition CGHs are created for reconstructing 3D scenes that include real physical objects. The fields of the physical objects are generated or captured by employing three techniques: a 3D scanner, synthetic-aperture digital holography, and multi-viewpoint images. Full-color reconstruction of the high-definition CGHs is realized by RGB color filters. Optical reconstructions are presented to verify these techniques.
Earthquake cycles and physical modeling of the process leading up to a large earthquake
NASA Astrophysics Data System (ADS)
Ohnaka, Mitiyasu
2004-08-01
A thorough discussion is made of what the rational constitutive law for earthquake ruptures ought to be, from the standpoint of the physics of rock friction and fracture and on the basis of solid facts observed in the laboratory. From this standpoint, it is concluded that the constitutive law should be a slip-dependent law with parameters that may depend on slip rate or time. With the long-term goal of establishing a rational methodology for forecasting large earthquakes, the entire process of one cycle for a typical, large earthquake is modeled, and a comprehensive scenario that unifies individual models for intermediate- and short-term (immediate) forecasts is presented within the framework based on the slip-dependent constitutive law and the earthquake cycle model. The earthquake cycle includes the phase of accumulation of elastic strain energy with tectonic loading (phase II), and the phase of rupture nucleation at the critical stage where an adequate amount of elastic strain energy has been stored (phase III). Phase II plays a critical role in physical modeling for intermediate-term forecasting, and phase III in physical modeling for short-term (immediate) forecasting. The seismogenic layer and individual faults therein are inhomogeneous, and some of the physical quantities inherent in earthquake ruptures exhibit scale dependence. It is therefore critically important to incorporate the properties of inhomogeneity and physical scaling in order to construct realistic, unified scenarios with predictive capability. The scenario presented may be significant and useful as a necessary first step in establishing a methodology for forecasting large earthquakes.
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Numerical methods for large-scale, time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1979-01-01
A survey of numerical methods for time-dependent partial differential equations is presented. The emphasis is on practical applications to large-scale problems. A discussion of new developments in high-order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented, including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.
The biology and polymer physics underlying large-scale chromosome organization.
Sazer, Shelley; Schiessel, Helmut
2018-02-01
Chromosome large-scale organization is a beautiful example of the interplay between physics and biology. DNA molecules are polymers and thus belong to the class of molecules for which physicists have developed models and formulated testable hypotheses to understand their arrangement and dynamic properties in solution, based on the principles of polymer physics. Biologists documented and discovered the biochemical basis for the structure, function and dynamic spatial organization of chromosomes in cells. The underlying principles of chromosome organization have recently been revealed in unprecedented detail using high-resolution chromosome capture technology that can simultaneously detect chromosome contact sites throughout the genome. These independent lines of investigation have now converged on a model in which DNA loops, generated by the loop extrusion mechanism, are the basic organizational and functional units of the chromosome. © 2017 The Authors. Traffic published by John Wiley & Sons Ltd.
Deciphering Dynamical Patterns of Growth Processes
ERIC Educational Resources Information Center
Kolakowska, A.
2009-01-01
Large systems of statistical physics often display properties that are independent of particulars that characterize their microscopic components. Universal dynamical patterns are manifested by the presence of scaling laws, which provides a common insight into governing physics of processes as vastly diverse as, e.g., growth of geological…
ERIC Educational Resources Information Center
Zhang, Ping; Ding, Lin
2013-01-01
This paper reports a cross-grade comparative study of Chinese precollege students' epistemological beliefs about physics by using the Colorado Learning Attitudes Survey about Sciences (CLASS). Our students of interest are middle and high schoolers taking traditional lecture-based physics as a mandatory science course each year from the 8th grade…
ERIC Educational Resources Information Center
Shephard, Roy J.; Trudeau, Francois
2013-01-01
This article offers a brief and personal account of the historical background, implementation and principal findings from the Trois-Rivieres regional project, a large-scale quasi-experimental intervention that tested the impact of providing a daily hour of specialist-taught quality physical education upon the physical and mental development of…
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km2 Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
Parameterization Interactions in Global Aquaplanet Simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Suselj, Kay; Teixeira, João.
2018-02-01
Global climate simulations rely on parameterizations of physical processes that have scales smaller than the resolved ones. In the atmosphere, these parameterizations represent moist convection, boundary layer turbulence and convection, cloud microphysics, longwave and shortwave radiation, and the interaction with the land and ocean surface. These parameterizations can generate different climates involving a wide range of interactions among parameterizations and between the parameterizations and the resolved dynamics. To gain a simplified understanding of a subset of these interactions, we perform aquaplanet simulations with the global version of the Weather Research and Forecasting (WRF) model employing a range (in terms of properties) of moist convection and boundary layer (BL) parameterizations. Significant differences are noted in the simulated precipitation amounts, its partitioning between convective and large-scale precipitation, as well as in the radiative impacts. These differences arise from the way the subcloud physics interacts with convection, both directly and through various pathways involving the large-scale dynamics and the boundary layer, convection, and clouds. A detailed analysis of the profiles of the different tendencies (from the different physical processes) for both potential temperature and water vapor is performed. While different combinations of convection and boundary layer parameterizations can lead to different climates, a key conclusion of this study is that similar climates can be simulated with model versions that are different in terms of the partitioning of the tendencies: the vertically distributed energy and water balances in the tropics can be obtained with significantly different profiles of large-scale, convection, and cloud microphysics tendencies.
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions-genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA © The Author 2014. Published by Oxford University Press.
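The scale-aware idea behind these measures can be sketched with a graph diffusion (heat) kernel on a toy network; the scale grid and the "closeness" readout below are illustrative assumptions, not the paper's STM definitions:

```python
import numpy as np

def heat_kernel(adj, t):
    """Graph diffusion kernel K = exp(-t L), via eigendecomposition of the Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    w, v = np.linalg.eigh(lap)
    return v @ np.diag(np.exp(-t * w)) @ v.T

# Toy network: a 4-node path graph 0-1-2-3.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

# A scale-space of "diffusion closeness" between the two end nodes:
scales = [0.1, 1.0, 10.0]
closeness = [heat_kernel(adj, t)[0, 3] for t in scales]
print(closeness)
```

At small t the kernel is close to the identity, so only direct interactions count; at large t it approaches the uniform stationary distribution, so distant nodes become "close" only at coarse scales, which is exactly the small/medium/large-scale distinction the abstract draws.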
Analytic prediction of baryonic effects from the EFT of large scale structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu
2015-05-01
The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as ∝ k² P(k), with P(k) being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as k ≅ 0.3 h Mpc⁻¹ or k ≅ 0.6 h Mpc⁻¹, depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.
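The quoted functional form is easy to illustrate numerically: the fractional baryonic correction to the power spectrum scales as k², so it quadruples every time k doubles. The power-law spectrum and the coefficient c_b below are purely illustrative stand-ins, not fitted values from the paper:

```python
import numpy as np

def p_lin(k):
    # Toy power-law linear spectrum (a real analysis would use a Boltzmann code).
    return k ** -1.5

c_b = -1.0  # effective baryonic coefficient in (Mpc/h)^2; hypothetical value

k = np.array([0.05, 0.1, 0.2])        # wavenumbers in h/Mpc
delta_p = c_b * k**2 * p_lin(k)       # leading baryonic contribution ~ k^2 P(k)
frac = delta_p / p_lin(k)             # fractional change, equal to c_b * k^2
print(frac)
```

Only the prefactor c_b depends on the star-formation physics; the k² shape is fixed by the effective-stress-tensor argument.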
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Continuous data assimilation for downscaling large-footprint soil moisture retrievals
NASA Astrophysics Data System (ADS)
Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.
2016-10-01
Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach can feasibly generate fine-scale soil moisture fields across large extents from coarse-scale observations. Likely applications of this approach include generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
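The nudging term can be sketched in one dimension: a fine-grid state is relaxed toward an interpolant of coarse-grid observations while its own dynamics act. The relaxation strength mu and the toy diffusive "physics" here are assumptions standing in for the HYDRUS vadose-zone model:

```python
import numpy as np

# Fine grid and a coarse observation grid (every 8th point).
n = 64
x_fine = np.linspace(0.0, 1.0, n)
x_coarse = x_fine[::8]
obs = np.sin(2 * np.pi * x_coarse)   # coarse-scale "measurements"

theta = np.zeros(n)                  # fine-scale model state (e.g. soil moisture)
mu, dt = 5.0, 0.05                   # nudging strength and time step (illustrative)

for _ in range(200):
    # Toy model physics: mild diffusion (stand-in for the real vadose-zone model).
    diffusion = 0.1 * (np.roll(theta, 1) - 2 * theta + np.roll(theta, -1))
    # Nudging term: misfit between the interpolant of the coarse
    # observations and the fine-grid model solution.
    misfit = np.interp(x_fine, x_coarse, obs) - theta
    theta += dt * (diffusion + mu * misfit)

print(np.abs(theta - np.interp(x_fine, x_coarse, obs)).max())
```

Because the misfit is built from an interpolant of the coarse data, the fine solution is constrained at large scales by the measurements while still varying freely at the fine scale.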
Neutrino footprint in large scale structure
NASA Astrophysics Data System (ADS)
Garay, Carlos Peña; Verde, Licia; Jimenez, Raul
2017-03-01
Recent constraints on the sum of neutrino masses inferred by analyzing cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always show a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications in the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass eigenstate mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
Groups of galaxies in the Center for Astrophysics redshift survey
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1989-01-01
By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.
Kusenko, Alexander; Pearce, Lauren; Yang, Louis
2015-02-13
The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.
USDA-ARS?s Scientific Manuscript database
Background: As the population of older adults continues to increase, the dissemination of strategies to maintain independence of older persons is of critical public health importance. Recent large-scale clinical trial evidence has definitively shown intervention of moderate-intensity physical activi...
Astronomy Demonstrations and Models.
ERIC Educational Resources Information Center
Eckroth, Charles A.
Demonstrations in astronomy classes seem to be more necessary than in physics classes for three reasons. First, many of the events are very large scale and impossibly remote from human senses. Secondly, while physics courses use discussions of one- and two-dimensional motion, three-dimensional motion is the normal situation in astronomy; thus,…
NASA Astrophysics Data System (ADS)
Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael
2015-06-01
Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.
NASA Astrophysics Data System (ADS)
Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry
2015-11-01
Shock cells and large scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and direct the focus of the physical experiment. The flow in the experiment is visualized using long exposure Schlieren photography, with time resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near-field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.
NASA Technical Reports Server (NTRS)
Britcher, C. P.
1983-01-01
Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied and it is concluded that system cost and complexity need not be seriously increased.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
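The finite-volume idea described above, fluxes through cell faces acting as a discrete divergence, can be sketched in a few lines (a minimal Python analogue; GenASiS itself is Fortran 2003):

```python
import numpy as np

# Minimal 1D finite-volume scheme for the conservation law  du/dt + d(a u)/dx = 0.
# The update is a discrete divergence: flux in through the left face and out
# through the right face of each cell, so the total integral of u is conserved.
n = 100
a, dx = 1.0, 1.0 / 100                   # advection speed, cell width
dt = 0.5 * dx / a                        # CFL-stable time step
x = (np.arange(n) + 0.5) * dx            # cell centers
u = np.exp(-((x - 0.3) ** 2) / 0.005)    # initial profile, stored as cell averages

total0 = u.sum() * dx
for _ in range(50):
    # Upwind flux through the face to the left of each cell (periodic boundaries).
    flux_left = a * np.roll(u, 1)
    flux_right = a * u
    u += dt / dx * (flux_left - flux_right)

print(abs(u.sum() * dx - total0))
```

Because every face flux enters one cell and leaves its neighbor, the total is conserved to machine precision regardless of the flux formula chosen; this telescoping property is what the abstract means by modeling the divergence operator.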
From Lattice Boltzmann to hydrodynamics in dissipative relativistic fluids
NASA Astrophysics Data System (ADS)
Gabbana, Alessandro; Mendoza, Miller; Succi, Sauro; Tripiccione, Raffaele
2017-11-01
Relativistic fluid dynamics is currently applied to several fields of modern physics, covering many physical scales, from astrophysics, to atomic scales (e.g. in the study of effective 2D systems such as graphene) and further down to subnuclear scales (e.g. quark-gluon plasmas). This talk focuses on recent progress in the largely debated connection between kinetic transport coefficients and macroscopic hydrodynamic parameters in dissipative relativistic fluid dynamics. We use a new relativistic Lattice Boltzmann method (RLBM), able to handle from ultra-relativistic to almost non-relativistic flows, and obtain strong evidence that the Chapman-Enskog expansion provides the correct pathway from kinetic theory to hydrodynamics. This analysis confirms recently obtained theoretical results, which can be used to obtain accurate calibrations for RLBM methods applied to realistic physics systems in the relativistic regime. Using this calibration methodology, RLBM methods are able to deliver improved physical accuracy in the simulation of the physical systems described above. European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 642069.
NASA Astrophysics Data System (ADS)
Hristova-Veleva, S.; Chao, Y.; Vane, D.; Lambrigtsen, B.; Li, P. P.; Knosp, B.; Vu, Q. A.; Su, H.; Dang, V.; Fovell, R.; Tanelli, S.; Garay, M.; Willis, J.; Poulsen, W.; Fishbein, E.; Ao, C. O.; Vazquez, J.; Park, K. J.; Callahan, P.; Marcus, S.; Haddad, Z.; Fetzer, E.; Kahn, R.
2007-12-01
In spite of recent improvements in hurricane track forecast accuracy, currently there are still many unanswered questions about the physical processes that determine hurricane genesis, intensity, track and impact on the large-scale environment. Furthermore, a significant amount of work remains to be done in validating hurricane forecast models, understanding their sensitivities and improving their parameterizations. None of this can be accomplished without a comprehensive set of multiparameter observations that are relevant to both the large-scale and the storm-scale processes in the atmosphere and in the ocean. To address this need, we have developed a prototype of a comprehensive hurricane information system of high-resolution satellite, airborne and in-situ observations and model outputs pertaining to: i) the thermodynamic and microphysical structure of the storms; ii) the air-sea interaction processes; iii) the larger-scale environment as depicted by the SST, ocean heat content and the aerosol loading of the environment. Our goal was to create a one-stop place to provide the researchers with an extensive set of observed hurricane data, and their graphical representation, together with large-scale and convection-resolving model output, all organized in an easy way to determine when coincident observations from multiple instruments are available. Analysis tools will be developed in the next step. The analysis tools will be used to determine spatial, temporal and multiparameter covariances that are needed to evaluate model performance, provide information for data assimilation and characterize and compare observations from different platforms. We envision that the developed hurricane information system will help in the validation of the hurricane models, in the systematic understanding of their sensitivities and in the improvement of the physical parameterizations employed by the models.
Furthermore, it will help in studying the physical processes that affect hurricane development and impact on the large-scale environment. This talk will describe the developed prototype of the hurricane information system. In addition, we will use a set of WRF hurricane simulations and compare simulated to observed structures to illustrate how the information system can be used to discriminate between simulations that employ different physical parameterizations. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Yan; Shao, Hao; Zhong, Yaozhao; Zhang, Sai; Zhao, Zongxi
2012-06-01
Band structure and wave localization are investigated for sea surface water waves over large-scale sand wave topography. Sand wave height, sand wave width, water depth, and water width between adjacent sand waves have significant impact on band gaps. Random fluctuations of sand wave height, sand wave width, and water depth induce water wave localization. However, random water width produces a perfect transmission tunnel of water waves at a certain frequency so that localization does not occur no matter how large a disorder level is applied. Together with theoretical results, the field experimental observations in the Taiwan Bank suggest band gap and wave localization as the physical mechanism of sea surface water wave propagating over natural large-scale sand waves.
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet sun: (1) Magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules. (2) The large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation. (3) The heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
ERIC Educational Resources Information Center
Michael, William B.; And Others
1975-01-01
The scale yielded three major dimensions that were essentially invariant across the three samples: physical appearance, socially unacceptable (bad) behavior, and academic or school status. (Author/RC)
USDA-ARS?s Scientific Manuscript database
Climate models predict increased variability in precipitation regimes, which will likely increase frequency/duration of drought. Reductions in soil moisture affect physical and chemical characteristics of the soil habitat and can influence soil organisms such as mites and nematodes. These organisms ...
Use of Second Generation Coated Conductors for Efficient Shielding of dc Magnetic Fields (Postprint)
2010-07-15
A layer of superconducting film can attenuate an external magnetic field of up to 5 mT by more than an order of magnitude. This approach appears to be especially promising for the realization of large-scale high-Tc superconducting screens. (© 2010 American Institute of Physics, doi:10.1063/1.3459895)
Numerical dissipation vs. subgrid-scale modelling for large eddy simulation
NASA Astrophysics Data System (ADS)
Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos
2017-05-01
This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in standard as well as in dynamic version. The main strength of being able to correctly calibrate numerical dissipation is the possibility to regularise the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.
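The core mechanism, a purely numerical viscosity that acts only near the mesh scale while leaving large scales untouched, can be illustrated with a simple spectral filter. The cutoff k_c, amplitude nu_s, and Gaussian damping profile below are arbitrary illustrative choices, not the Pao-based calibration used in the paper:

```python
import numpy as np

# One step of an illustrative spectral-viscosity filter: damp only wavenumbers
# above a cutoff k_c, leaving the resolved large scales exactly unchanged.
n = 128
u = np.random.default_rng(0).standard_normal(n)   # stand-in velocity field
k = np.fft.rfftfreq(n, d=1.0 / n)                 # integer wavenumbers 0..n/2

k_c, nu_s, dt = 20.0, 5e-3, 0.1                   # illustrative parameters
damping = np.where(k > k_c, np.exp(-nu_s * dt * (k - k_c) ** 2), 1.0)

u_hat = np.fft.rfft(u)
u_filtered = np.fft.irfft(u_hat * damping, n)
```

Because modes below the cutoff pass through exactly, the dissipation acts as a subgrid-scale regularization at the mesh scale rather than a global smoothing, which is the property the study exploits to obtain numerically converged solutions.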
Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj
2014-01-01
Generating genomic resources in terms of molecular markers is imperative in molecular breeding for crop improvement. Though large-scale development and application of microsatellite markers was reported in the model crop foxtail millet, no such large-scale study was conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto the 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested the function for ∼71.5% of the ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs representing the foxtail millet genome and the different functional groups showed a high level of cross-genera amplification, at an average of ∼85%, in eight millet and five non-millet species. The efficacy of the ILP markers for distinguishing the foxtail millet is demonstrated by observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of physically mapped ILP markers demonstrated a substantial percentage of sequence-based orthology and syntenic relationship between foxtail millet chromosomes and sorghum (∼50%), maize (∼46%), rice (∼21%) and Brachypodium (∼21%) chromosomes. Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
NASA Astrophysics Data System (ADS)
Coon, E.; Jan, A.; Painter, S. L.; Moulton, J. D.; Wilson, C. J.
2017-12-01
Many permafrost-affected regions in the Arctic manifest a polygonal patterned ground, which contains large carbon stores and is vulnerable to climate change as warming temperatures drive melting ice wedges, polygon degradation, and thawing of the underlying carbon-rich soils. Understanding the fate of this carbon is difficult. The system is controlled by complex, nonlinear physics coupling biogeochemistry, thermal-hydrology, and geomorphology, and there is a strong spatial scale separation between microtopography (at the scale of an individual polygon) and the scale of landscape change (at the scale of many thousands of polygons). Physics-based models have come a long way, and are now capable of representing the diverse set of processes, but only on individual polygons or a few polygons. Empirical models have been used to upscale across land types, including ecotypes evolving from low-centered (pristine) polygons to high-centered (degraded) polygons, and do so over large spatial extent, but are limited in their ability to discern causal process mechanisms. Here we present a novel strategy that uses physics-based models across scales, bringing together multiple capabilities to capture polygon degradation under a warming climate and its impacts on thermal-hydrology. We use fine-scale simulations on individual polygons to motivate a mixed-dimensional strategy that couples one-dimensional columns representing each individual polygon through two-dimensional surface flow. A subgrid model is used to incorporate the effects of surface microtopography on surface flow; this model is described and calibrated to fine-scale simulations. And critically, a subsidence model that tracks volume loss in bulk ice wedges is used to alter the subsurface structure and subgrid parameters, enabling the inclusion of the feedbacks associated with polygon degradation.
This combined strategy results in a model that is able to capture the key features of polygon permafrost degradation, but in a simulation across a large spatial extent of polygonal tundra.
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start by investigating, via first-principles methods, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once our results are fully analyzed they can be used to interpret and understand experimental data, with a few difficult issues to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fosalba, Pablo; Dore, Olivier
2007-11-15
Cross correlation between the cosmic microwave background (CMB) and large-scale structure is a powerful probe of dark energy and gravity on the largest physical scales. We introduce a novel estimator, the CMB-velocity correlation, that has most of its power on large scales and that, at low redshift, delivers up to a factor of 2 higher signal-to-noise ratio than the recently detected CMB-dark matter density correlation expected from the integrated Sachs-Wolfe effect. We propose to use a combination of peculiar velocities measured from supernovae type Ia and kinetic Sunyaev-Zeldovich cluster surveys to reveal this signal and forecast dark energy constraints that can be achieved with future surveys. We stress that low redshift peculiar velocity measurements should be exploited with complementary deeper large-scale structure surveys for precision cosmology.
Cosmic homogeneity: a spectroscopic and model-independent measurement
NASA Astrophysics Data System (ADS)
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
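The counts-in-spheres idea behind such homogeneity estimators can be sketched in a few lines: for a homogeneous distribution, the average count within radius θ scales with the enclosed volume (area, in a 2D toy), so the correlation dimension D2 approaches the dimension of space. The sketch below is a flat-sky illustration with synthetic uniform data, not the survey's actual estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flat-sky patch (2 x 2 degrees) with a statistically homogeneous
# (uniform) angular distribution of 2000 "galaxies". Names and numbers
# are illustrative, not the survey's configuration.
pts = rng.uniform(0.0, 2.0, size=(2000, 2))

def mean_counts_in_discs(points, theta):
    """Average neighbour count within angular radius theta (flat sky)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return ((d < theta).sum(axis=1) - 1).mean()  # exclude the centre itself

thetas = np.array([0.1, 0.2, 0.4])
counts = np.array([mean_counts_in_discs(pts, t) for t in thetas])

# Homogeneity => N(<theta) grows like theta^2 in 2D, so the scaled
# counts are roughly constant and the correlation dimension
# D2 = d ln N / d ln theta approaches 2 (edge effects bias it low).
scaled = counts / thetas**2
D2 = np.diff(np.log(counts)) / np.diff(np.log(thetas))
print(scaled, D2)
```

For a clustered (inhomogeneous) distribution, D2 would stay below the spatial dimension at these scales instead of converging to it.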
NASA Astrophysics Data System (ADS)
Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.
2013-12-01
Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. 
Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
Strain localization in models and nature: bridging the gaps.
NASA Astrophysics Data System (ADS)
Burov, E.; Francois, T.; Leguille, J.
2012-04-01
Mechanisms of strain localization and their role in tectonic evolution are still largely debated. Indeed, laboratory data on strain localization processes are not abundant; they do not cover the entire range of possible mechanisms and have to be extrapolated, sometimes with great uncertainty, to geological scales, while observations of localization processes at outcrop scale are scarce, not always representative, and usually difficult to quantify. Numerical thermo-mechanical models allow us to investigate the relative importance of some of the localization processes, whether hypothesized or observed at laboratory or outcrop scale. Numerical models can test different observationally or analytically derived laws in terms of their applicability to natural scales and tectonic processes. The models are limited, however, in their capacity to reproduce physical mechanisms, and they necessarily simplify the softening laws, leading to "numerical" localization. Numerical strain localization is also limited by grid resolution and by the ability of specific numerical codes to handle large strains and the complexity of the associated physical phenomena. Hence, multiple iterations between observations and models are needed to elucidate the causes of strain localization in nature. Here we investigate the relative impact of different weakening laws on localization of deformation using large-strain thermo-mechanical models. Using several "generic" rifting and collision settings, we test the implications of structural softening, tectonic heritage, shear heating, friction angle and cohesion softening, and ductile softening (mimicking grain-size reduction), as well as a number of other mechanisms such as fluid-assisted phase changes.
The results suggest that different mechanisms of strain localization may interfere in nature, yet in most cases it is not easy to establish quantifiable links between the laboratory data and the best-fitting parameters of the effective softening laws that reproduce large-scale tectonic evolution. For example, one of the most effective and widely used mechanisms of "numerical" strain localization is friction angle softening. Yet this very law appears to be the most difficult to justify on physical and observational grounds.
Unsuppressed primordial standard clocks in warm quasi-single field inflation
NASA Astrophysics Data System (ADS)
Tong, Xi; Wang, Yi; Zhou, Siyi
2018-06-01
We study the non-Gaussianities in quasi-single field inflation with a warm inflation background. Thermal effects at small scales can substantially enhance the magnitude of the primordial standard clock signal. This scenario offers the possibility of probing the UV physics of the very early universe without the exponentially small Boltzmann factor that arises when the mass of the isocurvaton is much heavier than the Hubble scale. The thermal effects at small scales can be studied using flat-space thermal field theory, connected to an effective description with a non-Bunch-Davies vacuum at large scales, where the clock signal is large.
Perspectives on integrated modeling of transport processes in semiconductor crystal growth
NASA Technical Reports Server (NTRS)
Brown, Robert A.
1992-01-01
The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.
Investigation of rock samples by neutron diffraction and ultrasonic sounding
NASA Astrophysics Data System (ADS)
Burilichev, D. E.; Ivankina, T. I.; Klima, K.; Locajicek, T.; Nikitin, A. N.; Pros, Z.
2000-03-01
The interpretation of large-scale geophysical anisotropies largely depends upon knowledge of rock anisotropies of all kinds (composition, foliation, grain shape, physical properties). Almost all physical rock properties (e.g. elastic, thermal, and magnetic properties) are related to the textures of the rock constituents, since these properties are anisotropic for the single crystal. Although anisotropy determinations are numerous, systematic investigations are scarce. Therefore, several rock samples with different microfabrics were selected for texture analysis and for determination of their P-wave distributions at various confining pressures.
Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; such large-scale networks must instead be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
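A minimal example of the kind of fully distributed consensus described here is randomized gossip averaging: each mote repeatedly averages with a neighbour using only local information, yet the network converges toward the global mean. The ring topology and step count below are illustrative assumptions, not the paper's protocol.

```python
import random

# Randomized gossip averaging: every mote holds one local reading and
# repeatedly averages with a random neighbour. No central coordinator,
# no global knowledge -- yet the network converges to the global mean.
# Ring topology and step count are illustrative assumptions.
random.seed(1)

n = 100
values = [random.uniform(0.0, 50.0) for _ in range(n)]          # local readings
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring network

true_mean = sum(values) / n
initial_spread = max(values) - min(values)

for _ in range(20000):
    i = random.randrange(n)
    j = random.choice(neighbours[i])
    # Pairwise gossip step: both motes adopt their mutual average,
    # which conserves the global sum (and hence the global mean).
    values[i] = values[j] = (values[i] + values[j]) / 2.0

spread = max(values) - min(values)
print(true_mean, initial_spread, spread)
```

On a ring, convergence is slow because information diffuses one hop per exchange; denser topologies such as random geometric graphs, more typical of massive sensor deployments, mix much faster.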
Natural inflation with pseudo Nambu-Goldstone bosons
NASA Technical Reports Server (NTRS)
Freese, Katherine; Frieman, Joshua A.; Olinto, Angela V.
1990-01-01
It is shown that a pseudo-Nambu-Goldstone boson with a suitable potential can naturally give rise to an epoch of inflation in the early universe. Mass scales that arise in particle physics models with a gauge group that becomes strongly interacting at a certain scale are shown to satisfy the conditions for successful inflation. The density fluctuation spectrum is non-scale-invariant, with extra power on large length scales.
Physics and biochemical engineering: 3
NASA Astrophysics Data System (ADS)
Fairbrother, Robert; Riddle, Wendy; Fairbrother, Neil
2006-09-01
Once an antibiotic has been produced on a large scale, as described in our preceding articles, it has to be extracted and purified. Filtration and centrifugation are the two main ways of doing this, and the design of industrial processing systems is governed by simple physics involving factors such as pressure, viscosity and rotational motion.
2011-09-30
and easy to apply in large-scale physical-biogeochemical simulations. We also collaborate with Dr. Curt Mobley at Sequoia Scientific for the second...we are collaborating with Dr. Curtis Mobley of Sequoia Scientific on improving the link between the radiative transfer model (EcoLight) within the
ERIC Educational Resources Information Center
Wilcox, Bethany R.; Pollock, Steven J.
2014-01-01
Free-response research-based assessments, like the Colorado Upper-division Electrostatics Diagnostic (CUE), provide rich, fine-grained information about students' reasoning. However, because of the difficulties inherent in scoring these assessments, the majority of the large-scale conceptual assessments in physics are multiple choice. To increase…
Physical Aggression in Children and Adolescents with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Mazurek, Micah O.; Kanne, Stephen M.; Wodka, Ericka L.
2013-01-01
Aggression is a clinically significant problem for many children and adolescents with autism spectrum disorders (ASD). However, there have been few large-scale studies addressing this issue. The current study examined the prevalence and correlates of physical aggression in a sample of 1584 children and adolescents with ASD enrolled in the Autism…
Lumped Parameter Models for Predicting Nitrogen Transport in Lower Coastal Plain Watersheds
Devendra M. Amatya; George M. Chescheir; Glen P. Fernandez; R. Wayne Skaggs; F. Birgand; J.W. Gilliam
2003-01-01
In recent years, physically based, comprehensive, distributed watershed-scale hydrologic/water quality models have been developed and applied to evaluate cumulative effects of land and water management practices on receiving waters. Although these complex physically based models are capable of simulating the impacts of these changes in large watersheds, they are often...
ERIC Educational Resources Information Center
Mashood, K. K.; Singh, Vijay A.
2013-01-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…
Depressive Symptoms Negate the Beneficial Effects of Physical Activity on Mortality Risk
ERIC Educational Resources Information Center
Lee, Pai-Lin
2013-01-01
The aim of this study is to: (1) compare the association between various levels of physical activity (PA) and mortality; and (2) examine the potential modifying effect of depressive symptoms on the PA-mortality associations. Previous large scale randomized studies rarely assess the association in conjunction with modifying effects of depressive…
Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís
2013-10-25
The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ~ 1-5 are the best hope of accessing the ultra-large-scale information directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ~ 1.
Caldwell, Robert R
2011-12-28
The challenge to understand the physical origin of cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data come very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale-free above galactic scales.
Progress report for a research program in theoretical high energy physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, D.; Fried, H.M.; Jevicki, A.
This year's research has dealt with: superstrings in the early universe; invisible axion emission from SN1987A; the quartic interaction in Witten's superstring field theory; W-boson associated multiplicity and the dual parton model; cosmic strings and galaxy formation; cosmic strings and baryogenesis; quark flavor mixing; p-p̄ scattering at TeV energies; random surfaces; ordered exponentials and differential equations; initial value and back-reaction problems in quantum field theory; string field theory and Weyl invariance; the renormalization group and string field theory; the evolution of scalar fields in an inflationary universe, with and without the effects of gravitational perturbations; cosmic string catalysis of skyrmion decay; inflation and cosmic strings from dynamical symmetry breaking; the physics of flavor mixing; string-inspired cosmology; strings at high energy densities and complex temperatures; the problem of non-locality in string theory; string statistical mechanics; large-scale structures with cosmic strings and neutrinos; the delta expansion for stochastic quantization; high-energy neutrino flux from ordinary cosmic strings; a physical picture of loop bremsstrahlung; cylindrically-symmetric solutions of four-dimensional sigma models; large-scale structure with hot dark matter and cosmic strings; the unitarization of the odderon; string thermodynamics and conservation laws; the dependence of inflationary-universe models on initial conditions; the delta expansion and local gauge invariance; particle physics and galaxy formation; chaotic inflation with metric and matter perturbations; grand-unified theories, galaxy formation, and large-scale structure; neutrino clustering in cosmic-string-induced wakes; and infrared approximations to nonlinear differential equations. 17 refs.
Investigating the Role of Large-Scale Domain Dynamics in Protein-Protein Interactions.
Delaforge, Elise; Milles, Sigrid; Huang, Jie-Rong; Bouvier, Denis; Jensen, Malene Ringkjøbing; Sattler, Michael; Hart, Darren J; Blackledge, Martin
2016-01-01
Intrinsically disordered linkers provide multi-domain proteins with degrees of conformational freedom that are often essential for function. These highly dynamic assemblies represent a significant fraction of all proteomes, and deciphering the physical basis of their interactions represents a considerable challenge. Here we describe the difficulties associated with mapping the large-scale domain dynamics and describe two recent examples where solution state methods, in particular NMR spectroscopy, are used to investigate conformational exchange on very different timescales.
The New Big Science at the NSLS
NASA Astrophysics Data System (ADS)
Crease, Robert
2016-03-01
The term "New Big Science" refers to a phase shift in the kind of large-scale science carried out throughout the U.S. National Laboratory system, when large-scale materials science accelerators rather than high-energy physics accelerators became marquee projects at most major basic research laboratories in the post-Cold War era. This shift was accompanied by important changes in the character and culture of the research ecosystem at these laboratories. This talk explores some aspects of this phase shift at BNL's National Synchrotron Light Source.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
Vance, Tiffany C; Doel, Ronald E
2010-01-01
In the last quarter of the twentieth century, an innovative three-dimensional graphical technique was introduced into biological oceanography and ecology, where it spread rapidly. Used to improve scientists' understanding of the importance of scale within oceanic ecosystems, this influential diagram addressed biological scales from phytoplankton to fish, physical scales from diurnal tides to ocean currents, and temporal scales from hours to ice ages. Yet the Stommel Diagram (named for physical oceanographer Henry Stommel, who created it in 1963) had not been devised to aid ecological investigations. Rather, Stommel intended it to help plan large-scale research programs in physical oceanography, particularly as Cold War research funding enabled a dramatic expansion of physical oceanography in the 1960s. Marine ecologists utilized the Stommel Diagram to enhance research on biological production in ocean environments, a key concern by the 1970s amid growing alarm about overfishing and ocean pollution. Before the end of the twentieth century, the diagram had become a significant tool within the discipline of ecology. Tracing the path that Stommel's graphical techniques traveled from the physical to the biological environmental sciences reveals a great deal about practices in these distinct research communities and their relative professional and institutional standings in the Cold War era. Crucial to appreciating the course of that path is an understanding of the divergent intellectual and social contexts of the physical versus the biological environmental sciences.
2011-08-01
design space is large. His research contributions are to the field of Decision-based Design, specifically in linking consumer preferences and...Integrating Consumer Preferences into Engineering Design, to be published in 2012. He received his PhD from Northwestern University in Mechanical
Influence of the Biosphere on Precipitation: July 1995 Studies with the ARM-CART Data
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Mocko, D. M.; Walker, G. K.; Koster, Randal D.
2000-01-01
Ensemble sets of simulation experiments were conducted with a single column model (SCM) using the Goddard GEOS II GCM physics, containing a recent version of the cumulus scheme (McRAS) and a biosphere-based land-fluxes scheme (SSiB). The study used the 18 July to 5 August 1995 ARM-CART (Atmospheric Radiation Measurement-Cloud Atmospheric Radiation Test-bed) data, collected at the ARM-CART site in the mid-western United States and analyzed for single column modeling studies. The new findings affirm earlier findings that vegetation, which increases solar energy absorption at the surface, together with soil- and soil-moisture-dependent processes, which modulate the surface fluxes (particularly evapotranspiration), helps to increase local rainfall. In addition, the results show that for the particular study period roughly 50% of the increased evaporation over the ARM-CART site would be converted into rainfall within the column, while the remainder would be advected out to the large scale. Notwithstanding the limitation of only one-way interaction (i.e., the large scale influencing the regional physics and not vice versa), the current SCM simulations show a very robust relationship. The evaporation-precipitation relationship turns out to be independent of soil type and soil moisture; however, it is weakly dependent on vegetation cover because of its surface-albedo effect. Clearly, these inferences are subject to weaknesses of the SCM physics, the assumption that the large scale is unaffected by grid-scale (SCM-scale) changes in moist processes, and other limitations of the evaluation procedures.
Examining Chaotic Convection with Super-Parameterization Ensembles
NASA Astrophysics Data System (ADS)
Jones, Todd R.
This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant SP-CAM 2.0. The new configuration (multiple-parameterization CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean-state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, enabling examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large scale.
Dissecting the large-scale galactic conformity
NASA Astrophysics Data System (ADS)
Seo, Seongu
2018-01-01
Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos ("one-halo conformity"), and this one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though they do not share common halos ("two-halo conformity" or "large-scale conformity"). Using the cosmological hydrodynamic simulation Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and now happen to reside within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in these galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.
NASA Technical Reports Server (NTRS)
Crawford, D. A.; Barnouin-Jha, O. S.; Cintala, M. J.
2003-01-01
The propagation of shock waves through target materials is strongly influenced by the presence of small-scale structure, fractures, physical and chemical heterogeneities. Pre-existing fractures often create craters that appear square in outline (e.g. Meteor Crater). Reverberations behind the shock from the presence of physical heterogeneity have been proposed as a mechanism for transient weakening of target materials. Pre-existing fractures can also affect melt generation. In this study, we are attempting to bridge the gap in numerical modeling between the micro-scale and the continuum, the so-called meso-scale. To accomplish this, we are developing a methodology to be used in the shock physics hydrocode (CTH) using Monte-Carlo-type methods to investigate the shock properties of heterogeneous materials. By comparing the results of numerical experiments at the micro-scale with experimental results and by using statistical techniques to evaluate the performance of simple constitutive models, we hope to embed the effect of physical heterogeneity into the field variables (pressure, stress, density, velocity) allowing us to directly imprint the effects of micro-scale heterogeneity at the continuum level without incurring high computational cost.
What Sort of Girl Wants to Study Physics after the Age of 16? Findings from a Large-Scale UK Survey
ERIC Educational Resources Information Center
Mujtaba, Tamjid; Reiss, Michael J.
2013-01-01
This paper investigates the characteristics of 15-year-old girls who express an intention to study physics post-16. This paper unpacks issues around within-girl group differences and similarities between boys and girls in survey responses about physics. The analysis is based on the year 10 (age 15 years) responses of 5,034 students from 137 UK…
Charter Operators Spell Out Barriers to "Scaling Up"
ERIC Educational Resources Information Center
Zehr, Mary Ann
2011-01-01
The pace at which the highest-performing charter-management organizations (CMOs) are "scaling up" is being determined largely by how rapidly they can develop and hire strong leaders and acquire physical space, and by the level of support they receive for growth from city or state policies, say leaders from some charter organizations…
NASA Astrophysics Data System (ADS)
McMillan, Mitchell; Hu, Zhiyong
2017-10-01
Streambank erosion is a major source of fluvial sediment, but few large-scale, spatially distributed models exist to quantify streambank erosion rates. We introduce a spatially distributed model for streambank erosion applicable to sinuous, single-thread channels. We argue that such a model can adequately characterize streambank erosion rates, measured at the outsides of bends over a 2-year period, throughout a large region. The model is based on the widely used excess-velocity equation and comprises three components: a physics-based hydrodynamic model, a large-scale one-dimensional model of average monthly discharge, and an empirical bank erodibility parameterization. The hydrodynamic submodel requires inputs of channel centerline, slope, width, depth, friction factor, and a scour factor A; the large-scale watershed submodel utilizes watershed-averaged monthly outputs of the Noah-2.8 land surface model; bank erodibility is based on tree cover and bank height as proxies for root density. The model was calibrated with erosion rates measured in sand-bed streams throughout the northern Gulf of Mexico coastal plain. The calibrated model outperforms a purely empirical model, as well as a model based only on excess velocity, illustrating the utility of combining a physics-based hydrodynamic model with an empirical bank erodibility relationship. The model could be improved by incorporating spatial variability in channel roughness and the hydrodynamic scour factor, which are here assumed constant. A reach-scale application of the model is illustrated on ∼1 km of a medium-sized, mixed forest-pasture stream, where the model identifies streambank erosion hotspots on forested and non-forested bends.
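The excess-velocity structure of such a model can be sketched as follows. The functional forms and all coefficients below are hypothetical placeholders standing in for the paper's calibrated parameterization; only the overall shape (erosion proportional to excess velocity, erodibility decreasing with tree cover and increasing with bank height) follows the description above.

```python
# Sketch of an excess-velocity streambank erosion closure:
# E = k * max(u_bank - u_mean, 0), with erodibility k parameterized
# from tree cover and bank height as proxies for root density.
# All coefficients are hypothetical placeholders, not calibrated values.

def bank_erodibility(tree_cover_frac, bank_height_m,
                     k0=0.02, alpha=1.5, beta=0.3):
    """Erodibility: lower under dense tree cover (more roots),
    higher for tall banks whose toes roots cannot reach."""
    return k0 * (1.0 - tree_cover_frac) ** alpha * (1.0 + beta * bank_height_m)

def erosion_rate(u_bank, u_mean, erodibility):
    """Only near-bank velocity in excess of the reach mean erodes."""
    return erodibility * max(u_bank - u_mean, 0.0)

k_forest = bank_erodibility(tree_cover_frac=0.9, bank_height_m=1.0)
k_pasture = bank_erodibility(tree_cover_frac=0.1, bank_height_m=2.0)

e_forest = erosion_rate(1.2, 0.8, k_forest)    # forested bend (notional units)
e_pasture = erosion_rate(1.2, 0.8, k_pasture)  # pasture bend (notional units)
print(e_forest, e_pasture)
```

With identical hydraulics, the pasture bend erodes faster than the forested one purely through the erodibility term, which is the mechanism the calibration exploits.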
Future sensitivity to new physics in Bd, Bs, and K mixings
NASA Astrophysics Data System (ADS)
Charles, Jérôme; Descotes-Genon, Sébastien; Ligeti, Zoltan; Monteil, Stéphane; Papucci, Michele; Trabelsi, Karim
2014-02-01
We estimate, in a large class of scenarios, the sensitivity to new physics in Bd and Bs mixings achievable with 50 ab⁻¹ of Belle II and 50 fb⁻¹ of LHCb data. We find that current limits on new physics contributions in both Bd,s systems can be improved by a factor of ~5 for all values of the CP-violating phases, corresponding to over a factor of 2 increase in the scale of new physics probed. Assuming the same suppressions by Cabibbo-Kobayashi-Maskawa matrix elements as those of the standard model box diagrams, the scale probed will be about 20 TeV for tree-level new physics contributions, and about 2 TeV for new physics arising at one loop. We also explore the future sensitivity to new physics in K mixing. Implications for generic new physics and for various specific scenarios, such as minimal flavor violation, light third-generation-dominated flavor violation, or U(2) flavor models, are studied.
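The quoted relation between the limit and the probed scale follows from dimension-6 mixing operators contributing as 1/Λ²; a quick check of the arithmetic (the function name and the fixed-coupling assumption are ours, not the paper's):

```python
def scale_improvement(limit_improvement, operator_dimension=6):
    """Mixing amplitudes from a dimension-d operator scale as
    1/Lambda**(d - 4), so tightening the bound on the new-physics
    contribution by a factor f probes scales f**(1/(d-4)) higher."""
    power = operator_dimension - 4  # 2 for dimension-6 box-type operators
    return limit_improvement ** (1.0 / power)

# A factor ~5 tighter limit on dimension-6 operators probes scales
# sqrt(5) ~ 2.2 times higher, i.e. "over a factor of 2"
gain = scale_improvement(5)
```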
Physics of chewing in terrestrial mammals.
Virot, Emmanuel; Ma, Grace; Clanet, Christophe; Jung, Sunghwan
2017-03-07
Previous studies on chewing frequency across animal species have focused on finding a single universal scaling law. Controversy has arisen between the different models without elucidating the variations in chewing frequency. In the present study we show that vigorous chewing is limited by the maximum force of muscle, so that the upper chewing frequency scales as the -1/3 power of body mass for large animals and as a constant frequency for small animals. On the other hand, gentle chewing to mix food uniformly without excess saliva describes the lower limit of chewing frequency, scaling approximately as the -1/6 power of body mass. These physical constraints frame the -1/4 power law classically inferred from allometry of animal metabolic rates. All of our experimental data stay within these physical boundaries over six orders of magnitude of body mass, regardless of food type.
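The two bounds can be written as simple power laws; the prefactors and crossover mass below are illustrative placeholders, not the paper's fitted values:

```python
def chewing_bounds(body_mass_kg, c_upper=10.0, c_lower=2.0, crossover=1.0):
    """Upper bound (vigorous, muscle-force-limited chewing): constant for
    small animals, scaling as M**(-1/3) above a crossover mass.
    Lower bound (gentle mixing): scales as M**(-1/6).
    Returns (lower, upper) chewing frequencies in arbitrary units."""
    if body_mass_kg < crossover:
        upper = c_upper
    else:
        upper = c_upper * body_mass_kg ** (-1.0 / 3.0)
    lower = c_lower * body_mass_kg ** (-1.0 / 6.0)
    return lower, upper
```

The -1/4 metabolic power law then sits between the two envelopes over the mass range where both bounds apply.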
The Evolution of Soft Collinear Effective Theory
Lee, Christopher
2015-02-25
Soft Collinear Effective Theory (SCET) is an effective field theory of Quantum Chromodynamics (QCD) for processes in which energetic, nearly lightlike degrees of freedom interact with one another via soft radiation. SCET has found many applications in high-energy and nuclear physics, especially, in recent years, the physics of hadronic jets in e⁺e⁻, lepton-hadron, hadron-hadron, and heavy-ion collisions. SCET can be used to factorize multi-scale cross sections in these processes into single-scale hard, collinear, and soft functions, and to evolve these through the renormalization group to resum large logarithms of ratios of the scales that appear in the QCD perturbative expansion, as well as to study properties of nonperturbative effects. We overview the elementary concepts of SCET and describe how they can be applied in high-energy and nuclear physics.
Plasma physics of extreme astrophysical environments.
Uzdensky, Dmitri A; Rightley, Shane
2014-03-01
Among the incredibly diverse variety of astrophysical objects, there are some that are characterized by very extreme physical conditions not encountered anywhere else in the Universe. Of special interest are ultra-magnetized systems that possess magnetic fields exceeding the critical quantum field of about 44 TG. There are basically only two classes of such objects: magnetars, whose magnetic activity is manifested, e.g., via their very short but intense gamma-ray flares, and the central engines of supernovae (SNe) and gamma-ray bursts (GRBs), the most powerful explosions in the modern Universe. Figuring out how these complex systems work necessarily requires understanding the various plasma processes, both small-scale kinetic and large-scale magnetohydrodynamic (MHD), that govern their behavior. However, the presence of an ultra-strong magnetic field modifies the underlying basic physics to such a great extent that relying on conventional, classical plasma physics is often not justified. Instead, plasma-physical problems relevant to these extreme astrophysical environments call for constructing relativistic quantum plasma (RQP) physics based on quantum electrodynamics (QED). In this review, after briefly describing the astrophysical systems of interest and identifying some of the key plasma-physical problems important to them, we survey the recent progress in the development of such a theory. We first discuss the ways in which the presence of a super-critical field modifies the properties of vacuum and matter, and then outline the basic theoretical framework for describing both non-relativistic quantum plasmas and RQPs. We then turn to some specific astrophysical applications of relativistic QED plasma physics relevant to magnetar magnetospheres and to the central engines of core-collapse SNe and long GRBs.
Specifically, we discuss the propagation of light through a magnetar magnetosphere; large-scale MHD processes driving magnetar activity and responsible for jet launching and propagation in GRBs; energy-transport processes governing the thermodynamics of extreme plasma environments; micro-scale kinetic plasma processes important in the interaction of intense electric currents flowing through a magnetar magnetosphere with the neutron star surface; and magnetic reconnection of ultra-strong magnetic fields. Finally, we point out that future progress in applying RQP physics to real astrophysical problems will require the development of suitable numerical modeling capabilities.
Organization and scaling in water supply networks
NASA Astrophysics Data System (ADS)
Cheng, Likwan; Karney, Bryan W.
2017-12-01
Public water supply is one of society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units, but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect of similar findings in anthropogenic supplies, so far such a finding has not been reported in public water or related civic resource supplies. Examining an empirical ensemble of large number and wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish a scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.
Should we trust build-up/wash-off water quality models at the scale of urban catchments?
Bonhomme, Céline; Petrucci, Guido
2017-01-01
Models of runoff water quality at the scale of an urban catchment usually rely on build-up/wash-off formulations obtained through small-scale experiments. Often, the physical interpretation of the model parameters, valid at the small scale, is transposed to large-scale applications. Testing different levels of spatial variability, the parameter distributions of a water quality model are obtained in this paper through a Markov chain Monte Carlo algorithm and analyzed. The simulated variable is the total suspended solids concentration at the outlet of a periurban catchment in the Paris region (2.3 km²), for which high-frequency turbidity measurements are available. This application suggests that build-up/wash-off models applied at the catchment scale do not maintain their physical meaning, but should be considered as "black-box" models.
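The build-up/wash-off formulations the paper refers to are typically exponential in dry time and rainfall; a minimal sketch, with all parameter values illustrative rather than taken from the study:

```python
import math

def buildup(days_dry, b_max=50.0, k_b=0.4):
    """Exponential build-up of solids on the surface during dry weather
    (kg/ha), asymptoting to b_max as dry days accumulate."""
    return b_max * (1.0 - math.exp(-k_b * days_dry))

def washoff(mass_on_surface, rain_intensity, duration_h, k_w=0.18):
    """Exponential wash-off: the fraction of accumulated mass removed grows
    with rainfall intensity (mm/h) and event duration (h)."""
    removed_fraction = 1.0 - math.exp(-k_w * rain_intensity * duration_h)
    return mass_on_surface * removed_fraction

# Five dry days followed by a 2 h, 10 mm/h rain event
mass = buildup(5.0)
load = washoff(mass, 10.0, 2.0)
```

The paper's point is that when such parameters are calibrated at catchment scale, k_b and k_w lose the physical meaning they carry in the small-scale experiments that produced these formulas.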
a Model Study of Small-Scale World Map Generalization
NASA Astrophysics Data System (ADS)
Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.
2018-04-01
With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the realization of small-scale production of large-format global maps, is a key problem the cartographic field needs to solve. In light of this, this paper adopts an improved model (with map and data separated) for map generalization, which separates geographic data from mapping data, mainly comprising a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21845 basic algorithms and over 2500 relevant functional modules. In order to evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the map generalization process, combining and simplifying the scattered islands makes the map more explicit at 1:2.1 billion scale, and the map features more complete and accurate. Not only does this enhance map generalization at various scales significantly, but it also achieves integration among map-making at various scales, suggesting that this model provides a reference for cartographic generalization across scales.
Abbott, J Haxby; Schmitt, John
2014-08-01
Multicenter, prospective, longitudinal cohort study. To investigate the minimum important difference (MID) of the Patient-Specific Functional Scale (PSFS), 4 region-specific outcome measures, and the numeric pain rating scale (NPRS) across 3 levels of patient-perceived global rating of change in a clinical setting. The MID varies depending on the external anchor defining patient-perceived "importance." The MID for the PSFS has not been established across all body regions. One thousand seven hundred eight consecutive patients with musculoskeletal disorders were recruited from 5 physical therapy clinics. The PSFS, NPRS, and 4 region-specific outcome measures (the Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale) were assessed at the initial and final physical therapy visits. Global rating of change was assessed at the final visit. MID was calculated for the PSFS and NPRS (overall and for each body region), and for each region-specific outcome measure, across 3 levels of change defined by the global rating of change (small, medium, large change) using receiver operating characteristic curve methodology. The MID for the PSFS (on a scale from 0 to 10) ranged from 1.3 (small change) to 2.3 (medium change) to 2.7 (large change), and was relatively stable across body regions. MIDs for the NPRS (-1.5 to -3.5), Oswestry Disability Index (-12), Neck Disability Index (-14), Upper Extremity Functional Index (6 to 11), and Lower Extremity Functional Scale (9 to 16) are also reported. We reported the MID for small, medium, and large patient-perceived change on the PSFS, NPRS, Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale for use in clinical practice and research.
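The ROC-curve MID methodology amounts to picking the change-score cutpoint that best separates patients reporting improvement from those who do not. A minimal sketch, using the maximum Youden index as the selection criterion (one common choice; the study may use a different one):

```python
def mid_from_roc(change_scores, improved_flags):
    """Anchor-based MID: return the change-score cutpoint that maximizes
    the Youden index (sensitivity + specificity - 1) for classifying
    patients as 'improved' per the global-rating-of-change anchor."""
    improved = [c for c, f in zip(change_scores, improved_flags) if f]
    not_improved = [c for c, f in zip(change_scores, improved_flags) if not f]
    best_cut, best_j = None, -1.0
    for cut in sorted(set(change_scores)):
        sens = sum(c >= cut for c in improved) / len(improved)
        spec = sum(c < cut for c in not_improved) / len(not_improved)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut
```

Running this separately on the subgroups reporting small, medium, and large change yields the three MID levels the abstract reports.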
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10⁻⁵) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT-scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of ~100 Mpc for large scale structure as well as ~1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
Trompier, François; Burbidge, Christopher; Bassinet, Céline; Baumann, Marion; Bortolin, Emanuela; De Angelis, Cinzia; Eakins, Jonathan; Della Monaca, Sara; Fattibene, Paola; Quattrini, Maria Cristina; Tanner, Rick; Wieser, Albrecht; Woda, Clemens
2017-01-01
In the EC-funded project RENEB (Realizing the European Network in Biodosimetry), physical methods applied to fortuitous dosimetric materials are used to complement biological dosimetry, to increase dose assessment capacity for large-scale radiation/nuclear accidents. This paper describes the work performed to implement Optically Stimulated Luminescence (OSL) and Electron Paramagnetic Resonance (EPR) dosimetry techniques. OSL is applied to electronic components and EPR to touch-screen glass from mobile phones. To implement these new approaches, several blind tests and inter-laboratory comparisons (ILC) were organized for each assay. OSL systems have shown good performance. EPR systems also show good performance in controlled conditions, but ILC have also demonstrated that post-irradiation exposure to sunlight increases the complexity of the EPR signal analysis. Physically based dosimetry techniques offer high capacity and new possibilities for accident dosimetry, especially in the case of large-scale events. Some of the techniques applied can be considered operational (e.g. OSL on Surface Mounting Devices [SMD]) and provide a large increase in measurement capacity for existing networks. Other techniques and devices currently undergoing validation or development in Europe could lead to considerable increases in the capacity of the RENEB accident dosimetry network.
Baby, André Rolim; Santoro, Diego Monegatto; Velasco, Maria Valéria Robles; Dos Reis Serra, Cristina Helena
2008-09-01
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage comprises the integration of the previous phases of development. This phase is extremely important, since many process limitations which do not appear at the small scale become significant on transposition to a large one. Since the scientific literature presents only a few reports on the characterization of emulsified systems during scale-up, this research work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], μm) and rheology profile. Transposition occurred from a batch of 500 g to one of 50,000 g. Semi-industrial manufacturing involved distinct conditions of intensity of agitation and homogenization. Comparing the non-ionic and anionic systems, it was observed that anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. Besides that, for the concentrations tested, augmentation of the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with elevation of the emulsifier concentration, suggesting inadequate stability. The scale-up provoked more significant alterations in the rheological profile and droplet size in the anionic systems than in the non-ionic.
Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.
Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu
2009-02-06
We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.
NASA Astrophysics Data System (ADS)
Flinchum, B. A.; Holbrook, W. S.; Grana, D.; Parsekian, A.; Carr, B.; Jiao, J.
2017-12-01
Porosity is generated by chemical, physical and biological processes that work to transform bedrock into soil. The resulting porosity structure can provide specifics about these processes and can improve understanding of groundwater storage in the deep critical zone. Near-surface geophysical methods, when combined with rock physics and drilling, can be a tool to map porosity over large spatial scales. In this study, we estimate porosity in three dimensions (3D) across a 58 ha granite catchment. Observations focus on seismic refraction, downhole nuclear magnetic resonance logs, downhole sonic logs, and samples of core acquired by push coring. We use a novel petrophysical approach integrating two rock physics models, a porous medium for the saprolite and a differential effective medium for the fractured rock, that drive a Bayesian inversion to calculate porosity from seismic velocities. The inverted geophysical porosities are within about 0.05 m³/m³ of lab-measured values. We extrapolate the porosity estimates below seismic refraction lines to a 3D volume using ordinary kriging to map the distribution of porosity in 3D to depths of 80 m. This study provides a unique map of porosity on a scale never before seen in critical zone science. Estimating porosity on these large spatial scales opens the door to improving understanding of the processes that shape the deep critical zone.
ERIC Educational Resources Information Center
Jacobs, D. J.
1988-01-01
This article describes the basic physics of several types of holograms and discusses different recording materials in use. Current and possible future applications of holograms are described as well as their large-scale production. (Author)
Teaching Physics Novices at University: A Case for Stronger Scaffolding
ERIC Educational Resources Information Center
Lindstrom, Christine; Sharma, Manjula D.
2011-01-01
In 2006 a new type of tutorial, called Map Meeting, was successfully trialled with novice first year physics students at the University of Sydney, Australia. Subsequently, in first semester 2007 a large-scale experiment was carried out with 262 students who were allocated either to the strongly scaffolding Map Meetings or to the less scaffolding…
Teachers as Agents of Change in Curricular Reform: The Position of Dance Revisited
ERIC Educational Resources Information Center
MacLean, Justine
2018-01-01
This paper reports findings from a recent large-scale survey of Physical Education (PE) teachers' perceptions of teaching dance and compares them to results of a study completed 10 years previously [MacLean, J. (2007). A longitudinal study to ascertain the factors that impact on the confidence of undergraduate physical education student teachers…
Evaluating crown fire rate of spread predictions from physics-based models
C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont
2015-01-01
Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...
ERIC Educational Resources Information Center
McCaughtry, Nate; Martin, Jeffrey; Kulinna, Pamela Hodges; Cothran, Donetta
2006-01-01
The purpose of this study was to understand factors that make teacher professional development successful and what success might mean in terms of teachers' instructional practices and feelings about change. Specifically, this study focused on the impact of instructional resources on the large-scale curricular reform of 30 urban physical education…
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles to move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
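A single-node version of the spring relaxation can be sketched directly; the spring constant, iteration count, and anchor layout below are illustrative, and a simple damped force iteration stands in for the paper's full spring dynamics:

```python
import math

def lasm_localize(blind_xy, anchors, rest_lengths, k=0.2, steps=500):
    """Relax one blind node under virtual springs to its neighbors.
    Each spring's rest length is the measured distance to that anchor;
    Hooke forces pull the node until all distances match."""
    x, y = blind_xy
    for _ in range(steps):
        fx = fy = 0.0
        for (ax, ay), r in zip(anchors, rest_lengths):
            dx, dy = x - ax, y - ay
            d = math.hypot(dx, dy) or 1e-9
            f = -k * (d - r)          # Hooke force along the spring axis
            fx += f * dx / d
            fy += f * dy / d
        x += fx
        y += fy
    return x, y

# Hypothetical anchors; the measured distances are consistent with a
# true blind-node position of (1, 1), starting the relaxation at (3, 3)
est = lasm_localize((3.0, 3.0), [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
                    [2 ** 0.5, 10 ** 0.5, 10 ** 0.5])
```

In the full algorithm every node relaxes simultaneously against only its neighbors, which is what keeps the per-node cost O(1) as the network grows.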
Natural tuning: towards a proof of concept
NASA Astrophysics Data System (ADS)
Dubovsky, Sergei; Gorbenko, Victor; Mirbabayi, Mehrdad
2013-09-01
The cosmological constant problem and the absence of new natural physics at the electroweak scale, if confirmed by the LHC, may indicate either that nature is fine-tuned or that a refined notion of naturalness is required. We construct a family of toy UV-complete quantum theories providing a proof of concept for the second possibility. Low-energy physics is described by a tuned effective field theory, which exhibits relevant interactions not protected by any symmetries and separated by an arbitrarily large mass gap from the new "gravitational" physics, represented by a set of irrelevant operators. Nevertheless, the only available language to describe dynamics at all energy scales does not require any fine-tuning. The interesting novel feature of this construction is that UV physics is not described by a fixed point, but rather exhibits asymptotic fragility. Observation of additional unprotected scalars at the LHC would be a smoking gun for this scenario. Natural tuning also favors TeV-scale unification.
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high-quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10⁶ cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large-scale modelling of LWFA scenarios.
Cosmology: A research briefing
NASA Technical Reports Server (NTRS)
1995-01-01
As part of its effort to update topics dealt with in the 1986 decadal physics survey, the Board on Physics and Astronomy of the National Research Council (NRC) formed a Panel on Cosmology. The Panel produced this report, intended to be accessible to science policymakers and nonscientists. The chapters include an overview ('What Is Cosmology?'), a discussion of cosmic microwave background radiation, the large-scale structure of the universe, the distant universe, and physics of the early universe.
The large-scale effect of environment on galactic conformity
NASA Astrophysics Data System (ADS)
Sun, Shuangpeng; Guo, Qi; Wang, Lan; Lacey, Cedric G.; Wang, Jie; Gao, Liang; Pan, Jun
2018-07-01
We use a volume-limited galaxy sample from the Sloan Digital Sky Survey Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ~4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In underdense regions most neighbour galaxies tend to be active, while in overdense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.
Cosmic Rays and Gamma-Rays in Large-Scale Structure
NASA Astrophysics Data System (ADS)
Inoue, Susumu; Nagashima, Masahiro; Suzuki, Takeru K.; Aoki, Wako
2004-12-01
During the hierarchical formation of large scale structure in the universe, the progressive collapse and merging of dark matter should inevitably drive shocks into the gas, with nonthermal particle acceleration as a natural consequence. Two topics in this regard are discussed, emphasizing what important things nonthermal phenomena may tell us about the structure formation (SF) process itself. 1. Inverse Compton gamma-rays from large scale SF shocks and non-gravitational effects, and the implications for probing the warm-hot intergalactic medium. We utilize a semi-analytic approach based on Monte Carlo merger trees that treats both merger and accretion shocks self-consistently. 2. Production of ⁶Li by cosmic rays from SF shocks in the early Galaxy, and the implications for probing Galaxy formation and uncertain physics on sub-Galactic scales. Our new observations of metal-poor halo stars with the Subaru High Dispersion Spectrograph are highlighted.
NASA Astrophysics Data System (ADS)
Mummery, Benjamin O.; McCarthy, Ian G.; Bird, Simeon; Schaye, Joop
2017-10-01
We use the cosmo-OWLS and bahamas suites of cosmological hydrodynamical simulations to explore the separate and combined effects of baryon physics (particularly feedback from active galactic nuclei, AGN) and free streaming of massive neutrinos on large-scale structure. We focus on five diagnostics: (i) the halo mass function, (ii) halo mass density profiles, (iii) the halo mass-concentration relation, (iv) the clustering of haloes and (v) the clustering of matter, and we explore the extent to which the effects of baryon physics and neutrino free streaming can be treated independently. Consistent with previous studies, we find that both AGN feedback and neutrino free streaming suppress the total matter power spectrum, although their scale and redshift dependences differ significantly. The inclusion of AGN feedback can significantly reduce the masses of groups and clusters, and increase their scale radii. These effects lead to a decrease in the amplitude of the mass-concentration relation and an increase in the halo autocorrelation function at fixed mass. Neutrinos also lower the masses of groups and clusters while having no significant effect on the shape of their density profiles (thus also affecting the mass-concentration relation and halo clustering in a qualitatively similar way to feedback). We show that, with only a small number of exceptions, the combined effects of baryon physics and neutrino free streaming on all five diagnostics can be estimated to typically better than a few per cent accuracy by treating these processes independently (i.e. by multiplying their separate effects).
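The independence test in the last sentence is just a product of ratios; a minimal sketch with made-up suppression factors (the real ones are scale- and redshift-dependent):

```python
def combined_suppression(ratio_baryon, ratio_neutrino):
    """Treating AGN feedback and neutrino free streaming as independent:
    the combined ratio (e.g. P_combined / P_dark-matter-only for the
    matter power spectrum) is estimated as the product of the two
    separately measured ratios."""
    return ratio_baryon * ratio_neutrino

# e.g. 15% suppression from AGN feedback and 10% from neutrino free
# streaming combine to a predicted ratio of 0.85 * 0.90 = 0.765
est = combined_suppression(0.85, 0.90)
```

The paper's finding is that this product typically matches the jointly simulated result to better than a few per cent.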
NASA Astrophysics Data System (ADS)
Gerszewski, Daniel James
Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques.
We present several enhancements to model-reduced fluid simulation that allow improved simulation bases and two-way solid-fluid coupling. Specifically, we present a basis enrichment scheme that allows us to combine data-driven or artistically derived bases with more general analytic bases derived from Laplacian Eigenfunctions. Additionally, we handle two-way solid-fluid coupling in a time-splitting fashion---we alternately timestep the fluid and rigid body simulators, while taking into account the effects of the fluid on the rigid bodies and vice versa. We employ the vortex panel method to handle solid-fluid coupling and use dynamic pressure to compute the effect of the fluid on rigid bodies. Taken together, these contributions have advanced the state of the art in physics-based animation and are practical enough to be used in production pipelines.
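As a hedged sketch of the kind of analytic basis mentioned above, divergence-free Laplacian eigenfunctions can be written in closed form on a 2D box; the mode convention here is a common one, not necessarily the one used in the dissertation:

```python
import numpy as np

def eigenfunction_velocity(kx, ky, X, Y):
    """A divergence-free Laplacian eigenfunction velocity mode on [0, pi]^2.

    u =  ky * sin(kx*x) * cos(ky*y)
    v = -kx * cos(kx*x) * sin(ky*y)
    satisfies du/dx + dv/dy = 0 analytically, and each component is an
    eigenfunction of the Laplacian with eigenvalue -(kx^2 + ky^2).
    """
    u = ky * np.sin(kx * X) * np.cos(ky * Y)
    v = -kx * np.cos(kx * X) * np.sin(ky * Y)
    return u, v

# Sample the (2, 3) mode on a grid and check it is numerically divergence-free.
n = 256
x = np.linspace(0.0, np.pi, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = eigenfunction_velocity(2, 3, X, Y)
h = x[1] - x[0]
div = np.gradient(u, h, axis=0, edge_order=2) + np.gradient(v, h, axis=1, edge_order=2)
max_div = np.abs(div).max()
```

A reduced simulation would project the velocity field onto a truncated set of such modes and evolve the modal coefficients instead of the full grid.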
NASA Technical Reports Server (NTRS)
Chase, R.; Mcgoldrick, L.
1984-01-01
The importance of large-scale ocean movements in moderating global temperature is discussed, along with the observational requirements of physical oceanography. Satellite-based oceanographic observing systems are seen as central to oceanography in the 1990s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig
It is argued by extrapolation of general relativity and quantum mechanics that a classical inertial frame corresponds to a statistically defined observable that rotationally fluctuates due to Planck scale indeterminacy. Physical effects of exotic nonlocal rotational correlations on large scale field states are estimated. Their entanglement with the strong interaction vacuum is estimated to produce a universal, statistical centrifugal acceleration that resembles the observed cosmological constant.
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-09-29
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
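Schematically, using the standard form from the consistency-relation literature (not a formula quoted from this paper), the Eulerian relation at unequal times and its Lagrangian-space counterpart read:

```latex
% Eulerian consistency relation at unequal times; D is the linear growth
% factor and primed correlators strip the momentum-conserving delta function:
\lim_{q \to 0} \frac{1}{P(q,\eta)}
  \big\langle \delta(\mathbf{q},\eta)\,
  \delta(\mathbf{k}_1,\eta_1)\cdots\delta(\mathbf{k}_N,\eta_N) \big\rangle'
  = -\sum_{i=1}^{N} \frac{D(\eta_i)}{D(\eta)}\,
  \frac{\mathbf{k}_i\cdot\mathbf{q}}{q^2}\,
  \big\langle \delta(\mathbf{k}_1,\eta_1)\cdots\delta(\mathbf{k}_N,\eta_N) \big\rangle'
% The Lagrangian-space statement is that the suitably normalized squeezed
% correlator has no such 1/q pole at all:
\lim_{q \to 0} \frac{1}{P_L(q)}
  \big\langle \delta_L(\mathbf{q})\, \mathcal{O} \big\rangle' = 0
```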
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...
2017-09-29
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.
The power of structural modeling of sub-grid scales - application to astrophysical plasmas
NASA Astrophysics Data System (ADS)
Georgiev Vlaykov, Dimitar; Grete, Philipp
2015-08-01
In numerous astrophysical phenomena the dynamical range can span tens of orders of magnitude. This implies more than billions of degrees of freedom and precludes direct numerical simulations from ever being a realistic possibility. A physical model is necessary to capture the unresolved physics occurring at the sub-grid scales (SGS). Structural modeling is a powerful concept which renders itself applicable to various physical systems. It stems from the idea of capturing the structure of the SGS terms in the evolution equations based on the scale-separation mechanism and independently of the underlying physics. It originates in the hydrodynamics field of large-eddy simulations. We apply it to the study of astrophysical MHD. Here, we present a non-linear SGS model for compressible MHD turbulence. The model is validated a priori at the tensorial, vectorial and scalar levels against a set of high-resolution simulations of stochastically forced homogeneous isotropic turbulence in a periodic box. The parameter space spans two decades in sonic Mach number (0.2-20) and approximately one decade in magnetic Mach number (~1-8). This covers the super-Alfvenic sub-, trans-, and hyper-sonic regimes, with a range of plasma beta from 0.05 to 25. The Reynolds number is of the order of 10^3. At the tensor level, the model components correlate well with the turbulence ones, at the level of 0.8 and above. Vectorially, the alignment with the true SGS terms is encouraging, with more than 50% of the model within 30° of the data. At the scalar level we look at the dynamics of the SGS energy and cross-helicity. The corresponding SGS flux terms have median correlations of ~0.8. Physically, the model represents well the two directions of the energy cascade. In comparison, traditional functional models exhibit poor local correlations with the data already at the scalar level. Vectorially, they are indifferent to the anisotropy of the SGS terms.
They often struggle to represent the energy backscatter from small to large scales as well as the turbulent dynamo mechanism. Overall, the new model surpasses the traditional ones in all tests by a large margin.
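As a hedged one-dimensional illustration of such an a priori test (not the paper's compressible MHD setup): filter a smooth synthetic velocity field, compute the exact SGS stress, and correlate it with a gradient-type nonlinear model. All names and coefficients are illustrative.

```python
import numpy as np

def box_filter(f, w):
    """Periodic top-hat (box) filter of width w samples."""
    kernel = np.ones(w) / w
    n = len(f)
    # tile to emulate periodic boundaries, keep the middle copy
    return np.convolve(np.tile(f, 3), kernel, mode="same")[n:2 * n]

rng = np.random.default_rng(1)
n = 2048
dx = 2 * np.pi / n
x = np.arange(n) * dx
# smooth synthetic "velocity": a few random low-wavenumber Fourier modes
u = sum(rng.normal() * np.sin(k * x) + rng.normal() * np.cos(k * x)
        for k in range(1, 9))

w = 32                         # filter width in samples
Delta = w * dx                 # filter width in physical units
ub = box_filter(u, w)
tau_exact = box_filter(u * u, w) - ub * ub               # exact SGS stress
tau_model = (Delta**2 / 12.0) * np.gradient(ub, dx)**2   # gradient-type model

corr = np.corrcoef(tau_exact, tau_model)[0, 1]           # a priori correlation
```

The same pattern, applied component-wise to the full SGS stress tensor and the flux terms, is what the tensorial/vectorial/scalar comparisons in the abstract refer to.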
Successes and Challenges in Transitioning to Large Enrollment NEXUS/Physics IPLS Labs
NASA Astrophysics Data System (ADS)
Moore, Kimberly
2017-01-01
UMd-PERG's NEXUS/Physics for Life Sciences laboratory curriculum, piloted in 2012-2013 in small test classes, has been implemented in large-enrollment environments at UMD from 2013-present. These labs address physical issues at biological scales using microscopy, image and video analysis, electrophoresis, and spectroscopy in an open, non-protocol-driven environment. We have collected a wealth of data (surveys, video analysis, etc.) that enables us to get a sense of the students' responses to this curriculum in a large-enrollment environment and with teaching assistants both 'new to' and 'experienced in' the labs. In this talk, we will provide a brief overview of what we have learned, including the challenges of transitioning to large N, student perception then and now, and comparisons of our large-enrollment results to the results from our pilot study. We will close with a discussion of the acculturation of teaching assistants to this novel environment and suggestions for sustainability.
Controlling high-throughput manufacturing at the nano-scale
NASA Astrophysics Data System (ADS)
Cooper, Khershed P.
2013-09-01
Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications—materials, energy, electronics, photonics, biomedical, etc. Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12-inch wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.
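The paper discusses closed-loop control only conceptually. A minimal hedged sketch of the idea is a discrete PID loop regulating a first-order process variable; the plant model and all gains below are illustrative, not from the paper:

```python
def simulate_pid(setpoint=1.0, kp=2.0, ki=1.5, kd=0.1, dt=0.01, steps=2000):
    """Discrete PID controller driving a first-order plant dy/dt = (-y + u)/tau.

    The plant stands in for any slowly responding process variable
    (e.g., an imprint pressure); all numbers here are illustrative.
    """
    tau = 0.5
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # controller output
        prev_err = err
        y += dt * (-y + u) / tau                    # explicit Euler plant step
    return y

final = simulate_pid()
```

In a CPS setting, the measurement `y` would come from in-situ metrology and `u` would drive an actuator, with a physics-based model replacing the toy plant.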
Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model
NASA Astrophysics Data System (ADS)
Kumar, M.; Duffy, C.
2006-05-01
Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes requires less computation, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes which are dominant in different parts of the basin differ: part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front.
Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related model calibration over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation across a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.
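The abstract describes parallelization and interprocessor communication only in general terms. As a hedged, serial illustration of the core pattern (domain decomposition with one-cell halo exchange for an explicit update; names are mine, and a real implementation would exchange halos via MPI rather than array slicing):

```python
import numpy as np

def diffuse_step(u, alpha):
    """One explicit diffusion step; the two end values are held fixed."""
    out = u.copy()
    out[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return out

def diffuse_step_decomposed(u, alpha, nparts):
    """The same update computed on nparts subdomains with one-cell halos."""
    n = len(u)
    bounds = np.linspace(0, n, nparts + 1, dtype=int)
    out = np.empty_like(u)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        glo, ghi = max(lo - 1, 0), min(hi + 1, n)   # halo ("ghost") cells
        local = diffuse_step(u[glo:ghi], alpha)
        out[lo:hi] = local[lo - glo:hi - glo]       # keep only owned cells
    return out
```

Because each subdomain sees one ghost cell from each neighbor, the decomposed update reproduces the single-domain result exactly; in a distributed run, that ghost-cell copy is the interprocessor communication step.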
NASA Astrophysics Data System (ADS)
Park, Kiwan
2017-12-01
In our conventional understanding, large-scale magnetic fields are thought to originate from an inverse cascade in the presence of magnetic helicity, differential rotation or a magneto-rotational instability. However, as recent simulations have given strong indications that an inverse cascade (transfer) may occur even in the absence of magnetic helicity, the physical origin of this inverse cascade is still not fully understood. We here present two simulations of freely decaying helical and non-helical magnetohydrodynamic (MHD) turbulence. We verified the inverse transfer of helical and non-helical magnetic fields in both cases, but we found the underlying physical principles to be fundamentally different. In the former case, the helical magnetic component leads to an inverse cascade of magnetic energy. We derived a semi-analytic formula for the evolution of the large-scale magnetic field using the α coefficient and compared it with the simulation data. In the latter case, however, the α effect, like other conventional dynamo theories, is not suitable to describe the inverse transfer of non-helical magnetic energy. To obtain a better understanding of the physics at work here, we introduced a 'field structure model' based on the magnetic induction equation in the presence of inhomogeneities. This model illustrates how the curl of the electromotive force leads to the build-up of a large-scale magnetic field without the requirement of magnetic helicity. Finally, we applied a quasi-normal approximation to the inverse transfer of magnetic energy.
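The paper's semi-analytic formula is not reproduced in the abstract. For context, the α coefficient enters the standard mean-field induction equation as follows (this is the textbook form, not the paper's specific derivation):

```latex
% Standard mean-field induction equation; the alpha term sources the
% large-scale field in the helical case, eta_t is the turbulent diffusivity:
\frac{\partial \langle \mathbf{B} \rangle}{\partial t}
  = \nabla \times \left( \langle \mathbf{v} \rangle \times \langle \mathbf{B} \rangle
  + \alpha \langle \mathbf{B} \rangle
  - \eta_t \nabla \times \langle \mathbf{B} \rangle \right)
  + \eta \nabla^2 \langle \mathbf{B} \rangle
```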
NASA Astrophysics Data System (ADS)
Li, Dongqing; Guo, Hongbo; Peng, Hui; Gong, Shengkai; Xu, Huibin
2013-10-01
The cyclic oxidation behavior of Dy/Hf-doped β-NiAl coatings produced by electron beam physical vapor deposition (EB-PVD) was investigated. For the undoped NiAl coating, numerous voids were formed at the alumina scale/coating interface and large rumpling developed in the scale, leading to premature oxide spallation. The addition of Dy and Hf both improved scale adhesion, and the alumina scale grown on the NiAl-Hf coating showed better adhesion than that on the NiAl-Dy coating, although the suppressing effect on interfacial void formation and the scale rumpling resistance were stronger in the NiAl-Dy coating. It is proposed that the segregation of Dy and Hf ions at the scale/coating interfaces not only prevents interfacial sulfur segregation but may also directly enhance interfacial adhesion by participating in bonding across the interfaces, and this strengthening effect is relatively stronger for Hf ionic segregation.
Tonkin, Jonathan D.; Shah, Deep Narayan; Kuemmerlen, Mathias; Li, Fengqing; Cai, Qinghua; Haase, Peter; Jähnig, Sonja C.
2015-01-01
Little work has been done on large-scale patterns of stream insect richness in China. We explored the influence of climatic and catchment-scale factors on stream insect (Ephemeroptera, Plecoptera, Trichoptera; EPT) richness across mid-latitude China. We assessed the predictive ability of climatic, catchment land cover and physical structure variables on genus richness of EPT, both individually and combined, in 80 mid-latitude Chinese streams, spanning a 3899-m altitudinal gradient. We performed analyses using boosted regression trees and explored the nature of their influence on richness patterns. The relative importance of climate, land cover, and physical factors on stream insect richness varied considerably between the three orders, and while important for Ephemeroptera and Plecoptera, latitude did not improve model fit for any of the groups. EPT richness was linked with areas comprising high forest cover, elevation and slope, large catchments and low temperatures. Ephemeroptera favoured areas with high forest cover, medium-to-large catchment sizes, high temperature seasonality, and low potential evapotranspiration. Plecoptera richness was linked with low temperature seasonality and annual mean, and high slope, elevation and warm-season rainfall. Finally, Trichoptera favoured high elevation areas, with high forest cover, and low mean annual temperature, seasonality and aridity. Our findings highlight the variable role that catchment land cover, physical properties and climatic influences have on stream insect richness. This is one of the first studies of its kind in Chinese streams, thus we set the scene for more in-depth assessments of stream insect richness across broader spatial scales in China, but stress the importance of improving data availability and consistency through time. PMID:25909190
Scale-free networks which are highly assortative but not small world
NASA Astrophysics Data System (ADS)
Small, Michael; Xu, Xiaoke; Zhou, Jin; Zhang, Jie; Sun, Junfeng; Lu, Jun-An
2008-06-01
Uncorrelated scale-free networks are necessarily small world (and, in fact, smaller than small world). Nonetheless, for scale-free networks with correlated degree distribution this may not be the case. We describe a mechanism to generate highly assortative scale-free networks which are not small world. We show that it is possible to generate scale-free networks, with arbitrary degree exponent γ > 1, such that the average distance between nodes in the network is large. To achieve this, nodes are not added to the network with preferential attachment. Instead, we greedily optimize the assortativity of the network. The network generation scheme is physically motivated, and we show that the recently observed global network of Avian Influenza outbreaks arises through a mechanism similar to what we present here. Simulations show that this network exhibits very similar physical characteristics (very high assortativity, clustering, and path length).
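The generation scheme itself is not spelled out in the abstract. As a hedged sketch of the general idea of greedily optimizing assortativity under degree-preserving rewiring (function names and the acceptance rule below are mine, not the paper's algorithm):

```python
import random

def assortativity(edges, deg):
    """Degree assortativity: Pearson correlation of degrees over edge endpoints."""
    xs, ys = [], []
    for a, b in edges:
        xs += [deg[a], deg[b]]
        ys += [deg[b], deg[a]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def greedy_assortative_rewire(edges, deg, trials=2000, seed=0):
    """Degree-preserving double-edge swaps, accepted only if assortativity rises."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    eset = set(frozenset(e) for e in edges)
    r = assortativity(edges, deg)
    for _ in range(trials):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # propose (a, c), (b, d): preserves every node's degree
        if len({a, b, c, d}) < 4:
            continue
        if frozenset((a, c)) in eset or frozenset((b, d)) in eset:
            continue
        new = edges[:]
        new[i], new[j] = (a, c), (b, d)
        r_new = assortativity(new, deg)
        if r_new > r:
            edges, r = new, r_new
            eset = set(frozenset(e) for e in edges)
    return edges, r
```

Because the double-edge swap leaves every degree unchanged, the degree distribution (e.g., a power law) is preserved while the degree-degree correlations are driven upward.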
New Models and Methods for the Electroweak Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Linda
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space.
Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac Gaugino Models.
Facility for Antiproton and Ion Research, FAIR, at the GSI site
NASA Astrophysics Data System (ADS)
Rosner, Guenther
2006-11-01
FAIR is a new large-scale particle accelerator facility to be built at the GSI site in Germany. The research pursued at FAIR will cover a wide range of topics in nuclear and hadron physics, as well as high density plasma physics, atomic and antimatter physics, and applications in condensed matter physics and biology. The workhorse of FAIR will be a 1.1 km circumference double ring of rapidly cycling 100 and 300 Tm synchrotrons, which will be used to produce high intensity secondary beams of short-lived radioactive ions or antiprotons. A subsequent suite of cooler and storage rings will deliver heavy ion and antiproton beams of unprecedented quality. Large experimental facilities are presently being designed by the NUSTAR, PANDA, PAX, CBM, SPARC, FLAIR, HEDgeHOB and BIOMAT collaborations.
ERIC Educational Resources Information Center
Leavy, Justine E.; Rosenberg, Michael; Bauman, Adrian E.; Bull, Fiona C.; Giles-Corti, Billie; Shilton, Trevor; Maitland, Clover; Barnes, Rosanne
2013-01-01
Background: Internationally, over the last four decades large-scale mass media campaigns have been delivered to promote physical activity and its associated health benefits. In 2002-2005, the first Western Australian statewide adult physical activity campaign "Find Thirty. It's Not a Big Exercise" was launched. In 2007, a new iteration…
ERIC Educational Resources Information Center
Rushton, Gregory T.; Rosengrant, David; Dewar, Andrew; Shah, Lisa; Ray, Herman E.; Sheppard, Keith; Watanabe, Lynn
2017-01-01
Efforts to improve the number and quality of the high school physics teaching workforce have taken several forms, including those sponsored by professional organizations. Using a series of large-scale teacher demographic data sets from the National Center for Education Statistics (NCES), this study sought to investigate trends in teacher quality…
Mapping the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Manzotti, A.; Dodelson, S.
2014-12-01
On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gain when photons traverse decaying gravitational potentials of large scale structure, the so-called integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal combining measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data, showing that we can successfully reconstruct the ISW map using all the data sets together. We then present the ISW map obtained from a combination of real data: the NRAO VLA Sky Survey (NVSS) galaxy survey, and temperature anisotropies and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence from the reconstructed ISW signal in the Cold Spot region for an entirely ISW origin of this large scale anomaly in the CMB. However, a large scale structure origin from low redshift voids outside the NVSS redshift range is still possible. Finally, we show that future surveys, thanks to better large-scale lensing reconstruction, will be able to improve the signal-to-noise of the reconstruction, which currently comes mainly from galaxy surveys.
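For reference, the ISW temperature contribution is usually written as an integral of the time-varying potential along the line of sight; this is the standard textbook form (conformal Newtonian gauge, equal metric potentials, c = 1), not a formula quoted from the paper:

```latex
% ISW contribution along the line of sight n-hat, from recombination
% (conformal time eta_*) to today (eta_0):
\left.\frac{\Delta T}{T}\right|_{\mathrm{ISW}}(\hat{\mathbf{n}})
  = 2 \int_{\eta_*}^{\eta_0} \mathrm{d}\eta\,
  \frac{\partial \Phi}{\partial \eta}\big(\eta,\,(\eta_0-\eta)\hat{\mathbf{n}}\big)
```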
NASA Astrophysics Data System (ADS)
Autiero, D.; Äystö, J.; Badertscher, A.; Bezrukov, L.; Bouchez, J.; Bueno, A.; Busto, J.; Campagne, J.-E.; Cavata, Ch; Chaussard, L.; de Bellefon, A.; Déclais, Y.; Dumarchez, J.; Ebert, J.; Enqvist, T.; Ereditato, A.; von Feilitzsch, F.; Fileviez Perez, P.; Göger-Neff, M.; Gninenko, S.; Gruber, W.; Hagner, C.; Hess, M.; Hochmuth, K. A.; Kisiel, J.; Knecht, L.; Kreslo, I.; Kudryavtsev, V. A.; Kuusiniemi, P.; Lachenmaier, T.; Laffranchi, M.; Lefievre, B.; Lightfoot, P. K.; Lindner, M.; Maalampi, J.; Maltoni, M.; Marchionni, A.; Marrodán Undagoitia, T.; Marteau, J.; Meregaglia, A.; Messina, M.; Mezzetto, M.; Mirizzi, A.; Mosca, L.; Moser, U.; Müller, A.; Natterer, G.; Oberauer, L.; Otiougova, P.; Patzak, T.; Peltoniemi, J.; Potzel, W.; Pistillo, C.; Raffelt, G. G.; Rondio, E.; Roos, M.; Rossi, B.; Rubbia, A.; Savvinov, N.; Schwetz, T.; Sobczyk, J.; Spooner, N. J. C.; Stefan, D.; Tonazzo, A.; Trzaska, W.; Ulbricht, J.; Volpe, C.; Winter, J.; Wurm, M.; Zalewska, A.; Zimmermann, R.
2007-11-01
This document reports on a series of experimental and theoretical studies conducted to assess the astro-particle physics potential of three future large scale particle detectors proposed in Europe as next generation underground observatories. The proposed apparatuses employ three different and, to some extent, complementary detection techniques: GLACIER (liquid argon TPC), LENA (liquid scintillator) and MEMPHYS (water Cherenkov), based on the use of large mass of liquids as active detection media. The results of these studies are presented along with a critical discussion of the performance attainable by the three proposed approaches coupled to existing or planned underground laboratories, in relation to open and outstanding physics issues such as the search for matter instability, the detection of astrophysical neutrinos and geo-neutrinos and to the possible use of these detectors in future high intensity neutrino beams.
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. 
Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale, the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
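A hedged numerical sketch of empirical rules (ii) and (iii) above: porosity fluctuations that correlate with log-permeability yield a strongly right-skewed (lognormal-like) permeability field. All coefficients below are illustrative, not fitted to any well-log data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
# porosity fluctuations about an illustrative mean of 0.15
phi = 0.15 + 0.05 * rng.standard_normal(n)
# rule (ii): log-permeability tracks porosity, plus uncorrelated noise
log_k = 2.0 + 40.0 * (phi - 0.15) + 0.5 * rng.standard_normal(n)
# rule (iii): exponentiating makes the permeability itself lognormal-skewed
k = np.exp(log_k)

corr = np.corrcoef(phi, log_k)[0, 1]               # porosity vs log-permeability
skew = np.mean((k - k.mean()) ** 3) / k.std() ** 3  # sample skewness of k
```

The point of the sketch is the asymmetry it reproduces: porosity and log-permeability are nearly linearly related, while the permeability distribution itself develops the heavy right tail that makes a few wells or fracture systems dominate flow.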
NASA Astrophysics Data System (ADS)
Csanady, G. T.
2001-03-01
In recent years air-sea interaction has emerged as a subject in its own right, encompassing small-scale and large-scale processes in both air and sea. Air-Sea Interaction: Laws and Mechanisms is a comprehensive account of how the atmosphere and the ocean interact to control the global climate, what physical laws govern this interaction, and its prominent mechanisms. The topics covered range from evaporation in the oceans, to hurricanes, and on to poleward heat transport by the oceans. By developing the subject from basic physical (thermodynamic) principles, the book is accessible to graduate students and research scientists in meteorology, oceanography, and environmental engineering. It will also be of interest to the broader physics community involved in the treatment of transfer laws, and thermodynamics of the atmosphere and ocean.
Flood events across the North Atlantic region - past development and future perspectives
NASA Astrophysics Data System (ADS)
Matti, Bettina; Dieppois, Bastien; Lawler, Damian; Dahlke, Helen E.; Lyon, Steve W.
2016-04-01
Flood events have a large impact on humans, both socially and economically. An increase in winter and spring flooding across much of northern Europe in recent years has opened up the question of changing underlying hydro-climatic drivers of flood events. Predicting the manifestation of such changes is difficult due to the natural variability and fluctuations in northern hydrological systems caused by large-scale atmospheric circulations, especially under altered climate conditions. Improving knowledge of the complexity of these hydrological systems and their interactions with climate is essential to determine the drivers of flood events and to predict changes in these drivers under altered climate conditions. This is particularly true for the North Atlantic region, where both physical catchment properties and large-scale atmospheric circulations have a profound influence on floods. This study explores changes in streamflow across North Atlantic region catchments. An emphasis is placed on high-flow events, namely the timing and magnitude of past flood events, and selected flood percentiles were tested for stationarity by applying a flood frequency analysis. The issue of non-stationarity of flood return periods is important when linking streamflow to large-scale atmospheric circulations. Natural fluctuations in these circulations are found to have a strong influence on the outcome, causing natural variability in streamflow records. Long time series and a multi-temporal approach allow for determining drivers of floods and linking streamflow to large-scale atmospheric circulations. Exploring changes in selected hydrological signatures, consistency was found across much of the North Atlantic region, suggesting a shift in flow regime. The lack of an overall regional pattern suggests that how catchments respond to changes in climatic drivers is strongly influenced by their physical characteristics.
A better understanding of hydrological response to climate drivers is essential, for example, for forecasting purposes.
VizieR Online Data Catalog: Isolated galaxies, pairs and triplets (Argudo-Fernandez+, 2015)
NASA Astrophysics Data System (ADS)
Argudo-Fernandez, M.; Verley, S.; Bergond, G.; Duarte Puertas, S.; Ramos Carmona, E.; Sabater, J.; Fernandez, Lorenzo M.; Espada, D.; Sulentic, J.; Ruiz, J. E.; Leon, S.
2015-04-01
Catalogues of isolated galaxies, isolated pairs, and isolated triplets in the local Universe with positions, redshifts, and degrees of relation with their physical and large-scale environments. (5 data files).
NASA Astrophysics Data System (ADS)
Olvera de La Cruz, Monica
Polymer electrolytes have been particularly difficult to describe theoretically given the large number of disparate length scales involved in determining their physical properties. The Debye length, the Bjerrum length, the ion size, the chain length, and the distance between the charges along their backbones determine their structure and their response to external fields. We have developed an approach that uses multi-scale calculations with the capability of demonstrating the phase behavior of polymer electrolytes and of providing a conceptual understanding of how charge dictates nano-scale structure formation. Moreover, our molecular dynamics simulations have provided an understanding of the coupling of their conformation to their dynamics, which is crucial to design self-assembling materials, as well as to explore the dynamics of complex electrolytes for energy storage and conversion applications.
The Classroom Sandbox: A Physical Model for Scientific Inquiry
ERIC Educational Resources Information Center
Feldman, Allan; Cooke, Michele L.; Ellsworth, Mary S.
2010-01-01
For scientists, the sandbox serves as an analog for faulting in Earth's crust. Here, the large, slow processes within the crust can be scaled to the size of a table, and time scales are directly observable. This makes it a useful tool for demonstrating the role of inquiry in science. For this reason, the sandbox is also helpful for learning…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fediai, Artem, E-mail: artem.fediai@nano.tu-dresden.de; Ryndyk, Dmitry A.; Center for Advancing Electronics Dresden, TU Dresden, 01062 Dresden
2016-09-05
Using a dedicated combination of the non-equilibrium Green function formalism and large-scale density functional theory calculations, we investigated how incomplete metal coverage influences two of the most important electrical properties of carbon nanotube (CNT)-based transistors: contact resistance and its scaling with contact length, and maximum current. These quantities have been derived from parameter-free simulations of atomic systems that are as close as possible to experimental geometries. Physical mechanisms that govern these dependences have been identified for various metals, representing different CNT-metal interaction strengths from chemisorption to physisorption. Our results pave the way for an application-oriented design of CNT-metal contacts.
Universal nonlinear small-scale dynamo.
Beresnyak, A
2012-01-20
We consider astrophysically relevant nonlinear MHD dynamo at large Reynolds numbers (Re). We argue that it is universal in a sense that magnetic energy grows at a rate which is a constant fraction C(E) of the total turbulent dissipation rate. On the basis of locality bounds we claim that this "efficiency of the small-scale dynamo", C(E), is a true constant for large Re and is determined only by strongly nonlinear dynamics at the equipartition scale. We measured C(E) in numerical simulations and observed a value around 0.05 in the highest resolution simulations. We address the issue of C(E) being small, unlike the Kolmogorov constant which is of order unity. © 2012 American Physical Society
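The claimed universality amounts to a one-line growth law, dE_B/dt = C_E ε, with C_E ≈ 0.05 from the highest-resolution simulations. A minimal numerical sketch (the constant dissipation rate and the seed energy are assumptions for illustration only):

```python
import numpy as np

C_E = 0.05        # measured dynamo efficiency (value reported in the abstract)
epsilon = 1.0     # total turbulent dissipation rate, assumed constant (arbitrary units)

dt, n_steps = 0.01, 1000
E_B = np.empty(n_steps + 1)
E_B[0] = 1e-3     # weak seed magnetic energy, arbitrary

# In the nonlinear stage, magnetic energy grows LINEARLY in time:
# dE_B/dt = C_E * epsilon, independent of E_B itself (unlike the
# exponential kinematic stage).
for i in range(n_steps):
    E_B[i + 1] = E_B[i] + C_E * epsilon * dt

T = dt * n_steps
print(E_B[-1])                      # matches E_B[0] + C_E * epsilon * T
```

The point of the sketch is the smallness of C_E: at 5% efficiency, reaching equipartition takes of order 20 large-eddy turnover times rather than one, even though the growth rate is scale-independent.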
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiuchi, Shunsaku, E-mail: horiuchi@vt.edu
2016-06-21
The cold dark matter paradigm has been extremely successful in explaining the large-scale structure of the Universe. However, it continues to face issues when confronted by observations on sub-Galactic scales. A major caveat, now being addressed, has been the incomplete treatment of baryon physics. We first summarize the small-scale issues surrounding cold dark matter and discuss the solutions explored by modern state-of-the-art numerical simulations including treatment of baryonic physics. We identify the 'too big to fail' problem in field galaxies as among the best targets to study modifications to dark matter, and discuss the particular connection with sterile neutrino warm dark matter. We also discuss how the recently detected anomalous 3.55 keV X-ray lines, when interpreted as sterile neutrino dark matter decay, provide a very good description of small-scale observations of the Local Group.
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick
2017-04-01
Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding vulnerability to extreme hydrological events and for providing early warnings. It can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements are used as forcing or calibration data, or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities in remote sensing as the increasing number of spaceborne sensors enables large-scale monitoring of water resources. Alongside these advances, there is currently a tendency to refine and further complicate physically based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, since computational costs increase significantly as a result. A novel science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas.
Using the ERA-Interim public dataset as forcing and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations, in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large-scale Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70 % and root mean squared errors below 1 %. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
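For reference, the Nash-Sutcliffe efficiency quoted above compares the model's squared error against the variance of the observations. A minimal implementation, with made-up discharge series purely for illustration:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1 is a perfect fit; 0 means the model does no better than simply
    predicting the observed mean; negative values are worse than the mean.
    """
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Made-up daily discharge values (m^3/s), for illustration only.
obs = np.array([2.0, 3.5, 5.0, 9.0, 6.0, 4.0, 3.0])
sim = np.array([2.2, 3.0, 5.5, 8.5, 6.4, 4.1, 2.8])

print(round(nash_sutcliffe(obs, sim), 3))
```

An NSE above 0.7, as reported for the adapted SUPERFLEX routing, is conventionally taken as good skill for daily discharge simulation.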
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Gang
Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and on very recently developed measures, the finite amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence of extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.
Applications of large-scale density functional theory in biology
NASA Astrophysics Data System (ADS)
Cole, Daniel J.; Hine, Nicholas D. M.
2016-10-01
Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.
Large-scale tidal effect on redshift-space power spectrum in a finite-volume survey
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro; Li, Yin
2017-04-01
Long-wavelength matter inhomogeneities contain cleaner information on the nature of primordial perturbations as well as the physics of the early Universe. The large-scale coherent overdensity and tidal force, not directly observable for a finite-volume galaxy survey, are both related to the Hessian of large-scale gravitational potential and therefore are of equal importance. We show that the coherent tidal force causes a homogeneous anisotropic distortion of the observed distribution of galaxies in all three directions, perpendicular and parallel to the line-of-sight direction. This effect mimics the redshift-space distortion signal of galaxy peculiar velocities, as well as a distortion by the Alcock-Paczynski effect. We quantify its impact on the redshift-space power spectrum to the leading order, and discuss its importance for ongoing and upcoming galaxy surveys.
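The statement that the coherent overdensity and tidal field are both given by the Hessian of the long-wavelength potential can be written schematically; the notation below follows a standard convention and is not taken verbatim from the paper:

```latex
% The Hessian \partial_i \partial_j \Phi_L of the long-wavelength potential
% splits into a trace part (the coherent overdensity, via the Poisson
% equation) and a trace-free part (the coherent tidal field):
\begin{align}
  \delta_L &\propto \nabla^2 \Phi_L , \\
  \tau_{ij} &= \left( \partial_i \partial_j
               - \tfrac{1}{3}\,\delta_{ij}\,\nabla^2 \right) \Phi_L .
\end{align}
```

Since both quantities descend from the same Hessian, a survey window that filters out the long-wavelength mode removes both at once, which is why the paper treats them as being of equal importance.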
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao, E-mail: bh2478@columbia.edu, E-mail: lh399@columbia.edu, E-mail: xx2146@columbia.edu
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework
NASA Astrophysics Data System (ADS)
Jung, J.; Konor, C. S.; Randall, D. A.
2017-12-01
The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF, which has the following advances over the first-generation MMF: 1) the cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells, and are seamlessly connected to each other; 2) the CRMs sense the three-dimensional large- and cloud-scale environment; 3) two perpendicular sets of CRM channels are used; and 4) the CRMs can resolve steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and also easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we have also modified the tracer advection scheme to yield positive-definite results, and we plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.
Scaling of the Urban Water Footprint: An Analysis of 65 Mid- to Large-Sized U.S. Metropolitan Areas
NASA Astrophysics Data System (ADS)
Mahjabin, T.; Garcia, S.; Grady, C.; Mejia, A.
2017-12-01
Scaling laws have been shown to be relevant to a range of disciplines including biology, ecology, hydrology, and physics, among others. Recently, scaling was shown to be important for understanding and characterizing cities. For instance, it was found that urban infrastructure (water supply pipes and electrical wires) tends to scale sublinearly with city population, implying that large cities are more efficient. In this study, we explore the scaling of the water footprint of cities. The water footprint is a measure of water appropriation that considers both the direct and indirect (virtual) water use of a consumer or producer. Here we compute the water footprint of 65 mid- to large-sized U.S. metropolitan areas, accounting for direct and indirect water uses associated with agricultural and industrial commodities, and residential and commercial water uses. We find that the urban water footprint, computed as the sum of the water footprint of consumption and production, exhibits sublinear scaling with an exponent of 0.89. This suggests the possibility of large cities being more water-efficient than small ones. To further assess this result, we conduct additional analysis by accounting for international flows, and the effects of green water and city boundary definition on the scaling. The analysis confirms the scaling and provides additional insight about its interpretation.
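The sublinear exponent reported above is the slope of an ordinary least-squares fit in log-log space. A sketch on synthetic city data, where the true exponent and noise level are assumptions chosen to mimic the reported 0.89:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic populations for 65 hypothetical metropolitan areas,
# log-uniform between ~3x10^5 and ~3x10^7 people.
pop = 10 ** rng.uniform(5.5, 7.5, size=65)

# Assume a true scaling law W = c * pop^beta with multiplicative noise.
beta_true, c = 0.89, 3.0
footprint = c * pop ** beta_true * rng.lognormal(0.0, 0.1, size=65)

# Fit log W = log c + beta * log pop by ordinary least squares;
# np.polyfit returns [slope, intercept] for degree 1.
beta_hat, logc_hat = np.polyfit(np.log(pop), np.log(footprint), 1)

print(f"estimated exponent: {beta_hat:.2f}")  # recovers a value near 0.89
```

An exponent below 1 means that doubling a city's population less than doubles its water footprint, which is the sense in which larger cities may be more water-efficient.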
The Origin of Scales and Scaling Laws in Star Formation
NASA Astrophysics Data System (ADS)
Guszejnov, David; Hopkins, Philip; Grudich, Michael
2018-01-01
Star formation is one of the key processes of cosmic evolution, as it influences phenomena from the formation of galaxies to the formation of planets and the development of life. Unfortunately, there is no comprehensive theory of star formation, despite intense effort on both the theoretical and observational sides, due to the large amount of complicated, non-linear physics involved (e.g. MHD, gravity, radiation). A possible approach is to formulate simple, easily testable models that allow us to draw a clear connection between phenomena and physical processes. In the first part of the talk I will focus on the origin of the IMF peak, the characteristic scale of stars. There is debate in the literature about whether the initial conditions of isothermal turbulence could set the IMF peak. Using detailed numerical simulations, I will demonstrate that this is not the case: the initial conditions are "forgotten" through the fragmentation cascade. Additional physics (e.g. feedback) is required to set the IMF peak. In the second part I will use simulated galaxies from the Feedback in Realistic Environments (FIRE) project to show that most star formation theories are unable to reproduce the near-universal IMF peak of the Milky Way. Finally, I will present analytic arguments (supported by simulations) that a large number of observables (e.g. the IMF slope) are consequences of scale-free structure formation and are (to first order) unsuitable for differentiating between star formation theories.
Sandy beaches: state of the art of nematode ecology.
Maria, Tatiana F; Vanaverbeke, Jan; Vanreusel, Ann; Esteves, André M
2016-01-01
In this review, we summarize existing knowledge of the ecology of sandy-beach nematodes in relation to spatial distribution, food webs, pollution and climate change. We attempt to discuss spatial scale patterns (macro-, meso- and microscale) according to their degree of importance in structuring sandy-beach nematode assemblages. This review provides a substantial background on current knowledge of sandy-beach nematodes, and can be used as a starting point to delineate further investigations in this field. Over decades, sandy beaches have been the scene of studies focusing on community and population ecology, both related to morphodynamic models. The combination of physical factors (e.g. grain size, tidal exposure) and biological interactions (e.g. trophic relationships) is responsible for the spatial distribution of nematodes. In other words, physical factors dominate the structuring of nematode communities at large scales of distribution, while biological interactions become important at finer scales. Biological interactions have been assumed to be of minor importance because physical factors overshadow them in sandy-beach sediments; however, recent results from in-situ and ex-situ experimental investigations of behaviour and biological factors at the microscale show promise for understanding the mechanisms underlying larger-scale patterns and processes. In addition, nematodes are promising organisms for understanding the effects of pollution and climate change, although these subjects are less studied on sandy beaches than distribution patterns.
Nian, Qiong; Callahan, Michael; Saei, Mojib; Look, David; Efstathiadis, Harry; Bailey, John; Cheng, Gary J.
2015-01-01
A new method combining aqueous solution printing with UV laser crystallization (UVLC) and post-annealing is developed to deposit highly transparent and conductive aluminum-doped zinc oxide (AZO) films. This technique is able to rapidly produce large-area AZO films with better structural and optoelectronic properties than those from most high-vacuum deposition methods, suggesting a potential large-scale manufacturing technique. The optoelectronic performance improvement is attributed to the UVLC- and forming gas annealing (FMG)-induced decrease in grain boundary density and passivation of electron traps at grain boundaries. The physical model and computational simulation developed in this work could be applied to the thermal treatment of many other metal oxide films. PMID:26515670
ERIC Educational Resources Information Center
Mujtaba, Tamjid; Reiss, Michael J.
2016-01-01
This article explores how students' aspirations to study mathematics or physics in post-16 education are associated with their perceptions of their education, their motivations, and the support they feel they received. The analysis is based on the responses of around 10,000 students in England in Year 8 (age 12-13) and then in Year 10 (age 14-15).…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-12
.... lactis is a natural and indispensable component of cultured dairy processes (including yogurt, cheese and... experiments, BL1 physical containment is recommended. For large-scale fermentation experiments, the...
Computational Cosmology: From the Early Universe to the Large Scale Structure.
Anninos, Peter
2001-01-01
In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations and numerical methods applied to specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.
Computational Cosmology: from the Early Universe to the Large Scale Structure.
Anninos, Peter
1998-01-01
In order to account for the observable Universe, any comprehensive theory or model of cosmology must draw from many disciplines of physics, including gauge theories of strong and weak interactions, the hydrodynamics and microphysics of baryonic matter, electromagnetic fields, and spacetime curvature, for example. Although it is difficult to incorporate all these physical elements into a single complete model of our Universe, advances in computing methods and technologies have contributed significantly towards our understanding of cosmological models, the Universe, and astrophysical processes within them. A sample of numerical calculations addressing specific issues in cosmology are reviewed in this article: from the Big Bang singularity dynamics to the fundamental interactions of gravitational waves; from the quark-hadron phase transition to the large scale structure of the Universe. The emphasis, although not exclusively, is on those calculations designed to test different models of cosmology against the observed Universe.
From Wake Steering to Flow Control
Fleming, Paul A.; Annoni, Jennifer; Churchfield, Matthew J.; ...
2017-11-22
In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies based on new control-oriented models with new physics, which target total flow control over wake redirection, may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.
Fault-tolerant Control of a Cyber-physical System
NASA Astrophysics Data System (ADS)
Roxana, Rusu-Both; Eva-Henrietta, Dulf
2017-10-01
Cyber-physical systems represent a new, emerging field in automatic control. Fault handling is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy and raw materials, and even to environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional-order controllers for a carbon-13 (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/Simulink environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. To verify the performance and prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.
The cosmic spiderweb: equivalence of cosmic, architectural and origami tessellations.
Neyrinck, Mark C; Hidding, Johan; Konstantatou, Marina; van de Weygaert, Rien
2018-04-01
For over 20 years, the term 'cosmic web' has guided our understanding of the large-scale arrangement of matter in the cosmos, accurately evoking the concept of a network of galaxies linked by filaments. But the physical correspondence between the cosmic web and structural engineering or textile 'spiderwebs' is even deeper than previously known, and also extends to origami tessellations. Here, we explain that in a good structure-formation approximation known as the adhesion model, threads of the cosmic web form a spiderweb, i.e. can be strung up to be entirely in tension. The correspondence is exact if nodes sampling voids are included, and if structure is excluded within collapsed regions (walls, filaments and haloes), where dark-matter multistreaming and baryonic physics affect the structure. We also suggest how concepts arising from this link might be used to test cosmological models: for example, to test for large-scale anisotropy and rotational flows in the cosmos.
The cosmic spiderweb: equivalence of cosmic, architectural and origami tessellations
NASA Astrophysics Data System (ADS)
Neyrinck, Mark C.; Hidding, Johan; Konstantatou, Marina; van de Weygaert, Rien
2018-04-01
For over 20 years, the term `cosmic web' has guided our understanding of the large-scale arrangement of matter in the cosmos, accurately evoking the concept of a network of galaxies linked by filaments. But the physical correspondence between the cosmic web and structural engineering or textile `spiderwebs' is even deeper than previously known, and also extends to origami tessellations. Here, we explain that in a good structure-formation approximation known as the adhesion model, threads of the cosmic web form a spiderweb, i.e. can be strung up to be entirely in tension. The correspondence is exact if nodes sampling voids are included, and if structure is excluded within collapsed regions (walls, filaments and haloes), where dark-matter multistreaming and baryonic physics affect the structure. We also suggest how concepts arising from this link might be used to test cosmological models: for example, to test for large-scale anisotropy and rotational flows in the cosmos.
Entanglement in a Quantum Annealing Processor
2016-09-07
...that QA is a viable technology for large-scale quantum computing. DOI: 10.1103/PhysRevX.4.021041 Subject Areas: Quantum Physics, Quantum Information, Superconductivity... I. INTRODUCTION The past decade has been exciting for the field of quantum computation. A wide range of physical implementations... measurements used in studying prototype universal quantum computers [9-14]. These constraints make it challenging to experimentally determine whether a scalable
PHYSICS OF OUR DAYS: Dark energy: myths and reality
NASA Astrophysics Data System (ADS)
Lukash, V. N.; Rubakov, V. A.
2008-03-01
We discuss the questions related to dark energy in the Universe. We note that in spite of the effect of dark energy, large-scale structure is still being generated in the Universe and this will continue for about ten billion years. We also comment on some statements in the paper "Dark energy and universal antigravitation" by A D Chernin, Physics Uspekhi 51 (3) (2008).
The use of smoke acid as an alternative coagulating agent for natural rubber sheets' production.
Ferreira, Vanda S; Rêgo, Ione N C; Pastore, Floriano; Mandai, Mariana M; Mendes, Leonardo S; Santos, Karin A M; Rubim, Joel C; Suarez, Paulo A Z
2005-03-01
A comparative study of rubber sheets obtained using formic, acetic, and smoke acid as coagulants is presented for latex obtained from native Amazonian trees and also from commercial cultivated trees. The evaluation of both processes of coagulation was carried out by spectroscopic and physical-chemical analysis, showing no differences in the rubber sheets obtained. This new method of rubber sheet preparation was introduced into Amazonian rainforest rubber tapper communities, which are now producing on a large scale. The physical-mechanical properties were similar among the sheets made by different rubber tapper communities using this new method.
Facility for Antiproton and Ion Research, FAIR, at the GSI site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosner, Guenther
FAIR is a new large-scale particle accelerator facility to be built at the GSI site in Germany. The research pursued at FAIR will cover a wide range of topics in nuclear and hadron physics, as well as high-density plasma physics, atomic and antimatter physics, and applications in condensed matter physics and biology. The workhorse of FAIR will be a 1.1 km circumference double ring of rapidly cycling 100 and 300 Tm synchrotrons, which will be used to produce high-intensity secondary beams of short-lived radioactive ions or antiprotons. A subsequent suite of cooler and storage rings will deliver heavy ion and antiproton beams of unprecedented quality. Large experimental facilities are presently being designed by the NUSTAR, PANDA, PAX, CBM, SPARC, FLAIR, HEDgeHOB and BIOMAT collaborations.
Scaling and criticality in a stochastic multi-agent model of a financial market
NASA Astrophysics Data System (ADS)
Lux, Thomas; Marchesi, Michele
1999-02-01
Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.
Transmission of chirality through space and across length scales
NASA Astrophysics Data System (ADS)
Morrow, Sarah M.; Bissette, Andrew J.; Fletcher, Stephen P.
2017-05-01
Chirality is a fundamental property and vital to chemistry, biology, physics and materials science. The ability to use asymmetry to operate molecular-level machines or macroscopically functional devices, or to give novel properties to materials, may address key challenges at the heart of the physical sciences. However, how chirality at one length scale can be translated to asymmetry at a different scale is still not well understood. In this Review, we discuss systems where chiral information is translated across length scales and through space. A variety of synthetic systems involve the transmission of chiral information between the molecular-, meso- and macroscales. We show how fundamental stereochemical principles may be used to design and understand nanoscale chiral phenomena and highlight important recent advances relevant to nanotechnology. The survey reveals that while the study of stereochemistry on the nanoscale is a rich and dynamic area, our understanding of how to control and harness it to dial up specific properties is still in its infancy. The long-term goal of controlling nanoscale chirality promises to be an exciting journey, revealing insight into biological mechanisms and providing new technologies based on dynamic physical properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scales, John
The broad purpose of CSM's 6-year (3 years plus renewal) DOE project was to develop and apply new experimental physics technology to the material characterization of rocks at the grain scale or smaller. This is motivated by the knowledge that the bulk chemistry and physics of rocks are strongly influenced by processes occurring at the grain scale: the flow of fluids, cation exchange, the state of cementation of grains, and many more. It may also be possible in some cases to ``upscale'' or homogenize the mesoscopic properties of rocks in order to directly infer the large-scale properties of formations, but that is not our central goal; understanding the physics and chemistry at the small scale is. During the first 3 years, most effort was devoted to developing and validating the near-field scanning technology. During the 3-year renewal phase, most effort was focused on applying the technology in the labs of Professors Batzle (now deceased) in Geophysics and Prasad in Petroleum Engineering.
The biomechanical demands of manual scaling on the shoulders & neck of dental hygienists.
La Delfa, Nicholas J; Grondin, Diane E; Cox, Jocelyn; Potvin, Jim R; Howarth, Samuel J
2017-01-01
The purpose of this study was to evaluate the postural and muscular demands placed on the shoulders and neck of dental hygienists when performing a simulated manual scaling task. Nineteen healthy female dental hygienists performed 30-min of simulated manual scaling on a manikin head in a laboratory setting. Surface electromyography was used to monitor muscle activity from several neck and shoulder muscles, and neck and arm elevation kinematics were evaluated using motion capture. The simulated scaling task resulted in a large range of neck and arm elevation angles and excessive low-level muscular demands in the neck extensor and scapular stabilising muscles. The physical demands varied depending on the working position of the hygienists relative to the manikin head. These findings are valuable in guiding future ergonomics interventions aimed at reducing the physical exposures of dental hygiene work. Practitioner Summary: Given that this study evaluates the physical demands of manual scaling, a procedure that is fundamental to dental hygiene work, the findings are valuable to identify ergonomics interventions to reduce the prevalence of work-related injuries, disability and the potential for early retirement among this occupational group.
Tew, Garry A; Jones, Katherine; Mikocka-Walus, Antonina
2016-12-01
Limited evidence suggests that physical activity has beneficial effects in people with inflammatory bowel disease (IBD). This study aimed to determine the physical activity habits of adults with IBD, the limitations to physical activity they experience because of their disease, and the extent to which their physical activity is affected by various demographic, clinical, and psychological factors. Data were collected on 859 adult participants (52% with Crohn's disease, 75% women) through an online survey conducted between May and June 2016. Measures included physical activity (International Physical Activity Questionnaire), psychological symptoms (Hospital Anxiety and Depression Scale), fatigue (subitems of IBD fatigue scale), exercise perceptions (Exercise Benefits/Barriers Scale), and disease activity. Regression analyses were used to identify predictors of physical activity. Only 17% of respondents were categorized as "high active." Self-reported physical activity levels decreased, and fatigue and psychological scores increased, with increasing disease activity. Walking was the most common activity performed (57% of respondents) and running/jogging the most commonly avoided (34%). Many participants (n = 677) reported that IBD limited their physical activity, for reasons including abdominal/joint pain (70%), fatigue/tiredness (69%), disease flare-up (63%), and increased toilet urgency (61%). Physical activity was independently associated with depression, disease activity, and perceived barriers to exercise in people with Crohn's disease, and depression and age in people with ulcerative or indeterminate colitis (all P ≤ 0.038). This survey highlights several important factors that should be considered by designers of future physical activity interventions for people with IBD.
Automatic location of L/H transition times for physical studies with a large statistical basis
NASA Astrophysics Data System (ADS)
González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA
2012-06-01
Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step to accomplish (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. A prediction with discharges that show a clear signature in time series is made through the locating properties of the wavelet transform. It is an accurate prediction and the uncertainty interval is ±3.2 ms. The discharges with a non-clear pattern in the time series use an L/H mode classifier based on discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by the experts. The automatic methods allow performing physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
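The locating step described above can be caricatured with a minimal sketch. This is not the authors' JET pipeline; the signal shape, noise level, and window width are illustrative assumptions. A Haar-like wavelet response (difference of means over adjacent windows) peaks where the level of a time series jumps, which is the locating property exploited to time an L/H transition:

```python
import numpy as np

def locate_transition(signal, half_width=20):
    """Locate an abrupt level shift via a Haar-like wavelet response.

    The response is the difference between the mean after and before
    each sample; it is maximal in magnitude where the level jumps.
    """
    n = len(signal)
    resp = np.zeros(n)
    for i in range(half_width, n - half_width):
        resp[i] = signal[i:i + half_width].mean() - signal[i - half_width:i].mean()
    return int(np.argmax(np.abs(resp)))

# Synthetic time series: low level, then a jump at sample 500.
rng = np.random.default_rng(0)
t = np.arange(1000)
signal = np.where(t < 500, 1.0, 3.0) + 0.2 * rng.standard_normal(1000)
print(locate_transition(signal))  # expect a value near 500
```

Discharges without such a clear signature would need the classifier-based fallback described in the abstract, which this sketch does not attempt.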
Imaging detectors and electronics—a view of the future
NASA Astrophysics Data System (ADS)
Spieler, Helmuth
2004-09-01
Imaging sensors and readout electronics have made tremendous strides in the past two decades. The application of modern semiconductor fabrication techniques and the introduction of customized monolithic integrated circuits have made large-scale imaging systems routine in high-energy physics. This technology is now finding its way into other areas, such as space missions, synchrotron light sources, and medical imaging. I review current developments and discuss the promise and limits of new technologies. Several detector systems are described as examples of future trends. The discussion emphasizes semiconductor detector systems, but I also include recent developments for large-scale superconducting detector arrays.
A large hadron electron collider at CERN
Abelleira Fernandez, J. L.
2015-04-06
This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100) fb⁻¹. It will become the cleanest high resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.
Micron-scale coherence in interphase chromatin dynamics
Zidovska, Alexandra; Weitz, David A.; Mitchison, Timothy J.
2013-01-01
Chromatin structure and dynamics control all aspects of DNA biology yet are poorly understood, especially at large length scales. We developed an approach, displacement correlation spectroscopy based on time-resolved image correlation analysis, to map chromatin dynamics simultaneously across the whole nucleus in cultured human cells. This method revealed that chromatin movement was coherent across large regions (4–5 µm) for several seconds. Regions of coherent motion extended beyond the boundaries of single-chromosome territories, suggesting elastic coupling of motion over length scales much larger than those of genes. These large-scale, coupled motions were ATP dependent and unidirectional for several seconds, perhaps accounting for ATP-dependent directed movement of single genes. Perturbation of major nuclear ATPases such as DNA polymerase, RNA polymerase II, and topoisomerase II eliminated micron-scale coherence, while causing rapid, local movement to increase; i.e., local motions accelerated but became uncoupled from their neighbors. We observe similar trends in chromatin dynamics upon inducing direct DNA damage; thus we hypothesize that this may be due to DNA damage responses that physically relax chromatin and block long-distance communication of forces. PMID:24019504
Grid-Enabled High Energy Physics Research using a Beowulf Cluster
NASA Astrophysics Data System (ADS)
Mahmood, Akhtar
2005-04-01
At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. Grid is the new IT infrastructure for 21st-century science: a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in high energy physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.
NASA Astrophysics Data System (ADS)
Gomez-Velez, J. D.; Harvey, J. W.
2014-12-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.
Subgrid-scale models for large-eddy simulation of rotating turbulent flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel
2016-11-01
This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation, but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2ν_t S + μ_t (SΩ - ΩS), where the eddy viscosity ν_t is given by the minimum-dissipation model, μ_t represents a transport coefficient, S is the symmetric part of the velocity gradient, and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
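The stress model above can be evaluated pointwise once the velocity-gradient tensor is known. A minimal sketch follows; the numerical values of ν_t and μ_t are arbitrary stand-ins (in the paper, ν_t comes from the minimum-dissipation model):

```python
import numpy as np

def subgrid_stress(grad_u, nu_t, mu_t):
    """Evaluate tau = -2*nu_t*S + mu_t*(S@Omega - Omega@S) at one point.

    grad_u is the 3x3 velocity-gradient tensor; S and Omega are its
    symmetric and skew-symmetric parts. nu_t (eddy viscosity) and mu_t
    (transport coefficient) are treated here as given scalars.
    """
    S = 0.5 * (grad_u + grad_u.T)
    Omega = 0.5 * (grad_u - grad_u.T)
    return -2.0 * nu_t * S + mu_t * (S @ Omega - Omega @ S)

# Simple shear du/dx_2 = 1 as a test velocity gradient.
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = subgrid_stress(grad_u, nu_t=0.1, mu_t=0.05)
print(tau)
```

Note that both terms are symmetric and traceless for this trace-free gradient, so τ is a valid deviatoric stress; the commutator term contributes no dissipation, only transport, which is the point of the model.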
Responsibility for children's physical activity: parental, child, and teacher perspectives.
Cox, Michele; Schofield, Grant; Kolt, Gregory S
2010-01-01
Some large-scale child physical activity campaigns have focused on the concept of responsibility, however, there are no measures which establish a link between responsible behavior and physical activity levels. To provide the basis of information required for the development of relevant measurement tools, this study examined the meaning of personal, parental, and third party responsibility for children's physical activity. Eight focus groups, comprising children aged 11-12 yrs, their parents, and teachers from two upper primary schools in Auckland, New Zealand, were conducted. Children (four groups; n=32), their parents (two groups; n=13), and teachers (two groups; n=15) were separated by socio-economic status, and children also by gender. The transcripts from the focus group interviews were then analysed using thematic induction methodology. Across the groups, participants commonly identified a number of behaviors that they felt were indicative of personal, parental, and third party responsibility for children's physical activity. These behaviors formed natural groups with common themes (e.g., self-management, safety), which in most cases were not impacted on by socio-economic status or gender. Responsibility was therefore found to be a concept that could be related to children's physical activity. It was suggested that these behaviors could be used as a starting point in understanding the relationship between responsibility and physical activity, and to assist with the development of measurement tools assessing the relationship between responsibility and levels of physical activity in the future. In turn, this may lead to the development of more targeted messages for large-scale physical activity campaigns. Copyright (c) 2009 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Global Magnetohydrodynamic Modeling of the Solar Corona
NASA Technical Reports Server (NTRS)
Linker, Jon A.; Wagner, William (Technical Monitor)
2001-01-01
The solar corona, the hot, tenuous outer atmosphere of the Sun, exhibits many fascinating phenomena on a wide range of scales. One of the ways that the Sun can affect us here at Earth is through the large-scale structure of the corona and the dynamical phenomena associated with it, as it is the corona that extends outward as the solar wind and encounters the Earth's magnetosphere. The goal of our research sponsored by NASA's Supporting Research and Technology Program in Solar Physics is to develop increasingly realistic models of the large-scale solar corona, so that we can understand the underlying properties of the coronal magnetic field that lead to the observed structure and evolution of the corona. We describe the work performed under this contract.
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
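The core resampling step can be sketched in a few lines. This toy version only redraws the large-scale Fourier phases of a periodic box while keeping per-mode amplitudes; the published algorithm additionally propagates the nonlinear coupling of the new large-scale modes into the small scales, which is omitted here. The box size and cutoff are illustrative assumptions:

```python
import numpy as np

def resample_large_modes(delta, k_cut, rng):
    """Redraw the phases of Fourier modes with 0 < |k| < k_cut.

    Phases come from a fresh real white-noise field, so Hermitian
    symmetry is preserved and the output field stays real.
    """
    n = delta.shape[0]
    dk = np.fft.fftn(delta)
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    mask = (kmag > 0) & (kmag < k_cut)
    gk = np.fft.fftn(rng.standard_normal(delta.shape))
    dk[mask] = np.abs(dk[mask]) * gk[mask] / np.abs(gk[mask])
    return np.real(np.fft.ifftn(dk))

rng = np.random.default_rng(1)
field = rng.standard_normal((32, 32, 32))     # stand-in for a density cube
new_field = resample_large_modes(field, k_cut=0.5, rng=rng)
```

Because only phases are redrawn, the per-mode power of `new_field` is identical to that of `field`; a realistic use would instead draw new Gaussian amplitudes from the target power spectrum.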
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980 and Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws which were among the first to include gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to provide bounds on crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recently, Holsapple and Schmidt (1987) included results for non-porous materials and provided a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest are presented. These include results for various target media and include the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
Lix, Lisa M; Wu, Xiuyun; Hopman, Wilma; Mayo, Nancy; Sajobi, Tolulope T; Liu, Juxin; Prior, Jerilynn C; Papaioannou, Alexandra; Josse, Robert G; Towheed, Tanveer E; Davison, K Shawn; Sawatzky, Richard
2016-01-01
Self-reported health status measures, like the Short Form 36-item Health Survey (SF-36), can provide rich information about the overall health of a population and its components, such as physical, mental, and social health. However, differential item functioning (DIF), which arises when population sub-groups with the same underlying (i.e., latent) level of health have different measured item response probabilities, may compromise the comparability of these measures. The purpose of this study was to test for DIF on the SF-36 physical functioning (PF) and mental health (MH) sub-scale items in a Canadian population-based sample. Study data were from the prospective Canadian Multicentre Osteoporosis Study (CaMos), which collected baseline data in 1996-1997. DIF was tested using a multiple indicators multiple causes (MIMIC) method. Confirmatory factor analysis defined the latent variable measurement model for the item responses and latent variable regression with demographic and health status covariates (i.e., sex, age group, body weight, self-perceived general health) produced estimates of the magnitude of DIF effects. The CaMos cohort consisted of 9423 respondents; 69.4% were female and 51.7% were less than 65 years. Eight of 10 items on the PF sub-scale and four of five items on the MH sub-scale exhibited DIF. Large DIF effects were observed on PF sub-scale items about vigorous and moderate activities, lifting and carrying groceries, walking one block, and bathing or dressing. On the MH sub-scale items, all DIF effects were small or moderate in size. SF-36 PF and MH sub-scale scores were not comparable across population sub-groups defined by demographic and health status variables due to the effects of DIF, although the magnitude of this bias was not large for most items. We recommend testing and adjusting for DIF to ensure comparability of the SF-36 in population-based investigations.
Closing in on the large-scale CMB power asymmetry
NASA Astrophysics Data System (ADS)
Contreras, D.; Hutchinson, J.; Moss, A.; Scott, D.; Zibin, J. P.
2018-03-01
Measurements of the cosmic microwave background (CMB) temperature anisotropies have revealed a dipolar asymmetry in power at the largest scales, in apparent contradiction with the statistical isotropy of standard cosmological models. The significance of the effect is not very high, and is dependent on a posteriori choices. Nevertheless, a number of models have been proposed that produce a scale-dependent asymmetry. We confront several such models for a physical, position-space modulation with CMB temperature observations. We find that, while some models that maintain the standard isotropic power spectrum are allowed, others, such as those with modulated tensor or uncorrelated isocurvature modes, can be ruled out on the basis of the overproduction of isotropic power. This remains the case even when an extra isocurvature mode fully anticorrelated with the adiabatic perturbations is added to suppress power on large scales.
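A position-space dipolar power modulation of the kind being constrained can be written T(n̂) = [1 + A n̂·p̂] T_iso(n̂). A toy numerical illustration follows; the white unit-variance "sky" and the amplitude A = 0.07 are stand-ins for a full CMB simulation, used only to show how the variance splits between hemispheres:

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 20000
# Random sky directions: normalized 3D Gaussian draws are uniform on the sphere.
n = rng.standard_normal((npix, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
t_iso = rng.standard_normal(npix)           # toy isotropic Gaussian sky
A, p_hat = 0.07, np.array([0.0, 0.0, 1.0])  # modulation amplitude and direction
t_mod = (1.0 + A * (n @ p_hat)) * t_iso     # dipolar power modulation

# Power (variance) is boosted in the hemisphere along p_hat and suppressed opposite it.
north = t_mod[n[:, 2] > 0].var()
south = t_mod[n[:, 2] < 0].var()
print(north, south)
```

The point of the models discussed in the abstract is that any physical mechanism producing this modulation must not simultaneously overproduce isotropic power, which is what rules out the modulated-tensor and uncorrelated-isocurvature cases.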
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System-of-systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
NASA Astrophysics Data System (ADS)
Peruani, Fernando
2016-11-01
Bacteria, chemically-driven rods, and motility assays are examples of active (i.e. self-propelled) Brownian rods (ABR). The physics of ABR, despite their ubiquity in experimental systems, remains still poorly understood. Here, we review the large-scale properties of collections of ABR moving in a dissipative medium. We address the problem by presenting three different models, of decreasing complexity, which we refer to as model I, II, and III, respectively. Comparing model I, II, and III, we disentangle the role of activity and interactions. In particular, we learn that in two dimensions by ignoring steric or volume exclusion effects, large-scale nematic order seems to be possible, while steric interactions prevent the formation of orientational order at large scales. The macroscopic behavior of ABR results from the interplay between active stresses and local alignment. ABR exhibit, depending on where we locate ourselves in parameter space, a zoology of macroscopic patterns that ranges from polar and nematic bands to dynamic aggregates.
Large Scale Integrated Circuits for Military Applications.
1977-05-01
(U) The economic incentive for narrowing this gap is examined. Two categories of cost are analyzed: the direct life cycle cost of the integrated circuit... The dependence of these costs on the physical characteristics of the integrated circuits is discussed. (U) The economic and physical characteristics of...
Emergency Response Health Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mena, RaJah; Pemberton, Wendy; Beal, William
2012-05-01
Health physics is an important discipline for understanding the effects of radiation on human health; however, there are major differences between health physics for research or occupational safety and health physics during a large-scale radiological emergency. The deployment of a U.S. Department of Energy/National Nuclear Security Administration (DOE/NNSA) monitoring and assessment team to Japan in the wake of the March 2011 accident at the Fukushima Daiichi Nuclear Power Plant yielded a wealth of lessons on these differences. Critical teams, the CMOC (Consequence Management Outside the Continental U.S.) and the CMHT (Consequence Management Home Team), worked together to collect, compile, review, and analyze radiological data from Japan to support the response needs of, and answer questions from, the Government of Japan, the U.S. military in Japan, the U.S. Embassy and U.S. citizens in Japan, and U.S. citizens in America. This paper addresses the unique challenges presented to the health physicist or analyst of radiological data in a large-scale emergency. A key lesson learned was that public perception, and the availability of technology with social media, requires a diligent effort to keep the public informed of the science behind the decisions in a manner that is meaningful to them.
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or the communication process and introduces deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
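The retrodiction idea above can be sketched with a linear NARX-style model: past values of the measured harmonic and an exogenous input are regressed onto the next value, and the fitted model re-estimates a late-arriving (out-of-sequence) sample. All data, model orders, and coefficients below are invented for illustration; the paper's actual NARX network is a nonlinear neural model.

```python
import numpy as np

# Illustrative linear NARX-style retrodiction (hypothetical data and orders).
# y[t] is a harmonic magnitude; u[t] an exogenous input (e.g. load level).
rng = np.random.default_rng(0)
n, p, q = 200, 3, 2                      # series length, AR order, exogenous order
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.3 * u[t - 1] + 0.05 * rng.normal()

# Build the regression matrix from lagged y and u values.
rows, targets = [], []
for t in range(max(p, q), n):
    rows.append(np.concatenate([y[t - p:t], u[t - q:t]]))
    targets.append(y[t])
X, Y = np.asarray(rows), np.asarray(targets)
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Retrodict: estimate what a late-arriving sample at time t0 should have been.
t0 = 150
estimate = np.concatenate([y[t0 - p:t0], u[t0 - q:t0]]) @ coef
```

Because the predictor is fitted from data rather than a kinematic model, the same machinery applies to any measured electrical parameter with enough history.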
Generalizing a nonlinear geophysical flood theory to medium-sized river networks
Gupta, Vijay K.; Mantilla, Ricardo; Troutman, Brent M.; Dawdy, David; Krajewski, Witold F.
2010-01-01
The central hypothesis of a nonlinear geophysical flood theory postulates that, given space-time rainfall intensity for a rainfall-runoff event, solutions of coupled mass and momentum conservation differential equations governing runoff generation and transport in a self-similar river network produce spatial scaling, or a power law, relation between peak discharge and drainage area in the limit of large area. The excellent fit of a power law for the destructive flood event of June 2008 in the 32,400-km2 Iowa River basin over four orders of magnitude variation in drainage areas supports the central hypothesis. The challenge of predicting observed scaling exponent and intercept from physical processes is explained. We show scaling in mean annual peak discharges, and briefly discuss that it is physically connected with scaling in multiple rainfall-runoff events. Scaling in peak discharges would hold in a non-stationary climate due to global warming but its slope and intercept would change.
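The power-law relation between peak discharge and drainage area described above can be illustrated with a log-log fit. The data here are synthetic, not the Iowa River observations, and the exponent and intercept are assumed values.

```python
import numpy as np

# Sketch of the scaling relation Q_peak = alpha * A**theta, fitted in log-log
# space across four orders of magnitude in drainage area (synthetic data).
rng = np.random.default_rng(1)
area = np.logspace(0, 4, 50)                 # drainage areas in km^2
theta_true, alpha_true = 0.6, 2.0            # assumed scaling exponent/intercept
q_peak = alpha_true * area**theta_true * np.exp(0.1 * rng.normal(size=50))

# A straight line in log-log coordinates recovers exponent and intercept.
slope, intercept = np.polyfit(np.log(area), np.log(q_peak), 1)
theta_hat, alpha_hat = slope, np.exp(intercept)
```

The fitted slope is the scaling exponent; predicting its value from physical runoff processes is the challenge the abstract describes.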
Identifying predictors of physics item difficulty: A linear regression approach
NASA Astrophysics Data System (ADS)
Mesic, Vanes; Muratovic, Hasnija
2011-06-01
Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinarily difficult and very easy items. Knowing the factors that influence physics item difficulty makes it possible to model the item difficulty even before the first pilot study is conducted. Thus, by identifying predictors of physics item difficulty, we can improve the test-design process. Furthermore, we get additional qualitative feedback regarding the basic aspects of student cognitive achievement in physics that are directly responsible for the obtained quantitative test results. In this study, we conducted a secondary analysis of data from two large-scale assessments of student physics achievement at the end of compulsory education in Bosnia and Herzegovina. First, we explored the concept of “physics competence” and performed a content analysis of 123 physics items that were included within the above-mentioned assessments. Thereafter, an item database was created. Items were described by variables which reflect some basic cognitive aspects of physics competence. For each of the assessments, Rasch item difficulties were calculated in separate analyses. In order to make the item difficulties from different assessments comparable, a virtual test equating procedure had to be implemented. Finally, a regression model of physics item difficulty was created.
It has been shown that 61.2% of item difficulty variance can be explained by factors which reflect the automaticity, complexity, and modality of the knowledge structure that is relevant for generating the most probable correct solution, as well as by the divergence of required thinking and interference effects between intuitive and formal physics knowledge structures. Identified predictors point out the fundamental cognitive dimensions of student physics achievement at the end of compulsory education in Bosnia and Herzegovina, whose level of development influenced the test results within the conducted assessments.
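A minimal sketch of such a difficulty-regression model, with invented binary item features standing in for the study's cognitive predictors (automaticity, complexity, modality, etc.); the data and weights are hypothetical, chosen only to show how an explained-variance figure like 61.2% arises from ordinary least squares.

```python
import numpy as np

# Hypothetical sketch: regress Rasch item difficulties on binary content
# features. Feature names, data, and weights are invented for illustration.
rng = np.random.default_rng(2)
n_items = 123
features = rng.integers(0, 2, size=(n_items, 5)).astype(float)
weights = np.array([-0.8, 0.9, 0.5, 0.7, -0.4])     # assumed "true" effects
difficulty = features @ weights + 0.6 * rng.normal(size=n_items)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_items), features])
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
resid = difficulty - X @ beta
r_squared = 1 - resid.var() / difficulty.var()       # explained variance
```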
Large-Scale Aerosol Modeling and Analysis
2010-09-30
“Application of Earth Sciences Products” supports improvements in NAAPS physics and model initialization. The implementation of NAAPS, NAVDAS-AOD, FLAMBE ... Forecasting of Biomass-Burning Smoke: Description of and Lessons From the Fire Locating and Modeling of Burning Emissions (FLAMBE) Program, IEEE Journal of ...
Texas Symposium on Relativistic Astrophysics, 11th, Austin, TX, December 12-17, 1982, Proceedings
NASA Technical Reports Server (NTRS)
Evans, D. S. (Editor)
1984-01-01
Various papers on relativistic astrophysics are presented. The general subjects addressed include: particle physics and astrophysics, general relativity, large-scale structure, big bang cosmology, new-generation telescopes, pulsars, supernovae, high-energy astrophysics, and active galaxies.
Tampa Bay Ecosystem Services webpage
Public website describing research on the large-scale physical, chemical, and biological dynamics of coastal wetlands and estuaries, with emphasis on the Gulf of Mexico. Hyperlinks direct users to mapped ecosystem services of interest and value to Tampa Bay area residents, and i...
ERIC Educational Resources Information Center
Platten, Marvin R.; Williams, Larry R.
1981-01-01
This study largely replicates the findings of a previous study reported by the authors. Further research involving the physical dimension as a possible facet of general self-concept is suggested. (Author/BW)
SUMMARY OF SOLIDIFICATION/STABILIZATION SITE DEMONSTRATIONS AT UNCONTROLLED HAZARDOUS WASTE SITES
Four large-scale solidification/stabilization demonstrations have occurred under EPA's SITE program. In general, physical testing results have been acceptable. Reduction in metal leachability, as determined by the TCLP test, has been observed. Reduction in organic leachability ha...
Bayesian analysis of the dynamic cosmic web in the SDSS galaxy survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclercq, Florent; Wandelt, Benjamin; Jasche, Jens
Recent application of the Bayesian algorithm BORG to the Sloan Digital Sky Survey (SDSS) main sample galaxies resulted in the physical inference of the formation history of the observed large-scale structure from its origin to the present epoch. In this work, we use these inferences as inputs for a detailed probabilistic cosmic web-type analysis. To do so, we generate a large set of data-constrained realizations of the large-scale structure using a fast, fully non-linear gravitational model. We then perform a dynamic classification of the cosmic web into four distinct components (voids, sheets, filaments, and clusters) on the basis of the tidal field. Our inference framework automatically and self-consistently propagates typical observational uncertainties to web-type classification. As a result, this study produces accurate cosmographic classification of large-scale structure elements in the SDSS volume. By also providing the history of these structure maps, the approach allows an analysis of the origin and growth of the early traces of the cosmic web present in the initial density field and of the evolution of global quantities such as the volume and mass filling fractions of different structures. For the problem of web-type classification, the results described in this work constitute the first connection between theory and observations at non-linear scales that includes a physical model of structure formation and a demonstrated capability of uncertainty quantification. A connection between cosmology and information theory using real data also emerges naturally from our probabilistic approach. Our results constitute a quantitative chrono-cosmography of the complex web-like patterns underlying the observed galaxy distribution.
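The tidal-field web classification mentioned above is commonly implemented by counting how many eigenvalues of the tidal tensor exceed a threshold: zero gives a void, one a sheet, two a filament, three a cluster. A toy sketch follows; the tensor and threshold are illustrative values, not BORG-inferred fields.

```python
import numpy as np

# Toy T-web classifier: count tidal-tensor eigenvalues above lambda_th.
def classify_web(tidal_tensor, lambda_th=0.0):
    labels = ["void", "sheet", "filament", "cluster"]
    eigenvalues = np.linalg.eigvalsh(tidal_tensor)   # symmetric 3x3 tensor
    return labels[int(np.sum(eigenvalues > lambda_th))]

# A tensor with two positive eigenvalues corresponds to a filament:
# collapse along two axes, expansion along the third.
T = np.diag([0.5, 0.2, -0.3])
kind = classify_web(T)
```

In the probabilistic setting of the paper, this classification is repeated over many data-constrained realizations, so each volume element receives a posterior probability for each of the four web types rather than a single label.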
Impact of large-scale dynamics on the microphysical properties of midlatitude cirrus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlbauer, Andreas; Ackerman, Thomas P.; Comstock, Jennifer M.
2014-04-16
In situ microphysical observations of mid-latitude cirrus collected during the Department of Energy Small Particles in Cirrus (SPARTICUS) field campaign are combined with an atmospheric state classification for the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site to understand statistical relationships between cirrus microphysics and the large-scale meteorology. The atmospheric state classification is informed about the large-scale meteorology and state of cloudiness at the ARM SGP site by combining ECMWF ERA-Interim reanalysis data with 14 years of continuous observations from the millimeter-wavelength cloud radar. Almost half of the cirrus cloud occurrences in the vicinity of the ARM SGP site during SPARTICUS can be explained by three distinct synoptic conditions, namely upper-level ridges, mid-latitude cyclones with frontal systems, and subtropical flows. Probability density functions (PDFs) of cirrus microphysical properties such as particle size distributions (PSDs), ice number concentrations, and ice water content (IWC) are examined and exhibit striking differences among the different synoptic regimes. Generally, narrower PSDs with lower IWC but higher ice number concentrations are found in cirrus sampled in upper-level ridges, whereas cirrus sampled in subtropical flows, fronts, and aged anvils show broader PSDs with considerably lower ice number concentrations but higher IWC. Despite striking contrasts in the cirrus microphysics for different large-scale environments, the PDFs of vertical velocity are not different, suggesting that vertical velocity PDFs are a poor predictor for explaining the microphysical variability in cirrus. Instead, cirrus microphysical contrasts may be driven by differences in ice supersaturation or aerosols.
A CRITICAL ASSESSMENT OF BIODOSIMETRY METHODS FOR LARGE-SCALE INCIDENTS
Swartz, Harold M.; Flood, Ann Barry; Gougelet, Robert M.; Rea, Michael E.; Nicolalde, Roberto J.; Williams, Benjamin B.
2014-01-01
Recognition is growing regarding the possibility that terrorism or large-scale accidents could result in potential radiation exposure of hundreds of thousands of people, and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtainable in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry’s ability to determine which individuals did not receive a significant exposure, so they can be removed from the acute response system. Second, biodosimetry’s capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry’s ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of obtaining information, and accuracy) to determine which are most useful in meeting patients’ needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three stages, and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of these properties for meeting the needs of the different stages will result in their most effective use in a nuclear disaster mass-casualty event. PMID:20065671
Ng, Jonathan; Huang, Yi-Min; Hakim, Ammar; ...
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment
NASA Astrophysics Data System (ADS)
Ritsch, E.; Atlas Collaboration
2014-06-01
The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production currently consumes the largest share of the computing resources in use by ATLAS. In this document we describe plans to overcome the computing resource limitations for large-scale Monte Carlo production in the ATLAS experiment for Run 2 and beyond. A number of fast detector simulation, digitization, and reconstruction techniques are discussed, based upon a new flexible detector simulation framework. To benefit optimally from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.
NASA Technical Reports Server (NTRS)
Givi, Peyman; Jaberi, Farhad A.
2001-01-01
The basic objective of this work is to assess the influence of gravity on the compositional and spatial structures of transitional and turbulent diffusion flames via large eddy simulation (LES) and direct numerical simulation (DNS). The DNS is conducted for appraisal of the various closures employed in LES, and to study the effect of buoyancy on small-scale flow features. The LES is based on our "filtered mass density function" (FMDF) model. The novelty of the methodology is that it allows for reliable simulations with inclusion of realistic physics. It also allows for detailed analysis of the unsteady large-scale flow evolution and compositional flame structure, which is not usually possible via Reynolds-averaged simulations.
Physical and human dimensions of deforestation in Amazonia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skole, D.L.; Chomentowski, W.H.; Salas, W.A.
1994-05-01
In the Brazilian Amazon, regional trends are influenced by large scale external forces but mediated by local conditions. Tropical deforestation has a large influence on global hydrology, climate and biogeochemical cycles, but understanding is inadequate because of a lack of accurate measurements of rate, geographic extent and spatial patterns and lack of insight into its causes including interrelated social, economic and environmental factors. This article proposes an interdisciplinary approach for analyzing tropical deforestation in the Brazilian Amazon. The first part shows how deforestation can be measured from satellite remote sensing and sociodemographic and economic data. The second part proposes an explanatory model, considering the relationship among deforestation and large scale social, economic, and institutional factors. 43 refs., 8 figs.
Numerical Investigation of Dual-Mode Scramjet Combustor with Large Upstream Interaction
NASA Technical Reports Server (NTRS)
Mohieldin, T. O.; Tiwari, S. N.; Reubush, David E. (Technical Monitor)
2004-01-01
A dual-mode scramjet combustor configuration with significant upstream interaction is investigated numerically. The possibility of scaling the domain to accelerate convergence and reduce the computational time is explored. The supersonic combustor configuration was selected to provide an understanding of key features of upstream interaction and to identify physical and numerical issues relating to the modeling of dual-mode configurations. The numerical analysis was performed with vitiated air at a freestream Mach number of 2.5 using hydrogen as the sonic injectant. Results are presented for two-dimensional models and a three-dimensional jet-to-jet symmetric geometry. Comparisons are made with experimental results. Two-dimensional and three-dimensional results show a substantial oblique shock train reaching upstream of the fuel injectors. Flow characteristics slow numerical convergence, while the upstream interaction slowly increases with further iterations. As the flow field develops, the symmetry assumption breaks down. A large separation zone develops and extends further upstream of the step. This asymmetric flow structure is not seen in the experimental data. Results obtained using a sub-scale domain (both two-dimensional and three-dimensional) qualitatively recover the flow physics obtained from full-scale simulations. All results show that numerical modeling using a scaled geometry provides good agreement with full-scale numerical results and experimental results for this configuration. This study supports the argument that numerical scaling is useful in simulating dual-mode scramjet combustor flowfields and could provide an excellent convergence-acceleration technique for dual-mode simulations.
NASA Astrophysics Data System (ADS)
Webb, James R.
2016-09-01
This book is intended to be a course about the creation and evolution of the universe at large, including the basic macroscopic building blocks (galaxies) and the overall large-scale structure. This text covers a broad range of topics for a graduate-level class in a physics department where students' available credit hours for astrophysics classes are limited. The sections cover galactic structure, external galaxies, galaxy clustering, active galaxies, general relativity and cosmology.
Separating Dark Physics from Physical Darkness: Minimalist Modified Gravity vs. Dark Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huterer, Dragan; Linder, Eric V.
The acceleration of the cosmic expansion may be due to a new component of physical energy density or a modification of physics itself. Mapping the expansion of cosmic scales and the growth of large scale structure in tandem can provide insights to distinguish between the two origins. Using Minimal Modified Gravity (MMG) - a single parameter gravitational growth index formalism to parameterize modified gravity theories - we examine the constraints that cosmological data can place on the nature of the new physics. For next generation measurements combining weak lensing, supernovae distances, and the cosmic microwave background we can extend the reach of physics to allow for fitting gravity simultaneously with the expansion equation of state, diluting the equation of state estimation by less than 25% relative to when general relativity is assumed, and determining the growth index to 8%. For weak lensing we examine the level of understanding needed of quasi- and nonlinear structure formation in modified gravity theories, and the trade-off between stronger precision but greater susceptibility to bias as progressively more nonlinear information is used.
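The growth-index formalism can be sketched numerically: the linear growth rate is modeled as f(a) = Ω_m(a)^γ, where γ ≈ 0.55 reproduces general relativity for a ΛCDM background and modified-gravity theories shift γ. The cosmological parameters below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Growth-index sketch for a flat universe with a cosmological constant
# (assumed Omega_m0 = 0.3; gamma ~ 0.55 corresponds to general relativity).
def omega_m(a, om0=0.3):
    # Matter fraction as a function of scale factor a.
    return om0 / (om0 + (1.0 - om0) * a**3)

def growth_rate(a, gamma=0.55, om0=0.3):
    # f(a) = dlnD/dlna modeled as Omega_m(a)**gamma.
    return omega_m(a, om0) ** gamma

def growth_factor(a_end, gamma=0.55, om0=0.3, n=4000):
    # D(a_end)/D(a_init) = exp( integral of f dln a ), trapezoid on a log grid.
    lna = np.linspace(np.log(1e-3), np.log(a_end), n)
    f = growth_rate(np.exp(lna), gamma, om0)
    return np.exp(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lna)))
```

Varying γ at fixed expansion history changes D(a) but not the distances, which is what lets growth data separate modified gravity from dark energy.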
REVIEW ARTICLE: How will physics be involved in silicon microelectronics
NASA Astrophysics Data System (ADS)
Kamarinos, Georges; Felix, Pierre
1996-03-01
By the year 2000 electronics will probably be the basis of the largest industry in the world. Silicon microelectronics will continue to keep a dominant place, covering 99% of the 'semiconductor market'. The aim of this review article is to indicate, for the next decade, the domains in which research work in 'physics' is needed for a technological advance towards increasing speed, complexity and density of silicon ultra large scale integration (ULSI) integrated circuits (ICs). By 'physics' we mean here not only condensed matter physics but also basic physical chemistry and thermodynamics. The review begins with a brief and general introduction in which we elucidate the current state of the art and the trends in silicon microelectronics. Afterwards we examine the involvement of physics in silicon microelectronics in two main sections. The first section concerns the processes of fabrication of ICs: lithography, oxidation, diffusion, chemical and physical vapour deposition, rapid thermal processing, etching, interconnections, ultra-clean processing and microcontamination. The second section concerns the electrical operation of ULSI devices. It defines the integration scales and points out the importance of the intermediate scale of integration, which is the scale of the next generation of ICs. The emergence of cryomicroelectronics is also reviewed, and an extended paragraph is dedicated to the problem of reliability and ageing of devices and ICs: hot carrier degradation, interdevice coupling and noise are considered. It is shown, during our analysis, that the next generation of silicon ICs needs mainly: (i) 'scientific' fabrication and (ii) microscopic modelling and simulation of the electrical characteristics of the scaled-down devices. To attain the above objectives, a return to the 'first principles' of physics as well as a recourse to nonlinear and non-equilibrium thermodynamics are mandatory.
In the references we list numerous review papers and references of specialized colloquia proceedings so that a more detailed survey of the subject is possible for the reader.
NASA Technical Reports Server (NTRS)
Scholl, R. E. (Editor)
1979-01-01
Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.
Advanced spacecraft: What will they look like and why
NASA Technical Reports Server (NTRS)
Price, Humphrey W.
1990-01-01
The next century of spaceflight will witness an expansion in the physical scale of spacecraft, from the extreme of the microspacecraft to the very large megaspacecraft. This will respectively spawn advances in highly integrated and miniaturized components, and also advances in lightweight structures, space fabrication, and exotic control systems. Challenges are also presented by the advent of advanced propulsion systems, many of which require controlling and directing hot plasma, dissipating large amounts of waste heat, and handling very high radiation sources. Vehicle configuration studies for a number of these types of advanced spacecraft were performed, and some of them are presented along with the rationale for their physical layouts.
Late time cosmological phase transitions 1: Particle physics models and cosmic evolution
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Hill, Christopher T.; Watkins, Richard
1991-01-01
We describe a natural particle physics basis for late-time phase transitions in the universe. Such a transition can seed the formation of large-scale structure while leaving a minimal imprint upon the microwave background anisotropy. The key ingredient is an ultra-light pseudo-Nambu-Goldstone boson with an astronomically large (O(kpc-Mpc)) Compton wavelength. We analyze the cosmological signatures of, and constraints upon, a wide class of scenarios which do not involve domain walls. In addition to seeding structure, coherent ultra-light bosons may also provide unclustered dark matter in a spatially flat universe, Ω_φ ≈ 1.
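An order-of-magnitude check of the quoted Compton wavelength, λ_C = ħ/(mc), expressed in kiloparsecs; the specific boson mass used below is hypothetical, chosen only to land in the kpc range the abstract cites.

```python
# Compton wavelength of an ultra-light boson in kiloparsecs.
HBAR_C_EV_M = 1.973269804e-7   # hbar*c in eV·m
KPC_M = 3.0857e19              # metres per kiloparsec

def compton_wavelength_kpc(mass_ev):
    # lambda_C = hbar / (m c) = (hbar c) / (m c^2), with m c^2 in eV.
    return HBAR_C_EV_M / mass_ev / KPC_M

lam = compton_wavelength_kpc(1e-26)   # hypothetical mass of 1e-26 eV
```

A mass around 1e-26 eV gives a sub-kiloparsec wavelength, confirming that astronomically large Compton wavelengths require such extraordinarily light bosons.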
University of Arizona High Energy Physics Program at the Cosmic Frontier 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abate, Alex; Cheu, Elliott
This is the final technical report from the University of Arizona High Energy Physics program at the Cosmic Frontier covering the period 2014-2016. The work aims to advance the understanding of dark energy using the Large Synoptic Survey Telescope (LSST). Progress on the engineering design of the power supplies for the LSST camera is discussed. A variety of contributions to photometric redshift measurement uncertainties were studied. The effect of the intergalactic medium on the photometric redshift of very distant galaxies was evaluated. Computer code was developed realizing the full chain of calculations needed to accurately and efficiently run large-scale simulations.
NASA Astrophysics Data System (ADS)
Stallard, R. F.
2011-12-01
The importance of biological processes in controlling weathering, erosion, stream-water composition, soil formation, and overall landscape development is generally accepted. The U.S. Geological Survey (USGS) Water, Energy, and Biogeochemical Budgets (WEBB) Project in eastern Puerto Rico and Panama and the Smithsonian Tropical Research Institute (STRI) Panama Canal Watershed Experiment (PCWE) are landscape-scale studies based in the humid tropics, where warm temperatures, moist conditions, and luxuriant vegetation promote especially rapid biological and chemical processes - photosynthesis, respiration, decay, and chemical weathering. In both studies, features of small-watershed, large-watershed, and landscape-scale biology experiments are blended to satisfy the research needs of the physical and biological sciences. The WEBB Project has successfully synthesized its first fifteen years of data, and has addressed the influence of land cover and of geologic, topographic, and hydrologic variability, including huge storms, on a wide range of hydrologic, physical, and biogeochemical processes. The ongoing PCWE should provide a similar synthesis of a moderate-sized humid tropical watershed. The PCWE and the Agua Salud Project (ASP) within the PCWE are now addressing the role of land cover (mature forests, pasture, invasive-grass dominated, secondary succession, native species plantation, and teak) at scales ranging from small watersheds to the whole Panama Canal watershed. Biologists have participated in the experimental design at both watershed scales, and small (0.1 ha) to large (50 ha) forest-dynamic plots have a central role in interfacing between physical scientists and biologists.
In these plots, repeated, high-resolution mapping of all woody plants greater than 1-cm diameter provides a description of population changes through time presumably reflecting individual life histories, interactions with other organisms and the influence of landscape processes and climate, thereby bridging the research needs and conceptual scales of hydrologists and biogeochemists with those of biologists. Both experiments are embedded in larger data-collection networks: the WEBB within the hydrological and meteorological monitoring programs of the USGS and other federal agencies, and the PCWE in the long-term monitoring conducted by the Panama Canal Authority (ACP), its antecedents, and STRI. Examination of landscape-scale processes in a changing world requires the development of detailed landscape-scale data sets, including a formulation of reference states that can act as surrogate experimental controls. For example, the concept of a landscape steady state provides a convenient reference in which present-day observations can be interpreted. Extreme hydrological states must also be described, and both WEBB and PCWE have successfully examined the role of droughts and large storms and their impact on geomorphology, biogeochemistry, and biology. These experiments also have provided platforms for research endeavors never contemplated in the original objectives, a testament to the importance of developing approaches that consider the needs of physical and biological sciences.
Multiscale approach to the physics of radiation damage with ions
NASA Astrophysics Data System (ADS)
Surdutovich, Eugene; Solov'yov, Andrey V.
2013-04-01
We review a multiscale approach to the physics of ion-beam cancer therapy, an approach suggested in order to understand the interplay of a large number of phenomena involved in the radiation-damage scenario, occurring over a range of temporal, spatial, and energy scales. We briefly review its history and present the current stage of its development. The differences between the multiscale approach and other methods of understanding and assessment of radiation damage are discussed, as well as its relationship to other branches of physics, chemistry, and biology.
Describing Ecosystem Complexity through Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs to a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured to produce scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.
Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half, relative to the dim half, has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X and Fe XII). From these results we infer that: 1) the heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux; 2) the production of network coronal bright points and the heating of the coronal network each increase nonlinearly with the magnetic flux; 3) the heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network; and 4) the large-scale corona is heated by a nonthermal process, since the driver of its heating is cooler than it is.
This work was funded by the Solar Physics Branch of NASA's office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
2:1 for naturalness at the LHC?
NASA Astrophysics Data System (ADS)
Arkani-Hamed, Nima; Blum, Kfir; D'Agnolo, Raffaele Tito; Fan, JiJi
2013-01-01
A large enhancement of a factor of 1.5 - 2 in Higgs production and decay in the diphoton channel, with little deviation in the ZZ channel, can only plausibly arise from a loop of new charged particles with large couplings to the Higgs. We show that, allowing only new fermions with marginal interactions at the weak scale, the required Yukawa couplings for a factor of 2 enhancement are so large that the Higgs quartic coupling is pushed to large negative values in the UV, triggering an unacceptable vacuum instability far beneath the 10 TeV scale. An enhancement by a factor of 1.5 can be accommodated if the charged particles are lighter than 150 GeV, within reach of discovery in almost all cases in the 8 TeV run at the LHC, and in even the most difficult cases at 14 TeV. Thus if the diphoton enhancement survives further scrutiny, and no charged particles beneath 150 GeV are found, there must be new bosons far beneath the 10 TeV scale. This would unambiguously rule out a large class of fine-tuned theories for physics beyond the Standard Model, including split SUSY and many of its variants, and provide strong circumstantial evidence for a natural theory of electroweak symmetry breaking at the TeV scale. Alternately, theories with only a single fine-tuned Higgs and new fermions at the weak scale, with no additional scalars or gauge bosons up to a cutoff much larger than the 10 TeV scale, unambiguously predict that the hints for a large diphoton enhancement in the current data will disappear.
Feshbach Prize: New Phenomena and New Physics from Strongly-Correlated Quantum Matter
NASA Astrophysics Data System (ADS)
Carlson, Joseph A.
2017-01-01
Strongly correlated quantum matter is ubiquitous in physics, from cold atoms to nuclei to the cold dense matter found in neutron stars. Experiments ranging from table-top setups to extremely large-scale facilities, including FRIB and LIGO, will help determine the properties of matter across an incredible range of distances and energies. Questions to be addressed include the existence of exotic states of matter in cold atoms and nuclei, the response of this correlated matter to external probes, and the behavior of matter in extreme astrophysical environments. A more complete understanding is required, both to understand these diverse phenomena and to employ this understanding to probe for new underlying physics in experiments including neutrinoless double beta decay and accelerator neutrino experiments. I will summarize some aspects of our present understanding and highlight several important prospects for the future.
Pioneering University/Industry Venture Explores VLSI Frontiers.
ERIC Educational Resources Information Center
Davis, Dwight B.
1983-01-01
Discusses industry-sponsored programs in semiconductor research, focusing on Stanford University's Center for Integrated Systems (CIS). CIS, while pursuing research in semiconductor very-large-scale integration, is merging the fields of computer science, information science, and physical science. Issues related to these university/industry…
Lai, Hsien-Tang; Kung, Pei-Tseng; Su, Hsun-Pi; Tsai, Wen-Chen
2014-09-01
Limited studies with large samples have been conducted on the utilization of dental calculus scaling among people with physical or mental disabilities. This study aimed to investigate the utilization of dental calculus scaling among the national disabled population, using nationwide data from 2006 to 2008. Descriptive analysis and logistic regression were performed to identify factors influencing dental calculus scaling utilization. The dental calculus scaling utilization rate among people with physical or mental disabilities was 16.39%, and the annual utilization frequency was 0.2 times. The utilization rate was higher among the female and non-aboriginal samples. The utilization rate decreased with increased age and disability severity, while it increased with income, education level, urbanization of residential area, and number of chronic illnesses. Gender, age, ethnicity (aboriginal or non-aboriginal), education level, urbanization of residential area, income, catastrophic illnesses, chronic illnesses, disability type, and disability severity all significantly influenced the dental calculus scaling utilization rate. Copyright © 2014 Elsevier Ltd. All rights reserved.
Walker, David; Ellaway, Anne
2018-01-01
Background Large-scale primary data collections are complex, costly, and time-consuming. Study protocols for trial-based research are now commonplace, with a growing number of similar pieces of work being published on observational research. However, useful additions to the literature base are publications that describe the issues and challenges faced while conducting observational studies. These can provide researchers with insightful knowledge that can inform funding proposals or project development work. Objectives In this study, we identify and reflectively discuss the unforeseen or often unpublished issues associated with organizing and implementing a large-scale objectively measured physical activity and global positioning system (GPS) data collection. Methods The SPACES (Studying Physical Activity in Children’s Environments across Scotland) study was designed to collect objectively measured physical activity and GPS data from 10- to 11-year-old children across Scotland, using a postal delivery method. The 3 main phases of the project (recruitment, delivery of project materials, and data collection and processing) are described within a 2-stage framework: (1) intended design and (2) implementation of the intended design. Results Unanticipated challenges arose, which influenced the data collection process; these encompass four main impact categories: (1) cost, budget, and funding; (2) project timeline; (3) participation and engagement; and (4) data challenges. The main unforeseen issues that impacted our timeline included the informed consent process for children under the age of 18 years; the use of, and coordination with, the postal service to deliver study information and equipment; and the variability associated with when participants began data collection and the time taken to send devices and consent forms back (1-12 months). 
Unanticipated budgetary issues included the identification of some study materials (AC power adapter) not fitting through letterboxes, as well as the employment of fieldworkers to increase recruitment and the return of consent forms. Finally, we encountered data issues when processing physical activity and GPS data that had been initiated across daylight saving time. Conclusions We present learning points and recommendations that may benefit future studies of similar methodology in their early stages of development. PMID:29712624
Emissions of nitrous oxide from biomass burning
NASA Technical Reports Server (NTRS)
Winstead, Edward L.; Cofer, Wesley R., III; Levine, Joel S.
1991-01-01
A study has been conducted comparing N2O results obtained over large prescribed fires or wildfires, in which 'grab-sampling' with storage had been used, with N2O measurements made in near-real time. CO2-normalized emission ratios obtained initially from the laboratory fires are substantially lower than those obtained over large-scale biomass fires. Combustion may not be the only source of N2O in large fire smoke plumes; physical, chemical, and biochemical processes in the soil may be altered by large biomass fires, leading to large N2O releases.
NASA Technical Reports Server (NTRS)
Nobre, C. A.
1984-01-01
The climatologies of cloudiness and precipitation for the Amazon are reviewed, and the physical causes of the observed features, including those that are not well understood, are discussed. The atmospheric circulation over the Amazon is discussed in the context of large-scale tropical circulations forced by deep diabatic heating sources. Whether deforestation leads to a reduction in evapotranspiration into the atmosphere and a reduction in precipitation, and its implications for the global climate, is discussed. It is indicated that with large-scale clearing of tropical rainforests there would be a reduction in rainfall, which would have global effects on climate and weather in both the tropical and extratropical regions.
Anomalous transport theory for the reversed field pinch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, P.W.; Hegna, C.C.; Sovinec, C.R.
1996-09-01
Physically motivated transport models with predictive capabilities and significance beyond the reversed field pinch (RFP) are presented. It is shown that the ambipolar constrained electron heat loss observed in MST can be quantitatively modeled by taking account of the clumping in parallel streaming electrons and the resultant self-consistent interaction with collective modes; that the discrete dynamo process is a relaxation oscillation whose dependence on the tearing instability and profile relaxation physics leads to amplitude and period scaling predictions consistent with experiment; that the Lundquist number scaling in relaxed plasmas driven by magnetic turbulence has a weak S^{-1/4} scaling; and that radial E×B shear flow can lead to large reductions in the edge particle flux with little change in the heat flux, as observed in the RFP and tokamak. 24 refs.
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
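The correspondence described above (a large wavelet coefficient at a given scale and location is treated as a request for extra grid points there) can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's construction: it uses the Haar wavelet instead of a Daubechies family, a single transform level, and an invented threshold and test profile.

```python
import numpy as np

def haar_details(u):
    # One level of the Haar wavelet transform: one detail coefficient
    # per adjacent pair of samples.
    pairs = u.reshape(-1, 2)
    return (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

def refine_grid(x, u, threshold):
    # Insert a midpoint wherever the local detail coefficient is large,
    # i.e., treat a large wavelet coefficient as a request for a grid point.
    details = haar_details(u)
    new_x = list(x)
    for k, coef in enumerate(details):
        if abs(coef) > threshold:
            i = 2 * k  # left index of the flagged sample pair
            new_x.append(0.5 * (x[i] + x[i + 1]))
    return np.sort(np.array(new_x))

# A near-step profile: refinement should cluster around the jump at x = 0.5,
# while the smooth flanks keep the coarse grid.
x = np.linspace(0.0, 1.0, 16)
u = np.tanh(50.0 * (x - 0.5))
xr = refine_grid(x, u, threshold=0.03)
```

Iterating this rule (recompute coefficients on the refined grid, refine again) reproduces the multiresolution behavior the paper attributes to the wavelet method, while all subsequent differencing is done on the physical-space grid `xr`.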
NASA Astrophysics Data System (ADS)
Sobolowski, Stefan; Chen, Linling; Miles, Victoria
2016-04-01
The outlet glaciers along the margins of the Greenland Ice Sheet (GrIS) exhibit a range of behaviors, which are crucial for understanding GrIS mass changes from a dynamical point of view. However, the drivers of this behavior are still poorly understood. Arguments (counter-arguments) have been made for a strong (weak) local oceanic influence on marine-terminating outlet glaciers, while decadal-scale drivers linked to fluctuations in the Ice Sheet itself and the North Atlantic ocean (e.g. Atlantic Multidecadal Variability) have also been posited. Recently there have also been studies linking atmospheric variability (e.g. seasonal to interannual), synoptic activity and Ice Sheet variability. But these studies typically investigate atmospheric links to the large-scale behavior of the Ice Sheet itself and do not go down to the scale of the outlet glaciers. Conversely, investigations of the outlet glaciers often do not include potential links to non-local atmospheric dynamics. Here the authors attempt to bridge the gap and investigate the relationship between atmospheric variability across a range of scales and the behavior of three outlet glaciers on Greenland's southeast coast over a 33-year period (1980-2012). The glaciers - Helheim, Midgard and Fenris - are located near Tasiilaq, are marine-terminating, and exhibit varying degrees of connection to the GrIS. ERA-Interim reanalysis, sea-ice data and glacier observations are used for the investigation. Long records of mass balance are unavailable for these glaciers, and front position is employed as a measure of glacier-atmosphere interactions across multiple scales, as it exhibits robust relationships to atmospheric variability on time scales of seasons to many years, with the strongest relationships seen at seasonal to interannual time scales.
The authors do not make the argument that front position is a suitable proxy for mass balance, only that it is indicative of the role of local and remote atmospheric/climate dynamics in glacier behavior. Our study suggests a strong relationship between large-scale tropospheric circulation patterns, such as the so-called Greenland Blocking Index (GBI), and glacier front position. This relationship is seen in the wintertime (summertime) circulation influence on spring (fall) front position. Dynamically, a physical pathway is illustrated via canonical correlation analyses and composites of low- to mid-level winds, which show strong southerly advection into the region when the GBI is positive. There are also potential links between local and remote diabatic heating in the atmospheric column, SSTs, sea-ice concentration and front position. Whether there are physical pathways connecting remote surface processes, such as heating along western Greenland, is not yet clear. Causality is always difficult to infer in reanalysis-based studies, but physical intuition and theory provide multiple lines of evidence which suggest a substantial influence of large-scale atmospheric dynamics at the margins of the GrIS. Improving our understanding of these physical connections will be crucial, as we know the outlet glaciers will respond under rapidly changing climate conditions.
Large-scale correlations in gas traced by Mg II absorbers around low-mass galaxies
NASA Astrophysics Data System (ADS)
Kauffmann, Guinevere
2018-03-01
The physical origin of the large-scale conformity in the colours and specific star formation rates of isolated low-mass central galaxies and their neighbours on scales in excess of 1 Mpc is still under debate. One possible scenario is that gas is heated over large scales by feedback from active galactic nuclei (AGNs), leading to coherent modulation of cooling and star formation between well-separated galaxies. In this Letter, the metal line absorption catalogue of Zhu & Ménard is used to probe gas out to large projected radii around a sample of a million galaxies with stellar masses ˜1010M⊙ and photometric redshifts in the range 0.4 < z < 0.8 selected from Sloan Digital Sky Survey imaging data. This galaxy sample covers an effective volume of 2.2 Gpc3. A statistically significant excess of Mg II absorbers is present around the red low-mass galaxies compared to their blue counterparts out to projected radii of 10 Mpc. In addition, the equivalent width distribution function of Mg II absorbers around low-mass galaxies is shown to be strongly affected by the presence of a nearby (Rp < 2 Mpc) radio-loud AGN out to projected radii of 5 Mpc.
Inflation in the standard cosmological model
NASA Astrophysics Data System (ADS)
Uzan, Jean-Philippe
2015-12-01
The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Besides, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested from their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.
On the role of distributed helicity in the formation of hurricanes.
NASA Astrophysics Data System (ADS)
Golbraikh, E.; Frick, P.; Stepanov, R.
2016-02-01
The problem of the formation (or suppression) of hurricanes is one of the most important problems in the physics of the atmosphere and ocean. To date, there is no clear picture of hurricane formation. Many years ago, a model of the amplification of spiral vortices (such as typhoons), based on the hydrodynamic alpha-effect (HAE), was proposed in [1]. However, in contrast to the magnetic alpha-effect, the role of turbulent helicity in the behavior of hydrodynamic systems has hitherto been considered passive [2], and consequently this theory has not been developed. On the other hand, some experimental data and theoretical estimates indicate that helicity can influence the process of the formation of large-scale vortices. In the present work, based on the theory of distributed helicity [3], we show that under certain conditions helicity ceases to be a passive scalar and strongly influences the transfer of energy from large scales to small, leading to its accumulation at large scales, with subsequent transfer into a mean flow. At the same time, we suggest that a hurricane can be influenced only at the stage of its formation, and we discuss the behavior of some of the parameters that are predictors of hurricane occurrence. References [1] Moiseev, S. S., Sagdeev, R. Z., Tur, A. V., Khomenko, G. A., Shukurov, A. M., Physical mechanism of amplification of vortex disturbances in the atmosphere, Soviet Physics Dokl., Vol. 28, p. 926, 11/1983. [2] H. K. Moffatt, Magnetic Field Generation in Electrically Conducting Fluids (Cambridge University Press, Cambridge, 1978). [3] R. Stepanov, E. Golbraikh, P. Frick, A. Shestakov, Hindered energy cascade in highly helical isotropic turbulence, arXiv:1508.07236v2
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute missing contributions in previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. Discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
Production regimes in four eastern boundary current systems
NASA Technical Reports Server (NTRS)
Carr, M. E.; Kearns, E. J.
2003-01-01
High productivity (maxima 3 g C m⁻² day⁻¹) of the Eastern Boundary Currents (EBCs), i.e. the California, Peru-Humboldt, Canary and Benguela Currents, is driven by a combination of local forcing and large-scale circulation. The characteristics of the deep water brought to the surface by upwelling-favorable winds depend on the large-scale circulation patterns. Here we use a new hydrographic and nutrient climatology together with satellite measurements of the wind vector, sea-surface temperature (SST), chlorophyll concentration, and primary production modeled from ocean color to quantify the meridional and seasonal patterns of upwelling dynamics and biological response. The unprecedented combination of data sets allows us to describe objectively the variability for small regions within each current and to characterize the governing factors for biological production. The temporal and spatial environmental variability was due in most regions to large-scale circulation, alone or in combination with offshore transport (local forcing). The observed meridional and seasonal patterns of biomass and primary production were most highly correlated to components representing large-scale circulation. The biomass sustained by a given nutrient concentration in the Atlantic EBCs was twice as large as that of the Pacific EBCs. This apparent greater efficiency may be due to availability of iron, physical retention, or differences in planktonic community structure.
How institutions shaped the last major evolutionary transition to large-scale human societies
Powers, Simon T.; van Schaik, Carel P.; Lehmann, Laurent
2016-01-01
What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default ‘Hobbesian’ rules of the ‘game of life’, determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter–gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization. PMID:26729937
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems that consist of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
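The event-triggering idea described in this abstract (a state measurement is sent over the network only when it has drifted sufficiently from the last transmitted value) can be illustrated on a toy scalar plant. The dynamics, gains, and threshold below are invented for illustration and are not the adaptive architectures of the paper:

```python
# Event-triggered feedback on a scalar unstable plant x' = a*x + b*u.
# The controller only sees the last *transmitted* state x_hat; a new
# transmission is triggered when |x - x_hat| exceeds a threshold.

a, b, k = 1.0, 1.0, 3.0      # plant and feedback gain (a - b*k < 0: stable loop)
dt, steps = 0.01, 1000       # Euler step and horizon
eps = 0.05                   # event-triggering threshold

x = 1.0
x_hat = x                    # last value sent over the (modeled) network
transmissions = 1

for _ in range(steps):
    if abs(x - x_hat) > eps:      # event: send a fresh measurement
        x_hat = x
        transmissions += 1
    u = -k * x_hat                # control uses the transmitted value only
    x += dt * (a * x + b * u)     # Euler step of the plant

# The state stays bounded near zero while using far fewer network
# transmissions than the number of control updates.
```

The trade-off is visible in the two counters: tightening `eps` improves tracking of the ideal feedback loop at the cost of more network traffic, which is the tension the paper's thresholds are designed to manage.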
The impact of Lyman-α radiative transfer on large-scale clustering in the Illustris simulation
NASA Astrophysics Data System (ADS)
Behrens, C.; Byrohl, C.; Saito, S.; Niemeyer, J. C.
2018-06-01
Context. Lyman-α emitters (LAEs) are a promising probe of the large-scale structure at high redshift, z ≳ 2. In particular, the Hobby-Eberly Telescope Dark Energy Experiment aims at observing LAEs at 1.9 < z < 3.5 to measure the baryon acoustic oscillation (BAO) scale and the redshift-space distortion (RSD). However, it has been pointed out that the complicated radiative transfer (RT) of the resonant Lyman-α emission line generates an anisotropic selection bias in the LAE clustering on large scales, s ≳ 10 Mpc. This effect could potentially induce a systematic error in the BAO and RSD measurements. There is also a recent claim of observational evidence for the effect in the Lyman-α intensity map, albeit statistically insignificant. Aims: We aim at quantifying the impact of the Lyman-α RT on the large-scale galaxy clustering in detail. For this purpose, we study the correlations between the large-scale environment and the ratio of an apparent Lyman-α luminosity to an intrinsic one, which we call the "observed fraction", at 2 < z < 6. Methods: We apply our Lyman-α RT code by post-processing the full Illustris simulations. We simply assume that the intrinsic luminosity of the Lyman-α emission is proportional to the star formation rate of galaxies in Illustris, yielding a sufficiently large sample of LAEs to measure the anisotropic selection bias. Results: We find little correlation between large-scale environment and the observed fraction induced by the RT, and hence a smaller anisotropic selection bias than has previously been claimed. We argue that the anisotropy was overestimated in previous work due to insufficient spatial resolution; it is important to keep the resolution such that it resolves the high-density region down to the scale of the interstellar medium, that is, 1 physical kpc. We also find that the correlation can be further enhanced by assumptions in modeling intrinsic Lyman-α emission.
Elasticity of entangled polymer loops: Olympic gels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilgis, T.A.; Otto, M.
1997-08-01
In this Rapid Communication we present a scaling theory for the elasticity of olympic gels, i.e., gels where the elasticity is a consequence of topology only. It is shown that two deformation regimes exist. The first is the nonaffine deformation regime, where the free energy scales linearly with the deformation. In the large (affine) deformation regime the free energy is shown to scale as F ∝ λ^(5/2), where λ is the deformation ratio. Thus a highly non-Hookian stress-strain relation is predicted. © 1997 The American Physical Society.
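The two regimes can be summarized compactly; the stress-strain law below is our restatement of the abstract's scaling result, not an equation reproduced from the paper:

```latex
% Free energy of an olympic gel in the two deformation regimes:
F(\lambda) \propto
\begin{cases}
\lambda, & \text{nonaffine (small deformations)},\\[2pt]
\lambda^{5/2}, & \text{affine (large deformations)},
\end{cases}
\qquad
\sigma = \frac{\partial F}{\partial \lambda} \propto \lambda^{3/2}
\quad \text{(large-}\lambda\text{ regime)},
% so the stress grows faster than linearly in the strain: non-Hookian.
```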
NASA Astrophysics Data System (ADS)
Ruaud, M.; Wakelam, V.; Gratier, P.; Bonnell, I. A.
2018-04-01
Aim. We study the effect of large scale dynamics on the molecular composition of the dense interstellar medium during the transition from diffuse to dense clouds. Methods: We followed the formation of dense clouds (on sub-parsec scales) through the dynamics of the interstellar medium at galactic scales. We used results from smoothed particle hydrodynamics (SPH) simulations from which we extracted physical parameters that are used as inputs for our full gas-grain chemical model. In these simulations, the evolution of the interstellar matter is followed for 50 Myr. The warm low-density interstellar medium gas flows into spiral arms where orbit crowding produces the shock formation of dense clouds, which are held together temporarily by the external pressure. Results: We show that depending on the physical history of each SPH particle, the molecular composition of the modeled dense clouds presents a high dispersion in the computed abundances even if the local physical properties are similar. We find that carbon chains are the most affected species and show that these differences are directly connected to differences in (1) the electronic fraction, (2) the C/O ratio, and (3) the local physical conditions. We argue that differences in the dynamical evolution of the gas that formed dense clouds could account for the molecular diversity observed between and within these clouds. Conclusions: This study shows the importance of past physical conditions in establishing the chemical composition of the dense medium.
Optimisation Of a Magnetostrictive Wave Energy Converter
NASA Astrophysics Data System (ADS)
Mundon, T. R.; Nair, B.
2014-12-01
Oscilla Power, Inc. (OPI) is developing a patented magnetostrictive wave energy converter aimed at reducing the cost of grid-scale electricity from ocean waves. Designed to operate cost-effectively across a wide range of wave conditions, this will be the first use of reverse magnetostriction for large-scale energy production. The device architecture is a straightforward two-body, point-absorbing system that has been studied at length by various researchers. A large surface float is anchored to a submerged heave (reaction) plate by multiple taut tethers that are largely made up of discrete, robust power takeoff modules housing the magnetostrictive generators. The unique generators developed by OPI utilize the phenomenon of reverse magnetostriction: the application of load to a specific low-cost alloy generates significant magnetic flux changes, and thus power through electromagnetic induction. Unlike traditional generators, the mode of operation is low-displacement, high-force, and high-damping, which in combination with the specific multi-tether configuration creates some unique effects and interesting optimization challenges. Using an empirical approach with a combination of numerical tools, such as OrcaFlex, and physical models, we investigated the properties and sensitivities of this system arrangement, including various heave plate geometries, with the overall goal of identifying the mass and hydrodynamic parameters required for optimum performance. Furthermore, through a detailed physical model test program at the University of New Hampshire, we were able to study in more detail how the heave plate geometry affects the drag and added mass coefficients. In presenting this work we will discuss how alternate geometries could be used to optimize the hydrodynamic parameters of the heave plate, allowing maximum inertial forces in operational conditions while simultaneously minimizing the forces generated in extreme waves.
This presentation will cover the significant findings from this research, including physical model results and identified sensitivity parameters. In addition, we will discuss some preliminary results from our large-scale ocean trial conducted in August & September of this year.
A Commercialization Roadmap for Carbon-Negative Energy Systems
NASA Astrophysics Data System (ADS)
Sanchez, D.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of bioenergy with carbon capture and storage (BECCS) outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: (1) capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation; (2) thermochemical co-conversion of biomass and fossil fuels, particularly coal; and (3) dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, the remaining barriers, chiefly in large-scale biomass logistics, gasification, and gas cleaning, are primarily technical. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway by which energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.
A tilted cold dark matter cosmological scenario
NASA Technical Reports Server (NTRS)
Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.
1992-01-01
A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, Davi C.; Piattella, Oliver F.; Chauvineau, Bertrand
We show that Renormalization Group extensions of the Einstein-Hilbert action for large-scale physics are not, in general, a particular case of standard Scalar-Tensor (ST) gravity. We present a new class of ST actions, in which the potential is not necessarily fixed at the action level, and show that this extended ST theory formally contains the Renormalization Group case. We also propose a Renormalization Group scale-setting identification that is explicitly covariant and valid for arbitrary relativistic fluids.
NASA Astrophysics Data System (ADS)
Beech, M.
1989-02-01
The author discusses some of the more recent research on fractal astronomy and results presented in several astronomical studies. First, the large-scale structure of the universe is considered; a later section drops in scale to examine some of the smallest bodies in our solar system, the comets and meteoroids. The final section presents some thoughts on what influence the fractal ideology might have on astronomy, focusing particularly on the question recently raised by Kadanoff: "Fractals: where's the physics?"
Particle acceleration, transport and turbulence in cosmic and heliospheric physics
NASA Technical Reports Server (NTRS)
Matthaeus, W.
1992-01-01
In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.
NASA Astrophysics Data System (ADS)
Kruijssen, J. M. Diederik; Schruba, Andreas; Hygate, Alexander P. S.; Hu, Chia-Yu; Haydon, Daniel T.; Longmore, Steven N.
2018-05-01
The cloud-scale physics of star formation and feedback represent the main uncertainty in galaxy formation studies. Progress is hampered by the limited empirical constraints outside the restricted environment of the Local Group. In particular, the poorly quantified time evolution of the molecular cloud lifecycle, star formation, and feedback obstructs robust predictions on scales smaller than the disc scale height that are resolved in modern galaxy formation simulations. We present a new statistical method to derive the evolutionary timeline of molecular clouds and star-forming regions. By quantifying the excess or deficit of the gas-to-stellar flux ratio around peaks of gas or star formation tracer emission, we directly measure the relative rarity of these peaks, which allows us to derive their lifetimes. We present a step-by-step, quantitative description of the method and demonstrate its practical application. The method's accuracy is tested in nearly 300 experiments using simulated galaxy maps, showing that it is capable of constraining the molecular cloud lifetime and feedback time-scale to <0.1 dex precision. Access to the evolutionary timeline provides a variety of additional physical quantities, such as the cloud-scale star formation efficiency, the feedback outflow velocity, the mass loading factor, and the feedback energy or momentum coupling efficiencies to the ambient medium. We show that the results are robust for a wide variety of gas and star formation tracers, spatial resolutions, galaxy inclinations, and galaxy sizes. Finally, we demonstrate that our method can be applied out to high redshift (z ≲ 4) with a feasible time investment on current large-scale observatories. This is a major shift from previous studies that constrained the physics of star formation and feedback in the immediate vicinity of the Sun.
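The core idea of the flux-ratio measurement described above can be illustrated with a deliberately simplified toy: around peaks of a gas tracer, the gas-to-stellar flux ratio exceeds the map-wide average, and the size of that excess reflects how rare gas-dominated phases are. The 1-D maps, function name, and aperture scheme below are illustrative assumptions, not the published pipeline.

```python
# Toy sketch (1-D, hypothetical) of measuring the gas-to-stellar flux-ratio
# excess in apertures centred on gas-tracer peaks, normalised by the
# map-wide ratio. A value of 1.0 would mean no excess.

def flux_ratio_bias(gas, stars, peak_idx, half_width):
    """Mean gas/stellar flux ratio in apertures on gas peaks,
    divided by the global gas/stellar ratio."""
    n = len(gas)
    ratios = []
    for i in peak_idx:
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        g, s = sum(gas[lo:hi]), sum(stars[lo:hi])
        if s > 0:
            ratios.append(g / s)
    global_ratio = sum(gas) / sum(stars)
    return (sum(ratios) / len(ratios)) / global_ratio

# Invented toy map: gas peaks at pixels 10 and 30; stellar peaks offset
# (young regions have dispersed their gas); uniform backgrounds elsewhere.
gas = [1.0] * 40
stars = [1.0] * 40
for p in (10, 30):
    gas[p] += 20.0
for p in (15, 35):
    stars[p] += 20.0

bias = flux_ratio_bias(gas, stars, peak_idx=[10, 30], half_width=2)
print(bias)  # > 1: gas-tracer peaks show a gas excess
```

In the actual method this excess, measured as a function of aperture size for both tracers, is inverted to yield the relative lifetimes of the gas and stellar phases.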
Fabricant, Peter D; Robles, Alex; McLaren, Son H; Marx, Robert G; Widmann, Roger F; Green, Daniel W
2014-05-01
An eight-item activity scale was recently developed and validated for use as a prognostic tool in clinical research in children and adolescents. It is unclear, however, if this brief questionnaire is predictive of quantitative metrics of physical activity and fitness. The purposes of this study were to prospectively administer the Hospital for Special Surgery Pediatric Functional Activity Brief Scale to a large cohort of healthy adolescents to determine (1) if the activity scale exhibits any floor or ceiling effects; (2) if scores on the activity scale are correlated with standardized physical fitness metrics; and if so, (3) to determine the discrimination ability of the activity scale to differentiate between adolescents with healthy or unhealthy levels of aerobic capacity and calculate an appropriate cutoff value for its use as a screening tool. One hundred eighty-two adolescents (mean age, 15.3 years) prospectively completed the activity scale and four standardized metrics of physical fitness: pushups, sit-ups, shuttle run exercise (Progressive Aerobic Cardiovascular Endurance Run), and calculated VO2-max. Age, sex, and body mass index were also recorded. Pearson correlations, regression analyses, and receiver operating characteristic analyses were used to evaluate activity scale performance. The activity scale did not exhibit any floor or ceiling effects. Pushups (ρ = 0.28), sit-ups (ρ = 0.23), performance on the Progressive Aerobic Cardiovascular Endurance Run (ρ = 0.44), and VO2-max (ρ = 0.43) were all positively correlated with the activity scale score (Pearson correlations, all p < 0.001). Receiver operating characteristic analysis revealed that those with an activity score of ≤ 14 were at higher risk of having low levels of aerobic capacity. In the current study, activity score was free of floor and ceiling effects and predictive of all four physical fitness metrics.
An activity score of ≤ 14 was associated with at-risk aerobic capacity previously shown to be associated with an increased risk of metabolic syndrome. This study is the first to prospectively validate an activity questionnaire against quantitative physical fitness assessments and provides further evidence substantiating its use in outcomes research and screening for healthy levels of childhood activity and fitness. Level I, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.
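A screening cutoff like the "score ≤ 14" above is typically chosen from a receiver operating characteristic analysis by maximising Youden's J statistic (sensitivity + specificity - 1). The sketch below shows that procedure on a tiny invented dataset; it is not the study's data or its exact analysis.

```python
# Hedged sketch: choose the threshold that maximises Youden's J,
# where subjects with score <= threshold are flagged as at-risk.

def youden_cutoff(scores, at_risk):
    """Return (best_threshold, best_J) over all observed thresholds."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, r in zip(scores, at_risk) if r and s <= t)
        fn = sum(1 for s, r in zip(scores, at_risk) if r and s > t)
        tn = sum(1 for s, r in zip(scores, at_risk) if not r and s > t)
        fp = sum(1 for s, r in zip(scores, at_risk) if not r and s <= t)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented example: low scores cluster among the at-risk subjects.
scores = [8, 10, 12, 13, 14, 15, 18, 21, 24, 27]
at_risk = [True, True, True, True, True, False, False, False, False, False]
t, j = youden_cutoff(scores, at_risk)
print(t, j)  # 14 1.0 (perfect separation in this toy data)
```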
Natural disturbance production functions
Jeffrey P. Prestemon; D. Evan Mercer; John M. Pye
2008-01-01
Natural disturbances in forests are driven by physical and biological processes. Large, landscape scale disturbances derive primarily from weather (droughts, winds, ice storms, and floods), geophysical activities (earthquakes, volcanic eruptions), fires, insects, and diseases. Humans have invented ways to minimize their negative impacts and reduce their rates of...
Factors Influencing College Science Success
ERIC Educational Resources Information Center
Tai, Robert H.; Sadler, Philip M.; Mintzes, Joel J.
2006-01-01
In this paper, the authors report some of the salient findings of a large-scale, four-year national study, conducted at the Harvard-Smithsonian Center for Astrophysics, entitled "Factors Influencing College Science Success" (FICSS), which surveyed college students who enrolled in first-year biology, chemistry, and physics courses…
Dynamics and energetics of the South Pacific convergence zone during FGGE SOP-1
NASA Technical Reports Server (NTRS)
Vincent, D. G.; Robertson, F. R.
1984-01-01
The major objectives are to: (1) diagnose the physical processes responsible for the maintenance of the South Pacific Convergence Zone (SPCZ); and (2) examine the role of the SPCZ in the large-scale circulation patterns of the Southern Hemisphere.
Macrophytes: Freshwater Forests of Lakes and Rivers.
ERIC Educational Resources Information Center
McDermid, Karla J.; Naiman, Robert J.
1983-01-01
Physical, chemical, and biological effects on macrophytes (aquatic plants) on the freshwater ecosystem are discussed. Research questions and issues related to these organisms are also discussed, including adaptations for survival in a wet environment, ecological consequences of large-scale macrophyte eradication, seasonal changes in plant…
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
NASA Astrophysics Data System (ADS)
Draper, Martin; Usera, Gabriel
2015-04-01
The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, in particular its power-law form, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open-source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM), and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18 p). [3] R. Stoll, F. Porté-Agel.
"Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of neutrally stratified atmospheric boundary layers over heterogeneous terrain". Water Resources Research, 2006, 42, W01409 (18 p). [4] J. Kleissl, M. Parlange, C. Meneveau. "Field experimental study of dynamic Smagorinsky models in the atmospheric surface layer". Journal of the Atmospheric Sciences, 2004, 61, 2296-2307. [5] E. Bou-Zeid, N. Vercauteren, M.B. Parlange, C. Meneveau. "Scale dependence of subgrid-scale model coefficients: An a priori study". Physics of Fluids, 2008, 20, 115106. [6] G. Kirkil, J. Mirocha, E. Bou-Zeid, F.K. Chow, B. Kosovic. "Implementation and evaluation of dynamic subfilter-scale stress models for large-eddy simulation using WRF". Monthly Weather Review, 2012, 140, 266-284. [7] S. Radhakrishnan, U. Piomelli. "Large-eddy simulation of oscillating boundary layers: model comparison and validation". Journal of Geophysical Research, 2008, 113, C02022. [8] G. Usera, A. Vernet, J.A. Ferré. "A parallel block-structured finite volume method for flows in complex geometry with sliding interfaces". Flow, Turbulence and Combustion, 2008, 81, 471-495. [9] Y-T. Wu, F. Porté-Agel. "Large-eddy simulation of wind-turbine wakes: evaluation of turbine parametrisations". Boundary-Layer Meteorology, 2011, 138, 345-366.
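For readers unfamiliar with the models being compared, here is a compact statement of the Smagorinsky closure and the scale-dependence parameter that the SDDM determines dynamically. The notation follows the standard formulation of Porté-Agel et al. [1]; it is our summary, not an equation reproduced from this abstract.

```latex
% Smagorinsky closure for the deviatoric subgrid-scale stress at filter width \Delta:
\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij}
  = -2\,(C_{s,\Delta}\,\Delta)^{2}\,|\bar{S}|\,\bar{S}_{ij},
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}} .
% The scale-independent dynamic model (SIDM) assumes the coefficient is the
% same at the grid and test-filter scales; the SDDM instead solves for the
% scale-dependence parameter
\beta = \frac{C_{s,2\Delta}^{2}}{C_{s,\Delta}^{2}} ,
% using a second test filter at 4\Delta, so that \beta = 1 recovers the SIDM.
```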
Simulating Astrophysical Jets with Inertial Confinement Fusion Machines
NASA Astrophysics Data System (ADS)
Blue, Brent
2005-10-01
Large-scale directional outflows of supersonic plasma, also known as `jets', are ubiquitous phenomena in astrophysics. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Modeling Supernova Shocks with Intense Lasers.
NASA Astrophysics Data System (ADS)
Blue, Brent
2006-04-01
Large-scale directional outflows of supersonic plasma are ubiquitous phenomena in astrophysics, with specific application to supernovae. The traditional approach to understanding such phenomena is through theoretical analysis and numerical simulations. However, theoretical analysis might not capture all the relevant physics and numerical simulations have limited resolution and fail to scale correctly in Reynolds number and perhaps other key dimensionless parameters. Recent advances in high energy density physics using large inertial confinement fusion devices now allow controlled laboratory experiments on macroscopic volumes of plasma of direct relevance to astrophysics. This talk will present an overview of these facilities as well as results from current laboratory astrophysics experiments designed to study hydrodynamic jets and Rayleigh-Taylor mixing. This work is performed under the auspices of the U. S. DOE by Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48, Los Alamos National Laboratory under Contract No. W-7405-ENG-36, and the Laboratory for Laser Energetics under Contract No. DE-FC03-92SF19460.
Predicting viscous-range velocity gradient dynamics in large-eddy simulations of turbulence
NASA Astrophysics Data System (ADS)
Johnson, Perry; Meneveau, Charles
2017-11-01
The details of small-scale turbulence are not directly accessible in large-eddy simulations (LES), posing a modeling challenge because many important micro-physical processes depend strongly on the dynamics of turbulence in the viscous range. Here, we introduce a method for coupling existing stochastic models for the Lagrangian evolution of the velocity gradient tensor with LES to simulate unresolved dynamics. The proposed approach is implemented in LES of turbulent channel flow and detailed comparisons with DNS are carried out. An application to modeling the fate of deformable, small (sub-Kolmogorov) droplets at negligible Stokes number and low volume fraction with one-way coupling is carried out. These results illustrate the ability of the proposed model to predict the influence of small scale turbulence on droplet micro-physics in the context of LES. This research was made possible by a graduate Fellowship from the National Science Foundation and by a Grant from The Gulf of Mexico Research Initiative.
NASA Astrophysics Data System (ADS)
Longair, Malcolm S.
2013-04-01
Part I. Stars and Stellar Evolution up to the Second World War: 1. The legacy of the nineteenth century; 2. The classification of stellar spectra; 3. Stellar structure and evolution; 4. The end points of stellar evolution; Part II. The Large-Scale Structure of the Universe, 1900-1939: 5. The Galaxy and the nature of spiral nebulae; 6. The origins of astrophysical cosmology; Part III. The Opening up of the Electromagnetic Spectrum: 7. The opening up of the electromagnetic spectrum and the new astronomies; Part IV. The Astrophysics of Stars and Galaxies since 1945: 8. Stars and stellar evolution; 9. The physics of the interstellar medium; 10. The physics of galaxies and clusters of galaxies; 11. High-energy astrophysics; Part V. Astrophysical Cosmology since 1945: 12. Astrophysical cosmology; 13. The determination of cosmological parameters; 14. The evolution of galaxies and active galaxies with cosmic epoch; 15. The origin of galaxies and the large-scale structure of the Universe; 16. The very early Universe; References; Name index; Object index; Subject index.
Solution-Processed Metal Coating to Nonwoven Fabrics for Wearable Rechargeable Batteries.
Lee, Kyulin; Choi, Jin Hyeok; Lee, Hye Moon; Kim, Ki Jae; Choi, Jang Wook
2017-12-27
Wearable rechargeable batteries require electrode platforms that can withstand various physical motions, such as bending, folding, and twisting. To this end, conductive textiles and paper have been highlighted, as their porous structures can accommodate the stress built during various physical motions. However, fabrics with plain weaves or knit structures have been mostly adopted without exploration of nonwoven counterparts. Also, the integration of conductive materials, such as carbon or metal nanomaterials, to achieve sufficient conductivity as current collectors is not well-aligned with large-scale processing in terms of cost and quality control. Here, the superiority of nonwoven fabrics is reported in electrochemical performance and bending capability compared to currently dominant woven counterparts, due to smooth morphology near the fiber intersections and the homogeneous distribution of fibers. Moreover, solution-processed electroless deposition of aluminum and nickel-copper composite is adopted for cathodes and anodes, respectively, demonstrating the large-scale feasibility of conductive nonwoven platforms for wearable rechargeable batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The cosmic spiderweb: equivalence of cosmic, architectural and origami tessellations
Hidding, Johan; Konstantatou, Marina; van de Weygaert, Rien
2018-01-01
For over 20 years, the term ‘cosmic web’ has guided our understanding of the large-scale arrangement of matter in the cosmos, accurately evoking the concept of a network of galaxies linked by filaments. But the physical correspondence between the cosmic web and structural engineering or textile ‘spiderwebs’ is even deeper than previously known, and also extends to origami tessellations. Here, we explain that in a good structure-formation approximation known as the adhesion model, threads of the cosmic web form a spiderweb, i.e. can be strung up to be entirely in tension. The correspondence is exact if nodes sampling voids are included, and if structure is excluded within collapsed regions (walls, filaments and haloes), where dark-matter multistreaming and baryonic physics affect the structure. We also suggest how concepts arising from this link might be used to test cosmological models: for example, to test for large-scale anisotropy and rotational flows in the cosmos. PMID:29765637
NASA Astrophysics Data System (ADS)
Hardiman, B. S.; Atkins, J.; Dahlin, K.; Fahey, R. T.; Gough, C. M.
2016-12-01
Canopy physical structure - leaf quantity and arrangement - strongly affects light interception and distribution. As such, canopy physical structure is a key driver of forest carbon (C) dynamics. Terrestrial lidar systems (TLS) provide spatially explicit, quantitative characterizations of canopy physical structure at scales commensurate with plot-scale C cycling processes. As an example, previous TLS-based studies established that light use efficiency is positively correlated with canopy physical structure, influencing the trajectory of net primary production throughout forest development. Linking TLS measurements of canopy structure to multispectral satellite observations of forest canopies may enable scaling of ecosystem C cycling processes from leaves to continents. We will report on our study relating a suite of canopy structural metrics to well-established remotely sensed measurements (NDVI, EVI, albedo, tasseled cap indices, etc.) which are indicative of important forest characteristics (leaf area, canopy nitrogen, light interception, etc.). We used Landsat data, which provides observations at 30m resolution, a scale comparable to that of TLS. TLS data were acquired during 2009-2016 from forest sites throughout Eastern North America, comprised primarily of NEON and Ameriflux sites. Canopy physical structure data were compared with contemporaneous growing-season Landsat data. Metrics of canopy physical structure are expected to covary with forest composition and dominant PFT, likely influencing interaction strength between TLS and Landsat canopy metrics. More structurally complex canopies (those with more heterogeneous distributions of leaf area) are expected to have lower albedo, suggesting greater canopy light absorption (higher fAPAR) than simpler canopies. 
We expect that vegetation indices (NDVI, EVI) will increase with TLS metrics of spatial heterogeneity, and not simply quantity, of leaves, supporting our hypothesis that canopy light absorption is dependent on both leaf quantity and arrangement. Relating satellite observations of canopy properties to TLS metrics of canopy physical structure represents an important advance for modelling canopy energy balance and forest C cycling processes at large spatial scales.
Speedy routing recovery protocol for large failure tolerance in wireless sensor networks.
Lee, Joa-Hyoung; Jung, In-Bum
2010-01-01
Wireless sensor networks are expected to play an increasingly important role in data collection in hazardous areas. However, the physical fragility of a sensor node makes reliable routing in hazardous areas a challenging problem. Because several sensor nodes in a hazardous area could be damaged simultaneously, the network should be able to recover routing after node failures over large areas. Many routing protocols take single-node failure recovery into account, but it is difficult for these protocols to recover the routing after large-scale failures. In this paper, we propose a routing protocol, referred to as ARF (Adaptive routing protocol for fast Recovery from large-scale Failure), to recover a network quickly after failures over large areas. ARF detects failures by counting the packet losses from parent nodes, and upon failure detection, it decreases the routing interval to notify the neighbor nodes of the failure. Our experimental results indicate that ARF could provide recovery from large-area failures quickly, with fewer packets and lower energy consumption than previous protocols.
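The failure-detection idea described above, counting packet losses from a parent node and shortening the routing interval once a threshold is crossed, can be sketched as follows. The class name, thresholds, and intervals are illustrative assumptions, not ARF's actual parameters.

```python
# Minimal sketch of loss-count failure detection with adaptive beaconing:
# a node tracks consecutive losses from its parent; past a threshold it
# declares the parent failed and speeds up routing beacons so neighbours
# learn of the failure quickly and can re-route.

class RoutingState:
    LOSS_THRESHOLD = 3          # consecutive losses before declaring failure
    NORMAL_INTERVAL = 60.0      # seconds between routing beacons
    RECOVERY_INTERVAL = 5.0     # shortened interval after failure detection

    def __init__(self):
        self.consecutive_losses = 0
        self.parent_alive = True
        self.beacon_interval = self.NORMAL_INTERVAL

    def on_packet_result(self, delivered):
        """Update state after each expected packet from the parent."""
        if delivered:
            self.consecutive_losses = 0
            return
        self.consecutive_losses += 1
        if self.consecutive_losses >= self.LOSS_THRESHOLD:
            # Parent presumed failed: shorten beacons to trigger recovery.
            self.parent_alive = False
            self.beacon_interval = self.RECOVERY_INTERVAL

node = RoutingState()
for delivered in [True, False, False, False]:
    node.on_packet_result(delivered)
print(node.parent_alive, node.beacon_interval)  # False 5.0
```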
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
Universal statistics of vortex tangles in three-dimensional random waves
NASA Astrophysics Data System (ADS)
Taylor, Alexander J.
2018-02-01
The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Subhamoy; Mandal, Sudip; Banerjee, Dipankar, E-mail: dipu@iiap.res.in
The Ca ii K spectroheliograms spanning over a century (1907–2007) from Kodaikanal Solar Observatory, India, have recently been digitized and calibrated. Applying a fully automated algorithm (which includes contrast enhancement and the “Watershed method”) to these data, we have identified the supergranules and calculated the associated parameters, such as scale, circularity, and fractal dimension. We have segregated the quiet and active regions and obtained the supergranule parameters separately for these two domains. In this way, we have isolated the effect of large-scale and small-scale magnetic fields on these structures and find a significantly different behavior of the supergranule parameters over solar cycles. These differences indicate intrinsic changes in the physical mechanism behind the generation and evolution of supergranules in the presence of small-scale and large-scale magnetic fields. This also highlights the need for further studies using solar dynamo theory along with magneto-convection models.
NASA Astrophysics Data System (ADS)
Allen, Rob
2016-09-01
Structures within molecules and nuclei have relationships to astronomical patterns. The COBE cosmic-scale plots and large-scale surveys of galaxy clusters show repeating patterns that are also well known at atomic scales. The Induction, Strong Force, and Nuclear Binding Energy Periods within the Big Bang are revealed to have played roles in the formation of these large-scale distributions. Equations related to the enormous patterns also model chemical bonds and likely nucleus and nucleon substructures. Ratios of the forces that include gravity are accurately calculated from the distributions and shapes. In addition, particle masses and a great many physical constants can be derived with precision and accuracy from astrophysical shapes. A few very basic numbers can model structures from nucleon internals to molecules to supernovae, and up to the Visible Universe. Equations are also provided along with possible structural configurations for some Cold Dark Matter and Dark Energy.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight, we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
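Schematically (in our notation, not the paper's), the construction described here generates equal-time correlators from a time-dependent weight expanded around its Gaussian part:

```latex
% Schematic only; notation is ours, not the paper's.
\langle \delta(\mathbf{k}_1)\cdots\delta(\mathbf{k}_n)\rangle_{\eta}
  = \int \mathcal{D}\delta \; \mathcal{P}[\delta;\eta]\,
    \delta(\mathbf{k}_1)\cdots\delta(\mathbf{k}_n),
\qquad
\mathcal{P}[\delta;\eta] \;\propto\;
  \exp\!\Big[-\tfrac{1}{2}\int_{\mathbf{k}}
  \frac{|\delta(\mathbf{k})|^2}{g^2(\eta)\,P_0(k)}\Big]
  \times\bigl(1+\text{non-Gaussian corrections}\bigr),
```

with $g(\eta)$ the time-dependent coupling (proportional to the linear growth factor in the Einstein-de Sitter case), so each order of the diagrammatic expansion carries additional powers of $g^2$.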
NASA Astrophysics Data System (ADS)
Watts, Duncan; CLASS Collaboration
2018-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.
May turbulence and fossil turbulence lead to life in the universe?
NASA Astrophysics Data System (ADS)
Gibson, Carl H.
2013-01-01
Turbulence is defined as an eddy-like state of fluid motion where the inertial-vortex forces of the eddies are larger than all the other forces that tend to damp the eddies out. Fossil turbulence is a perturbation produced by turbulence that persists after the fluid ceases to be turbulent at the scale of the perturbation. Because vorticity is produced at small scales, turbulence cascades from small scales to large, providing a consistent physical basis for Kolmogorovian universal similarity laws. Oceanic and astrophysical mixing and diffusion are dominated by fossil turbulence and fossil turbulent waves. Observations from space telescopes show turbulence existed in the beginning of the universe and that its fossils still persist. Fossils of big bang turbulence include a preferred large-scale spin direction, large-scale microwave temperature anisotropy patterns, and the dominant dark matter of all galaxies; that is, clumps of ~10^12 frozen hydrogen earth-mass planets that make stars and globular star clusters when gravitationally agitated. When the planets were hot gas, we can speculate that they hosted the formation of the first life in a seeded cosmic organic-chemical soup of hot-water oceans as planets merged to form and over-feed the first stars.
Taking Physics and Now the Stars on the Road With the Magic Physics Bus
NASA Astrophysics Data System (ADS)
Bennum, David
2009-05-01
In February 2003 the "Physics on the Road" workshop, held at Colorado State University, Fort Collins, Colorado, brought together physics faculty who were experienced in designing and providing year-round mobile physics displays and those who were interested in initiating similar outreach programs. The impetus for the workshop was the upcoming "World Year of Physics", but the workshop had much broader impact for many of us who attended. The University of Nevada had a long history of demonstration shows for campus visitors from K-12 students and faculty, but the cost of field trips began to limit this for many schools, especially for schools in poorer neighborhoods without large-scale parental fundraising. The timing of the workshop was perfect for my developing program to utilize a donated "electric bus" as a traveling physics demo showcase. The program has grown to near our current limitations (the 70-mile range of the bus and time considerations); however, we are expanding the "scope" of the project to include evening astronomy "star parties" as we enter the "Year of Astronomy". In addition to the bus transport of portable astronomy equipment to school sites, we are adding, through donation, a 22-inch telescope in a domed observatory at a secondary campus location at the edge of Reno where large-scale "star parties" can be conducted as outreach to K-12 and the community. The "Physics on the Road" bus now reaches several thousand elementary and middle school students every year, and the potential for similar outreach with "Stars on the Road" has excited several of our faculty and physics students into increased participation in these endeavors to introduce our young people to science. It has become one of our most active "recruitment" plans, and the growing numbers of local students entering physics and other science majors are anecdotal evidence of success.
NASA Astrophysics Data System (ADS)
Bonne, François; Alamir, Mazen; Bonnay, Patrick
2014-01-01
In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches, designed from user experience and usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
Large-scale dynamics associated with clustering of extratropical cyclones affecting Western Europe
NASA Astrophysics Data System (ADS)
Pinto, Joaquim G.; Gómara, Iñigo; Masato, Giacomo; Dacre, Helen F.; Woollings, Tim; Caballero, Rodrigo
2015-04-01
Some recent winters in Western Europe have been characterized by the occurrence of multiple extratropical cyclones following a similar path. The occurrence of such cyclone clusters leads to large socio-economic impacts due to damaging winds, storm surges, and floods. Recent studies have statistically characterized the clustering of extratropical cyclones over the North Atlantic and Europe and hypothesized potential physical mechanisms responsible for their formation. Here we analyze 4 months characterized by multiple cyclones over Western Europe (February 1990, January 1993, December 1999, and January 2007). The evolution of the eddy driven jet stream, Rossby wave-breaking, and upstream/downstream cyclone development are investigated to infer the role of the large-scale flow and to determine if clustered cyclones are related to each other. Results suggest that optimal conditions for the occurrence of cyclone clusters are provided by a recurrent extension of an intensified eddy driven jet toward Western Europe lasting at least 1 week. Multiple Rossby wave-breaking occurrences on both the poleward and equatorward flanks of the jet contribute to the development of these anomalous large-scale conditions. The analysis of the daily weather charts reveals that upstream cyclone development (secondary cyclogenesis, where new cyclones are generated on the trailing fronts of mature cyclones) is strongly related to cyclone clustering, with multiple cyclones developing on a single jet streak. The present analysis permits a deeper understanding of the physical reasons leading to the occurrence of cyclone families over the North Atlantic, enabling a better estimation of the associated cumulative risk over Europe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonne, François; Bonnay, Patrick; Alamir, Mazen
2014-01-29
In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches, designed from user experience and usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-01-01
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or communication and introduces deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive network with exogenous inputs (NARX) to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement in harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems. PMID:27548171
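A NARX-style one-step predictor maps lagged outputs and exogenous inputs to the next sample. The sketch below uses a linear-in-features stand-in fit by least squares rather than the paper's neural network; all function names and lag orders are our own illustrative choices:

```python
import numpy as np

def fit_narx_linear(y, u, p=2, q=2):
    """Fit y[t] ~ w @ [y[t-1..t-p], u[t-1..t-q], 1] by least squares.
    A linear stand-in for the paper's neural NARX network."""
    m = max(p, q)
    X = [np.concatenate([y[t - p:t][::-1], u[t - q:t][::-1], [1.0]])
         for t in range(m, len(y))]
    w, *_ = np.linalg.lstsq(np.array(X), y[m:], rcond=None)
    return w

def predict_next(w, y_hist, u_hist, p=2, q=2):
    """Retrodict/predict the sample that follows the given histories."""
    x = np.concatenate([y_hist[-1:-p - 1:-1], u_hist[-1:-q - 1:-1], [1.0]])
    return float(x @ w)
```

For out-of-sequence data, the fitted model lets the fusion center fill in (retrodict) a sample that arrived late from the surrounding history, instead of waiting for the delayed measurement.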
Polarization of the prompt gamma-ray emission from the gamma-ray burst of 6 December 2002.
Coburn, Wayne; Boggs, Steven E
2003-05-22
Observations of the afterglows of gamma-ray bursts (GRBs) have revealed that they lie at cosmological distances, and so correspond to the release of an enormous amount of energy. The nature of the central engine that powers these events and the prompt gamma-ray emission mechanism itself remain enigmatic because, once a relativistic fireball is created, the physics of the afterglow is insensitive to the nature of the progenitor. Here we report the discovery of linear polarization in the prompt gamma-ray emission from GRB021206, which indicates that it is synchrotron emission from relativistic electrons in a strong magnetic field. The polarization is at the theoretical maximum, which requires a uniform, large-scale magnetic field over the gamma-ray emission region. A large-scale magnetic field constrains possible progenitors to those either having or producing organized fields. We suggest that the large magnetic energy densities in the progenitor environment (comparable to the kinetic energy densities of the fireball), combined with the large-scale structure of the field, indicate that magnetic fields drive the GRB explosion.
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
A transparently scalable visualization architecture for exploring the universe.
Fu, Chi-Wing; Hanson, Andrew J
2007-01-01
Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
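The power scaled coordinates idea can be sketched as a mantissa-plus-exponent encoding of a 3D point, so that positions from human scale to cosmological scale stay numerically well conditioned. This is our schematic reading of PSC, with base 10 assumed:

```python
import numpy as np

BASE = 10.0  # power-scale base; an assumption for this sketch

def to_psc(p):
    """Encode a 3D point as power scaled coordinates (x, y, z, s),
    representing (x, y, z) * BASE**s with the spatial part kept near
    unit magnitude."""
    p = np.asarray(p, float)
    m = np.max(np.abs(p))
    if m == 0.0:
        return np.zeros(4)
    s = np.floor(np.log(m) / np.log(BASE))
    return np.append(p / BASE ** s, s)

def from_psc(q):
    """Decode power scaled coordinates back to an ordinary 3D point."""
    return np.asarray(q[:3], float) * BASE ** q[3]

# One astronomical unit in metres survives the round trip:
au = np.array([1.495978707e11, 0.0, 0.0])
assert np.allclose(from_psc(to_psc(au)), au)
```

Rendering and interpolation can then operate on the near-unit spatial part, with the exponent handled separately, which avoids the catastrophic precision loss of storing metres-scale and gigaparsec-scale coordinates in one float.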
Subgrid-scale Condensation Modeling for Entropy-based Large Eddy Simulations of Clouds
NASA Astrophysics Data System (ADS)
Kaul, C. M.; Schneider, T.; Pressel, K. G.; Tan, Z.
2015-12-01
An entropy- and total water-based formulation of LES thermodynamics, such as that used by the recently developed code PyCLES, is advantageous from physical and numerical perspectives. However, existing closures for subgrid-scale thermodynamic fluctuations assume more traditional choices for prognostic thermodynamic variables, such as liquid potential temperature, and are not directly applicable to entropy-based modeling. Since entropy and total water are generally nonlinearly related to diagnosed quantities like temperature and condensate amounts, neglecting their small-scale variability can lead to bias in simulation results. Here we present the development of a subgrid-scale condensation model suitable for use with entropy-based thermodynamic formulations.
Patterns of streamflow variability are likely to be a major organizing feature of the habitat template for stream fishes. Functional organization of stream communities has been linked to streamflow, especially to patterns of flow variability that describe the physical disturbanc...
Patterns of streamflow variability are likely to be a major organizing feature of the habitat template for stream fishes. Ecological organization of stream communities has been linked to streamflow, especially to patterns of flow variability that describe the physical disturbanc...
Urban Elementary STEM Initiative
ERIC Educational Resources Information Center
Parker, Carolyn; Abel, Yolanda; Denisova, Ekaterina
2015-01-01
The new standards for K-12 science education suggest that student learning should be more integrated and should focus on crosscutting concepts and core ideas from the areas of physical science, life science, Earth/space science, and engineering/technology. This paper describes large-scale, urban elementary-focused science, technology, engineering,…
NASA Astrophysics Data System (ADS)
Caracas, R.; Stewart, S. T.
2018-05-01
We employ large-scale first-principles molecular dynamics simulations to understand the physical and chemical behavior of the evolution of the molten protolunar disk from its formation all the way to the crystallization of the magma ocean.
Precipitation Dynamical Downscaling Over the Great Plains
NASA Astrophysics Data System (ADS)
Hu, Xiao-Ming; Xue, Ming; McPherson, Renee A.; Martin, Elinor; Rosendahl, Derek H.; Qiao, Lei
2018-02-01
Detailed, regional climate projections, particularly for precipitation, are critical for many applications. Accurate precipitation downscaling in the United States Great Plains remains a great challenge for most Regional Climate Models, particularly for warm months. Most previous dynamic downscaling simulations significantly underestimate warm-season precipitation in the region. This study aims to achieve better precipitation downscaling in the Great Plains with the Weather Research and Forecasting (WRF) model. To this end, WRF simulations with different physics schemes and nudging strategies are first conducted for a representative warm season. Results show that different cumulus schemes lead to more pronounced differences in simulated precipitation than other tested physics schemes. Simply choosing different physics schemes is not enough to alleviate the dry bias over the southern Great Plains, which is related to an anticyclonic circulation anomaly over the central and western parts of the continental U.S. in the simulations. Spectral nudging emerges as an effective solution for alleviating the precipitation bias. Spectral nudging ensures that large- and synoptic-scale circulations are faithfully reproduced while still allowing WRF to develop small-scale dynamics, thus effectively suppressing the large-scale circulation anomaly in the downscaling. As a result, better precipitation downscaling is achieved. With the carefully validated configurations, WRF downscaling is conducted for 1980-2015. The downscaling captures well the spatial distribution of climatological monthly precipitation and the monthly/yearly variability, showing improvement over at least two previously published precipitation downscaling studies. With the improved precipitation downscaling, a better hydrological simulation over the trans-state Oologah watershed is also achieved.
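Spectral nudging as described, relaxing only the large-scale Fourier modes of the regional model toward the driving fields while leaving small scales free, can be sketched in a toy 2D form (the cutoff wavenumber and relaxation strength are arbitrary illustrative values, not WRF's):

```python
import numpy as np

def spectral_nudge(field, driver, kmax=3, alpha=0.1):
    """One spectral-nudging step: relax only the low-wavenumber Fourier
    modes of `field` toward `driver`; higher wavenumbers are untouched.
    Illustrative 2D sketch on a periodic grid."""
    F, D = np.fft.fft2(field), np.fft.fft2(driver)
    ky = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    kx = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    large = (np.abs(ky)[:, None] <= kmax) & (np.abs(kx)[None, :] <= kmax)
    F[large] += alpha * (D[large] - F[large])   # nudge large scales only
    return np.real(np.fft.ifft2(F))
```

Repeating such steps keeps the synoptic-scale circulation tied to the driving analysis while the regional model's own small-scale dynamics develop freely.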
NASA Astrophysics Data System (ADS)
Martin, G. M.; Peyrillé, P.; Roehrig, R.; Rio, C.; Caian, M.; Bellon, G.; Codron, F.; Lafore, J.-P.; Poan, D. E.; Idelkadi, A.
2017-03-01
Vertical and horizontal distributions of diabatic heating in the West African monsoon (WAM) region as simulated by four model families are analyzed in order to assess the physical processes that affect the WAM circulation. For each model family, atmosphere-only runs of their CMIP5 configurations are compared with more recent configurations which are on the development path toward CMIP6. The various configurations of these models exhibit significant differences in their heating/moistening profiles, related to the different representation of physical processes such as boundary layer mixing, convection, large-scale condensation and radiative heating/cooling. There are also significant differences in the models' simulation of WAM rainfall patterns and circulations. The weaker the radiative cooling in the Saharan region, the larger the ascent in the rainband and the more intense the monsoon flow, while the latitude of the rainband is related to heating in the Gulf of Guinea region and on the northern side of the Saharan heat low. Overall, this work illustrates the difficulty experienced by current climate models in representing the characteristics of monsoon systems, but also that we can still use them to understand the interactions between local subgrid physical processes and the WAM circulation. Moreover, our conclusions regarding the relationship between errors in the large-scale circulation of the WAM and the structure of the heating by small-scale processes will motivate future studies and model development.
Temperature structure and kinematics of the IRDC G035.39-00.33
NASA Astrophysics Data System (ADS)
Sokolov, Vlas; Wang, Ke; Pineda, Jaime E.; Caselli, Paola; Henshaw, Jonathan D.; Tan, Jonathan C.; Fontani, Francesco; Jiménez-Serra, Izaskun; Lim, Wanggi
2017-10-01
Aims: Infrared dark clouds represent the earliest stages of high-mass star formation. Detailed observations of their physical conditions on all physical scales are required to improve our understanding of their role in fueling star formation. Methods: We investigate the large-scale structure of the IRDC G035.39-00.33, probing the dense gas with the classical ammonia thermometer. This allows us to put reliable constraints on the temperature of the extended, pc-scale dense gas reservoir and to probe the magnitude of its non-thermal motions. Available far-infrared observations can be used in tandem with the observed ammonia emission to estimate the total gas mass contained in G035.39-00.33. Results: We identify a main velocity component as a prominent filament, manifested as an ammonia emission intensity ridge spanning more than 6 pc, consistent with the previous studies on the Northern part of the cloud. A number of additional line-of-sight components are found, and a large-scale linear velocity gradient of 0.2km s-1 pc-1 is found along the ridge of the IRDC. In contrast to the dust temperature map, an ammonia-derived kinetic temperature map, presented for the entirety of the cloud, reveals local temperature enhancements towards the massive protostellar cores. We show that without properly accounting for the line of sight contamination, the dust temperature is 2-3 K larger than the gas temperature measured with NH3. Conclusions: While both the large-scale kinematics and temperature structure are consistent with that of starless dark filaments, the kinetic gas temperature profile on smaller scales is suggestive of tracing the heating mechanism coincident with the locations of massive protostellar cores. The reduced spectral cubes (FITS format) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A133
The Equations of Oceanic Motions
NASA Astrophysics Data System (ADS)
Müller, Peter
2006-10-01
Modeling and prediction of oceanographic phenomena and climate is based on the integration of dynamic equations. The Equations of Oceanic Motions derives and systematically classifies the most common dynamic equations used in physical oceanography, from large scale thermohaline circulations to those governing small scale motions and turbulence. After establishing the basic dynamical equations that describe all oceanic motions, Müller then derives approximate equations, emphasizing the assumptions made and physical processes eliminated. He distinguishes between geometric, thermodynamic and dynamic approximations and between the acoustic, gravity, vortical and temperature-salinity modes of motion. Basic concepts and formulae of equilibrium thermodynamics, vector and tensor calculus, curvilinear coordinate systems, and the kinematics of fluid motion and wave propagation are covered in appendices. Providing the basic theoretical background for graduate students and researchers of physical oceanography and climate science, this book will serve as both a comprehensive text and an essential reference.
Biological Physics major as a means to stimulate an undergraduate physics program
NASA Astrophysics Data System (ADS)
Jaeger, Herbert; Eid, Khalid; Yarrison-Rice, Jan
2013-03-01
In an effort to stress the cross-disciplinary nature of modern physics we added a Biological Physics major. Drawing from coursework in physics, biology, chemistry, mathematics, and related disciplines, it combines a broad curriculum with physical and mathematical rigor in preparation for careers in biophysics, medical physics, and biomedical engineering. Biological Physics offers a new path of studies to a large pool of life science students. We hope to grow our physics majors from 70-80 to more than 100 students and boost our graduation rate from the mid-teens to the mid-twenties. The new major brought about a revision of our sophomore curriculum to make room for modern topics without sidelining fundamentals. As a result, we split our 1-semester long Contemporary Physics course (4 cr hrs) into a year-long sequence Contemporary Physics Foundations and Contemporary Physics Frontiers (both 3 cr hrs). Foundations starts with relativity, then focuses on 4 quantum mechanics topics: wells, spin 1/2, oscillators, and hydrogen. Throughout the course applications are woven in whenever the opportunity arises, e.g. magnetism and NMR with spin 1/2. The following semester Frontiers explores scientific principles and technological advances that make quantum science and resulting technologies different from the large scale. Frontiers covers enabling techniques from atomic, molecular, condensed matter, and particle physics, as well as advances in nanotechnology, quantum optics, and biophysics.
Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts
NASA Astrophysics Data System (ADS)
Wang, M.; Kamarianakis, Y.; Georgescu, M.
2017-12-01
A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 Tb), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.
NASA Astrophysics Data System (ADS)
St-Louis, Nicole
2015-08-01
The winds of hot, luminous stars are known to show small- but also large-scale density structures. Ultimately, these departures from spherical symmetry are important for the understanding of the loss of angular momentum from the star and are crucial in determining its rotation rate. There are many observational signatures of these departures from a uniform and spherically symmetric outflow. This poster will present results from spectroscopic and polarimetric observations of Wolf-Rayet stars, the descendants of massive O stars, that reveal large-scale asymmetries in their winds and discuss what can be learned about the structure of these winds and about the physical mechanism responsible for generating them. Very little is known about the rotation rates of these small, He-burning stars, which are the direct progenitors of at least some supernova explosions. If enough angular momentum is retained in the core, some may also very well be the progenitors of long gamma-ray bursts.
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes which were necessary for our algorithms to achieve good performance of our multi-threaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
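The quantities such a sufficient condition is built from can be illustrated with a small numerical sketch. The snippet below (a hypothetical helper, not the paper's actual margin formula, which also involves the nonlinear dynamics and the uncertainty statistics) constructs the Laplacian of a nominal nearest-neighbour ring network and extracts the eigenvalues that typically bound synchronizability:

```python
import numpy as np

def ring_laplacian(n, k):
    """Laplacian of a ring of n nodes, each coupled to its k nearest
    neighbours on each side (the nominal interconnection topology)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

# Eigenvalues of the nominal Laplacian: lambda_2 (algebraic connectivity)
# and lambda_max are the standard ingredients of synchronizability bounds.
L = ring_laplacian(20, 2)
eig = np.sort(np.linalg.eigvalsh(L))
lambda2, lambda_max = eig[1], eig[-1]
ratio = lambda_max / lambda2  # smaller ratio -> easier to synchronize
```

Repeating this for different numbers of neighbours k gives a concrete way to explore the topology trade-off the abstract refers to.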
Seismic and source characteristics of large chemical explosions. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adushkin, V.V.; Kostuchenko, V.N.; Pernik, L.M.
From the very beginning of its establishment in 1947, the Institute for Dynamics of the Geospheres RAS (formerly the Special Sector of the Institute for Physics of the Earth, RAS) has provided scientific observations of the effects of nuclear explosions, as well as large-scale detonations of high explosives, on the environment. This report presents principal results of instrumental observations obtained from various large-scale chemical explosions conducted in the former Soviet Union between 1957 and 1989. Considering the principal aim of the work, tamped and equivalent chemical explosions have been selected with total weights from several hundred to several thousand tons. In particular, the selected explosions were aimed at studying scaling laws for excavation explosions and the seismic effect of tamped explosions, and at dam construction for hydropower stations and soil melioration. Instrumental data on surface explosions of total weight in the same range, aimed at testing military equipment and special objects, are not included.
Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.
Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan
2017-08-15
Neuro-biology inspired Spiking Neural Network (SNN) enables efficient learning and recognition tasks. To achieve a large scale network akin to biology, a power and area efficient electronic neuron is essential. Earlier, we had demonstrated an LIF neuron by a novel 4-terminal impact ionization based n+/p/n+ with an extended gate (gated-INPN) device by physics simulation. Excellent improvement in area and power compared to conventional analog circuit implementations was observed. In this paper, we propose and experimentally demonstrate a compact conventional 3-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) to replace the 4-terminal gated-INPN device. Impact ionization (II) induced floating body effect in SOI-MOSFET is used to capture LIF neuron behavior to demonstrate spiking frequency dependence on input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10^11 neuron) large neural networks.
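The leaky integrate-and-fire behavior the device reproduces can be sketched with the textbook LIF model; the time constant, threshold, and input values below are illustrative placeholders, not measured device parameters:

```python
def lif_spike_count(i_in, t_sim=1.0, dt=1e-4, tau=0.02,
                    v_th=1.0, v_reset=0.0):
    """Count spikes of a leaky integrate-and-fire neuron obeying
    dV/dt = (-V + i_in) / tau, with threshold-and-reset dynamics."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += dt * (-v + i_in) / tau   # forward-Euler leaky integration
        if v >= v_th:                 # fire and reset
            v = v_reset
            spikes += 1
    return spikes

# Spiking frequency grows with input drive -- the input dependence the
# floating-body MOSFET neuron is shown to capture.
rates = [lif_spike_count(i) for i in (1.2, 2.0, 4.0)]
```

Sub-threshold inputs (i_in below v_th) never fire, while stronger drive shortens the charge-up time between spikes.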
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
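The three stages above can be sketched with a toy regularized extreme learning machine that maps hop-count vectors to coordinates. Everything here is a minimal illustration under stated assumptions: hop counts are faked from distances, the network geometry is random, and the helper names are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deployment: anchors with known positions and unknown nodes.
anchors = rng.uniform(0, 10, size=(20, 2))
nodes = rng.uniform(0, 10, size=(100, 2))

def hops(points, anchors, hop_len=1.5):
    """Crude hop-count proxy: real systems would count routing hops."""
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)
    return np.ceil(d / hop_len)

def train_elm(X, Y, n_hidden=60, lam=1e-2):
    """Extreme learning machine: random hidden layer, ridge-regularized
    (hence 'regularized ELM') linear read-out solved in closed form."""
    s = np.abs(X).max()                       # keep tanh out of saturation
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    feats = lambda Z: np.tanh(Z / s @ W + b)
    H = feats(X)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return lambda Xq: feats(Xq) @ beta

# Stage 2 (modeling) and stage 3 (location estimation) on training data.
model = train_elm(hops(nodes, anchors), nodes)
pred = model(hops(nodes, anchors))
rmse = np.sqrt(np.mean(np.sum((pred - nodes) ** 2, axis=1)))
```

Because the read-out is a closed-form ridge solve, training cost stays low, which echoes the low computational cost claimed for the method.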
Herault, J; Rincon, F; Cossu, C; Lesur, G; Ogilvie, G I; Longaretti, P-Y
2011-09-01
The nature of dynamo action in shear flows prone to magnetohydrodynamic instabilities is investigated using the magnetorotational dynamo in Keplerian shear flow as a prototype problem. Using direct numerical simulations and Newton's method, we compute an exact time-periodic magnetorotational dynamo solution to three-dimensional dissipative incompressible magnetohydrodynamic equations with rotation and shear. We discuss the physical mechanism behind the cycle and show that it results from a combination of linear and nonlinear interactions between a large-scale axisymmetric toroidal magnetic field and nonaxisymmetric perturbations amplified by the magnetorotational instability. We demonstrate that this large-scale dynamo mechanism is overall intrinsically nonlinear and not reducible to the standard mean-field dynamo formalism. Our results therefore provide clear evidence for a generic nonlinear generation mechanism of time-dependent coherent large-scale magnetic fields in shear flows and call for new theoretical dynamo models. These findings may offer important clues to understanding the transitional and statistical properties of subcritical magnetorotational turbulence.
Progress in fast, accurate multi-scale climate simulations
Collins, W. D.; Johansen, H.; Evans, K. J.; ...
2015-06-01
We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.
Land Use, Livelihoods, Vulnerabilities, and Resilience in Coastal Bangladesh
NASA Astrophysics Data System (ADS)
Gilligan, J. M.; Ackerly, B.; Goodbred, S. L., Jr.; Wilson, C.
2014-12-01
The densely populated, low-lying coast of Bangladesh is famously associated with vulnerability to sea-level rise, storms, and flooding. Simultaneously, land-use change has significantly altered local sediment transport, causing elevation loss and degradation of drainage. The rapid growth of shrimp aquaculture has also affected soil chemistry in former agricultural areas and the stock of riverine fisheries through intense larval harvesting. To understand the net impact of these environmental changes on the region's communities, it is necessary to examine interactions across scale - from externally driven large scale environmental change to smaller scale, but often more intense, local change - and also between the physical environment and social, political, and economic conditions. We report on a study of interactions between changing communities and changing environment in coastal Bangladesh, exploring the role of societal and physical factors in shaping the different outcomes and their effects on people's lives. Land reclamation projects in the 1960s surrounded intertidal islands with embankments. This allowed rice farming to expand, but also produced significant elevation loss, which rendered many islands vulnerable to waterlogging and flooding from storm surges. The advent of large-scale shrimp aquaculture added environmental, economic, social, and political stresses, but also brought much export revenue to a developing nation. Locally, attempts to remedy environmental stresses have produced mixed results, with similar measures succeeding in some communities and failing in others. In this context, we find that people are continually adapting to changing opportunities and constraints for food, housing, and income. Niches that support different livelihood activities emerge and dwindle, and their occupants' desires affect the political context. 
Understanding and successfully responding to the impacts of environmental change requires understanding not only the physical environment, but also the human livelihoods, interpersonal interactions, and human-environmental interactions within a socio-ecological system.
NASA Astrophysics Data System (ADS)
Ukawa, Akira
1998-05-01
The CP-PACS computer is a massively parallel computer consisting of 2048 processing units and having a peak speed of 614 GFLOPS and 128 GByte of main memory. It was developed over the four years from 1992 to 1996 at the Center for Computational Physics, University of Tsukuba, for large-scale numerical simulations in computational physics, especially those of lattice QCD. The CP-PACS computer has been in full operation for physics computations since October 1996. In this article we describe the chronology of the development, the hardware and software characteristics of the computer, and its performance for lattice QCD simulations.
Cosmological consistency tests of gravity theory and cosmic acceleration
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2017-01-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.
On Efficient Multigrid Methods for Materials Processing Flows with Small Particles
NASA Technical Reports Server (NTRS)
Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael
2004-01-01
Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by employing interactively different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
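The source of multigrid's efficiency, smoothing on a fine grid while correcting smooth error on a coarse one, can be shown with a minimal two-grid V-cycle for the 1D Poisson problem -u'' = f. This is a generic textbook scheme (weighted-Jacobi smoothing, injection restriction, linear prolongation), a sketch only, not the particle-flow algorithm the abstract describes:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing sweeps for -u'' = f (Dirichlet BCs)."""
    u = u.copy()
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def coarse_solve(rc, hc):
    """Exact solve of the coarse-grid correction equation A_c e_c = r_c."""
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc * hc)
    e = np.zeros_like(rc)
    e[1:-1] = np.linalg.solve(A, rc[1:-1])
    return e

def v_cycle(u, f, h):
    u = jacobi(u, f, h)                                    # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)  # residual
    ec = coarse_solve(r[::2].copy(), 2 * h)                # restrict + solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return jacobi(u + e, f, h)                             # post-smooth

n, h = 65, 1 / 64
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)   # manufactured problem: u = sin(pi*x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

A few cycles drive the algebraic error below the discretization error, which is the behavior that makes multigrid attractive when length scales are disparate.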
Montes-Perez, J; Cruz-Vera, A; Herrera, J N
2011-12-01
This work presents full analytic expressions for the thermodynamic properties and the static structure factor of a hard-sphere plus 1-Yukawa fluid within the mean spherical approximation. To obtain these properties of the Yukawa-type fluid analytically, it was necessary to solve a fourth-order equation for the scaling parameter on a large scale. The physical root of this equation was determined by imposing physical conditions. The results of this work build on the seminal papers of Blum and Høye. We show that it is not necessary to use a series expansion to solve the equation for the scaling parameter. We applied our theoretical result to find the thermodynamic properties and the static structure factor of krypton. Our results are in good agreement with those obtained experimentally or by Monte Carlo simulation.
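The step of selecting the physical root of a quartic can be sketched generically. The coefficients below are arbitrary placeholders, not the actual MSA closure coefficients of Blum and Høye, and the admissibility condition shown (real and non-negative) merely stands in for the physical conditions imposed in the paper:

```python
import numpy as np

# Hypothetical quartic c4*G^4 + c3*G^3 + c2*G^2 + c1*G + c0 = 0 in the
# scaling parameter G; the coefficients are illustrative only.
coeffs = [1.0, -2.0, -5.0, 6.0, 0.5]
roots = np.roots(coeffs)

# Discard complex roots, then impose the admissibility condition:
# keep the smallest real non-negative root.
real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
physical = real[real >= 0][0]
```

Solving the quartic numerically and filtering by physical conditions avoids any series expansion in the scaling parameter, in the spirit of the closed-form route the abstract advocates.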
Pangle, Luke A.; DeLong, Stephen B.; Abramson, Nate; Adams, John; Barron-Gafford, Greg A.; Breshears, David D.; Brooks, Paul D.; Chorover, Jon; Dietrich, William E.; Dontsova, Katerina; Durcik, Matej; Espeleta, Javier; Ferré, T.P.A.; Ferriere, Regis; Henderson, Whitney; Hunt, Edward A.; Huxman, Travis E.; Millar, David; Murphy, Brendan; Niu, Guo-Yue; Pavao-Zuckerman, Mitch; Pelletier, Jon D.; Rasmussen, Craig; Ruiz, Joaquin; Saleska, Scott; Schaap, Marcel; Sibayan, Michael; Troch, Peter A.; Tuller, Markus; van Haren, Joost; Zeng, Xubin
2015-01-01
Zero-order drainage basins, and their constituent hillslopes, are the fundamental geomorphic unit comprising much of Earth's uplands. The convergent topography of these landscapes generates spatially variable substrate and moisture content, facilitating biological diversity and influencing how the landscape filters precipitation and sequesters atmospheric carbon dioxide. In light of these significant ecosystem services, refining our understanding of how these functions are affected by landscape evolution, weather variability, and long-term climate change is imperative. In this paper we introduce the Landscape Evolution Observatory (LEO): a large-scale controllable infrastructure consisting of three replicated artificial landscapes (each 330 m2 surface area) within the climate-controlled Biosphere 2 facility in Arizona, USA. At LEO, experimental manipulation of rainfall, air temperature, relative humidity, and wind speed are possible at unprecedented scale. The Landscape Evolution Observatory was designed as a community resource to advance understanding of how topography, physical and chemical properties of soil, and biological communities coevolve, and how this coevolution affects water, carbon, and energy cycles at multiple spatial scales. With well-defined boundary conditions and an extensive network of sensors and samplers, LEO enables an iterative scientific approach that includes numerical model development and virtual experimentation, physical experimentation, data analysis, and model refinement. We plan to engage the broader scientific community through public dissemination of data from LEO, collaborative experimental design, and community-based model development.
Testing The Scale-up Approach To Introductory Astronomy
NASA Astrophysics Data System (ADS)
Kregenow, Julia M.; Keller, L.; Rogers, M.; Romero, D.
2008-09-01
The Ithaca College physics department has begun transforming our general education astronomy courses from the previous lecture-based format into hands-on, active-learning courses. We are using the SCALE-UP model (Student Centered Activities for Large Enrollment University Programs) pioneered at North Carolina State University. Expanding on the successes of Studio Physics (developed at RPI), which exchanges traditionally separate lecture/recitation/laboratory sessions for one dynamic, active-learning environment for approximately 40 students, SCALE-UP extends this model to accommodate 100+ students by using large round tables creating naturally smaller groups of students. Classes meet three times per week, with each class blending lecture, hands-on activities, group problem solving, and the use of student polling devices. We are testing whether this mode of teaching astronomy will lead to a better understanding of astronomy and the nature of science. Applying this approach in both the SCALE-UP classroom (90 students) and a traditional lecture classroom (45 students) in spring 2008, we report on our early results and lessons learned after one semester. We also discuss some of our lingering implementation questions and issues, such as: whether to use the same or different instructors in two parallel sections, requiring textbook reading, reading quizzes, on-line homework and activities, how much math to include, development of hands-on activities, and culling the typically overpacked intro astronomy syllabus.
NASA Astrophysics Data System (ADS)
Sotiropoulos, Fotis; Khosronejad, Ali
2016-02-01
Sand waves arise in subaqueous and aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision for our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation and summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism for destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven simulation-based engineering science approach for site-specific simulations of river flooding.
Application of Landscape Mosaic Technology to Complement Coral Reef Resource Mapping and Monitoring
2010-10-01
irregular shapes pose a challenge for divers trying to delimit live tissue boundaries. Future improvements in the 3D representation of benthic mosaics...benthic habitats can be especially challenging when the spatial extent of injuries exceeds tens of square meters. These large injuries are often too...the impacts of severe physical disturbance on coral reefs can be especially challenging when large-scale modifications to the reef structure takes
Amplification of large scale magnetic fields in a decaying MHD system
NASA Astrophysics Data System (ADS)
Park, Kiwan
2017-10-01
Dynamo theory explains the amplification of magnetic fields in conducting fluids (plasmas) driven by continuous external energy. It is known that continuous nonhelical kinetic or magnetic energy amplifies the small-scale magnetic field, while helical energy, instability, or shear with a rotation effect amplifies the large-scale magnetic field. However, it was recently reported that decaying magnetic energy, independent of helicity or instability, can generate a large-scale magnetic field. This phenomenon may look somewhat contradictory to conventional dynamo theory, but it gives us some clues to the fundamental mechanism of energy transfer in magnetized conducting fluids. It also implies that an ephemeral astrophysical event emitting magnetic and kinetic energy can be a direct cause of the large-scale magnetic field observed in space. As of now, the exact physical mechanism is not yet understood in spite of several numerical results. The plasma motion coupled with a nearly conserved vector potential in the magnetohydrodynamic (MHD) system may transfer magnetic energy to the large scale. Also, the intrinsic property of the scaling-invariant MHD equation may decide the direction of energy transfer. In this paper we present simulation results of inversely transferred helical and nonhelical energy in a decaying MHD system. We introduce a field-structure model based on the MHD equation to show that the transfer of magnetic energy is essentially bidirectional, depending on the plasma motion and initial energy distribution. We then derive the α coefficient algebraically, in line with the field-structure model, to explain how the large-scale magnetic field is induced by helical energy in the system regardless of an external forcing source. For the algebraic analysis of nonhelical magnetic energy, we use the eddy-damped quasinormal Markovian approximation to show the inverse transfer of magnetic energy.
Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7
NASA Astrophysics Data System (ADS)
Walker, R.
1984-12-01
The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model, using mechanical and structural considerations rather than purely acoustic absorption criteria, are described and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. The results also show that, at least for the relatively well-understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large-scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details, because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.
NASA Astrophysics Data System (ADS)
Franci, Luca; Landi, Simone; Verdini, Andrea; Matteini, Lorenzo; Hellinger, Petr
2018-01-01
Properties of the turbulent cascade from fluid to kinetic scales in collisionless plasmas are investigated by means of large-size 3D hybrid (fluid electrons, kinetic protons) particle-in-cell simulations. Initially isotropic Alfvénic fluctuations rapidly develop a strongly anisotropic turbulent cascade, mainly in the direction perpendicular to the ambient magnetic field. The omnidirectional magnetic field spectrum shows a double power-law behavior over almost two decades in wavenumber, with a Kolmogorov-like index at large scales, a spectral break around ion scales, and a steepening at sub-ion scales. Power laws are also observed in the spectra of the ion bulk velocity, density, and electric field, at both magnetohydrodynamic (MHD) and kinetic scales. Despite the complex structure, the omnidirectional spectra of all fields at ion and sub-ion scales are in remarkable quantitative agreement with those of a 2D simulation with similar physical parameters. This provides a partial, a posteriori validation of the 2D approximation at kinetic scales. Conversely, at MHD scales, the spectra of the density and of the velocity (and, consequently, of the electric field) exhibit differences between the 2D and 3D cases. Although they can be partly ascribed to the lower spatial resolution, the main reason is likely the larger importance of compressible effects in the full 3D geometry. Our findings are also in remarkable quantitative agreement with solar wind observations.
Examining Physics Career Interests: Recruitment and Persistence into College
NASA Astrophysics Data System (ADS)
Lock, R. M.; Hazari, Z.; Sadler, P. M.; Sonnert, G.
2012-03-01
Compared to the undergraduate population, the number of students obtaining physics degrees has been declining since the 1960s. This trend continues despite the increasing number of students taking introductory physics courses in high school and college. Our work uses an ex-post facto design to study the factors that influence students' decision to pursue a career in physics at the beginning of college. These factors include high school physics classroom experiences, other science-related experiences, and students' career motivations. The data used in this study is drawn from the Persistence Research in Science and Engineering (PRiSE) Project, a large-scale study that surveyed a nationally representative sample of college/university students enrolled in introductory English courses about their interests and prior experiences in science.
The cosmic ray muon tomography facility based on large scale MRPC detectors
NASA Astrophysics Data System (ADS)
Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping
2015-06-01
Cosmic ray muon tomography is a novel technology for detecting high-Z material. A prototype of TUMUTY, with 73.6 cm × 73.6 cm large-scale position-sensitive MRPC detectors, has been developed and is introduced in this paper. Three test kits have been tested and images are reconstructed using a maximum a posteriori (MAP) algorithm. The reconstruction results show that the prototype is working well and that objects with complex structure and small size (20 mm) can be imaged with it, while high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.
Fundamental tests of galaxy formation theory
NASA Technical Reports Server (NTRS)
Silk, J.
1982-01-01
The structure of the universe as an environment where traces exist of the seed fluctuations from which galaxies formed is studied. The evolution of the density fluctuation modes that led to the eventual formation of matter inhomogeneities is reviewed. How the resulting clumps developed into galaxies and galaxy clusters, acquiring characteristic masses, velocity dispersions, and metallicities, is discussed. Tests are described that utilize the large scale structure of the universe, including the dynamics of the local supercluster, the large scale matter distribution, and the anisotropy of the cosmic background radiation, to probe the earliest accessible stages of evolution. Finally, the role of particle physics is described with regard to its observable implications for galaxy formation.
Barrett, Lisa Feldman; Satpute, Ajay
2013-01-01
Understanding how a human brain creates a human mind ultimately depends on mapping psychological categories and concepts to physical measurements of neural response. Although it has long been assumed that emotional, social, and cognitive phenomena are realized in the operations of separate brain regions or brain networks, we demonstrate that it is possible to understand the body of neuroimaging evidence using a framework that relies on domain general, distributed structure-function mappings. We review current research in affective and social neuroscience and argue that the emerging science of large-scale intrinsic brain networks provides a coherent framework for a domain-general functional architecture of the human brain. PMID:23352202
A new multi-scale geomorphological landscape GIS for the Netherlands
NASA Astrophysics Data System (ADS)
Weerts, Henk; Kosian, Menne; Baas, Henk; Smit, Bjorn
2013-04-01
At present, the Cultural Heritage Agency of the Netherlands is developing a nationwide landscape Geographical Information System (GIS). In this new conceptual approach, the Agency brings together several multi-scale landscape classifications in a single GIS. The natural physical landscapes lie at the basis of this GIS, because these landscapes provide the natural boundary conditions for anthropogenic use. At the local scale, a nationwide digital geomorphological GIS is available in the Netherlands. This map, originally compiled at 1:50,000 from the late 1970s to the 1990s, is based on geomorphometrical (observable and measurable in the field), geomorphological, lithological and geochronological criteria. When used at a national scale, the legend of this comprehensive geomorphological map is very complex, which hampers its use in, e.g., planning practice or predictive archaeology. At the national scale, several landscape classifications have been in use in the Netherlands since the early 1950s, typically with on the order of 10-15 landscape units for the entire country. A widely used regional predictive archaeological classification has 13 archaeo-landscapes. All these classifications have been defined "top-down", and their actual content and boundaries have only been broadly defined. Thus, these classifications have little or no meaning at a local scale. We have tried to combine the local scale with the national scale. To do so, we first defined national physical geographical regions based on the new 2010 national geological map at 1:500,000. We also ensured a reference to the European LANMAP2 classification. We arrived at 20 landscape units at the national scale, based on (1) genesis, (2) large-scale geomorphology, (3) lithology of the shallow sub-surface and (4) age. These criteria were chosen because the genesis of the landscape largely determines its (scale of) morphology and lithology, which in turn determine hydrological conditions.
All together, they define the natural boundary conditions for anthropogenic use. All units have been defined, mapped and described based on these criteria. This enables the link with the European LANMAP2 GIS. The unit "Till-plateau sand region" for instance runs deep into Germany and even Poland. At the local scale, the boundaries of the national units can be defined and precisely mapped by linking them to the 1:50,000 geomorphological map polygons. Each national unit consists of a typical assemblage of local geomorphological units. So, the newly developed natural physical landscape map layer can be used from the local to the European scale.
Biomimetic Phases of Microtubule-Motor Mixtures
NASA Astrophysics Data System (ADS)
Ross, Jennifer
2014-03-01
We try to determine the universal principles of organization at the molecular scale that give rise to architecture at the cellular scale. We are specifically interested in the organization of the microtubule cytoskeleton, a rigid yet versatile network in most cell types. Microtubules in the cell are organized by motor proteins and crosslinkers. This work applies the ideas of statistical mechanics and condensed matter physics to the non-equilibrium pattern formation behind intracellular organization, using the microtubule cytoskeleton as the building blocks. We examine these processes in a bottom-up manner by adding increasingly complex protein actors into the system. Our systematic experiments expose nature's laws for organization and have large impacts on biology, as well as illuminating new frontiers of non-equilibrium physics.
Numerical simulation of the formation of a spiral galaxy
NASA Astrophysics Data System (ADS)
Williams, P. R.; Nelson, A. H.
2001-08-01
A simulation is described in which the numerical galaxy formed compares favourably in every measurable respect with contemporary bright spiral galaxies, including the formation of a distinct stellar bulge and large scale spiral arm shocks in the gas component. This is achieved in spite of the fact that only idealized proto-galactic initial conditions were used, and only simple phenomenological prescriptions for the physics of the interstellar medium (ISM) and star formation were implemented. In light of the emphasis in recent literature on the importance of the link between galaxy formation and models of the universe on cosmological scales, on the details of the physics of the ISM and star formation, and on apparent problems therein, the implications of this result are discussed.
Identifying Country-Specific Cultures of Physics Education: A Differential Item Functioning Approach
ERIC Educational Resources Information Center
Mesic, Vanes
2012-01-01
In international large-scale assessments of educational outcomes, student achievement is often represented by unidimensional constructs. This approach allows for drawing general conclusions about country rankings with respect to the given achievement measure, but it typically does not provide specific diagnostic information which is necessary for…
Dynamic effects of biochar concentration and particle size on hydraulic properties of sand
USDA-ARS?s Scientific Manuscript database
Large-scale application of biochar has been promoted as a strategy for reclaiming degraded soils and conserving natural landscapes because of biochar potentials to alter the soil biogeochemical and physical properties and improve soil quality. Several studies have reported that biochar amendment at ...
Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development
ERIC Educational Resources Information Center
Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.
2010-01-01
This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…
Control of rabbit myxomatosis in Poland.
Górski, J; Mizak, B; Chrobocińska, M
1994-09-01
The authors present an epizootiological analysis of myxomatosis in Poland. The biological, physical and chemical properties of virus strains used for the production and control of 'Myxovac M' vaccine are discussed. The long-term stability, safety and efficacy of the vaccine are demonstrated. Laboratory experiments were confirmed in large-scale field observations.
Nonvolatile Resistive Switching and Physical Mechanism in LaCrO3 Thin Films
NASA Astrophysics Data System (ADS)
Hu, Wan-Jing; Hu, Ling; Wei, Ren-Huai; Tang, Xian-Wu; Song, Wen-Hai; Dai, Jian-Ming; Zhu, Xue-Bin; Sun, Yu-Ping
2018-04-01
Abstract not available. Supported by the Joint Funds of the National Natural Science Foundation of China and the Chinese Academy of Sciences' Large-Scale Scientific Facility under Grant No U1532149, and the National Basic Research Program of China under Grant No 2014CB931704.
Evidence of Ubiquitous Large-Amplitude Alfven waves in the Global Field-Aligned Current System
NASA Astrophysics Data System (ADS)
Pakhotin, I.; Mann, I.; Lysak, R. L.; Knudsen, D. J.; Burchill, J. K.; Gjerloev, J. W.; Rae, J.; Forsyth, C.; Murphy, K. R.; Miles, D.; Ozeke, L.; Balasis, G.
2017-12-01
Large-amplitude non-stationarities have been observed during an analysis of a quiescent field-aligned current system crossing using the multi-satellite Swarm constellation. Using simultaneous electric and magnetic field measurements, it has been determined that these non-stationarities, reaching tens to hundreds of nanoteslas, are Alfvenic in nature. Evidence suggests that these large-amplitude Alfven waves are a ubiquitous, fundamentally inherent feature of, and exist in a continuum with, larger-scale field-aligned currents, and that both can be explained using the same physical paradigm of reflected Alfven waves.
The Panchromatic Comparative Exoplanetary Treasury Program
NASA Astrophysics Data System (ADS)
Sing, David
2016-10-01
HST has played the definitive role in the characterization of exoplanets, and from the first planets available we have learned that their atmospheres are incredibly diverse. The large number of transiting planets now available has prompted a new era of atmospheric studies, where wide-scale comparative planetology is now possible. The atmospheric chemistry of cloud/haze formation and atmospheric mass loss are major outstanding issues in the field of exoplanets, and we seek to make progress by gaining insight into their underlying physical processes through comparative studies. Here we propose to use Hubble's full spectroscopic capabilities to produce the first large-scale, simultaneous UVOIR comparative study of exoplanets. With full wavelength coverage, an entire planet's atmosphere can be probed simultaneously, and with sufficient numbers of planets we can statistically compare their features with physical parameters for the first time. This panchromatic program will build a lasting HST legacy, providing the UV and blue-optical spectra unavailable to JWST. From these observations, chemistry over a wide range of physical environments will be probed, from the hottest condensates to much cooler planets where photochemical hazes could be present. Constraints on aerosol size and composition will help unlock our understanding of clouds and how they are suspended at such high altitudes. Notably, there have been no large transiting UV HST programs, and this panchromatic program will provide a fundamental legacy contribution to the study of atmospheric escape from small exoplanets, where the mass loss can be significant and have a major impact on the evolution of the planet itself.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
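The three FAC scale classes above are defined by latitudinal width thresholds (<1° small-scale, 1-2° mesoscale, larger widths large-scale). A minimal sketch of that binning, assuming a hypothetical `classify_fac_scale` helper not taken from the study:

```python
# Illustrative binning of a field-aligned current sheet by latitudinal width,
# using the degree thresholds quoted in the abstract. The function name and
# interface are assumptions for illustration, not the authors' code.
def classify_fac_scale(width_deg: float) -> str:
    """Return the scale class for a FAC sheet of the given latitudinal width (degrees)."""
    if width_deg < 1.0:
        return "small-scale"   # ~10-150 km
    elif width_deg <= 2.0:
        return "mesoscale"     # ~150-250 km
    else:
        return "large-scale"   # >250 km

print(classify_fac_scale(0.5))   # small-scale
print(classify_fac_scale(1.5))   # mesoscale
print(classify_fac_scale(3.0))   # large-scale
```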
NASA Astrophysics Data System (ADS)
Thornton, Ronald
2010-10-01
Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). An active learning environment is often difficult to achieve in lecture sessions. This presentation will demonstrate the use of sequences of Interactive Lecture Demonstrations (ILDs) that use real experiments often involving real-time data collection and display combined with student interaction to create an active learning environment in large or small lecture classes. Interactive lecture demonstrations will be done in the area of mechanics using real-time motion probes and the Visualizer. A video tape of students involved in interactive lecture demonstrations will be shown. The results of a number of research studies at various institutions (including international) to measure the effectiveness of ILDs and guided inquiry conceptual laboratories will be presented.
Shock propagation in locally driven granular systems
NASA Astrophysics Data System (ADS)
Joy, Jilmy P.; Pathak, Sudhir N.; Das, Dibyendu; Rajesh, R.
2017-09-01
We study shock propagation in a system of initially stationary hard spheres that is driven by a continuous injection of particles at the origin. The disturbance created by the injection of energy spreads radially outward through collisions between particles. Using scaling arguments, we determine the exponent characterizing the power-law growth of this disturbance in all dimensions. The scaling functions describing the various physical quantities are determined using large-scale event-driven simulations in two and three dimensions for both elastic and inelastic systems. The results are shown to describe well the data from two different experiments on granular systems that are similarly driven.
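The exponent characterizing the power-law growth of the disturbance radius is typically extracted from simulation data by a straight-line fit in log-log space. A hedged sketch of that standard procedure (the data and function below are invented for illustration, not from the study):

```python
# Illustrative estimation of a power-law growth exponent R(t) ~ t**alpha
# by ordinary least squares on log(R) versus log(t).
import math

def powerlaw_exponent(times, radii):
    """Slope of log(R) vs log(t), i.e. the growth exponent alpha."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: data generated with alpha = 0.5 should recover 0.5.
times = [1.0, 2.0, 4.0, 8.0, 16.0]
radii = [t ** 0.5 for t in times]
print(round(powerlaw_exponent(times, radii), 3))  # 0.5
```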
Foundational perspectives on causality in large-scale brain networks
NASA Astrophysics Data System (ADS)
Mannino, Michael; Bressler, Steven L.
2015-12-01
A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. 
Typically, these techniques quantify the statistical likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.
NASA Astrophysics Data System (ADS)
Mashood, K. K.; Singh, Vijay A.
2013-09-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in highly competitive problem-solving examinations was studied using a massive database. The sample sizes ranged from hundreds to a few hundred thousand. Encouraged by the presence of significant correlations, we interviewed 20 students to explore the pedagogic potential of physics in imparting transferable problem-solving skills. We report strategies and practices relevant to physics employed by these students which foster transfer.
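The correlation methodology described above reduces to computing Pearson's r between students' scores in two subjects. A minimal sketch with invented scores (not the study's database):

```python
# Illustrative Pearson correlation between physics and mathematics exam scores.
# The score lists are fabricated for demonstration only.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

physics = [62, 71, 55, 80, 90, 48, 67]
maths   = [58, 75, 50, 85, 88, 52, 70]
print(round(pearson_r(physics, maths), 2))  # strong positive correlation
```

In the actual study such coefficients were computed over samples ranging from hundreds to a few hundred thousand students, where even modest r values are statistically significant.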
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey De'Haven; Painter, S. L.; Viswanathan, H.
We investigate how the choice of injection mode impacts transport properties in kilometer-scale three-dimensional discrete fracture networks (DFN). The choice of injection mode, resident and flux-weighted, is designed to mimic different physical phenomena. It has been hypothesized that solute plumes injected under resident conditions evolve to behave similarly to solutes injected under flux-weighted conditions. Previously, computational limitations have prohibited the large-scale simulations required to investigate this hypothesis. We investigate this hypothesis by using a high-performance DFN suite, dfnWorks, to simulate flow in kilometer-scale three-dimensional DFNs based on fractured granite at the Forsmark site in Sweden, and adopt a Lagrangian approach to simulate transport therein. Results show that after traveling through a pre-equilibrium region, both injection methods exhibit linear scaling of the first moment of travel time and power law scaling of the breakthrough curve with similar exponents, slightly larger than 2. Lastly, the physical mechanisms behind this evolution appear to be the combination of in-network channeling of mass into larger fractures, which offer reduced resistance to flow, and in-fracture channeling, which results from the topology of the DFN.
Naturalness of Electroweak Symmetry Breaking
NASA Astrophysics Data System (ADS)
Espinosa, J. R.
2007-02-01
After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine-tuning problem of electroweak symmetry breaking in two main scenarios beyond the Standard Model: SUSY and Little Higgs models. The main conclusions are that New Physics should appear within the reach of the LHC; that some SUSY models can solve the hierarchy problem with acceptable residual fine-tuning; and, finally, that Little Higgs models generically suffer from large tunings, many of them hidden.
Physical models of polarization mode dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menyuk, C.R.; Wai, P.K.A.
The effect of randomly varying birefringence on light propagation in optical fibers is studied theoretically in the parameter regime that will be used for long-distance communications. In this regime, the birefringence is large and varies very rapidly in comparison to the nonlinear and dispersive scale lengths. We determine the polarization mode dispersion, and we show that physically realistic models yield the same result for polarization mode dispersion as earlier heuristic models that were introduced by Poole. We also prove an ergodic theorem.
Perturbation theory for cosmologies with nonlinear structure
NASA Astrophysics Data System (ADS)
Goldberg, Sophia R.; Gallagher, Christopher S.; Clifton, Timothy
2017-11-01
The next generation of cosmological surveys will operate over unprecedented scales, and will therefore provide exciting new opportunities for testing general relativity. The standard method for modelling the structures that these surveys will observe is to use cosmological perturbation theory for linear structures on horizon-sized scales, and Newtonian gravity for nonlinear structures on much smaller scales. We propose a two-parameter formalism that generalizes this approach, thereby allowing interactions between large and small scales to be studied in a self-consistent and well-defined way. This uses both post-Newtonian gravity and cosmological perturbation theory, and can be used to model realistic cosmological scenarios including matter, radiation and a cosmological constant. We find that the resulting field equations can be written as a hierarchical set of perturbation equations. At leading-order, these equations allow us to recover a standard set of Friedmann equations, as well as a Newton-Poisson equation for the inhomogeneous part of the Newtonian energy density in an expanding background. For the perturbations in the large-scale cosmology, however, we find that the field equations are sourced by both nonlinear and mode-mixing terms, due to the existence of small-scale structures. These extra terms should be expected to give rise to new gravitational effects, through the mixing of gravitational modes on small and large scales—effects that are beyond the scope of standard linear cosmological perturbation theory. We expect our formalism to be useful for accurately modeling gravitational physics in universes that contain nonlinear structures, and for investigating the effects of nonlinear gravity in the era of ultra-large-scale surveys.
Tian, Yang; Liu, Zhilin; Li, Xiaoqian; Zhang, Lihua; Li, Ruiqing; Jiang, Ripeng; Dong, Fang
2018-05-01
Ultrasonic sonotrodes play an essential role in transmitting power ultrasound into large-scale metallic castings. However, cavitation erosion considerably impairs the in-service performance of ultrasonic sonotrodes, leading to marginal microstructural refinement. In this work, the cavitation erosion behaviour of ultrasonic sonotrodes in large-scale castings was explored using industry-level experiments on Al alloy cylindrical ingots (i.e. 630 mm in diameter and 6000 mm in length). When introducing power ultrasound, severe cavitation erosion was found to reproducibly occur at some specific positions on the ultrasonic sonotrodes. However, no cavitation erosion was present on the ultrasonic sonotrodes that were not driven by the electric generator. Vibratory examination showed that cavitation erosion depended on the vibration state of the ultrasonic sonotrodes. Moreover, a finite element (FE) model was developed to simulate the evolution and distribution of acoustic pressure in the 3-D solidification volume. FE simulation results confirmed that significant dynamic interaction between sonotrodes and melts only happened at the specific positions corresponding to severe cavitation erosion. This work will allow for developing more advanced ultrasonic sonotrodes with better cavitation erosion resistance, in particular for large-scale castings, from the perspectives of ultrasonic physics and mechanical design.
Leaf optical properties shed light on foliar trait variability at individual to global scales
NASA Astrophysics Data System (ADS)
Shiklomanov, A. N.; Serbin, S.; Dietze, M.
2016-12-01
Recent syntheses of large trait databases have contributed immensely to our understanding of drivers of plant function at the global scale. However, the global trade-offs revealed by such syntheses, such as the trade-off between leaf productivity and resilience (i.e. "leaf economics spectrum"), are often absent at smaller scales and fail to correlate with actual functional limitations. An improved understanding of how traits vary within communities, species, and individuals is critical to accurate representations of vegetation ecophysiology and ecological dynamics in ecosystem models. Spectral data from both field observations and remote sensing platforms present a potentially rich and widely available source of information on plant traits. In particular, the inversion of physically-based radiative transfer models (RTMs) is an effective and general method for estimating plant traits from spectral measurements. Here, we apply Bayesian inversion of the PROSPECT leaf RTM to a large database of field spectra and plant traits spanning tropical, temperate, and boreal forests, agricultural plots, arid shrublands, and tundra to identify dominant sources of variability and characterize trade-offs in plant functional traits. By leveraging such a large and diverse dataset, we re-calibrate the empirical absorption coefficients underlying the PROSPECT model and expand its scope to include additional leaf biochemical components, namely leaf nitrogen content. Our work provides a key methodological contribution as a physically-based retrieval of leaf nitrogen from remote sensing observations, and provides substantial insights about trait trade-offs related to plant acclimation, adaptation, and community assembly.
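Bayesian inversion of a radiative transfer model means sampling trait values whose forward-modelled spectra match the measurement. The sketch below illustrates the idea with a Metropolis sampler; PROSPECT itself is far richer, so the one-parameter "reflectance" forward model and every name here are invented stand-ins, not the authors' implementation:

```python
# Toy Bayesian RTM inversion: recover an absorber concentration from a
# simulated reflectance spectrum via Metropolis sampling. All model details
# are illustrative assumptions, not the PROSPECT model.
import math
import random

random.seed(0)

def forward(chlorophyll, wavelengths):
    """Toy forward model: reflectance decreases with absorber content."""
    return [math.exp(-chlorophyll * 0.01 * (1.0 + 0.001 * w)) for w in wavelengths]

wavelengths = list(range(400, 701, 50))
true_chl = 40.0
observed = forward(true_chl, wavelengths)  # noise-free synthetic "measurement"

def log_likelihood(chl, sigma=0.01):
    model = forward(chl, wavelengths)
    return -sum((m - o) ** 2 for m, o in zip(model, observed)) / (2 * sigma ** 2)

# Metropolis random walk over the single trait parameter, flat prior on (0, 100).
chl, samples = 20.0, []
ll = log_likelihood(chl)
for _ in range(5000):
    prop = chl + random.gauss(0.0, 2.0)
    if 0.0 < prop < 100.0:
        ll_prop = log_likelihood(prop)
        if math.log(random.random()) < ll_prop - ll:
            chl, ll = prop, ll_prop
    samples.append(chl)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])
print(round(posterior_mean, 1))  # close to the true value of 40.0
```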
Resilience, self-esteem and self-compassion in adults with spina bifida.
Hayter, M R; Dorstyn, D S
2014-02-01
Cross-sectional survey. To examine factors that may enhance and promote resilience in adults with spina bifida. Community-based disability organisations within Australia. Ninety-seven adults with a diagnosis of spina bifida (SB) completed a survey comprising demographic questions in addition to standardised self-report measures of physical functioning (Craig Handicap Assessment and Reporting Technique), resilience (Connor-Davidson Resilience Scale, 10 item), self-esteem (Rosenberg Self-esteem Scale), self-compassion (Self-compassion Scale) and psychological distress (Depression Anxiety Stress Scales, 21 item). The majority (66%) of respondents reported moderate to high resilience. Physical disability influenced coping, with greater CD-RISC 10 scores reported by individuals who were functionally independent and by those with fewer medical comorbidities. Significant correlations between resilience and psychological traits (self-esteem r=0.36, P<0.01; self-compassion r=0.40, P<0.01) were also noted. However, the combined contribution of these variables accounted for only 23% of the total variance in resilience scores (R(2)=0.227, F(5,94)=5.23, P<0.01). These findings extend current understanding of the concept of resilience in adults with a congenital physical disability. The suggestion is that resilience involves a complex interplay between physical determinants of health and psychological characteristics, such as self-esteem and self-compassion. It follows that cognitive behavioural strategies with a focus on self-management may, in part, contribute to the process of resilience in this group. Further large-scale and longitudinal research will help to confirm these findings.
Validation of the Physical Activity Scale for individuals with physical disabilities.
van den Berg-Emons, Rita J; L'Ortye, Annemiek A; Buffart, Laurien M; Nieuwenhuijsen, Channah; Nooijen, Carla F; Bergen, Michael P; Stam, Henk J; Bussmann, Johannes B
2011-06-01
To determine the criterion validity of the Physical Activity Scale for Individuals With Physical Disabilities (PASIPD) by means of daily physical activity levels measured by using a validated accelerometry-based activity monitor in a large group of persons with a physical disability. Cross-sectional. Participants' home environment. Ambulatory and nonambulatory persons with cerebral palsy, meningomyelocele, or spinal cord injury (N=124). Not applicable. Self-reported physical activity level measured by using the PASIPD, a 2-day recall questionnaire, was correlated to objectively measured physical activity level measured by using a validated accelerometry-based activity monitor. Significant Spearman correlation coefficients between the PASIPD and activity monitor outcome measures ranged from .22 to .37. The PASIPD overestimated the duration of physical activity measured by using the activity monitor (mean ± SD, 3.9±2.9 vs 1.5±0.9h/d; P<.01). Significant correlation (ρ=-.74; P<.01) was found between average number of hours of physical activity per day measured by using the 2 methods and difference in hours between methods. This indicates larger overestimation for persons with higher activity levels. The PASIPD correlated poorly with objective measurements using an accelerometry-based activity monitor in people with a physical disability. However, similar low correlations between objective and subjective activity measurements have been found in the general population. Users of the PASIPD should be cautious about overestimating physical activity levels. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
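To make the validation statistics above concrete, a rank correlation and mean overestimation of the kind reported can be computed as in the following sketch. The numbers are invented for illustration; the study's individual records (N=124) are not reproduced here, and the `spearman_rho` helper is our own minimal no-ties implementation, not the study's software.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks
    (valid when there are no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical paired activity durations (h/day) for six participants
pasipd_hours  = np.array([5.5, 2.0, 3.1, 6.8, 4.2, 1.0])  # self-report
monitor_hours = np.array([1.2, 0.9, 1.8, 2.4, 1.6, 0.7])  # accelerometer

rho = spearman_rho(pasipd_hours, monitor_hours)
bias = float((pasipd_hours - monitor_hours).mean())  # mean overestimation
print(f"rho = {rho:.2f}, mean overestimate = {bias:.1f} h/day")
```

A positive `bias` reproduces the pattern described in the abstract, where self-report exceeds the monitor-based duration.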
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations on the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained from simulations at each of these two tiers with advanced techniques such as the Lattice Boltzmann Method (LBM), which can bridge the two scales, we can develop the tools to work towards predictive simulation of solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 was on annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating the exploration and development of droplet models, including breakup and coalescence, which will be needed for large-scale simulations where droplet-level physics cannot be resolved. In this area, it has a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
An experimental method to verify soil conservation by check dams on the Loess Plateau, China.
Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q
2009-12-01
A successful experiment with a physical model requires the necessary similarity conditions. This study presents an experimental method based on a semi-scale physical model, used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, the experimental data can be used to verify soil erosion processes in the field and to predict soil loss in a model watershed with check dams; the method can thus predict the amount of soil loss in a catchment. The study also specifies four similarity criteria: similarity of watershed geometry, of grain size and bare land, of the Froude number (Fr) for each rainfall event, and of soil erosion in the downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but are the same size; they simulate the hydraulic processes in the B-Model. The experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check dam system on the Loess Plateau, China.
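The Froude-number criterion mentioned in the abstract fixes how flow velocities must shrink with the length scale. A minimal sketch, with illustrative numbers that are not taken from the study:

```python
import math

def froude_scaled_velocity(v_prototype, length_ratio):
    """Flow velocity in a downscaled model that preserves the Froude
    number Fr = v / sqrt(g * L). Since g is unchanged between model and
    prototype, v_model = v_prototype * sqrt(L_model / L_prototype)."""
    return v_prototype * math.sqrt(length_ratio)

# Hypothetical 1:100 length-scale model of a small watershed,
# with a prototype overland-flow velocity of 2 m/s
v_model = froude_scaled_velocity(2.0, 1 / 100)
print(f"{v_model:.2f} m/s")  # scaled model velocity, ~0.2 m/s
```

Keeping Fr constant in this way ensures that gravity-driven flow features (e.g. sub- vs supercritical flow) behave the same way in the downscaled model as in the prototype.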
Underground atom gradiometer array for mass distribution monitoring and advanced geodesy
NASA Astrophysics Data System (ADS)
Canuel, B.
2015-12-01
After more than 20 years of fundamental research, atom interferometers have reached sensitivity and accuracy levels competing with or beating inertial sensors based on other technologies. Atom interferometers offer interesting applications in geophysics (gravimetry, gradiometry, Earth rotation rate measurements), inertial sensing (submarine or aircraft autonomous positioning), metrology (new definition of the kilogram) and fundamental physics (tests of the standard model, tests of general relativity). Atom interferometers have already contributed significantly to fundamental physics by, for example, providing stringent constraints on quantum electrodynamics through measurements of the fine-structure constant, testing the Equivalence Principle with cold atoms, and providing new measurements of the Newtonian gravitational constant. Cold atom sensors have moreover been established as key instruments in metrology for the new definition of the kilogram and through international comparisons of gravimeters. The field of atom interferometry (AI) is now entering a new phase in which very high sensitivity levels must be demonstrated in order to enlarge the potential applications outside atomic physics laboratories. These applications range from gravitational wave (GW) detection in the [0.1-10 Hz] frequency band, to next-generation ground- and space-based Earth gravity field studies, to precision gyroscopes and accelerometers. The Matter-wave laser Interferometric Gravitation Antenna (MIGA) presented here is a large-scale matter-wave sensor which will open new applications in geoscience and fundamental physics. The MIGA consortium gathers 18 expert French laboratories and companies in atomic physics, metrology, optics, geosciences and gravitational physics, with the aim of building a large-scale underground atom-interferometer instrument by 2018 and operating it until at least 2023.
In this paper, we present the main objectives of the project, the status of the construction of the instrument, and the motivation for the applications of MIGA in geosciences.
Physical and chemical controls on ore shoots - insights from 3D modeling of an orogenic gold deposit
NASA Astrophysics Data System (ADS)
Vollgger, S. A.; Tomkins, A. G.; Micklethwaite, S.; Cruden, A. R.; Wilson, C. J. L.
2016-12-01
Many ore deposits have irregular grade distributions, with localized, elongate, well-mineralized rock volumes commonly referred to as ore shoots. The chemical and physical processes that control ore shoot formation are rarely understood, although transient episodes of elevated permeability are thought to be important within the brittle and brittle-ductile crust, due to faulting and fracturing associated with earthquake-aftershock sequences or earthquake swarms. We present data from an orogenic gold deposit in Australia where the bulk of the gold is contained in abundant fine arsenopyrite crystals associated with a fault-vein network within tight upright folds. The deposit-scale fault network is connected to a deeper network of thrust faults (tens of kilometers long). Using 3D implicit modeling of geochemical data, based on radial basis functions, gold grades and gold-arsenic element ratios were interpolated and related to major faults, vein networks and late intrusions. Additionally, downhole bedding measurements were used to model first-order (mine-scale) fold structures. The results show that ore shoot plunges are not parallel to mine-scale or regional fold plunges, and that bedding-parallel faults related to flexural-slip folding play a pivotal role in controlling ore shoot attitudes. 3D fault-slip and dilation-tendency analyses indicate that fault reactivation and the formation of linking faults are associated with large volumes of high-grade ore. We suggest that slip events on the large-scale thrust network allowed mineralizing fluids to rapidly migrate over large distances and become supersaturated in elements such as gold, promoting widespread precipitation and high nucleation densities of arsenopyrite upon fluid-rock interaction at trap sites within the deposit.
NASA Astrophysics Data System (ADS)
Hänninen, Jari; Vuorinen, Ilppo; Rajasilta, Marjut; Reid, Philip C.
2015-11-01
Selected Baltic Sea watershed River Runoff (BSRR) events during 1970-2000 were used as predictors in Generalised Linear Mixed Models (GLIMMIX) to search for evidence of simultaneous changes or chains of events (including possible time lags) in chemical, physical and biological variables in the Baltic and North Sea ecosystems. Our aim was to explore a climate-based explanation for the ecological regime shifts that have been documented semi-simultaneously in both ecosystems. Certain similarities were identified between the North Sea and the Baltic Sea in salinity, oxygen concentration, temperature and phyto- and zooplankton parameters. These findings suggest that BSRR events, which originate in the Baltic Sea catchment area, modify and contribute to large-scale ecosystem changes not only in the Baltic Sea but also in the adjacent parts of the North Sea. However, the inter-annual and inter-decadal variability of physical and biological parameters in the Baltic Sea is driven by direct atmospheric forcing, typically with a relatively short lag. In contrast, such changes in the North Sea are influenced both by local, direct atmospheric forcing, typically with a longer lag than in the Baltic, and by a more regional, indirect forcing from changes in the North Atlantic. We suggest that this interactive system is partially behind the large-scale ecosystem regime shifts found in both seas. During our study period, two such shifts were identified independently of us: an earlier one in the Southern and Central Baltic in the 1980s and 1990s, and a later one in the North Sea in 2001/2002. As a post hoc test, we compared the 0+ year-class strength of North Sea herring with BSRR intensity and found evidence of higher herring production in high-BSRR periods, which further corroborates the idea of a remote effect from the large watershed area of the Baltic. Regime shifts, as well as their semi-synchronous appearance in two neighbouring sea areas, could thus be identified.
GLIMMIX models provide opportunities for determining and understanding the mechanisms behind marine ecosystem long-term and large-scale changes. Many studies have shown the importance of climatic factors (identified by the air pressure index, North Atlantic Oscillation) to the physical and biological changes over the North Atlantic. Our study enlarges the areal and temporal scope of these observations, and provides further support and explanation for climate as the pacemaker for marine ecological changes.
Overview of the SHIELDS Project at LANL
NASA Astrophysics Data System (ADS)
Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, D.; Vernon, L.; Woodroffe, J. R.; Toth, G.; Welling, D. T.; Yu, Y.; Birn, J.; Thomsen, M. F.; Borovsky, J.; Denton, M.; Albert, J.; Horne, R. B.; Lemon, C. L.; Markidis, S.; Young, S. L.
2015-12-01
The near-Earth space environment is a highly dynamic system, coupled through a complex set of physical processes over a large range of scales, which responds nonlinearly to driving by the time-varying solar wind. Predicting variations in this environment that can affect technologies in space and on Earth, i.e. "space weather", remains a major challenge in space physics. We present a recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program that is developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to specify the dynamics of the hot (keV) particles (the seed population for the radiation belts) on both macro- and micro-scales, including the important physics of rapid particle injection and acceleration associated with magnetospheric storms/substorms and plasma waves. This challenging problem is addressed by a team of world-class experts in space science and computational plasma physics, using state-of-the-art models and computational facilities. New data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed in addition to physics-based models. This research will provide a framework for understanding the key radiation belt drivers that may accelerate particles to relativistic energies and lead to spacecraft damage and failure. The ability to reliably distinguish between various modes of failure is critically important in anomaly resolution and forensics. SHIELDS will enhance our capability to accurately specify and predict the near-Earth space environment where operational satellites reside.
2016-06-01
zones with ice concentrations up to 40%. To achieve this goal, the Navy must determine safe operational speeds as a function of ice concentration... and full-scale experience with ice-capable hull forms that have shallow entry angles to promote flexural ice failure preferentially over crushing... plan view) of the proposed large-scale ice–hull impact experiment to be conducted in CRREL's refrigerated towing basin. Shown here is a side-panel
NASA Astrophysics Data System (ADS)
Riechers, Dominik A.; Bolatto, Alberto D.; Carilli, Chris; Casey, Caitlin M.; Decarli, Roberto; Murphy, Eric Joseph; Narayanan, Desika; Walter, Fabian; ngVLA Galaxy Assembly through Cosmic Time Science Working Group, ngVLA Galaxy Ecosystems Science Working Group
2018-01-01
The Next Generation Very Large Array (ngVLA) will fundamentally advance our understanding of the formation processes that lead to the assembly of galaxies throughout cosmic history. The combination of large bandwidth with unprecedented sensitivity to the critical low-level CO lines over virtually the entire redshift range will open up the opportunity to conduct large-scale, deep cold molecular gas surveys, mapping the fuel for star formation in galaxies over substantial cosmic volumes. Imaging of the sub-kiloparsec scale distribution and kinematic structure of molecular gas in both normal main-sequence galaxies and large starbursts back to early cosmic epochs will reveal the physical processes responsible for star formation and black hole growth in galaxies over a broad range of redshifts. In the nearby universe, the ngVLA has the capability to survey the structure of the cold, star-forming interstellar medium at parsec resolution out to the Virgo cluster. A range of molecular tracers will be accessible to map the motion, distribution, and physical and chemical state of the gas as it flows in from the outer disk, assembles into clouds, and experiences feedback due to star formation or accretion into central super-massive black holes. These investigations will crucially complement studies of the star formation and stellar mass histories with the Large UV/Optical/Infrared Surveyor and the Origins Space Telescope, providing the means to obtain a comprehensive picture of galaxy evolution through cosmic time.
Effectiveness of a Scaled-Up Arthritis Self-Management Program in Oregon: Walk With Ease.
Conte, Kathleen P; Odden, Michelle C; Linton, Natalie M; Harvey, S Marie
2016-12-01
To evaluate the effectiveness of Walk With Ease (WWE), an evidence-based arthritis self-management program that was scaled up in Oregon in 2012 to 2014. Guided by the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) framework, we collected participant surveys and attendance records and conducted observations. Preprogram and postprogram, participants self-reported pain and fatigue (scale: 0-10 points; high scores indicate more pain and fatigue) and estimated episodes of physical activity per week in the last month. Recruitment successfully reached the targeted population-sedentary adults with arthritis (n = 598). Participants reported significant reduction in pain (-0.47 points; P = .006) and fatigue (-0.58 points; P = .021) and increased physical activity (0.86 days/week; P < .001). WWE was adopted by workplaces and medical, community, faith, and retirement centers. Most WWE programs were delivered with high fidelity; average attendance was 47%. WWE is suitable for implementation by diverse organizations. Effect sizes for pain and fatigue were less than those in the original WWE studies, but this is to be expected for a large-scale implementation. Public Health Implications. WWE can be effectively translated to diverse, real-world contexts to help sedentary adults increase physical activity and reduce pain and fatigue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram
This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently being prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte Carlo (BCMC) model to the finite-element-based code MOOSE to solve a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission-fragment energy deposition. This microscopic model is driven by a transient, engineering-scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. This large temperature difference has strong implications for the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.
Optimization and Scale-up of Inulin Extraction from Taraxacum kok-saghyz roots.
Hahn, Thomas; Klemm, Andrea; Ziesse, Patrick; Harms, Karsten; Wach, Wolfgang; Rupp, Steffen; Hirth, Thomas; Zibek, Susanne
2016-05-01
The optimization and scale-up of inulin extraction from Taraxacum kok-saghyz Rodin was successfully performed. Based on solubility investigations, the extraction temperature was fixed at 85 °C. Inulin stability with respect to degradation and hydrolysis was confirmed by extraction in the presence of model inulin. Having confirmed stability under the given conditions, the isolation procedure was transferred from a 1 L to a 1 m3 reactor. The Reynolds number was selected as the relevant dimensionless number that must remain constant at both scales. The stirrer speed at the large scale was adjusted to 3.25 rpm, based on a 300 rpm stirrer speed at the 1 L scale and the relevant physical and process-engineering parameters. These assumptions were confirmed by approximately homologous extraction kinetics at both scales. Since T. kok-saghyz is a focus of research due to its rubber content, the isolation of side products from the residual biomass is of great economic interest. Inulin is one such side product that can be isolated in high quantity (~35% of dry mass) and with a high average degree of polymerization (15.5) at large scale, with a purity of 77%.
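The stirrer-speed adjustment follows from keeping the rotational Reynolds number Re = N·D²/ν constant across scales. A sketch with illustrative impeller diameters; the abstract reports only the resulting speeds, not the vessel geometry, so the diameters below are assumptions:

```python
def matched_stirrer_speed(n_small_rpm, d_small_m, d_large_m):
    """Stirrer speed in the large vessel that keeps Re = N * D**2 / nu
    constant. The same fluid is used at both scales, so nu cancels."""
    return n_small_rpm * (d_small_m / d_large_m) ** 2

# Exact geometric similarity for a 1 L -> 1 m^3 scale-up implies a
# linear (and hence impeller-diameter) ratio of 10; the 0.05 m and
# 0.5 m values here are illustrative, not from the paper.
n_large = matched_stirrer_speed(300.0, 0.05, 0.5)
print(f"{n_large:.2f} rpm")
```

This idealized calculation gives 3.0 rpm; the slightly higher 3.25 rpm reported above presumably reflects the real, not perfectly similar, vessel and impeller geometry.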
From Physics to industry: EOS outside HEP
NASA Astrophysics Data System (ADS)
Espinal, X.; Lamanna, M.
2017-10-01
In the competitive market for large-scale storage solutions, EOS, the current main disk storage system at CERN, has shown its excellence in the multi-petabyte, high-concurrency regime. It has also shown disruptive potential in powering sync-and-share services and in supporting innovative analysis environments alongside the storage of LHC data. EOS has also generated interest as a generic storage solution, ranging from university systems to very large installations for non-HEP applications.
2010-04-01
Factors of Child Abuse in a Large Survey Sample. International Family Violence and Child Victimization Research Conference. Portsmouth, New... manuscript in preparation). Physical child abuse in a large-scale survey of the U.S. Air Force: Risk and promotive factors. Slep, A. M. S., Snarr, J...D., Heyman, R. E., & Foran, H. M. (manuscript in preparation). Risk and promotive factors for emotional child abuse among active duty U.S. Air
Biochemical analysis of force-sensitive responses using a large-scale cell stretch device.
Renner, Derrick J; Ewald, Makena L; Kim, Timothy; Yamada, Soichiro
2017-09-03
Physical force has emerged as a key regulator of tissue homeostasis, and plays an important role in embryogenesis, tissue regeneration, and disease progression. Currently, the details of protein interactions under elevated physical stress are largely missing, preventing a fundamental, molecular understanding of mechano-transduction. This is in part due to the difficulty of isolating large quantities of cell lysates exposed to force-bearing conditions for biochemical analysis. We designed a simple, easy-to-fabricate, large-scale cell stretch device for the analysis of force-sensitive cell responses. Using proximal biotinylation (BioID) analysis or phospho-specific antibodies, we detected force-sensitive biochemical changes in cells exposed to prolonged cyclic substrate stretch. For example, using α-catenin tagged with the promiscuous biotin ligase BirA*, the biotinylation of myosin IIA increased with stretch, suggesting the close proximity of myosin IIA to α-catenin under force-bearing conditions. Furthermore, using phospho-specific antibodies, Akt phosphorylation was reduced upon stretch while Src phosphorylation was unchanged. Interestingly, phosphorylation of GSK3β, a downstream effector of the Akt pathway, was also reduced with stretch, while the phosphorylation of other Akt effectors was unchanged. These data suggest that the Akt-GSK3β pathway is force-sensitive. This simple cell stretch device enables biochemical analysis of force-sensitive responses and has the potential to uncover molecules underlying mechano-transduction.
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Shie, C.-L.; Simpson, J.
2000-01-01
In general, there are two broad scientific objectives when using cloud-resolving models (CRMs, or cloud ensemble models, CEMs) to study tropical convection. The first is to use them as physics-resolving models to understand the dynamic and microphysical processes associated with the tropical water and energy cycles and their role in the climate system. The second is to use CRMs to improve the representation of moist processes and their interaction with radiation in large-scale models. In order to improve the credibility of CRMs and achieve the above goals, CRMs using identical initial conditions and large-scale influences need to produce very similar results. However, two CRMs produced different statistical equilibrium (SE) states even though both used the same initial thermodynamic and wind conditions. Sensitivity tests were performed to identify the major physical processes that determine the SE states in the different CRM simulations. The results indicated that atmospheric horizontal wind is treated quite differently in the two CRMs. The model that had stronger surface winds, and consequently larger latent and sensible heat fluxes from the ocean, produced a warmer and more humid modeled thermodynamic SE state. In addition, the domain-mean thermodynamic state is more unstable in those experiments that produced a warmer and more humid SE state, and the simulated wet (warm and humid) SE states are thermally more stable in the lower troposphere (from the surface to 4-5 km in altitude). Large-scale horizontal advective effects on temperature and water vapor mixing ratio are needed when using CRMs to perform long-term integrations to study convective feedback under specified large-scale environments. It is also suggested that the dry and cold SE state was caused by enhanced precipitation without sufficient surface evaporation. Some problems remain with the interpretation of these phenomena.
Singh, Nadia D.; Aquadro, Charles F.; Clark, Andrew G.
2009-01-01
Accurate assessment of local recombination rate variation is crucial for understanding the recombination process and for determining the impact of natural selection on linked sites. In Drosophila, local recombination intensity has been estimated primarily by statistical approaches, estimating the local slope of the relationship between the physical and genetic maps. However, these estimates are limited in resolution, and as a result, the physical scale at which recombination intensity varies in Drosophila is largely unknown. While there is some evidence suggesting as much as a 40-fold variation in crossover rate at a local scale in D. pseudoobscura, little is known about the fine-scale structure of recombination rate variation in D. melanogaster. Here, we experimentally examine the fine-scale distribution of crossover events in a 1.2-Mb region on the D. melanogaster X chromosome using a classic genetic mapping approach. Our results show that crossover frequency is significantly heterogeneous within this region, varying ~3.5-fold. Simulations suggest that this degree of heterogeneity is sufficient to affect levels of standing nucleotide diversity, although the magnitude of this effect is small. We recover no statistical association between empirical estimates of nucleotide diversity and recombination intensity, which is likely due to the limited number of loci sampled in our population genetic dataset. However, codon bias is significantly negatively correlated with fine-scale recombination intensity estimates, as expected. Our results shed light on the relevant physical scale to consider in evolutionary analyses relating to recombination rate, and highlight the motivations to increase the resolution of the recombination map in Drosophila. PMID:19504037
Can We Use Single-Column Models for Understanding the Boundary Layer Cloud-Climate Feedback?
NASA Astrophysics Data System (ADS)
Dal Gesso, S.; Neggers, R. A. J.
2018-02-01
This study explores how to drive Single-Column Models (SCMs) with existing data sets of General Circulation Model (GCM) outputs, with the aim of studying the boundary layer cloud response to climate change in the marine subtropical trade wind regime. The EC-EARTH SCM is driven with the large-scale tendencies and boundary conditions as derived from two different data sets, consisting of high-frequency outputs of GCM simulations. SCM simulations are performed near Barbados Cloud Observatory in the dry season (January-April), when fair-weather cumulus is the dominant low-cloud regime. This climate regime is characterized by a near equilibrium in the free troposphere between the long-wave radiative cooling and the large-scale advection of warm air. In the SCM, this equilibrium is ensured by scaling the monthly mean dynamical tendency of temperature and humidity such that it balances that of the model physics in the free troposphere. In this setup, the high-frequency variability in the forcing is maintained, and the boundary layer physics acts freely. This technique yields representative cloud amount and structure in the SCM for the current climate. Furthermore, the cloud response to a sea surface warming of 4 K as produced by the SCM is consistent with that of the forcing GCM.
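The rebalancing step described above, scaling the mean dynamical tendency so that it offsets the physics tendency while retaining the high-frequency variability of the forcing, can be sketched as follows. This is a toy reading of the setup, with invented numbers; the actual EC-EARTH SCM forcing machinery is more involved:

```python
import numpy as np

def rebalance_tendency(dyn_tend, phys_tend):
    """Scale a high-frequency dynamical-tendency series so that its
    time mean balances the mean physics tendency. Only the amplitude
    changes; the shape (variability) of the forcing is preserved."""
    scale = -phys_tend.mean() / dyn_tend.mean()
    return scale * dyn_tend

# Toy free-tropospheric temperature tendencies (K/day)
dyn  = np.array([1.2, 0.8, 1.1, 0.9])      # large-scale warm advection
phys = np.array([-1.5, -1.4, -1.6, -1.5])  # long-wave radiative cooling

balanced = rebalance_tendency(dyn, phys)
print(balanced.mean() + phys.mean())  # net mean tendency, ~0 (equilibrium)
```

After rescaling, the mean warm advection exactly cancels the mean radiative cooling in this toy free troposphere, mimicking the near equilibrium the abstract describes, while the boundary-layer physics in the SCM remains free to act on the retained high-frequency forcing.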
Climate and smoke: an appraisal of nuclear winter.
Turco, R P; Toon, O B; Ackerman, T P; Pollack, J B; Sagan, C
1990-01-12
The latest understanding of nuclear winter is reviewed. Considerable progress has been made in quantifying the production and injection of soot by large-scale fires, the regional and global atmospheric dispersion of the soot, and the resulting physical, environmental, and climatic perturbations. New information has been obtained from laboratory studies, field experiments, and numerical modeling on a variety of scales (plume, mesoscale, and global). For the most likely soot injections from a full-scale nuclear exchange, three-dimensional climate simulations yield midsummer land temperature decreases that average 10 degrees to 20 degrees C in northern mid-latitudes, with local cooling as large as 35 degrees C, and subfreezing summer temperatures in some regions. Anomalous atmospheric circulations caused by solar heating of soot are found to stabilize the upper atmosphere against overturning, thus increasing the soot lifetime, and to accelerate interhemispheric transport, leading to persistent effects in the Southern Hemisphere. Serious new environmental problems associated with soot injection have been identified, including disruption of monsoon precipitation and severe depletion of the stratospheric ozone layer in the Northern Hemisphere. The basic physics of nuclear winter has been reaffirmed through several authoritative international technical assessments and numerous individual scientific investigations. Remaining areas of uncertainty and research priorities are discussed in view of the latest findings.
NASA Astrophysics Data System (ADS)
Neggers, Roel
2016-04-01
Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models.
This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach will be identified and discussed, and some thoughts about possible future developments will be given.
Dark energy and modified gravity in the Effective Field Theory of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Cusin, Giulia; Lewandowski, Matthew; Vernizzi, Filippo
2018-04-01
We develop an approach to compute observables beyond the linear regime of dark matter perturbations for general dark energy and modified gravity models. We do so by combining the Effective Field Theory of Dark Energy and Effective Field Theory of Large-Scale Structure approaches. In particular, we parametrize the linear and nonlinear effects of dark energy on dark matter clustering in terms of the Lagrangian terms introduced in a companion paper [1], focusing on Horndeski theories and assuming the quasi-static approximation. The Euler equation for dark matter is sourced, via the Newtonian potential, by new nonlinear vertices due to modified gravity and, as in the pure dark matter case, by the effects of short-scale physics in the form of the divergence of an effective stress tensor. The effective fluid introduces a counterterm in the solution to the matter continuity and Euler equations, which allows a controlled expansion of clustering statistics on mildly nonlinear scales. We use this setup to compute the one-loop dark-matter power spectrum.
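The counterterm structure described in this abstract can be illustrated schematically. This is a toy sketch of the one-loop EFT power spectrum with a single speed-of-sound counterterm, not the companion paper's full modified-gravity calculation; `cs2` and the toy input arrays are illustrative stand-ins for the effective-fluid parameter that absorbs short-scale physics:

```python
import numpy as np

def eft_power(k, p_lin, p_22, p_13, cs2):
    """Schematic one-loop EFT matter power spectrum.

    p_lin: linear spectrum; p_22, p_13: precomputed one-loop integrals;
    the -2*cs2*k^2*p_lin counterterm renormalizes the UV sensitivity of
    the loop integrals, allowing a controlled expansion on mildly
    nonlinear scales.
    """
    return p_lin + p_22 + p_13 - 2.0 * cs2 * k**2 * p_lin
```

In practice `cs2` is fitted to simulations or data rather than predicted, which is how the effective-fluid treatment keeps the expansion under control.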
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
Modelling disease outbreaks in realistic urban social networks
NASA Astrophysics Data System (ADS)
Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan
2004-05-01
Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
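The bipartite people-locations construction described above can be sketched in a few lines: project the visit records onto a person-person contact network, and rank locations by how many distinct people visit them to find the hubs where sensors would be placed. The visit data here are invented toy pairs, not the census-based simulation output used in the study:

```python
from collections import defaultdict

# Hypothetical (person, location) visit pairs standing in for mobility data.
visits = [("p1", "home_a"), ("p1", "office"), ("p2", "office"),
          ("p2", "mall"), ("p3", "mall"), ("p3", "home_b"), ("p4", "office")]

# Group people by location: one side of the bipartite graph.
by_location = defaultdict(set)
for person, loc in visits:
    by_location[loc].add(person)

# Project onto a person-person contact network: two people are linked
# if they visit the same location (co-presence implies possible contact).
contacts = defaultdict(set)
for loc, people in by_location.items():
    for a in people:
        for b in people:
            if a != b:
                contacts[a].add(b)

# Location degree identifies the hubs of the scale-free locations graph,
# the natural sites for outbreak-detection sensors.
hubs = sorted(by_location, key=lambda l: len(by_location[l]), reverse=True)
```

With these toy pairs, `hubs[0]` is the shared office, the most-visited location.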
Slepian, Zachary; Slosar, Anze; Eisenstein, Daniel J.; ...
2017-10-24
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of Baryon Acoustic Oscillation (BAO) method measurements of the cosmic distance scale using the 2-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3% rms in the distance scale inferred from the BAO feature in the BOSS 2-point clustering, well below the 1% statistical error of this measurement. In conclusion, this constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as DESI to self-protect against the relative velocity as a possible systematic.
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.; Blazek, Jonathan A.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McEwen, Joseph E.; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana
2018-02-01
We search for a galaxy clustering bias due to a modulation of galaxy number with the baryon-dark matter relative velocity resulting from recombination-era physics. We find no detected signal and place the constraint bv < 0.01 on the relative velocity bias for the CMASS galaxies. This bias is an important potential systematic of baryon acoustic oscillation (BAO) method measurements of the cosmic distance scale using the two-point clustering. Our limit on the relative velocity bias indicates a systematic shift of no more than 0.3 per cent rms in the distance scale inferred from the BAO feature in the BOSS two-point clustering, well below the 1 per cent statistical error of this measurement. This constraint is the most stringent currently available and has important implications for the ability of upcoming large-scale structure surveys such as the Dark Energy Spectroscopic Instrument (DESI) to self-protect against the relative velocity as a possible systematic.
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng
1998-03-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of sea level's temporal variability from periods of days to a year is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and high latitudes are mainly wind driven. Wind-driven variability exhibits a strong latitudinal dependence in itself. Wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variabilities associated with equatorial waves are smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies with as much as half the total variance in periods shorter than 20 days, reflecting the frequency spectra of wind stress curl. Implications of the findings with regards to analyzing observations and data assimilation are discussed.
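The kind of frequency-domain diagnostic quoted above (the fraction of variance at periods shorter than 20 days) might be sketched as follows, assuming an evenly sampled daily sea level anomaly time series; this is a generic spectral-variance calculation, not the study's actual analysis code:

```python
import numpy as np

def high_freq_variance_fraction(sla, dt_days=1.0, cutoff_days=20.0):
    """Fraction of sea level variance carried by periods shorter than
    `cutoff_days`, from a one-sided FFT power spectrum of the
    mean-removed series `sla` (evenly sampled every `dt_days`)."""
    x = np.asarray(sla, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=dt_days)  # cycles per day
    total = spec[1:].sum()                      # skip the zero-frequency bin
    high = spec[1:][freqs[1:] > 1.0 / cutoff_days].sum()
    return high / total
```

A pure 10-day oscillation returns a fraction near 1, a pure 100-day oscillation near 0, matching the "half the total variance in periods shorter than 20 days" style of statement.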
A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thatcher, Diana R.; Jablonowski, Christiane
A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
A moist aquaplanet variant of the Held–Suarez test for atmospheric model dynamical cores
Thatcher, Diana R.; Jablonowski, Christiane
2016-04-04
A moist idealized test case (MITC) for atmospheric model dynamical cores is presented. The MITC is based on the Held–Suarez (HS) test that was developed for dry simulations on “a flat Earth” and replaces the full physical parameterization package with a Newtonian temperature relaxation and Rayleigh damping of the low-level winds. This new variant of the HS test includes moisture and thereby sheds light on the nonlinear dynamics–physics moisture feedbacks without the complexity of full-physics parameterization packages. In particular, it adds simplified moist processes to the HS forcing to model large-scale condensation, boundary-layer mixing, and the exchange of latent and sensible heat between the atmospheric surface and an ocean-covered planet. Using a variety of dynamical cores of the National Center for Atmospheric Research (NCAR)'s Community Atmosphere Model (CAM), this paper demonstrates that the inclusion of the moist idealized physics package leads to climatic states that closely resemble aquaplanet simulations with complex physical parameterizations. This establishes that the MITC approach generates reasonable atmospheric circulations and can be used for a broad range of scientific investigations. This paper provides examples of two application areas. First, the test case reveals the characteristics of the physics–dynamics coupling technique and reproduces coupling issues seen in full-physics simulations. In particular, it is shown that sudden adjustments of the prognostic fields due to moist physics tendencies can trigger undesirable large-scale gravity waves, which can be remedied by a more gradual application of the physical forcing. Second, the moist idealized test case can be used to intercompare dynamical cores. These examples demonstrate the versatility of the MITC approach and suggestions are made for further application areas. Furthermore, the new moist variant of the HS test can be considered a test case of intermediate complexity.
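The dry Held–Suarez forcing that the MITC builds on can be sketched as follows; this is the standard relaxation/damping form with the usual default time-scales (40 days for the temperature relaxation, 1 day for low-level wind damping below sigma = 0.7), not the MITC moist extension itself:

```python
def hs_forcing(T, u, T_eq, tau_T=40.0, tau_u=1.0, sigma=1.0, sigma_b=0.7):
    """Held-Suarez-style idealized physics tendencies (per day).

    Newtonian relaxation of temperature T toward a prescribed equilibrium
    T_eq, plus Rayleigh damping of the wind u confined to the boundary
    layer (sigma > sigma_b), ramping linearly to full strength at the
    surface (sigma = 1).
    """
    dT = -(T - T_eq) / tau_T
    k_v = (1.0 / tau_u) * max(0.0, (sigma - sigma_b) / (1.0 - sigma_b))
    du = -k_v * u
    return dT, du
```

The MITC keeps this scaffolding but adds simplified moist processes (large-scale condensation, surface fluxes, boundary-layer mixing) on top of it.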
Surface Ocean-Lower Atmosphere Studies: SOLAS
NASA Astrophysics Data System (ADS)
Wanninkhof, R.; Dickerson, R.; Barber, R.; Capone, D. G.; Duce, R.; Erickson, D.; Keene, W. C.; Lenschow, D.; Matrai, P. A.; McGillis, W.; McGillicuddy, D.; Penner, J.; Pszenny, A.
2002-05-01
The US Surface Ocean - Lower Atmosphere Study (US SOLAS) is a component of an international program (SOLAS) with an overall goal: to achieve a quantitative understanding of the key biogeochemical-physical interactions between the ocean and atmosphere, and of how this coupled system affects and is affected by climate and environmental change. There is increasing evidence that the biogeochemical cycles containing the building blocks of life such as carbon, nitrogen, and sulfur have been perturbed. These changes result in appreciable impacts and feedbacks in the SOLA region. The exact nature of the impacts and feedbacks are poorly constrained because of sparse observations, in particular relating to the connectivity and interrelationships between the major biogeochemical cycles and their interaction with physical forcing. It is in these areas that the research and the interdisciplinary research approaches advocated in US SOLAS will provide high returns. The research in US SOLAS will be heavily focused on process studies of the natural variability of key processes, anthropogenic perturbation of the processes, and the positive and negative feedbacks the processes will have on the biogeochemical cycles in the SOLA region. A major objective is to integrate the process study findings with the results from large-scale observations and with small and large- scale modeling and remote sensing efforts to improve our mechanistic understanding of large scale biogeochemical and physical phenomena and feedbacks. US SOLAS held an open workshop in May 2001 to lay the groundwork for the SOLAS program in the United States. Resulting highlights and issues will be summarized around 4 major themes: (1) Boundary-layer Physics, (2) Dynamics of long-lived climate relevant compounds, (3) Dynamics of short-lived climate relevant compounds, and (4) Atmospheric effects on marine biogeochemical processes. Comprehensive reports from the working groups of U.S. 
SOLAS, and the international science plan which served as overall guidance, can be found online. We will explore possible dedicated, interdisciplinary ocean-atmosphere projects as examples of the critical interconnectivity of atmospheric, interfacial, and upper ocean processes to study phenomena of critical importance in understanding the earth's system.
Understanding metropolitan patterns of daily encounters.
Sun, Lijun; Axhausen, Kay W; Lee, Der-Horng; Huang, Xianfeng
2013-08-20
Understanding of the mechanisms driving our daily face-to-face encounters is still limited; the field lacks large-scale datasets describing both individual behaviors and their collective interactions. However, here, with the help of travel smart card data, we uncover such encounter mechanisms and structures by constructing a time-resolved in-vehicle social encounter network on public buses in a city (about 5 million residents). Using a population scale dataset, we find physical encounters display reproducible temporal patterns, indicating that repeated encounters are regular and identical. On an individual scale, we find that collective regularities dominate distinct encounters' bounded nature. An individual's encounter capability is rooted in his/her daily behavioral regularity, explaining the emergence of "familiar strangers" in daily life. Strikingly, we find individuals with repeated encounters are not grouped into small communities, but become strongly connected over time, resulting in a large, but imperceptible, small-world contact network or "structure of co-presence" across the whole metropolitan area. Revealing the encounter pattern and identifying this large-scale contact network are crucial to understanding the dynamics in patterns of social acquaintances, collective human behaviors, and--particularly--disclosing the impact of human behavior on various diffusion/spreading processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
2014-07-31
Large-scale magnetic fields have been observed in widely different types of astrophysical objects. These magnetic fields are believed to be caused by the so-called dynamo effect. Could a large-scale magnetic field grow out of turbulence (i.e. the alpha dynamo effect)? How could the topological properties and the complexity of magnetic field as a global quantity, the so called magnetic helicity, be important in the dynamo effect? In addition to understanding the dynamo mechanism in astrophysical accretion disks, anomalous angular momentum transport has also been a longstanding problem in accretion disks and laboratory plasmas. To investigate both dynamo and momentum transport, we have performed both numerical modeling of laboratory experiments that are intended to simulate nature and modeling of configurations with direct relevance to astrophysical disks. Our simulations use fluid approximations (Magnetohydrodynamics - MHD model), where plasma is treated as a single fluid, or two fluids, in the presence of electromagnetic forces. Our major physics objective is to study the possibility of magnetic field generation (so called MRI small-scale and large-scale dynamos) and its role in Magneto-rotational Instability (MRI) saturation through nonlinear simulations in both MHD and Hall regimes.
Understanding metropolitan patterns of daily encounters
Sun, Lijun; Axhausen, Kay W.; Lee, Der-Horng; Huang, Xianfeng
2013-01-01
Understanding of the mechanisms driving our daily face-to-face encounters is still limited; the field lacks large-scale datasets describing both individual behaviors and their collective interactions. However, here, with the help of travel smart card data, we uncover such encounter mechanisms and structures by constructing a time-resolved in-vehicle social encounter network on public buses in a city (about 5 million residents). Using a population scale dataset, we find physical encounters display reproducible temporal patterns, indicating that repeated encounters are regular and identical. On an individual scale, we find that collective regularities dominate distinct encounters’ bounded nature. An individual’s encounter capability is rooted in his/her daily behavioral regularity, explaining the emergence of “familiar strangers” in daily life. Strikingly, we find individuals with repeated encounters are not grouped into small communities, but become strongly connected over time, resulting in a large, but imperceptible, small-world contact network or “structure of co-presence” across the whole metropolitan area. Revealing the encounter pattern and identifying this large-scale contact network are crucial to understanding the dynamics in patterns of social acquaintances, collective human behaviors, and—particularly—disclosing the impact of human behavior on various diffusion/spreading processes. PMID:23918373
McCrorie, Paul; Walker, David; Ellaway, Anne
2018-04-30
Large-scale primary data collections are complex, costly, and time-consuming. Study protocols for trial-based research are now commonplace, with a growing number of similar pieces of work being published on observational research. However, useful additions to the literature base are publications that describe the issues and challenges faced while conducting observational studies. These can provide researchers with insightful knowledge that can inform funding proposals or project development work. In this study, we identify and reflectively discuss the unforeseen or often unpublished issues associated with organizing and implementing a large-scale objectively measured physical activity and global positioning system (GPS) data collection. The SPACES (Studying Physical Activity in Children's Environments across Scotland) study was designed to collect objectively measured physical activity and GPS data from 10- to 11-year-old children across Scotland, using a postal delivery method. The 3 main phases of the project (recruitment, delivery of project materials, and data collection and processing) are described within a 2-stage framework: (1) intended design and (2) implementation of the intended design. Unanticipated challenges arose, which influenced the data collection process; these encompass four main impact categories: (1) cost, budget, and funding; (2) project timeline; (3) participation and engagement; and (4) data challenges. The main unforeseen issues that impacted our timeline included the informed consent process for children under the age of 18 years; the use of, and coordination with, the postal service to deliver study information and equipment; and the variability associated with when participants began data collection and the time taken to send devices and consent forms back (1-12 months). 
Unanticipated budgetary issues included the identification of some study materials (AC power adapter) not fitting through letterboxes, as well as the employment of fieldworkers to increase recruitment and the return of consent forms. Finally, we encountered data issues when processing physical activity and GPS data that had been initiated across daylight saving time. We present learning points and recommendations that may benefit future studies of similar methodology in their early stages of development. ©Paul McCrorie, David Walker, Anne Ellaway. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 30.04.2018.
Characterization of microbial 'hot spots' in soils: Where are we, and where are we going?
NASA Astrophysics Data System (ADS)
Baveye, Philippe C.
2015-04-01
Fifty years ago, microbiologists realized that significant progress in our understanding of microbial processes in soils required being able to measure various physical, chemical, and microbial parameters at the scale of microorganisms, i.e., at micrometric or even submicrometric scales, and to identify areas of particularly high microbial activity. Back then, this was only a dream, severely hampered by the crudeness of our measuring instruments. In the intervening years, however, amazing technological progress has transformed that old dream into reality. We are now able to quantify the physical and (bio)chemical environment of soil microorganisms at spatial scales that are commensurate with bacterial cells. In this invited presentation, I will provide an overview of the significant progress achieved in this field over the last few years, and mention a number of further technological advances that are likely to profoundly influence the nature of the research over the next decade. Technology must however remain a means to an end, and therefore it is important to firmly keep in mind that the goal of the research on understanding better how soil processes work at the microscale is to be ultimately in a position to predict the behavior of soils at scales that matter to society at large, for example in terms of food security or global climate change. In that context, part of the research has to focus on how we can upscale information about soil microbial hotspots to macroscopic scales and beyond. I will discuss where we stand on this crucial question, which remains largely open at the moment.
The Eagle Nebula: a spectral template for star forming regions
NASA Astrophysics Data System (ADS)
Flagey, Nicolas; Boulanger, Francois; Carey, Sean; Compiegne, Mathieu; Dwek, Eli; Habart, Emilie; Indebetouw, Remy; Montmerle, Thierry; Noriega-Crespo, Alberto
2008-03-01
IRAC and MIPS have revealed spectacular images of massive star forming regions in the Galaxy. These vivid illustrations of the interaction between the stars, through their winds and radiation, and their environment, made of gas and dust, still need to be explained. The large scale picture of layered shells of gas components is affected by the small scale interaction of stars with the clumpy medium that surrounds them. To understand spatial variations of physical conditions and dust properties on small scales, spectroscopic imaging observations are required on a nearby object. The iconic Eagle Nebula (M16) is one of the nearest and most observed star forming regions of our Galaxy and as such, is a well suited template to obtain this missing data set. We thus propose a complete spectral map of the Eagle Nebula (M16) with the IRS/Long Low module (15-38 microns) and MIPS/SED mode (55-95 microns). Analysis of the dust emission, spectral features and continuum, and of the H2 and fine-structure gas lines within our models will provide us with constraints on the physical conditions (gas ionization state, pressure, radiation field) and dust properties (temperature, size distribution) at each position within the nebula. Only such a spatially and spectrally complete map will allow us to characterize small scale structure and dust evolution within the global context and understand the impact of small scale structure on the evolution of dusty star forming regions. This project takes advantage of the unique ability of IRS to obtain sensitive spectral maps covering large areas.
NASA Technical Reports Server (NTRS)
Nitta, Nariaki; Bruner, Marilyn E.; Saba, Julia; Strong, Keith; Harvey, Karen
2000-01-01
The subject of this investigation is to study the physics of the solar corona through the analysis of the EUV and UV data produced by two flights (12 May 1992 and 25 April 1994) of the Lockheed Solar Plasma Diagnostics Experiment (SPDE) sounding rocket payload, in combination with Yohkoh and ground-based data. Each rocket flight produced both spectral and imaging data. These joint datasets are useful for understanding the physical state of various features in the solar atmosphere at different heights ranging from the photosphere to the corona at the time of the rocket flights, which took place during the declining phase of a solar cycle, 2-4 years before the minimum. The investigation is narrowly focused on comparing the physics of small- and medium-scale strong-field structures with that of large-scale, weak fields. As we close this investigation, we have to recall that our present position in the understanding of basic solar physics problems (such as coronal heating) is much different from that in 1995 (when we proposed this investigation), due largely to the great success of SOHO and TRACE. In other words, several topics and techniques we proposed can now be better realized with data from these missions. For this reason, at some point in our work, we started concentrating on the 1992 data, which are more unique and have more supporting data. As a result, we discontinued the investigation on small-scale structures, i.e., bright points, since high-resolution TRACE images have addressed more important physics than SPDE EUV images could. In the final year, we still spent a long time calibrating the 1992 data. The work was complicated because of the old-fashioned film, which had problems not encountered with more modern CCD detectors. After our considerable effort on calibration, we were able to focus on several scientific topics, relying heavily on the SPDE UV images. 
They include the relation between filaments and filament channels, the identification of hot loops, and the physical conditions of such loops especially at their foot-points. A total of four papers were completed from this contract which are listed in the last section.
Wang, Kai; Liu, Jianjun
2017-09-29
City parks, important environments built for physical activity, play critical roles in preventing chronic diseases and promoting public health. We used five commonly used park indicators to investigate the spatiotemporal trend of city parks in mainland China between 1981 and 2014 at three scales: national, provincial and city class. City parks in China increased significantly with a turning point occurring around the year 2000. Up until the end of 2014, there were 13,074 city parks totaling 367,962 ha with 0.29 parks per 10,000 residents, 8.26 m² of park per capita and 2.00% of parkland as a percentage of urban area. However, there is still a large gap compared to the established American and Japanese city park systems, and only 5.4% of people aged above 20 access city parks for physical activity. The low number of parks per 10,000 residents brings up the issue of the accessibility to physical activity areas that public parks provide. The concern of spatial disparity, also apparent for all five city park indicators, differed strongly at provincial and city class scales. The southern and eastern coastal provinces of Guangdong, Fujian, Zhejiang and Shandong have abundant city park resources. At the scale of the city classes, mega-city II had the highest of the three ratio indicators and the large city class had the lowest. On one hand, the leading province Guangdong and its mega-cities Shenzhen and Dongguan had park indicators comparable to the United States and Japan. On the other hand, there were still five cities with no city parks and many cities with extremely low park indicators. In China, few cities have realized the importance of city parks for the promotion of leisure time physical activity. It is urgent that state and city park laws or guidelines are passed that can serve as baselines for planning a park system and determining a minimum standard for city parks with free, accessible and safe physical activity areas and sports facilities.
Wang, Kai; Liu, Jianjun
2017-01-01
City parks, important environments built for physical activity, play critical roles in preventing chronic diseases and promoting public health. We used five commonly used park indicators to investigate the spatiotemporal trend of city parks in mainland China between 1981 and 2014 at three scales: national, provincial and city class. City parks in China increased significantly, with a turning point occurring around the year 2000. By the end of 2014, there were 13,074 city parks totaling 367,962 ha, with 0.29 parks per 10,000 residents, 8.26 m² of park per capita and 2.00% of parkland as a percentage of urban area. However, there is still a large gap compared to the established American and Japanese city park systems, and only 5.4% of people aged above 20 access city parks for physical activity. The low number of parks per 10,000 residents raises the issue of the accessibility of the physical activity areas that public parks provide. Spatial disparity, apparent for all five city park indicators, is also a concern and differed strongly at the provincial and city class scales. The southern and eastern coastal provinces of Guangdong, Fujian, Zhejiang and Shandong have abundant city park resources. At the scale of the city classes, mega-city II had the highest of the three ratio indicators and the large city class had the lowest. On one hand, the leading province Guangdong and its mega-cities Shenzhen and Dongguan had park indicators comparable to the United States and Japan. On the other hand, there were still five cities with no city parks and many cities with extremely low park indicators. In China, few cities have realized the importance of city parks for the promotion of leisure-time physical activity. It is urgent that state and city park laws or guidelines are passed that can serve as baselines for planning a park system and determining a minimum standard for city parks with free, accessible and safe physical activity areas and sports facilities. PMID:28961182
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations and explicitly simulate more of these important processes. This approach is attractive in that it allows more explicit simulation of small-scale processes while also allowing interaction between the small and large scales. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Large angular scale CMB anisotropy from an excited initial mode
NASA Astrophysics Data System (ADS)
Sojasi, A.; Mohsenzadeh, M.; Yusofi, E.
2016-07-01
According to inflationary cosmology, the CMB anisotropy gives an opportunity to test predictions of new-physics hypotheses. The initial state of quantum fluctuations is one of the important options at the high energy scale, as it can affect observables such as the CMB power spectrum. In this study, a quasi-de Sitter inflationary background with an approximate de Sitter mode function built over the Bunch-Davies mode is applied to investigate the scale dependency of the CMB anisotropy. The recent Planck constraint on the spectral index motivated us to examine the effect of a new excited mode function (instead of the pure de Sitter mode) on the CMB anisotropy at large angular scales. It is found that the angular scale-invariance in the CMB temperature fluctuations is broken, and in the limit ℓ < 200 a tiny deviation appears. Also, it is shown that the power spectrum of the CMB anisotropy depends on a free parameter with mass dimension, H ≪ M* < Mp, and on the slow-roll parameter ɛ. Supported by the Islamic Azad University, Rasht Branch, Rasht, Iran
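The abstract does not reproduce the mode function itself. As a hedged sketch of the generic structure only: an excited (non-Bunch-Davies) mode in de Sitter space can be written as a Bogoliubov transformation of the Bunch-Davies mode, which rescales the power spectrum (the specific mode function of this paper will differ):

```latex
% Bunch--Davies mode for the Mukhanov variable in de Sitter (conformal time \tau < 0):
u_k^{\mathrm{BD}}(\tau) = \frac{1}{\sqrt{2k}}\left(1 - \frac{i}{k\tau}\right) e^{-ik\tau}
% A generic excited mode is a Bogoliubov transformation of it,
u_k(\tau) = \alpha_k\, u_k^{\mathrm{BD}}(\tau) + \beta_k\, u_k^{\mathrm{BD}\,*}(\tau),
\qquad |\alpha_k|^2 - |\beta_k|^2 = 1,
% which, in this convention, rescales the late-time power spectrum as
\mathcal{P}(k) = \mathcal{P}_{\mathrm{BD}}(k)\,\bigl|\alpha_k - \beta_k\bigr|^2 .
```

A scale-dependent β_k is what breaks the angular scale-invariance described in the abstract.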
Renormalization-group flow of the effective action of cosmological large-scale structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos
Following an approach of Matarrese and Pietroni, we derive the functional renormalization group (RG) flow of the effective action of cosmological large-scale structures. Perturbative solutions of this RG flow equation are shown to be consistent with standard cosmological perturbation theory. Non-perturbative approximate solutions can be obtained by truncating the a priori infinite set of possible effective actions to a finite subspace. Using for the truncated effective action a form dictated by dissipative fluid dynamics, we derive RG flow equations for the scale dependence of the effective viscosity and sound velocity of non-interacting dark matter, and we solve them numerically. Physically, the effective viscosity and sound velocity account for the interactions of long-wavelength fluctuations with the spectrum of smaller-scale perturbations. We find that the RG flow exhibits an attractor behaviour in the IR that significantly reduces the dependence of the effective viscosity and sound velocity on the input values at the UV scale. This allows for a self-contained computation of matter and velocity power spectra for which the sensitivity to UV modes is under control.
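The abstract does not display the flow equation itself. For orientation only: exact functional RG flows of this kind are typically of Wetterich form, where Γ_k is the effective action at coarse-graining scale k, Γ_k^(2) its second functional derivative, and R_k an infrared regulator (the paper's concrete equations, derived following Matarrese and Pietroni, will differ in detail):

```latex
\partial_k \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} \partial_k R_k
    \right]
```

Truncating Γ_k to a dissipative-fluid form, as the abstract describes, turns this functional equation into ordinary flow equations for the effective viscosity and sound velocity.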
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduce to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
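In the incompressible limit, the proposed model reduces to the classic Smagorinsky (1963) eddy viscosity, ν_t = (C_s Δ)² |S̄|. A minimal sketch of that baseline follows; the compressibility correction of the paper is not reproduced here, and the constant C_s = 0.17 and the 2-D reduction are illustrative assumptions:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Smagorinsky sub-grid eddy viscosity nu_t = (cs*dx)**2 * |S|
    for a 2-D resolved velocity field (u, v) on a uniform grid, where
    |S| = sqrt(2 S_ij S_ij) and S_ij is the resolved strain-rate tensor."""
    dudx, dudy = np.gradient(u, dx, dx)   # derivatives along axis 0 (x), axis 1 (y)
    dvdx, dvdy = np.gradient(v, dx, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# Pure shear u = y, v = 0 gives S12 = 1/2, hence |S| = 1 everywhere,
# so nu_t should be the constant (cs*dx)**2.
x = np.linspace(0.0, 1.0, 33)
xx, yy = np.meshgrid(x, x, indexing="ij")
nu_t = smagorinsky_nu_t(yy, np.zeros_like(yy), x[1] - x[0])
```

The pure-shear check is a convenient unit test for any strain-rate implementation, since the eddy viscosity must come out spatially constant.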
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections, and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and with early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w_DE ≃ -1 is a transient, and that the Universe will collapse in the future.
Lattice QCD Calculations in Nuclear Physics towards the Exascale
NASA Astrophysics Data System (ADS)
Joo, Balint
2017-01-01
The combination of algorithmic advances and new highly parallel computing architectures is enabling lattice QCD calculations to tackle ever more complex problems in nuclear physics. In this talk I will review some computational challenges that are encountered in large-scale cold nuclear physics campaigns, such as those in hadron spectroscopy calculations. I will discuss progress in addressing these with algorithmic improvements such as multi-grid solvers and with software for recent hardware architectures such as GPUs and the Intel Xeon Phi (Knights Landing). Finally, I will highlight some current topics for research and development as we head towards the Exascale era. This material is funded by the U.S. Department of Energy, Office of Science, Offices of Nuclear Physics, High Energy Physics and Advanced Scientific Computing Research, as well as the Office of Nuclear Physics under contract DE-AC05-06OR23177.
Uncovering the Hidden Meaning of Cross-Curriculum Comparison Results on the Force Concept Inventory
ERIC Educational Resources Information Center
Ding, Lin; Caballero, Marcos D.
2014-01-01
In a recent study, Caballero and colleagues conducted a large-scale evaluation using the Force Concept Inventory (FCI) to compare student learning outcomes between two introductory physics curricula: the Matter and Interactions (M&I) mechanics course and a pedagogically-reformed-traditional-content (PRTC) mechanics course. Using a conventional…
Childhood Abuse and Later Parenting Outcomes in Two American Indian Tribes
ERIC Educational Resources Information Center
Libby, Anne M.; Orton, Heather D.; Beals, Janette; Buchwald, Dedra; Manson, Spero M.
2008-01-01
Objectives: To examine the relationship of childhood physical and sexual abuse with reported parenting satisfaction and parenting role impairment later in life among American Indians (AIs). Methods: AIs from Southwest and Northern Plains tribes who participated in a large-scale community-based study (n=3,084) were asked about traumatic events and…
Low-severity fire increases tree defense against bark beetle attacks
Sharon Hood; Anna Sala; Emily K. Heyerdahl; Marion Boutin
2015-01-01
Induced defense is a common plant strategy in response to herbivory. Although abiotic damage, such as physical wounding, pruning, and heating, can induce plant defense, the effect of such damage by large-scale abiotic disturbances on induced defenses has not been explored and could have important consequences for plant survival facing future biotic...
Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories
ERIC Educational Resources Information Center
Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher
2009-01-01
Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…
Deahn M. Donner; Christine A. Ribic; Albert J. Beck; Dale Higgins; Dan Eklund; Susan Reinecke
2015-01-01
Woodland ponds are important landscape features that help sustain populations of amphibians that require this aquatic habitat for successful reproduction. Species abundance patterns often reflect site-specific differences in hydrology, physical characteristics, and surrounding vegetation. Large-scale processes such as changing land cover and environmental conditions...
USDA-ARS?s Scientific Manuscript database
Process evaluations of large-scale school based programs are necessary to aid in the interpretation of the outcome data. The Louisiana Health (LA Health) study is a multi-component childhood obesity prevention study for middle school children. The Physical Education (PEQ), Intervention (IQ), and F...
The Use of ICT In Teaching Tertiary Physics: Technology and Pedagogy
ERIC Educational Resources Information Center
Nguyen, Nhung; Williams, John; Nguyen, Tuan
2012-01-01
In the light of the education reform driven by Vietnam's government, information communication technologies (ICTs) are becoming integrated into education, while concurrently, teaching approaches are shifting from teacher-centred to student-centred in Vietnam's universities. The innovation is top-down and is being applied on a large scale. Emerging…
Olga A. Kildisheva; R. Kasten Dumroese; Anthony S. Davis
2013-01-01
Physically dormant seeds of Munro's globemallow (Sphaeralcea munroana (Douglas) Spach [Malvaceae]) were scarified by boiling, tumbling, burning, dry-heating, and burning + heating treatments in an attempt to find an effective, operational, large-scale treatment for nurseries and restoration activities. Results indicate that out of the tested treatments, seed...
PIPER and Polarized Galactic Foregrounds
NASA Technical Reports Server (NTRS)
Chuss, David
2009-01-01
In addition to probing inflationary cosmology, PIPER will measure the polarized dust emission from the Galaxy. PIPER will be capable of full (I,Q,U,V) measurement over four frequency bands. These measurements will provide insight into the physics of dust grains and a probe of the Galactic magnetic field on large and intermediate scales.
Perceived Energy for Parenting: A New Conceptualization and Scale
ERIC Educational Resources Information Center
Janisse, Heather C.; Barnett, Douglas; Nies, Mary A.
2009-01-01
Parenting may be the most physically and mentally demanding social role people encounter during their life. Personal resources are essential to child rearing, yet perceptions of parenting energy have been largely unexplored. This manuscript reports on the need for and development of a measure of perceived energy for parenting (PEP), as well as a…
Global climatology of explosive cyclones
NASA Astrophysics Data System (ADS)
Balcerak, Ernie
2013-03-01
Explosive cyclones, which have rapidly intensifying winds and heavy rain, can seriously threaten life and property. These "meteorological bombs" are difficult to forecast, in part because scientists need a better understanding of the physical mechanisms by which they form. In particular, the large-scale circulation conditions that may contribute to explosive cyclone formation are not well understood.
Current challenges in fundamental physics
NASA Astrophysics Data System (ADS)
Egana Ugrinovic, Daniel
The discovery of the Higgs boson at the Large Hadron Collider completed the Standard Model of particle physics. The Standard Model is a remarkably successful theory of fundamental physics, but it suffers from severe problems. It does not provide an explanation for the origin or stability of the electroweak scale nor for the origin and structure of flavor and CP violation. It predicts vanishing neutrino masses, in disagreement with experimental observations. It also fails to explain the matter-antimatter asymmetry of the universe, and it does not provide a particle candidate for dark matter. In this thesis we provide experimentally testable solutions for most of these problems and we study their phenomenology.
Receptor signaling clusters in the immune synapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dustin, Michael L.; Groves, Jay T.
2012-02-23
Signaling processes between various immune cells involve large-scale spatial reorganization of receptors and signaling molecules within the cell-cell junction. These structures, now collectively referred to as immune synapses, interleave physical and mechanical processes with the cascades of chemical reactions that constitute signal transduction systems. Molecular level clustering, spatial exclusion, and long-range directed transport are all emerging as key regulatory mechanisms. The study of these processes is drawing researchers from physical sciences to join the effort and represents a rapidly growing branch of biophysical chemistry. Furthermore, recent advances in physical and quantitative analyses of signaling within the immune synapses are reviewed here.
What does physics have to do with cancer?
Michor, Franziska; Liphardt, Jan; Ferrari, Mauro; Widom, Jonathan
2013-01-01
Large-scale cancer genomics, proteomics and RNA-sequencing efforts are currently mapping in fine detail the genetic and biochemical alterations that occur in cancer. However, it is becoming clear that it is difficult to integrate and interpret these data and to translate them into treatments. This difficulty is compounded by the recognition that cancer cells evolve, and that initiation, progression and metastasis are influenced by a wide variety of factors. To help tackle this challenge, the US National Cancer Institute Physical Sciences-Oncology Centers initiative is bringing together physicists, cancer biologists, chemists, mathematicians and engineers. How are we beginning to address cancer from the perspective of the physical sciences? PMID:21850037
Fast propagation of electromagnetic fields through graded-index media.
Zhong, Huiying; Zhang, Site; Shi, Rui; Hellmann, Christian; Wyrowski, Frank
2018-04-01
Graded-index (GRIN) media are widely used for modeling different situations: some components are designed around GRIN modulation, e.g., multi-mode fibers, optical lenses, or acousto-optical modulators; in other components the refractive-index variation is undesired, arising from, e.g., stress or heating; and finally, some effects in nature are characterized by a GRIN variation, like turbulence in air or biological tissues. Modeling electromagnetic fields propagating in GRIN media is therefore of high importance for optical simulation and design. Though ray tracing can be used to evaluate some basic effects in GRIN media, field properties are not considered or evaluated. General physical optics techniques, like the finite element method or finite-difference time-domain, can be used to calculate fields in GRIN media, but they require great numerical effort or may even be impractical for large-scale components. Therefore, there still exists a demand for a fast physical optics model of field propagation through GRIN media on a large scale, which is explored in this paper.
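As a hedged illustration of the ray-optics baseline the authors contrast with field tracing: in a parabolic GRIN profile n(x) ≈ n0(1 − g²x²/2), the paraxial ray equation reduces to x'' = −g²x, so rays oscillate sinusoidally with period 2π/g. The profile, parameters, and integrator below are illustrative choices, not from the paper:

```python
import math

def trace_paraxial_ray(x0, theta0, g, z_end, n_steps=10000):
    """Integrate the paraxial ray equation x'' = -g**2 * x (parabolic GRIN
    profile n(x) ~ n0 * (1 - (g*x)**2 / 2)) with a leapfrog scheme.
    Returns (x, slope) at z = z_end."""
    dz = z_end / n_steps
    x, s = x0, theta0              # s = dx/dz, the ray slope
    s -= 0.5 * dz * g * g * x      # initial half kick: a(x) = -g^2 x
    for _ in range(n_steps):
        x += dz * s                # drift
        s -= dz * g * g * x        # kick
    s += 0.5 * dz * g * g * x      # sync slope back to z = z_end
    return x, s

# A ray launched parallel to the axis follows x(z) = x0*cos(g*z), so after
# half a period (z = pi/g) it re-crosses at -x0 with zero slope.
g = 1.0
x_end, s_end = trace_paraxial_ray(0.01, 0.0, g, math.pi / g)
```

The leapfrog (velocity Verlet) choice keeps the oscillation amplitude stable over many periods, which a naive forward-Euler step would not.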
NASA Astrophysics Data System (ADS)
Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to limited physical memory resources and the very slow disk transfer rate. In this paper, we proposed a stream tiling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process was broken. Then, we implemented a streaming framework for scheduling the I/O processes and computing units. Each computing unit encapsulated the same copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, the experiment demonstrated that our stream tiling estimation can efficiently alleviate the heavy pressure of I/O-bound work, and the measured speedups after optimization greatly outperformed the directly parallel versions on shared-memory systems with multi-core processors.
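A minimal sketch of the stream-tiling idea described above, under assumed details: row tiles with a one-row overlap stand in for the "topological expansions," a thread-safe queue streams tiles to workers, and each worker runs the same triangulation-based surface-area estimator. The tile size, worker count, and estimator are illustrative, not the authors' implementation:

```python
import threading, queue
import numpy as np

def tile_area(tile, cell):
    """Surface area of one tile of a height grid: split each grid cell
    into two triangles and sum their 3-D areas."""
    z = tile
    total = 0.0
    for i in range(z.shape[0] - 1):
        for j in range(z.shape[1] - 1):
            p00 = np.array([i * cell, j * cell, z[i, j]])
            p10 = np.array([(i + 1) * cell, j * cell, z[i + 1, j]])
            p01 = np.array([i * cell, (j + 1) * cell, z[i, j + 1]])
            p11 = np.array([(i + 1) * cell, (j + 1) * cell, z[i + 1, j + 1]])
            total += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            total += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    return total

def streamed_area(dem, cell=1.0, tile_rows=4, n_workers=3):
    """Stream row tiles (with a one-row overlap so the cells spanning a
    tile boundary are not lost) to worker threads; each worker runs the
    same estimator, and partial areas are summed."""
    tiles = queue.Queue()
    results, lock = [], threading.Lock()

    def worker():
        while True:
            t = tiles.get()
            if t is None:          # sentinel: shut this worker down
                break
            a = tile_area(t, cell)
            with lock:
                results.append(a)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    r = 0
    while r < dem.shape[0] - 1:    # producer: emit overlapping row tiles
        tiles.put(dem[r : min(r + tile_rows + 1, dem.shape[0])])
        r += tile_rows
    for _ in threads:
        tiles.put(None)
    for th in threads:
        th.join()
    return sum(results)

# Sanity check: a flat DEM recovers its planar area exactly
# (9 x 6 unit cells for a 10 x 7 height grid).
dem = np.zeros((10, 7))
area = streamed_area(dem)
```

The queue decouples tile production (I/O) from computation, which is the point of the streaming framework; with `multiprocessing` instead of `threading`, the same structure would also escape the GIL for CPU-bound estimators.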
Microphysics in the Multi-Scale Modeling Systems with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, J.; Lang, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.
2011-01-01
In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction (NWP) models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with explicit cloud-radiation and cloud-surface interactions, are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.
Nonideal Rayleigh–Taylor mixing
Lim, Hyunkyung; Iwerks, Justin; Glimm, James; Sharp, David H.
2010-01-01
Rayleigh–Taylor mixing is a classical hydrodynamic instability that occurs when a light fluid pushes against a heavy fluid. The two main sources of nonideal behavior in Rayleigh–Taylor (RT) mixing are regularizations (physical and numerical), which produce deviations from a pure Euler equation, scale invariant formulation, and nonideal (i.e., experimental) initial conditions. The Kolmogorov theory of turbulence predicts stirring at all length scales for the Euler fluid equations without regularization. We interpret mathematical theories of existence and nonuniqueness in this context, and we provide numerical evidence for dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when modeled by Euler equations. Operationally, indeterminacy shows up as nonunique solutions for RT mixing, parametrized by Schmidt and Prandtl numbers, in the large Reynolds number (Euler equation) limit. Verification and validation evidence is presented for the large eddy simulation algorithm used here. Mesh convergence depends on breaking the nonuniqueness with explicit use of the laminar Schmidt and Prandtl numbers and their turbulent counterparts, defined in terms of subgrid scale models. The dependence of the mixing rate on the Schmidt and Prandtl numbers and other physical parameters will be illustrated. We demonstrate numerically the influence of initial conditions on the mixing rate. Both the dominant short wavelength initial conditions and long wavelength perturbations are observed to play a role. By examination of two classes of experiments, we observe the absence of a single universal explanation, with long and short wavelength initial conditions, and the various physical and numerical regularizations contributing in different proportions in these two different contexts. PMID:20615983
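The dimensionless numbers that parametrize the nonuniqueness in the abstract above are the standard ones (these definitions are general fluid dynamics, not specific to this paper):

```latex
\mathrm{Sc} = \frac{\nu}{D} \quad \text{(momentum vs.\ mass diffusivity)},
\qquad
\mathrm{Pr} = \frac{\nu}{\alpha} \quad \text{(momentum vs.\ thermal diffusivity)},
\qquad
\mathrm{Re} = \frac{UL}{\nu},
```

with ν the kinematic viscosity, D the mass diffusivity, and α the thermal diffusivity. The Euler-equation limit discussed in the abstract corresponds to Re → ∞ with Sc and Pr held fixed, which is why those two ratios survive as parameters of the mixing rate.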
Physical modeling in geomorphology: are boundary conditions necessary?
NASA Astrophysics Data System (ADS)
Cantelli, A.
2012-12-01
In the design of physical experiments in geomorphology, boundary conditions are key elements that determine the quality of the results and therefore the development of the study. For years engineers have modeled structures, such as dams and bridges, with high precision and excellent results. Until the last decade, a great part of the physical experimental work in geomorphology was developed with an engineer-like approach, requiring an accurate scaling analysis to determine inflow parameters and initial geometrical conditions. During the last decade, however, the way we approach physical experiments has changed significantly. In particular, boundary conditions and initial conditions are now considered unknown factors that need to be discovered during the experiment. This new philosophy demands a more intensive data acquisition process, but relaxes the obligation to know a priori the appropriate input and initial conditions and provides the flexibility to discover them. Here I present some practical examples of this experimental approach in deepwater geomorphology, discuss some questions about the scaling of turbidity currents, and describe a new large experimental facility built at the Universidade Federal do Rio Grande do Sul, Brasil.
Physical descriptions of the bacterial nucleoid at large scales, and their biological implications
NASA Astrophysics Data System (ADS)
Benza, Vincenzo G.; Bassetti, Bruno; Dorfman, Kevin D.; Scolari, Vittore F.; Bromek, Krystyna; Cicuta, Pietro; Cosentino Lagomarsino, Marco
2012-07-01
Recent experimental and theoretical approaches have attempted to quantify the physical organization (compaction and geometry) of the bacterial chromosome with its complement of proteins (the nucleoid). The genomic DNA exists in a complex and dynamic protein-rich state, which is highly organized at various length scales. This has implications for modulating (when not directly enabling) the core biological processes of replication, transcription and segregation. We overview the progress in this area, driven in the last few years by new scientific ideas and new interdisciplinary experimental techniques, ranging from high space- and time-resolution microscopy to high-throughput genomics employing sequencing to map different aspects of the nucleoid-related interactome. The aim of this review is to present the wide spectrum of experimental and theoretical findings coherently, from a physics viewpoint. In particular, we highlight the role that statistical and soft condensed matter physics play in describing this system of fundamental biological importance, specifically reviewing classic and more modern tools from the theory of polymers. We also discuss some attempts toward unifying interpretations of the current results, pointing to possible directions for future investigation.
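The classic polymer-theory scalings that such reviews draw on relate chain size R to monomer number N (standard results, not specific to the nucleoid data; the Flory value 3/5 is an approximation to the self-avoiding exponent ≈ 0.588):

```latex
R \sim a\, N^{\nu}, \qquad
\nu =
\begin{cases}
1/2 & \text{ideal (Gaussian) chain},\\
3/5 & \text{self-avoiding chain (Flory)},\\
1/3 & \text{collapsed globule},
\end{cases}
```

with a the monomer (Kuhn) length. Which regime the protein-compacted genomic DNA falls into at a given length scale is exactly the kind of question the reviewed experiments probe.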
Empirical Laws in Economics Uncovered Using Methods in Statistical Mechanics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2001-06-01
In recent years, statistical physicists and computational physicists have determined that physical systems which consist of a large number of interacting particles obey universal "scaling laws" that serve to demonstrate an intrinsic self-similarity operating in such systems. Further, the parameters appearing in these scaling laws appear to be largely independent of the microscopic details. Since economic systems also consist of a large number of interacting units, it is plausible that scaling theory can be usefully applied to economics. To test this possibility using realistic data sets, a number of scientists have begun analyzing economic data using methods of statistical physics [1]. We have found evidence for scaling (and data collapse), as well as universality, in various quantities, and these recent results will be reviewed in this talk, starting with the most recent study [2]. We also propose models that may lead to some insight into these phenomena. These results will be discussed, as well as the overall rationale for why one might expect scaling principles to hold for complex economic systems. The work on which this talk is based is supported by BP, and was carried out in collaboration with L. A. N. Amaral, S. V. Buldyrev, D. Canning, P. Cizeau, X. Gabaix, P. Gopikrishnan, S. Havlin, Y. Lee, Y. Liu, R. N. Mantegna, K. Matia, M. Meyer, C.-K. Peng, V. Plerou, M. A. Salinger, and M. H. R. Stanley. [1.] See, e.g., R. N. Mantegna and H. E. Stanley, Introduction to Econophysics: Correlations & Complexity in Finance (Cambridge University Press, Cambridge, 1999). [2.] P. Gopikrishnan, B. Rosenow, V. Plerou, and H. E. Stanley, "Identifying Business Sectors from Stock Price Fluctuations," e-print cond-mat/0011145; V. Plerou, P. Gopikrishnan, L. A. N. Amaral, X. Gabaix, and H. E. Stanley, "Diffusion and Economic Fluctuations," Phys. Rev. E (Rapid Communications) 62, 3023-3026 (2000); P. Gopikrishnan, V. Plerou, X. Gabaix, and H. E. Stanley, "Statistical Properties of Share Volume Traded in Financial Markets," Phys. Rev. E (Rapid Communications) 62, 4493-4496 (2000).
The Role of Forests in Regulating the River Flow Regime of Large Basins of the World
NASA Astrophysics Data System (ADS)
Salazar, J. F.; Villegas, J. C.; Mercado-Bettin, D. A.; Rodríguez, E.
2016-12-01
Many natural and social phenomena depend on river flow regimes that are being altered by global change. Understanding the mechanisms behind such alterations is crucial for predicting river flow regimes in a changing environment. Here we explore potential linkages between the presence of forests and the capacity of river basins for regulating river flows. Regulation is defined here as the capacity of river basins to attenuate the amplitude of the river flow regime, that is to reduce the difference between high and low flows. We first use scaling theory to show how scaling properties of observed river flows can be used to classify river basins as regulated or unregulated. This parsimonious classification is based on a physical interpretation of the scaling properties (particularly the scaling exponents) that is novel (most previous studies have focused on the interpretation of the scaling exponents for floods only), and widely-applicable to different basins (the only assumption is that river flows in a given river basin exhibit scaling properties through well-known power laws). Then we show how this scaling framework can be used to explore global-change-induced temporal variations in the regulation capacity of river basins. Finally, we propose a conceptual hypothesis (the "Forest reservoir concept") to explain how large-scale forests can exert important effects on the long-term water balance partitioning and regulation capacity of large basins of the world. Our quantitative results are based on data analysis (river flows and land cover features) from 22 large basins of the world, with emphasis in the Amazon river and its main tributaries. Collectively, our findings support the hypothesis that forest cover enhances the capacity of large river basins to maintain relatively high mean river flows, as well as to regulate (ameliorate) extreme river flows. 
Advancing towards this quantitative understanding of the relation between forest cover and river flow regimes is crucial for water management- and land cover-related decisions.
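The scaling classification sketched above fits power laws Q ∝ A^θ relating river flow to drainage area across sub-basins and compares the exponents of high and low flows. Below is a minimal Python illustration with synthetic data; the specific decision rule (a basin counts as regulated when θ_low exceeds θ_high, i.e. the regime amplitude narrows downstream) is an assumption for illustration, not necessarily the authors' exact criterion.

```python
import math

def power_law_exponent(areas_km2, flows_m3s):
    """Least-squares slope of log(Q) vs log(A): the scaling exponent
    theta in Q ~ A**theta."""
    xs = [math.log(a) for a in areas_km2]
    ys = [math.log(q) for q in flows_m3s]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Synthetic sub-basins: low flows grow faster with area than high flows,
# i.e. the basin attenuates the flow-regime amplitude downstream.
areas = [1e3, 1e4, 1e5, 1e6]
q_high = [a ** 0.8 for a in areas]   # theta_high = 0.8
q_low = [a ** 1.0 for a in areas]    # theta_low = 1.0

t_high = power_law_exponent(areas, q_high)
t_low = power_law_exponent(areas, q_low)
regulated = t_low > t_high  # hypothetical classification rule
```

The only assumption carried over from the abstract is that flows scale with drainage area as a power law; the exponent comparison is one simple way to operationalize "attenuating the amplitude of the flow regime".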
Losekam, Stefanie; Goetzky, Benjamin; Kraeling, Svenja; Rief, Winfried; Hilbert, Anja
2010-08-01
To examine self-reported physical activity with regard to weight teasing and self-efficacy. Within a cross-sectional study, 321 overweight and normal-weight students, consisting of 51% girls (n = 161) and 49% boys (n = 160) at a mean age of 12.22 years (SD = 1.07), were sampled from German secondary schools. The Perception of Teasing Scale, the Physical Self-Efficacy Scale, and the Leipzig Lifestyle Questionnaire for Adolescents were used to assess experiences with weight-related teasing, self-efficacy, physical activity and social context variables. Self-efficacy, weight teasing and social context variables were related to physical activity within the full sample (R^2 = 0.433). More frequent weight teasing was associated with decreased physical activity in boys, but not in girls. Overweight participants reported more frequent weight teasing experiences and less self-efficacy than participants of normal weight (all p < 0.001), but there was no difference in physical activity (p > 0.05). There were large correlations between self-efficacy and physical activity (r = 0.614, p < 0.01), and medium correlations between male sex and physical activity (r = 0.298, p < 0.01). Weight teasing and self-efficacy were negatively correlated (r = -0.190, p < 0.05). These results suggest that self-efficacy and an encouraging social context are beneficial to physical activity while weight teasing experiences are detrimental. Interventions against weight teasing in youth are needed. Copyright © 2010 S. Karger AG, Basel.
Enhancer Sharing Promotes Neighborhoods of Transcriptional Regulation Across Eukaryotes
Quintero-Cadena, Porfirio; Sternberg, Paul W.
2016-01-01
Enhancers physically interact with transcriptional promoters, looping over distances that can span multiple regulatory elements. Given that enhancer–promoter (EP) interactions generally occur via common protein complexes, it is unclear whether EP pairing is predominantly deterministic or proximity guided. Here, we present cross-organismic evidence suggesting that most EP pairs are compatible, largely determined by physical proximity rather than specific interactions. By reanalyzing transcriptome datasets, we find that the transcription of gene neighbors is correlated over distances that scale with genome size. We experimentally show that nonspecific EP interactions can explain such correlation, and that EP distance acts as a scaling factor for the transcriptional influence of an enhancer. We propose that enhancer sharing is commonplace among eukaryotes, and that EP distance is an important layer of information in gene regulation. PMID:27799341
Bridging the Gap Between the iLEAPS and GEWEX Land-Surface Modeling Communities
NASA Technical Reports Server (NTRS)
Bonan, Gordon; Santanello, Joseph A., Jr.
2013-01-01
Models of Earth's weather and climate require fluxes of momentum, energy, and moisture across the land-atmosphere interface to solve the equations of atmospheric physics and dynamics. Just as atmospheric models can, and do, differ between weather and climate applications, mostly related to issues of scale, resolved or parameterised physics, and computational requirements, so too can the land models that provide the required surface fluxes differ between weather and climate models. Here, however, the issue is less one of scale-dependent parameterisations. Computational demands can influence other minor land model differences, especially with respect to initialisation, data assimilation, and forecast skill. However, the distinction among land models (and their development and application) is largely driven by the different science and research needs of the weather and climate communities.
Maestro: an orchestration framework for large-scale WSN simulations.
Riliskis, Laurynas; Osipov, Evgeny
2014-03-18
Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
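The benchmarking step described above, picking the VM type with the best balance of performance and cost, can be sketched as a simple performance-per-dollar ranking. The instance names and numbers below are hypothetical, not Maestro's actual benchmark output:

```python
# Hypothetical benchmark results: simulated seconds per wall-clock hour
# and hourly price for several cloud VM types (names are illustrative).
vms = {
    "small": {"perf": 120.0, "usd_per_hr": 0.05},
    "medium": {"perf": 300.0, "usd_per_hr": 0.10},
    "large": {"perf": 450.0, "usd_per_hr": 0.40},
}

def best_value(vms):
    """Pick the instance type with the highest performance per dollar."""
    return max(vms, key=lambda k: vms[k]["perf"] / vms[k]["usd_per_hr"])

choice = best_value(vms)
```

With these illustrative numbers the "large" instance is fastest in absolute terms but the "medium" one delivers the most simulated time per dollar, which is the trade-off the abstract's "optimal balance of performance and cost" refers to.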
Data management strategies for multinational large-scale systems biology projects.
Wruck, Wasco; Peuker, Martin; Regenbrecht, Christian R A
2014-01-01
Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don't Embrace Open Access. The Guardian 2012]. Through the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust have therefore developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs), which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects, for example in systems biology, are hard to find. Here, we give an overview of a selection of open-source data management systems that have been employed successfully in large-scale projects.
NASA Astrophysics Data System (ADS)
Beichner, Robert
2015-03-01
The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students to learn and succeed not only in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicates highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).
Large-scale semidefinite programming for many-electron quantum mechanics.
Mazziotti, David A
2011-02-25
The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop an SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N2 and (ii) the metal-to-insulator transition of H50. For H50 the SDP problem has 9.4×10^6 variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics. © 2011 American Physical Society
Tomographic Imaging of the Sun's Interior
NASA Technical Reports Server (NTRS)
Kosovichev, A. G.
1996-01-01
A new method is presented for determining the three-dimensional sound-speed structure and flow velocities in the solar convection zone by inversion of the acoustic travel-time data recently obtained by Duvall and coworkers. The initial inversion results reveal large-scale subsurface structures and flows related to the active regions, and are important for understanding the physics of solar activity and large-scale convection. The results provide evidence of a zonal structure below the surface in the low-latitude area of magnetic activity. Strong converging downflows, up to 1.2 km/s, and a substantial excess of the sound speed are found beneath growing active regions. In a decaying active region, there is evidence for a lower-than-average sound speed and for upwelling of plasma.
Magnetic storm generation by large-scale complex structure Sheath/ICME
NASA Astrophysics Data System (ADS)
Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.
2017-12-01
We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 on the website ftp://ftp.iki.rssi.ru/pub/omni/ prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in the two intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in the Ejecta and close to the value in the MC. (2) The Sheath has a 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most of the so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062. References. 1. Nikolaeva N. S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127. 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94. 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186. 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis, J. Geophys. Res. Space Physics, 120, doi:10.1002/2015JA021274. 5. Yermolaev Y. I., I. G. Lodkina, N. S. Nikolaeva, M. Y. Yermolaev and M. O. Riazantseva (2017), Some Problems of Identification of Large-Scale Solar Wind Types and Their Role in the Physics of the Magnetosphere, Cosmic Res., 55(3), 178-189, doi:10.1134/S0010952517030029.
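The superposed epoch analysis used above averages many events after aligning them on a common epoch grid; because Sheath and ICME intervals have different durations from event to event, each interval is first rescaled to a fixed number of bins (the "double" variant applies this separately to the Sheath and the ICME interval). A minimal sketch with toy Dst-like profiles, not the catalog's data:

```python
def rescale(series, nbins):
    """Linearly interpolate a time series onto nbins equally spaced
    points spanning its own duration, so events of different lengths
    share one epoch grid."""
    n = len(series)
    out = []
    for i in range(nbins):
        t = i * (n - 1) / (nbins - 1)  # position in original samples
        j = int(t)
        frac = t - j
        if j + 1 < n:
            out.append(series[j] * (1 - frac) + series[j + 1] * frac)
        else:
            out.append(series[j])
    return out

def superposed_epoch(events, nbins):
    """Average many events after rescaling each to the common grid.
    The 'double' variant would run this separately on the Sheath and
    ICME interval of each event, then concatenate the two averages."""
    scaled = [rescale(e, nbins) for e in events]
    return [sum(col) / len(col) for col in zip(*scaled)]

# Toy Dst-like profiles of different durations (nT)
events = [[0, -20, -40, -30], [0, -10, -30, -50, -40, -20]]
avg = superposed_epoch(events, nbins=5)
```

The rescaling step is what lets events with a 6-hour Sheath and a 20-hour Sheath contribute to the same average profile.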
Ganchoon, Filipinas; Bugho, Rommel; Calina, Liezel; Dy, Rochelle; Gosney, James
2017-06-09
Physiatrists have provided humanitarian assistance in recent large-scale global natural disasters. Super Typhoon Haiyan, the deadliest and most costly typhoon in modern Philippine history, made landfall on 8 November 2013, resulting in significant humanitarian needs. Philippine Academy of Rehabilitation Medicine (PARM) physiatrists conducted a project of 23 emergency basic relief and medical aid missions in response to Super Typhoon Haiyan from November 2013 to February 2014. The final mission was a medical aid mission to the inland rural community of Burauen, Leyte. Summary data were collected, collated, and tabulated; project and mission evaluation was performed. During the humanitarian assistance project, 31,254 basic relief kits containing a variety of food and non-food items were distributed, and medical services including consultation, treatment, and medicines were provided to 7255 patients. Of the 344 conditions evaluated in the medical aid mission to Burauen, Leyte, 85 (59%) were physical and rehabilitation medicine conditions comprising musculoskeletal (62 [73%]), neurological (17 [20%]), and dermatological (6 [7%]) diagnoses. Post-mission and project analysis resulted in recommendations and programmatic changes to strengthen response in future disasters. Physiatrists functioned as medical providers, mission team leaders, community advocates, and in other roles. This physiatrist-led humanitarian assistance project met critical basic relief and medical aid needs of persons impacted by Super Typhoon Haiyan, demonstrating significant roles performed by physiatrists in response to a large-scale natural disaster. Resulting disaster programming changes and recommendations may inform a more effective response by PARM mission teams in the Philippines, as well as by other South-East Asian teams comprising rehabilitation professionals, to large-scale, regional natural disasters.
Implications for rehabilitation: Large-scale natural disasters including tropical cyclones can have a catastrophic impact on the affected population. In response to Super Typhoon Haiyan, physiatrists representing the Philippine Academy of Rehabilitation Medicine conducted a project of 23 emergency basic relief and medical aid missions from November 2013 to February 2014. Project analysis indicates that medical mission teams responding in similar settings may expect to evaluate a significant number of physical medicine and rehabilitation conditions. Medical rehabilitation with participation by rehabilitation professionals, including rehabilitation doctors, is essential to the emergency medical response in large-scale natural disasters.
Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.
2015-01-01
Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.
Theory based scaling of edge turbulence and implications for the scrape-off layer width
NASA Astrophysics Data System (ADS)
Myra, J. R.; Russell, D. A.; Zweben, S. J.
2016-11-01
Turbulence and plasma parameter data from the National Spherical Torus Experiment (NSTX) [Ono et al., Nucl. Fusion 40, 557 (2000)] is examined and interpreted based on various theoretical estimates. In particular, quantities of interest for assessing the role of turbulent transport on the midplane scrape-off layer heat flux width are assessed. Because most turbulence quantities exhibit large scatter and little scaling within a given operation mode, this paper focuses on length and time scales and dimensionless parameters between operational modes including Ohmic, low (L), and high (H) modes using a large NSTX edge turbulence database [Zweben et al., Nucl. Fusion 55, 093035 (2015)]. These are compared with theoretical estimates for drift and interchange rates, profile modification saturation levels, a resistive ballooning condition, and dimensionless parameters characterizing L and H mode conditions. It is argued that the underlying instability physics governing edge turbulence in different operational modes is, in fact, similar, and is consistent with curvature-driven drift ballooning. Saturation physics, however, is dependent on the operational mode. Five dimensionless parameters for drift-interchange turbulence are obtained and employed to assess the importance of turbulence in setting the scrape-off layer heat flux width λq and its scaling. An explicit proportionality of the width λq to the safety factor and major radius (qR) is obtained under these conditions. Quantitative estimates and reduced model numerical simulations suggest that the turbulence mechanism is not negligible in determining λq in NSTX, at least for high plasma current discharges.