Considering Time-Scale Requirements for the Future
2013-05-01
geocentric reference frame with the SI second realized on the rotating geoid as the scale unit. It is a continuous atomic time scale that was...the Barycentric and Geocentric Celestial Reference Systems, two time scales, Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG)...defined in 2006 as a linear scaling of TCB having the approximate rate of TT. TCG is the time coordinate for the four-dimensional geocentric coordinate
Role of time in symbiotic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawala, A.K.
1996-12-31
All systems have a dynamics which reflects the changes in the system in time and, therefore, have to maintain a notion of time, either explicitly or implicitly. Traditionally, the notion of time in constructed systems has been implicitly specified at design time through rigid structures, such as sampled data systems which operate with a fixed time tick, or feedback systems designed around a fixed time scale for the dynamics of the system and the controller responses. In biological systems, the sense of time is a key element but it is not rigidly structured, even though all such systems have a clear notion of time. We define the notion of time in systems in terms of temporal locality, time scale, and time horizon. Temporal locality captures the accuracy with which the system knows the current time. Time scale indicates the smallest and the largest granularity considered; it also reflects the reaction time. The time horizon indicates the time beyond which the system considers events to be distant future and may not take them into account in its actions. Note that the temporal locality, time scale, and time horizon may be different for different types of actions of a system, thereby permitting the system to use multiple notions of time concurrently. In multi-agent systems each subsystem may have its own notion of time, but when interactions take place, coordination is necessary. Such coordination requires that the notions of time of the different agents be consistent. Clearly, the consistency requirement in this case does not mean exactly identical, but implies that different agents can coordinate their actions, which must take place in time. When the actions only require a determinate ordering, the required coordination is much less severe than when actions must take place at the same time.
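As an illustration of the three notions defined above, the sketch below models them as a per-action-type record; the class and field names are hypothetical and not taken from the original paper.

```python
from dataclasses import dataclass

@dataclass
class TimeNotion:
    """Hypothetical record of a system's notion of time for one action type."""
    locality_s: float   # temporal locality: accuracy of the current-time estimate (seconds)
    scale_min_s: float  # time scale: smallest granularity considered (seconds)
    scale_max_s: float  # time scale: largest granularity considered (seconds)
    horizon_s: float    # time horizon: beyond this, events are "distant future" and ignored

# A system may hold several notions of time concurrently, one per action type.
notions = {
    "sensor_sampling": TimeNotion(locality_s=1e-4, scale_min_s=1e-3, scale_max_s=1.0, horizon_s=10.0),
    "mission_planning": TimeNotion(locality_s=1.0, scale_min_s=60.0, scale_max_s=3600.0, horizon_s=86400.0),
}
```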
Satellite orbit and data sampling requirements
NASA Technical Reports Server (NTRS)
Rossow, William
1993-01-01
Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of the calculation cells should be small and the calculation should be transient with a small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that such simulations are difficult in practice. This study investigates the requirements on cell size and time step size for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.
2017-11-01
magnitude, intensity, and seasonality of climate. For infrastructure projects, relevant design life often exceeds 30 years—a period of time of...uncertainty about future statistical properties of climate at time and spatial scales required for planning and design purposes. Information...about future statistical properties of climate at time and spatial scales required for planning and design, and for assessing future operational
Divisions of geologic time-major chronostratigraphic and geochronologic units
2010-01-01
Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.
Performance limitations of bilateral force reflection imposed by operator dynamic characteristics
NASA Technical Reports Server (NTRS)
Chapel, Jim D.
1989-01-01
A linearized, single-axis model is presented for bilateral force reflection which facilitates investigation into the effects of manipulator, operator, and task dynamics, as well as time delay and gain scaling. Structural similarities are noted between this model and impedance control. Stability results based upon this model impose requirements upon operator dynamic characteristics as functions of system time delay and environmental stiffness. An experimental characterization reveals the limited capabilities of the human operator to meet these requirements. A procedure is presented for determining the force reflection gain scaling required to provide stability and acceptable operator workload. This procedure is applied to a system with dynamics typical of a space manipulator, and the required gain scaling is presented as a function of environmental stiffness.
Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean
NASA Astrophysics Data System (ADS)
Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.
2018-02-01
The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
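A quick back-of-the-envelope check of the quoted platform counts, assuming the 35°S-70°S band is tiled uniformly at the stated spacings (illustrative arithmetic only, not the paper's optimal-design calculation):

```python
# Platforms needed to tile the 35-70 degree South band at a given lon x lat spacing.
def platform_count(lon_spacing_deg, lat_spacing_deg, lat_extent_deg=35.0):
    n_lon = 360.0 / lon_spacing_deg
    n_lat = lat_extent_deg / lat_spacing_deg
    return n_lon * n_lat

print(platform_count(20, 6))    # carbon/heat inventory: ~105 platforms (~100 quoted)
print(platform_count(30, 6))    # air-sea carbon flux:   ~70 platforms
print(platform_count(90, 10))   # air-sea heat flux:     ~14 platforms
print(platform_count(7, 3))     # nominal 600-float Argo-like density: ~600
```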
Mesoscale Models of Fluid Dynamics
NASA Astrophysics Data System (ADS)
Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.
During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.
NASA Technical Reports Server (NTRS)
Schubert, Siegfried
2011-01-01
Drought is fundamentally the result of an extended period of reduced precipitation lasting anywhere from a few weeks to decades and even longer. As such, addressing drought predictability and prediction in a changing climate requires foremost that we make progress on the ability to predict precipitation anomalies on subseasonal and longer time scales. From the perspective of the users of drought forecasts and information, drought is however most directly viewed through its impacts (e.g., on soil moisture, streamflow, crop yields). As such, the question of the predictability of drought must extend to those quantities as well. In order to make progress on these issues, the WCRP drought information group (DIG), with the support of WCRP, the Catalan Institute of Climate Sciences, the La Caixa Foundation, the National Aeronautics and Space Administration, the National Oceanic and Atmospheric Administration, and the National Science Foundation, has organized a workshop to focus on: (1) user requirements for drought prediction information on sub-seasonal to centennial time scales; (2) current understanding of the mechanisms and predictability of drought on sub-seasonal to centennial time scales; (3) current drought prediction/projection capabilities on sub-seasonal to centennial time scales; and (4) advancing regional drought prediction capabilities for variables and scales most relevant to user needs on sub-seasonal to centennial time scales. This introductory talk provides an overview of these goals, and outlines the occurrence and mechanisms of drought world-wide.
Revised Kuppuswamy's Socioeconomic Status Scale: Explained and Updated.
Sharma, Rahul
2017-10-15
Some facets of Kuppuswamy's socioeconomic status scale can create confusion about how to classify respondents and need minor updates to bring the scale up to date. This article provides a revised scale that allows for the real-time update of the scale.
Revised Kuppuswamy's Socioeconomic Status Scale: Explained and Updated.
Sharma, Rahul
2017-08-26
Some facets of Kuppuswamy's socioeconomic status scale can create confusion about how to classify respondents and need minor updates to bring the scale up to date. This article provides a revised scale that allows for the real-time update of the scale.
Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
Once upon a (slow) time in the land of recurrent neuronal networks….
Huang, Chengcheng; Doiron, Brent
2017-10-01
The brain must both react quickly to new inputs and store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks, where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members, where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences. Copyright © 2017 Elsevier Ltd. All rights reserved.
Micro-Macro Coupling in Plasma Self-Organization Processes during Island Coalescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Weigang; Lapenta, Giovanni; Centrum voor Plasma-Astrofysica, Departement Wiskunde, Katholieke Universiteit Leuven, Celestijnenlaan 200B, 3001 Leuven
The collisionless island coalescence process is studied with particle-in-cell simulations, as an internally driven magnetic self-organization scenario. The macroscopic relaxation time, corresponding to the total time required for the coalescence to complete, is found to depend crucially on the scale of the system. For small-scale systems, where the macroscopic scales and the dissipation scales are more tightly coupled, the relaxation time is independent of the strength of the internal driving force: the small-scale processes of magnetic reconnection adjust to the amount of the initial magnetic flux to be reconnected, indicating that at the microscopic scales reconnection is enslaved by the macroscopic drive. However, for large-scale systems, where the micro-macro scale separation is larger, the relaxation time becomes dependent on the driving force.
NASA Astrophysics Data System (ADS)
Fillingham, Sean P.; Cooper, Michael C.; Wheeler, Coral; Garrison-Kimmel, Shea; Boylan-Kolchin, Michael; Bullock, James S.
2015-12-01
The vast majority of dwarf satellites orbiting the Milky Way and M31 are quenched, while comparable galaxies in the field are gas rich and star forming. Assuming that this dichotomy is driven by environmental quenching, we use the Exploring the Local Volume in Simulations (ELVIS) suite of N-body simulations to constrain the characteristic time-scale upon which satellites must quench following infall into the virial volumes of their hosts. The high satellite quenched fraction observed in the Local Group demands an extremely short quenching time-scale (~2 Gyr) for dwarf satellites in the mass range M⋆ ~ 10^6-10^8 M⊙. This quenching time-scale is significantly shorter than that required to explain the quenched fraction of more massive satellites (~8 Gyr), both in the Local Group and in more massive host haloes, suggesting a dramatic change in the dominant satellite quenching mechanism at M⋆ ≲ 10^8 M⊙. Combining our work with the results of complementary analyses in the literature, we conclude that the suppression of star formation in massive satellites (M⋆ ~ 10^8-10^11 M⊙) is broadly consistent with being driven by starvation, such that the satellite quenching time-scale corresponds to the cold gas depletion time. Below a critical stellar mass scale of ~10^8 M⊙, however, the required quenching times are much shorter than the expected cold gas depletion times. Instead, quenching must act on a time-scale comparable to the dynamical time of the host halo. We posit that ram-pressure stripping can naturally explain this behaviour, with the critical mass (of M⋆ ~ 10^8 M⊙) corresponding to haloes with gravitational restoring forces that are too weak to overcome the drag force encountered when moving through an extended, hot circumgalactic medium.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2012 CFR
2012-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Code of Federal Regulations, 2013 CFR
2013-10-01
... Voltage Variation A.3.4 Short Time Power Reduction A.3.5 Bursts A.3.6 Electrostatic Discharge A.3... time of the test. 2.2.1.2 Zero Load Tests. For zero load tests conducted in a laboratory or on a scale... other material weighed on the scale; and vi. The date and time the information is printed. b. For the...
Divisions of Geologic Time - Major Chronostratigraphic and Geochronologic Units
2007-01-01
Introduction Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and calibrated in years (Harland and others, 1982). Over the years, the development of new dating methods and refinement of previous ones have stimulated revisions to geologic time scales. Since the mid-1990s, geologists from the U.S. Geological Survey (USGS), State geological surveys, academia, and other organizations have sought a consistent time scale to be used in communicating ages of geologic units in the United States. Many international debates have occurred over names and boundaries of units, and various time scales have been used by the geoscience community.
50 CFR 680.23 - Equipment and operational requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (882 lb) of crab or an alternative material supplied by the scale manufacturer on the scale under test... bottom of the hopper unless an alternative testing method is approved by NMFS. The MPE for the daily at... delivery. The scale operator may write this information on the scale printout in ink at the time of landing...
NASA Astrophysics Data System (ADS)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next generation of ELTs requires orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
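The core real-time computation in many AO RTCs is a matrix-vector multiply (MVM) that maps wavefront-sensor slopes to deformable-mirror commands; because slope and actuator counts both grow roughly as D², the MVM work grows as D⁴. The sketch below is a generic illustration of that operation and its scaling, not the pipeline used in the paper; the array sizes are made up.

```python
import numpy as np

# Generic MVM wavefront reconstruction step: commands = R @ slopes.
# Dimensions below are illustrative only, not ELT instrument values.
n_slopes = 2 * 80 * 80      # ~2 slopes per subaperture, 80x80 subapertures (assumed)
n_actuators = 80 * 80       # actuator count (assumed)

R = np.random.rand(n_actuators, n_slopes).astype(np.float32)  # reconstructor matrix
slopes = np.random.rand(n_slopes).astype(np.float32)          # one WFS frame of slopes

commands = R @ slopes  # must complete within the frame period (e.g. ~1 ms at 1 kHz)

# Work per frame scales as n_actuators * n_slopes ~ D^4, which is why memory
# bandwidth and core count (as on the Xeon Phi KNL) dominate RTC performance.
```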
In recent years the applications of regional air quality models are continuously being extended to address atmospheric pollution phenomenon from local to hemispheric spatial scales over time scales ranging from episodic to annual. The need to represent interactions between physic...
Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems
NASA Astrophysics Data System (ADS)
Sikkandar Basha, Nazareen
The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams, numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, their design should be socio-technical in nature. The requirements elicitation of most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organization structure, the cost and time overrun can occur at any level and iterate back and forth, thus increasing the cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before the rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating ambiguity which occurs during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep in terms of cost and time using the Cynefin framework. These trials are repeated for different requirements and at different sub-system levels. The results obtained show that the Cynefin framework can be used to improve the value of the system and can be used for predictive analysis. Decision makers can use these findings together with rigorous approaches to improve the design of Large-Scale Complex Engineered Systems.
Time and length scales within a fire and implications for numerical simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
TIESZEN,SHELDON R.
2000-02-02
A partial non-dimensionalization of the Navier-Stokes equations is used to obtain order-of-magnitude estimates of the rate-controlling transport processes in the reacting portion of a fire plume as a function of length scale. Over continuum length scales, buoyant time scales vary as the square root of the length scale, advection time scales vary as the length scale, and diffusion time scales vary as the square of the length scale. Due to the variation with length scale, each process is dominant over a given range. The relationship of buoyancy and baroclinic vorticity generation is highlighted. For numerical simulation, first-principles solution of fire problems is not possible with foreseeable computational hardware in the near future. Filtered transport equations with subgrid modeling will be required, as two to three decades of length scale are captured by solution of discretized conservation equations. By whatever filtering process one employs, one must have humble expectations for the accuracy obtainable by numerical simulation for practical fire problems that contain important multi-physics/multi-length-scale coupling with up to 10 orders of magnitude in length scale.
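A minimal numeric illustration of the stated scalings (buoyant time ∝ √L, advective time ∝ L, diffusive time ∝ L²); the velocity and diffusivity values are assumptions chosen only to show the trend, not values from the paper.

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
U = 1.0    # characteristic advection velocity, m/s (assumed)
D = 1e-5   # characteristic diffusivity, m^2/s (assumed)

for L in (1e-3, 1e-2, 1e-1, 1.0, 10.0):    # length scales, m
    t_buoy = math.sqrt(L / g)               # buoyant acceleration time ~ sqrt(L)
    t_adv = L / U                            # advection time ~ L
    t_diff = L**2 / D                        # diffusion time ~ L^2
    print(f"L={L:7.3f} m  t_buoy={t_buoy:.2e} s  t_adv={t_adv:.2e} s  t_diff={t_diff:.2e} s")
```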
Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora
2016-02-05
Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2-6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
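For reference, a standard form of the iterative Boltzmann inversion update is shown below, written generically; this is the textbook expression, not necessarily the exact variant used in the paper. The coarse-grained pair potential is refined until the coarse-grained radial distribution function g_i(r) matches the atomistic target g_target(r):

```latex
V_{i+1}(r) \;=\; V_i(r) \;+\; k_B T \,\ln\!\frac{g_i(r)}{g_{\mathrm{target}}(r)},
\qquad
V_0(r) \;=\; -\,k_B T \,\ln g_{\mathrm{target}}(r).
```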
Time scales of tunneling decay of a localized state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ban, Yue; Muga, J. G.; Sherman, E. Ya.
2010-12-15
Motivated by recent time-domain experiments on ultrafast atom ionization, we analyze the transients and time scales that characterize, aside from the relatively long lifetime, the decay of a localized state by tunneling. While the tunneling starts immediately, some time is required for the outgoing flux to develop. This short-term behavior depends strongly on the initial state. For an initial state tightly localized so that the initial transients are dominated by over-the-barrier motion, the time scale for flux propagation through the barrier is close to the Buettiker-Landauer traversal time. Then a quasistationary, slow-decay process follows, which sets ideal conditions for observing diffraction in time at longer times and distances. To define operationally a tunneling time at the barrier edge, we extrapolate backward the propagation of the wave packet that escaped from the potential. This extrapolated time is considerably longer than the time scale of the flux and density buildup at the barrier edge.
Trotter, Barbara; Conaway, Mark R; Burns, Suzanne M
2013-01-01
Findings of this study suggest the traditional sliding scale insulin (SSI) method does not improve target glucose values among adult medical inpatients. Timing of blood glucose (BG) measurement does affect the required SSI dose. BG measurement and insulin dose administration should be accomplished immediately prior to mealtime.
DATA QUALITY OBJECTIVES FOR SELECTING WASTE SAMPLES FOR BENCH-SCALE REFORMER TREATABILITY STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
BANNING DL
2011-02-11
This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.
Determination of the Time-Space Magnetic Correlation Functions in the Solar Wind
NASA Astrophysics Data System (ADS)
Weygand, J. M.; Matthaeus, W. H.; Kivelson, M.; Dasso, S.
2013-12-01
Magnetic field data from many different intervals and 7 different solar wind spacecraft are employed to estimate the scale-dependent time decorrelation function of the interplanetary magnetic field in both the slow and fast solar wind. This estimation requires correlations varying with both space and time lags. The two-point correlation function with no time lag is determined by correlating time series data from multiple spacecraft separated in space and, for complete coverage of length scales, relies on many intervals with different spacecraft spatial separations. In addition, we employ single-spacecraft time-lagged correlations and two-spacecraft time-lagged correlations to access different spatial and temporal correlation data. Combining these data sets gives estimates of the scale-dependent time decorrelation function, which in principle tells us how rapidly time decorrelation occurs at a given wavelength. For static fields the scale-dependent time decorrelation function is trivially unity, but in turbulence the nonlinear cascade process induces time decorrelation at a given length scale that occurs more rapidly with decreasing scale. The scale-dependent time decorrelation function is valuable input to theories as well as to applications such as scattering, transport, and the study of predictability. It is also a fundamental element of formal turbulence theory. Our results are an extension of the Eulerian correlation functions estimated in Matthaeus et al. [2010] and Weygand et al. [2012; 2013].
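A minimal sketch of how a two-point, time-lagged correlation can be estimated from two spacecraft time series of the magnetic field (synthetic data here; the estimator form is generic and the normalization is one common choice, not necessarily the one used in the study):

```python
import numpy as np

def lagged_correlation(b1, b2, lag):
    """Correlate vector time series b1(t) with b2(t + lag); b1 and b2 are (N, 3) arrays."""
    db1 = b1 - b1.mean(axis=0)
    db2 = b2 - b2.mean(axis=0)
    if lag > 0:
        num = np.sum(db1[:-lag] * db2[lag:], axis=1).mean()
    else:
        num = np.sum(db1 * db2, axis=1).mean()
    norm = np.sqrt(np.sum(db1 * db1, axis=1).mean() * np.sum(db2 * db2, axis=1).mean())
    return num / norm

# Synthetic stand-ins for two spacecraft separated in space; real data would come
# from the spacecraft magnetometers resampled to a common cadence.
rng = np.random.default_rng(0)
b_sc1 = rng.standard_normal((10000, 3))
b_sc2 = 0.5 * b_sc1 + 0.5 * rng.standard_normal((10000, 3))

# Scale-dependent time decorrelation: correlation at this spatial separation vs. time lag.
for lag in (0, 10, 100, 1000):
    print(lag, lagged_correlation(b_sc1, b_sc2, lag))
```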
Stockdale, Janine; Sinclair, Marlene; Kernohan, George; McCrum-Gardner, Evie; Keller, John
2013-01-01
Breastfeeding has immense public health value for mothers, babies, and society. But there is an undesirably large gap between the number of new mothers who undertake and persist in breastfeeding compared to what would be a preferred level of accomplishment. This gap is a reflection of the many obstacles, both physical and psychological, that confront new mothers. Previous research has illuminated many of these concerns, but research on this problem is limited in part by the unavailability of a research instrument that can measure the key differences between first-time mothers and experienced mothers with regard to the challenges they face when breastfeeding and the instructional advice they require. An instrument was designed to measure the motivational complexity associated with sustained breastfeeding behaviour: the Breastfeeding Motivational Measurement (BMM) Scale. It contains 51 self-report items (7-point Likert scale) that cluster into four categories related to perceived value of breastfeeding, confidence to succeed, factors that influence success or failure, and strength of intentions, or goal. However, this scale has not been validated in terms of its sensitivity to profile the motivation of new mothers and experienced mothers. This issue was investigated by having 202 breastfeeding mothers (100 first-time mothers) fill out the scale. The analysis reported in this paper is a three-factor solution, consisting of value, midwife support, and expectancies for success, that explained the characteristics of first-time mothers as a known group. These results support the validity of the BMM scale as a diagnostic tool for research on first-time mothers who are learning to breastfeed. Further research studies are required to test the validity of the scale in additional subgroups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grest, Gary S.
2017-09-01
Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic detail and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
Entangled Dynamics in Macroscopic Quantum Tunneling of Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Alcala, Diego A.; Glick, Joseph A.; Carr, Lincoln D.
2017-05-01
Tunneling of a quasibound state is a nonsmooth process in the entangled many-body case. Using time-evolving block decimation, we show that repulsive (attractive) interactions speed up (slow down) tunneling. While the escape time scales exponentially with small interactions, the maximization time of the von Neumann entanglement entropy between the remaining quasibound and escaped atoms scales quadratically. Stronger interactions require higher-order corrections. Entanglement entropy is maximized when about half the atoms have escaped.
SRNL PARTICIPATION IN THE MULTI-SCALE ENSEMBLE EXERCISES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R
2007-10-29
Consequence assessment during emergency response often requires atmospheric transport and dispersion modeling to guide decision making. A statistical analysis of the ensemble of results from several models is a useful way of estimating the uncertainty for a given forecast. ENSEMBLE is a European Union program that utilizes an internet-based system to ingest transport results from numerous modeling agencies. A recent set of exercises required output on three distinct spatial and temporal scales. The Savannah River National Laboratory (SRNL) uses a regional prognostic model nested within a larger-scale synoptic model to generate the meteorological conditions, which are in turn used in a Lagrangian particle dispersion model. A discussion of SRNL participation in these exercises is given, with particular emphasis on requirements for provision of results in a timely manner with regard to the various spatial scales.
Improved Flux Formulations for Unsteady Low Mach Number Flows
2012-06-01
it requires the resolution of disparate time scales. Unsteady effects may arise from a combination of hydrodynamic effects in which pressure...including rotorcraft flows, jets and shear layers include a combination of both acoustic and hydrodynamic effects. Furthermore these effects may be...preconditioning parameter used for time scaling also affects the dissipation for the spatial flux, hydrodynamic unsteady effects (such as vortex propagation
Hot-bench simulation of the active flexible wing wind-tunnel model
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Houck, Jacob A.
1990-01-01
Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.
Dahlberg, Jerry; Tkacik, Peter T; Mullany, Brigid; Fleischhauer, Eric; Shahinian, Hossein; Azimi, Farzad; Navare, Jayesh; Owen, Spencer; Bisel, Tucker; Martin, Tony; Sholar, Jodie; Keanini, Russell G
2017-12-04
An analog, macroscopic method for studying molecular-scale hydrodynamic processes in dense gases and liquids is described. The technique applies a standard fluid dynamic diagnostic, particle image velocimetry (PIV), to measure: i) velocities of individual particles (grains), extant on short, grain-collision time-scales; ii) velocities of systems of particles, on both short collision-time- and long, continuum-flow-time-scales; iii) collective hydrodynamic modes known to exist in dense molecular fluids; and iv) short- and long-time-scale velocity autocorrelation functions, central to understanding particle-scale dynamics in strongly interacting, dense fluid systems. The basic system is composed of an imaging system, light source, vibrational sensors, a vibrational system with a known medium, and PIV and analysis software. Required experimental measurements and an outline of the theoretical tools needed when using the analog technique to study molecular-scale hydrodynamic processes are highlighted. The proposed technique provides a relatively straightforward alternative to photonic and neutron beam scattering methods traditionally used in molecular hydrodynamic studies.
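As context for item iv), a velocity autocorrelation function can be estimated from tracked grain velocities roughly as sketched below (synthetic velocities stand in for PIV output; the estimator is the standard normalized VACF, not code from the paper):

```python
import numpy as np

def vacf(v, max_lag):
    """Normalized velocity autocorrelation from an (n_frames, n_particles, 2) velocity array."""
    c = np.empty(max_lag)
    for lag in range(max_lag):
        if lag == 0:
            prod = np.sum(v * v, axis=2)
        else:
            prod = np.sum(v[:-lag] * v[lag:], axis=2)
        c[lag] = prod.mean()
    return c / c[0]

# Synthetic stand-in for PIV-tracked grain velocities (frames x particles x components).
rng = np.random.default_rng(1)
velocities = rng.standard_normal((2000, 500, 2))

correlation = vacf(velocities, max_lag=50)
print(correlation[:5])  # decays from 1 toward 0 over the grain-collision time scale
```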
NASA Astrophysics Data System (ADS)
Hernández Forero, Liz Catherine; Bahamón Cortés, Nelson
2017-06-01
Around the world, there are different providers of time signals (mobile, radio, or television operators; satellites of the GPS network; astronomical measurements; etc.); however, the source of the legal time for a country is either the national metrology institute or another designated laboratory. This activity requires a time standard based on an atomic time scale. The International Bureau of Weights and Measures (BIPM) calculates a weighted average of the time kept in more than 60 nations and produces a single international time scale, called Coordinated Universal Time (UTC). This article presents the current time scale that generates the Legal Time for the Republic of Colombia, produced by the Instituto Nacional de Metrología (INM) using the national time and frequency standard, a cesium atomic oscillator. It also illustrates how important it is for the academic, scientific, and industrial communities, as well as the general public, to be synchronized with this time scale, which is traceable to the International System (SI) of units through international comparisons that are made in real time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikhonenkov, I.; Vardi, A.; Moore, M. G.
2011-06-15
Mach-Zehnder atom interferometry requires hold-time phase squeezing to attain readout accuracy below the standard quantum limit. This increases its sensitivity to phase diffusion, restoring shot-noise scaling of the optimal signal-to-noise ratio in the presence of interactions. The contradiction between the preparations required for readout accuracy and robustness to interactions is removed by monitoring Rabi-Josephson oscillations instead of relative-phase oscillations during signal acquisition. Optimizing the signal-to-noise ratio with a Gaussian squeezed input, we find that hold-time number squeezing satisfies both demands and that sub-shot-noise scaling is retained even for strong interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.
2018-05-01
Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first-order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
Challenges to Progress in Studies of Climate-Tectonic-Erosion Interactions
NASA Astrophysics Data System (ADS)
Burbank, D. W.
2016-12-01
Attempts to unravel the relative importance of climate and tectonics in modulating topography and erosion should compare relevant data sets at comparable temporal and spatial scales. Given that such data are uncommonly available, how can we compare diverse data sets in a robust fashion? Many erosion-rate studies rely on detrital cosmogenic nuclides. What time scales can such data address, and what landscape conditions do they require to provide accurate representations of long-term erosion rates? To what extent do large-scale, but infrequent erosional events impact long-term rates? Commonly, long-term erosion rates are deduced from thermochronologic data. What types of data are needed to test for consistency of rates across a given interval or change in rates through time? Similarly, spatial and temporal variability in precipitation or tectonics requires averaging across appropriate scales. How are such data obtained in deforming mountain belts, and how do we assess their reliability? This study describes the character and temporal duration of key variables that are needed to examine climate-tectonic-erosion interactions, explores the strengths and weaknesses of several study areas, and suggests the types of data requirements that will underpin enlightening "tests" of hypotheses related to the mutual impacts of climate, tectonics, and erosion.
Temporal Characterization of Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Grosveld, Ferdinand W.; Sullivan, Brenda M.; Rizzi, Stephen A.
2004-01-01
Current aircraft source noise prediction tools yield time-independent frequency spectra as functions of directivity angle. Realistic evaluation and human assessment of aircraft fly-over noise require the temporal characteristics of the noise signature. The purpose of the current study is to analyze empirical data from broadband jet and tonal fan noise sources and to provide the temporal information required for prediction-based synthesis. Noise sources included a one-tenth-scale engine exhaust nozzle and a one-fifth-scale turbofan engine. A methodology was developed to characterize the low frequency fluctuations employing the Short Time Fourier Transform in a MATLAB computing environment. It was shown that a trade-off is necessary between frequency and time resolution in the acoustic spectrogram. The procedure requires careful evaluation and selection of the data analysis parameters, including the data sampling frequency, Fourier Transform window size, associated time period and frequency resolution, and time period window overlap. Low frequency fluctuations were applied to the synthesis of broadband noise, with the resulting records sounding virtually indistinguishable from the measured data in initial subjective evaluations. Amplitude fluctuations of blade passage frequency (BPF) harmonics were successfully characterized for conditions equivalent to take-off and approach. Data demonstrated that the fifth harmonic of the BPF varied more in frequency than the BPF itself and exhibited larger amplitude fluctuations over the duration of the time record. Frequency fluctuations were found to be not perceptible in the current characterization of tonal components.
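A minimal illustration of the frequency/time resolution trade-off discussed above, using SciPy's STFT rather than the paper's MATLAB analysis; the sampling rate, tone, and window sizes are placeholders.

```python
import numpy as np
from scipy.signal import stft

fs = 44100                       # sampling frequency, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic tone with a slow amplitude fluctuation, standing in for a BPF harmonic.
x = (1.0 + 0.3 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 2000 * t)

for nperseg in (1024, 8192):     # window size sets the trade-off
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    df = f[1] - f[0]             # frequency resolution
    dt = tt[1] - tt[0]           # time resolution (hop between frames)
    print(f"window={nperseg:5d}: df={df:6.2f} Hz, dt={dt:.4f} s")
```

A longer window sharpens the frequency resolution but smears the low-frequency amplitude fluctuations in time, which is why the analysis parameters must be chosen carefully.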
Fire extinguishing agents for oxygen-enriched atmospheres
NASA Astrophysics Data System (ADS)
Plugge, M. A.; Wilson, C. W.; Zallen, D. M.; Walker, J. L.
1985-12-01
Fire-suppression agent requirements for extinguishing fires in oxygen-enriched atmospheres were determined employing small-, medium-, large-, and full-scale test apparatuses. The small- and medium-scale tests showed that a doubling of the oxygen concentration required five times more HALON for extinguishment. For fires of similar size and intensity, the effect of oxygen enrichment of the diluent volume in the HC-131A was not as great as in the smaller compartments of the B-52, which presented a higher damage scenario. The full-scale tests showed that damage to the airframe was as important a factor in extinguishment as oxygen enrichment.
Large Composite Structures Processing Technologies for Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.
2001-01-01
Significant efforts have been devoted to establishing the technology foundation to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second-generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable the envisioned second-generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.
Chip Scale Ultra-Stable Clocks: Miniaturized Phonon Trap Timing Units for PNT of CubeSats
NASA Technical Reports Server (NTRS)
Rais-Zadeh, Mina; Altunc, Serhat; Hunter, Roger C.; Petro, Andrew
2016-01-01
The Chip Scale Ultra-Stable Clocks (CSUSC) project aims to provide a superior alternative to current solutions for low size, weight, and power timing devices. Currently available quartz-based clocks have problems adjusting to the high temperature and extreme acceleration found in space applications, especially when scaled down to match small spacecraft size, weight, and power requirements. The CSUSC project aims to utilize dual-mode resonators on an ovenized platform to achieve the exceptional temperature stability required for these systems. The dual-mode architecture utilizes a temperature sensitive and temperature stable mode simultaneously driven on the same device volume to eliminate ovenization error while maintaining extremely high performance. Using this technology it is possible to achieve parts-per-billion (ppb) levels of temperature stability with multiple orders of magnitude smaller size, weight, and power.
Aligning Scales of Certification Tests. Research Report. ETS RR-10-07
ERIC Educational Resources Information Center
Dorans, Neil J.; Liang, Longjuan; Puhan, Gautam
2010-01-01
Scores are the most visible and widely used products of a testing program. The choice of score scale has implications for test specifications, equating, and test reliability and validity, as well as for test interpretation. At the same time, the score scale should be viewed as infrastructure likely to require repair at some point. In this report…
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
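For orientation, the adjoint-sensitivity expressions for an ODE model with time-discrete measurements take roughly the following form. This is a generic textbook statement under simplifying assumptions (least-squares objective, parameter-independent observables), and sign and jump conventions vary between references; it is not transcribed from the paper.

```latex
\dot{x} = f(x,\theta,t), \qquad
J(\theta) = \sum_{k=1}^{N} \tfrac{1}{2}\,\bigl\lVert y_k - h\bigl(x(t_k)\bigr)\bigr\rVert^{2},
```
```latex
\dot{\lambda} = -\Bigl(\tfrac{\partial f}{\partial x}\Bigr)^{\!\top}\lambda
\quad\text{(backward in time, } \lambda(t_N^{+}) = 0\text{)},\qquad
\lambda(t_k^{-}) = \lambda(t_k^{+}) + \Bigl(\tfrac{\partial h}{\partial x}\Bigr)^{\!\top}\bigl(h(x(t_k)) - y_k\bigr),
```
```latex
\nabla_{\theta} J = \int_{t_0}^{t_N} \Bigl(\tfrac{\partial f}{\partial \theta}\Bigr)^{\!\top}\lambda \, dt
\;+\; \Bigl(\tfrac{\partial x_0}{\partial \theta}\Bigr)^{\!\top}\lambda(t_0).
```

A single backward solve for the adjoint state yields the full parameter gradient, which is why the computational cost is effectively independent of the number of parameters.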
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
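A schematic of the scaling step described above: the zero-absorption reflectance is reweighted by a Beer-Lambert factor built from the fraction of the photon path spent in each layer. Variable names and the two-layer example values are made up for illustration; the paper's closed-form average-path expression is not reproduced here.

```python
import numpy as np

def scale_reflectance(t, R0, mu_a, path_fraction, n=1.4, c0=3e10):
    """Apply a weighted Beer-Lambert factor to a zero-absorption reflectance curve R0(t).

    t             : time points (s)
    R0            : simulated time-resolved reflectance with mu_a = 0 in every layer
    mu_a          : absorption coefficient of each layer (1/cm)
    path_fraction : fraction of the photon path spent in each layer (assumed known,
                    e.g. from an average classical path estimate)
    """
    v = c0 / n                            # speed of light in tissue, cm/s
    mu_eff = np.dot(path_fraction, mu_a)  # path-weighted absorption, 1/cm
    return R0 * np.exp(-mu_eff * v * t)   # Beer-Lambert attenuation

# Illustrative two-layer example (values are not from the paper).
t = np.linspace(0, 2e-9, 200)                  # 0-2 ns
R0 = np.exp(-((t - 0.5e-9) / 0.4e-9) ** 2)     # placeholder zero-absorption curve
R = scale_reflectance(t, R0, mu_a=np.array([0.1, 0.3]), path_fraction=np.array([0.6, 0.4]))
```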
Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas
2018-02-01
There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.
Naghiloo, M; Jordan, A N; Murch, K W
2017-11-03
Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T^2 scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.
Data driven weed management: Tracking herbicide resistance at the landscape scale
USDA-ARS?s Scientific Manuscript database
Limiting the prevalence of herbicide resistant (HR) weeds requires consistent management implementation across space and time. Although weed population dynamics operate at scales above farm-level, the emergent effect of neighboring management decisions on in-field weed densities and the spread of re...
Review of subjective measures of human response to aircraft noise
NASA Technical Reports Server (NTRS)
Cawthorn, J. M.; Mayes, W. H.
1976-01-01
The development of aircraft noise rating scales and indexes is reviewed up to the present time. Single event scales, multiple event indexes, and their interrelation with each other, are considered. Research requirements for further refinement and development of aircraft noise rating quantification factors are discussed.
Estimating time-dependent connectivity in marine systems
Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo
2016-01-01
Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.
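A generic sketch of the underlying calculation (not the authors' code) is a time-dependent connectivity matrix built from Lagrangian particle region labels; the synthetic labels and region count below are assumptions for illustration only:

# Generic sketch: time-dependent connectivity from particle region labels.
# labels[p, k] = region index of particle p at output time k (synthetic here).
import numpy as np

n_regions, n_particles, n_times = 3, 1000, 50
rng = np.random.default_rng(0)
labels = rng.integers(0, n_regions, size=(n_particles, n_times))

# C[t, i, j] = fraction of particles starting in region i (at k=0)
# that are located in region j after t output intervals.
C = np.zeros((n_times, n_regions, n_regions))
start = labels[:, 0]
for t in range(n_times):
    for i in range(n_regions):
        sel = labels[start == i, t]
        if sel.size:
            C[t, i] = np.bincount(sel, minlength=n_regions) / sel.size

# The diagonal C[t, i, i] versus t is the retention curve for region i,
# the quantity summarized by the "retention clock" representation.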
Scaling Analysis of Alloy Solidification and Fluid Flow in a Rectangular Cavity
NASA Astrophysics Data System (ADS)
Plotkowski, A.; Fezi, K.; Krane, M. J. M.
A scaling analysis was performed to predict trends in alloy solidification in a side-cooled rectangular cavity. The governing equations for energy and momentum were scaled in order to determine the dependence of various aspects of solidification on the process parameters for a uniform initial temperature and an isothermal boundary condition. This work improved on previous analyses by adding considerations for the cooling bulk fluid flow. The analysis predicted the time required to extinguish the superheat, the maximum local solidification time, and the total solidification time. The results were compared to a numerical simulation for an Al-4.5 wt.% Cu alloy with various initial and boundary conditions. Good agreement was found between the simulation results and the trends predicted by the scaling analysis.
The RATIO method for time-resolved Laue crystallography
Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334
Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence
NASA Technical Reports Server (NTRS)
Zhou, YE; Matthaeus, William H.
1990-01-01
A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
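The essence of the Chebyshev expansion approach is that a function of the Hamiltonian can be applied to a vector using only matrix-vector products, which is what yields linear scaling for sparse Hamiltonians. The Python sketch below illustrates the recursion for a generic symmetric matrix pre-scaled to have spectrum in [-1, 1]; the random matrix, Fermi-Dirac function, and expansion order are illustrative assumptions, not the authors' tight-binding setup:

# Sketch: evaluate f(H) @ v with a Chebyshev expansion and only mat-vecs.
# H is assumed symmetric with spectrum already scaled into [-1, 1].
import numpy as np

def chebyshev_apply(matvec, v, f, order=80):
    # Fit Chebyshev coefficients of f on [-1, 1] at Chebyshev nodes.
    x = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    c = np.polynomial.chebyshev.chebfit(x, f(x), order)
    # Recursion T_{k+1}(H)v = 2 H T_k(H)v - T_{k-1}(H)v.
    t_prev, t_curr = v, matvec(v)
    result = c[0] * t_prev + c[1] * t_curr
    for k in range(2, order + 1):
        t_prev, t_curr = t_curr, 2.0 * matvec(t_curr) - t_prev
        result += c[k] * t_curr
    return result

# Example: Fermi-Dirac occupation applied to a small random symmetric H.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)); H = (A + A.T) / 2
H /= np.linalg.norm(H, 2) * 1.05                 # crude rescale into [-1, 1]
fermi = lambda x, mu=0.0, beta=20.0: 1.0 / (1.0 + np.exp(beta * (x - mu)))
v = rng.standard_normal(200)
approx = chebyshev_apply(lambda w: H @ w, v, fermi)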
Lavania, Sagar; Praharaj, Samir Kumar; Bains, Hariender Singh; Sinha, Vishal; Kumar, Abhinav
2016-01-01
Injectable antipsychotics are frequently required for controlling agitation and aggression in acute psychosis. No study has examined the use of injectable levosulpiride for this indication. To compare the efficacy and safety of injectable levosulpiride and haloperidol in patients with acute psychosis. This was a randomized, double-blind, parallel-group study in which 60 drug-naive patients having acute psychosis were randomly assigned to receive either intramuscular haloperidol (10-20 mg/d) or levosulpiride (25-50 mg/d) for 5 days. All patients were rated on Brief Psychiatric Rating Scale (BPRS), Overt Agitation Severity Scale (OASS), Overt Aggression Scale-Modified (OAS-M) scores, Simpson Angus Scale (SAS), and Barnes Akathisia Rating Scale (BARS). Repeated-measures ANOVA for BPRS scores showed significant effect of time (P < 0.001) and a trend toward greater reduction in scores in haloperidol group as shown by group × time interaction (P = 0.076). Repeated-measures ANOVA for OASS showed significant effect of time (P < 0.001) but no group × time interaction. Repeated-measures ANOVA for OAS-M scores showed significant effect of time (P < 0.001) and greater reduction in scores in haloperidol group as shown by group × time interaction (P = 0.032). Lorazepam requirement was much lower in haloperidol group as compared with those receiving levosulpiride (P = 0.022). Higher rates of akathisia and extrapyramidal symptoms were noted in the haloperidol group. Haloperidol was more effective than levosulpiride injection for psychotic symptoms, aggression, and severity of agitation in acute psychosis, but extrapyramidal adverse effects were less frequent with levosulpiride as compared with those receiving haloperidol.
Practical Formulations of the Latent Growth Item Response Model
ERIC Educational Resources Information Center
McGuire, Leah Walker
2010-01-01
Growth modeling using longitudinal data seems to be a promising direction for improving the methodology associated with the accountability movement. Longitudinal modeling requires that the measurements of ability are comparable over time and on the same scale. One way to create the vertical scale is through concurrent estimation with…
Deriving spatial trends of air pollution at a neighborhood-scale through mobile monitoring
Abstract: Measuring air pollution in real-time using an instrumented vehicle platform has been an emerging strategy to resolve air pollution trends at a very fine spatial scale (10s of meters). Achieving second-by-second data representative of urban air quality trends requires a...
Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images
USDA-ARS?s Scientific Manuscript database
Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
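For readers who want to try algebraic multigrid on a model flow problem, a minimal sketch using the open-source pyamg package (not the AMG implementation integrated into MODFLOW in this study) might look like the following; the grid size and tolerance are arbitrary:

# Sketch of an algebraic multigrid solve, assuming the pyamg package.
import numpy as np
import pyamg

A = pyamg.gallery.poisson((300, 300), format='csr')   # 2-D model flow problem
b = np.random.rand(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)       # classical (Ruge-Stuben) AMG hierarchy
residuals = []
x = ml.solve(b, tol=1e-8, residuals=residuals)
print(ml)                              # grid hierarchy and operator complexity
print(len(residuals), "iterations")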
[A new scale for measuring return-to-work motivation of mentally ill employees].
Poersch, M
2007-03-01
A new scale "motivation for return to work" has been constructed to measure depressive patients' motivation to start working again in a stepwise process. The scale showed in 46 patients of a first case management (CM) sample with depressive employees a good correlation with the final social status of the CM. Only the motivated patients were successful returning to work and could be, separated clearly from the most demotivated one. Second, the scale correlated with the duration of sick leave and third showed an inverse correlation with the complete time of CM, suggesting that a successful stepwise return to work requires time. These first results need further examination.
NASA Astrophysics Data System (ADS)
Cozzarelli, I. M.; Esaid, H. I.; Bekins, B. A.; Eganhouse, R. P.; Baedecker, M.
2002-05-01
Assessment of natural attenuation as a remedial option requires understanding the long-term fate of contaminant compounds. The development of correct conceptual models of biodegradation requires observations at spatial and temporal scales appropriate for the reactions being measured. For example, the availability of electron acceptors such as solid-phase iron oxides may vary at the cm scale due to aquifer heterogeneities. Characterizing the distribution of these oxides may require small-scale measurements over time scales of tens of years in order to assess their impact on the fate of contaminants. The long-term study of natural attenuation of hydrocarbons in a contaminant plume near Bemidji, MN provides insight into how natural attenuation of hydrocarbons evolves over time. The sandy glacial-outwash aquifer at this USGS Toxic Substances Hydrology research site was contaminated by crude oil in 1979. During the 16 years that data have been collected the shape and extent of the contaminant plume changed as redox reactions, most notably iron reduction, progressed over time. Investigation of the controlling microbial reactions in this system required a systematic and multi-scaled approach. Early indications of plume shrinkage were observed over a time scale of a few years, based on observation well data. These changes were associated with iron reduction near the crude-oil source. The depletion of Fe (III) oxides near the contaminant source caused the dissolved iron concentrations to increase and spread downgradient at a rate of approximately 3 m/year. The zone of maximum benzene, toluene, ethylbenzene, and xylene (BTEX) concentrations has also spread within the anoxic plume. Subsequent analyses of sediment and water, collected at small-scale cm intervals from cores in the contaminant plume, provided insight into the evolution of redox zones at smaller scales. Contaminants, such as ortho-xylene, that appeared to be contained near the oil source based on the larger-scale observation well data, were observed to be migrating in thin layers as the aquifer evolved to methanogenic conditions in narrow zones. The impact of adequately identifying the microbially mediated redox reactions was illustrated with a novel inverse modeling effort (using both the USGS solute transport and biodegradation code BIOMOC and the USGS universal inverse modeling code UCODE) to quantify field-scale hydrocarbon dissolution and biodegradation at the Bemidji site. Extensive historical data compiled at the Bemidji site were used, including 1352 concentration observations from 30 wells and 66 core sections. The simulations reproduced the general large-scale evolution of the plume, but the percent BTEX mass removed from the oil body after 18 years varied significantly, depending on which biodegradation conceptual model was used. The best fit was obtained for the iron-reduction conceptual model, which incorporated the finite availability of Fe (III) in the aquifer and reproduced the field observation that depletion of solid-phase iron resulted in increased downgradient transport of BTEX compounds. The predicted benzene plume 50 years after the spill showed significantly higher concentrations of benzene for the iron-reduction model compared to other conceptual models tested. This study demonstrates that the long-term sustainability of the electron acceptors is key to predicting the ultimate fate of the hydrocarbons. 
Assessing this evolution of redox processes and developing an adequate conceptual model required observations on multiple spatial scales over the course of many years.
Optimal approaches for balancing invasive species eradication and endangered species management.
Lampert, Adam; Hastings, Alan; Grosholz, Edwin D; Jardine, Sunny L; Sanchirico, James N
2014-05-30
Resolving conflicting ecosystem management goals-such as maintaining fisheries while conserving marine species or harvesting timber while preserving habitat-is a widely recognized challenge. Even more challenging may be conflicts between two conservation goals that are typically considered complementary. Here, we model a case where eradication of an invasive plant, hybrid Spartina, threatens the recovery of an endangered bird that uses Spartina for nesting. Achieving both goals requires restoration of native Spartina. We show that the optimal management entails less intensive treatment over longer time scales to fit with the time scale of natural processes. In contrast, both eradication and restoration, when considered separately, would optimally proceed as fast as possible. Thus, managers should simultaneously consider multiple, potentially conflicting goals, which may require flexibility in the timing of expenditures. Copyright © 2014, American Association for the Advancement of Science.
Population clocks: motor timing with neural dynamics
Buonomano, Dean V.; Laje, Rodrigo
2010-01-01
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing. PMID:20889368
Selected Papers on Low-Energy Antiprotons and Possible Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Robert
1998-09-19
The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.
Time scales for molecule formation by ion-molecule reactions
NASA Technical Reports Server (NTRS)
Langer, W. D.; Glassgold, A. E.
1976-01-01
Analytical solutions are obtained for nonlinear differential equations governing the time-dependence of molecular abundances in interstellar clouds. Three gas-phase reaction schemes are considered separately for the regions where each dominates. The particular case of CO, and closely related members of the OH and CH families of molecules, is studied for given values of temperature, density, and the radiation field. Nonlinear effects and couplings with particular ions are found to be important. The time scales for CO formation range from 100,000 to a few million years, depending on the chemistry and regime. The time required for essentially complete conversion of C(+) to CO in the region where the H3(+) chemistry dominates is several million years. Because this time is longer than or comparable to dynamical time scales for dense interstellar clouds, steady-state abundances may not be observed in such clouds.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
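The splitting idea can be illustrated with a schematic first-order implicit-explicit (IMEX) step on a toy split ODE, standing in for the high-order additive Runge-Kutta partitioning described above; the stiff rate constant and slow forcing below are illustrative assumptions:

# Schematic first-order IMEX step for du/dt = a*u (stiff, "acoustic-like",
# treated implicitly) + s(t) (slow, treated explicitly).
import numpy as np

a = -1.0e4                      # stiff linear rate (implicit part)
s = lambda t: np.sin(t)         # slow forcing (explicit part)

dt, T = 1.0e-2, 2.0             # dt set by the slow scale, not by 1/|a|
u, t = 1.0, 0.0
while t < T:
    # u_{n+1} = u_n + dt*s(t_n) + dt*a*u_{n+1}  ->  solve for u_{n+1}
    u = (u + dt * s(t)) / (1.0 - dt * a)
    t += dt

Because the stiff part is advanced with an unconditionally stable implicit update, the step size is governed by the slow (advective-like) term, which is the point of the partitioning.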
Computer considerations for real time simulation of a generalized rotor model
NASA Technical Reports Server (NTRS)
Howe, R. M.; Fogarty, L. E.
1977-01-01
Scaled equations were developed to meet requirements for real-time computer simulation of the rotor system research aircraft. These equations form the basis for consideration of both digital and hybrid mechanization for real-time simulation. For all-digital simulation, estimates of the required speed in terms of equivalent operations per second are developed based on the complexity of the equations and the required integration frame rates. For both conventional hybrid simulation and hybrid simulation using time-shared analog elements, the amount of required equipment is estimated, along with a consideration of the dynamic errors. Conventional hybrid mechanization using analog simulation of those rotor equations which involve rotor-spin frequencies (this constitutes the bulk of the equations) requires too much analog equipment. Hybrid simulation using time-sharing techniques for the analog elements appears possible with a reasonable amount of analog equipment. All-digital simulation with affordable general-purpose computers is not possible because of speed limitations, but specially configured digital computers do have the required speed and constitute the recommended approach.
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended to model molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested against direct numerical simulations of compressible planar jets with jet Mach numbers ranging from 0.6 to 2.6. The MVM predicts molecular diffusion and thermal conduction well over a wide range of mixing-volume sizes and numbers of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of the mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at the length scale of the mixing volume. The mixing time scales for the passive scalar and for temperature are well correlated. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale; at smaller scales the mixing time scale is easily affected by the different distributions of intermittent small-scale structures in the passive scalar and temperature fields. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful for modeling thermal conduction when modeling the dissipation rate of temperature fluctuations is difficult.
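A generic particle-mixing sketch conveys the flavor of such Lagrangian mixing models, although it is an interaction-by-exchange-with-the-mean style relaxation rather than the authors' exact MVM formulation; particle count, mixing time scale, and step size are assumptions:

# Generic sketch: notional particles in one mixing volume relax toward the
# local mean with a mixing time scale tau (IEM-style; the MVM of this paper
# uses a more detailed particle-interaction formulation).
import numpy as np

rng = np.random.default_rng(2)
phi = rng.standard_normal(8)          # scalar values of particles in the volume
tau, dt, nsteps = 1.0e-3, 1.0e-4, 50

for _ in range(nsteps):
    mean = phi.mean()                 # mean over particles in the mixing volume
    phi += -dt / tau * (phi - mean)   # scalar variance decays, mean is conserved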
Vandergoot, C.S.; Bur, M.T.; Powell, K.A.
2008-01-01
Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions. ?? Copyright by the American Fisheries Society 2008.
As a Matter of Force—Systematic Biases in Idealized Turbulence Simulations
NASA Astrophysics Data System (ADS)
Grete, Philipp; O’Shea, Brian W.; Beckwith, Kris
2018-05-01
Many astrophysical systems encompass very large dynamical ranges in space and time, which are not accessible by direct numerical simulations. Thus, idealized subvolumes are often used to study small-scale effects including the dynamics of turbulence. These turbulent boxes require an artificial driving in order to mimic energy injection from large-scale processes. In this Letter, we show and quantify how the autocorrelation time of the driving and its normalization systematically change the properties of an isothermal compressible magnetohydrodynamic flow in the sub- and supersonic regime and affect astrophysical observations such as Faraday rotation. For example, we find that δ-in-time forcing with a constant energy injection leads to a steeper slope in kinetic energy spectrum and less-efficient small-scale dynamo action. In general, we show that shorter autocorrelation times require more power in the acceleration field, which results in more power in compressive modes that weaken the anticorrelation between density and magnetic field strength. Thus, derived observables, such as the line-of-sight (LOS) magnetic field from rotation measures, are systematically biased by the driving mechanism. We argue that δ-in-time forcing is unrealistic and numerically unresolved, and conclude that special care needs to be taken in interpreting observational results based on the use of idealized simulations.
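To make the role of the autocorrelation time concrete, one common way to generate a driving field with a prescribed correlation time is an Ornstein-Uhlenbeck process, sketched below for a single acceleration component; the parameters are illustrative and this is not the forcing module of any particular turbulence code:

# Sketch: one component of a stochastic driving field with autocorrelation
# time tau, generated as an Ornstein-Uhlenbeck process.  Delta-in-time
# forcing corresponds to the limit tau -> 0.
import numpy as np

tau, sigma = 0.5, 1.0              # autocorrelation time, target rms amplitude
dt, nsteps = 1.0e-2, 10000
rng = np.random.default_rng(3)

a = np.empty(nsteps)
a[0] = 0.0
decay = np.exp(-dt / tau)
for n in range(1, nsteps):
    a[n] = a[n - 1] * decay + sigma * np.sqrt(1.0 - decay**2) * rng.standard_normal()
# Once stationary, <a(t) a(t+s)> ~ sigma**2 * exp(-|s|/tau).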
Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...
2017-02-28
Here in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
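The quoted accuracy definition (absolute mean plus 3 sigma during quiet times) lends itself to a simple Monte Carlo check; the error model below is a placeholder with made-up error magnitudes, not the GOES-R error budget:

# Sketch: evaluate an "absolute mean plus 3 sigma" accuracy metric over
# Monte Carlo samples of measurement error (placeholder error model only).
import numpy as np

rng = np.random.default_rng(4)
n_trials = 100000
zero_offset = rng.normal(0.0, 0.5, n_trials)      # nT, assumed offset draw
scale_error = rng.normal(0.0, 0.003, n_trials)    # assumed fractional scale error
field = 100.0                                     # nT, quiet-time background

error = zero_offset + scale_error * field
accuracy = abs(error.mean()) + 3.0 * error.std()  # quiet-time definition
print(f"predicted accuracy: {accuracy:.2f} nT (requirement 1.7 nT)")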
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy-viscosity-based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion, and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is therefore uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D, and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines are reported along with some characteristic results.
Scaling-up the minimum requirements analysis for big wilderness issues
David N. Cole
2007-01-01
The concept of applying a "minimum requirements" analysis to decisions about administrative actions in wilderness in the United States has been around for a long time. It comes from Section 4(c) of the Wilderness Act of 1964, which states that "except as necessary to meet minimum requirements for the administration of the area for the purposes of this...
Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations
NASA Astrophysics Data System (ADS)
Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean
2017-10-01
Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
Learning Analytics Platform, towards an Open Scalable Streaming Solution for Education
ERIC Educational Resources Information Center
Lewkow, Nicholas; Zimmerman, Neil; Riedesel, Mark; Essa, Alfred
2015-01-01
Next generation digital learning environments require delivering "just-in-time feedback" to learners and those who support them. Unlike traditional business intelligence environments, streaming data requires resilient infrastructure that can move data at scale from heterogeneous data sources, process the data quickly for use across…
Molecular dynamics on diffusive time scales from the phase-field-crystal equation.
Chan, Pak Yuen; Goldenfeld, Nigel; Dantzig, Jon
2009-03-01
We extend the phase-field-crystal model to accommodate exact atomic configurations and vacancies by requiring the order parameter to be non-negative. The resulting theory dictates the number of atoms and describes the motion of each of them. By solving the dynamical equation of the model, which is a partial differential equation, we are essentially performing molecular dynamics simulations on diffusive time scales. To illustrate this approach, we calculate the two-point correlation function of a fluid.
Isosurface Extraction in Time-Varying Fields Using a Temporal Hierarchical Index Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Many high-performance isosurface extraction algorithms have been proposed in the past several years as a result of intensive research efforts. When applying these algorithms to large-scale time-varying fields, the storage overhead incurred from storing the search index often becomes overwhelming. This paper proposes an algorithm for locating isosurface cells in time-varying fields. We devise a new data structure, called the Temporal Hierarchical Index Tree, which utilizes the temporal coherence that exists in a time-varying field and adaptively coalesces the cells' extreme values over time; the resulting extreme values are then used to create the isosurface cell search index. For a typical time-varying scalar data set, not only does this temporal hierarchical index tree require much less storage space, but also the amount of I/O required to access the indices from the disk at different time steps is substantially reduced. We illustrate the utility and speed of our algorithm with data from several large-scale time-varying CFD simulations. Our algorithm can achieve more than 80% of disk-space savings when compared with the existing techniques, while the isosurface extraction time is nearly optimal.
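A heavily simplified sketch of the underlying idea (a binary tree over time steps whose nodes store each cell's min/max over the node's time interval, so coarser nodes trade storage for a more conservative candidate set) is given below; it is an illustration of the concept, not the authors' data structure:

# Simplified sketch of a temporal index over per-cell extreme values.
import numpy as np

class TimeNode:
    def __init__(self, t0, t1, cmin, cmax, left=None, right=None):
        self.t0, self.t1 = t0, t1          # inclusive time-step interval
        self.cmin, self.cmax = cmin, cmax  # per-cell extremes over [t0, t1]
        self.left, self.right = left, right

def build(values, t0, t1):
    """values[t][c]: scalar value of cell c at time step t."""
    if t0 == t1:
        v = values[t0]
        return TimeNode(t0, t1, v.copy(), v.copy())
    mid = (t0 + t1) // 2
    left, right = build(values, t0, mid), build(values, mid + 1, t1)
    return TimeNode(t0, t1,
                    np.minimum(left.cmin, right.cmin),
                    np.maximum(left.cmax, right.cmax), left, right)

def candidates(node, t, iso, max_depth):
    """Conservative candidate cells for isovalue iso at time step t,
    descending at most max_depth levels (coarser node = fewer nodes stored,
    but a larger, still correct, candidate set)."""
    if max_depth == 0 or node.t0 == node.t1:
        return np.flatnonzero((node.cmin <= iso) & (iso <= node.cmax))
    child = node.left if t <= node.left.t1 else node.right
    return candidates(child, t, iso, max_depth - 1)

values = np.random.rand(16, 1000)          # 16 time steps, 1000 cells
root = build(values, 0, 15)
cells = candidates(root, t=7, iso=0.5, max_depth=2)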
The effects of magnetic fields on the growth of thermal instabilities in cooling flows
NASA Technical Reports Server (NTRS)
David, Laurence P.; Bregman, Joel N.
1989-01-01
The effects of heat conduction and magnetic fields on the growth of thermal instabilities in cooling flows are examined using a time-dependent hydrodynamics code. It is found that, for magnetic field strengths of roughly 1 micro-Gauss, magnetic pressure forces can completely suppress shocks from forming in thermally unstable entropy perturbations with initial length scales as large as 20 kpc, even for initial amplitudes as great as 60 percent. Perturbations with initial amplitudes of 50 percent and initial magnetic field strengths of 1 micro-Gauss cool to 10,000 K on a time scale which is only 22 percent of the initial instantaneous cooling time. Nonlinear perturbations can thus condense out of cooling flows on a time scale substantially less than the time required for linear perturbations and produce significant mass deposition of cold gas while the accreting intracluster gas is still at large radii.
4D Reconstruction and Visualization of Cultural Heritage: Analyzing Our Legacy Through Time
NASA Astrophysics Data System (ADS)
Rodríguez-Gonzálvez, P.; Muñoz-Nieto, A. L.; del Pozo, S.; Sanchez-Aparicio, L. J.; Gonzalez-Aguilera, D.; Micoli, L.; Gonizzi Barsanti, S.; Guidi, G.; Mills, J.; Fieber, K.; Haynes, I.; Hejmanowska, B.
2017-02-01
Temporal analyses and multi-temporal 3D reconstruction are fundamental for the preservation and maintenance of all forms of Cultural Heritage (CH) and are the basis for decisions related to interventions and promotion. Introducing the fourth dimension of time into three-dimensional geometric modelling of real data allows the creation of a multi-temporal representation of a site. In this way, scholars from various disciplines (surveyors, geologists, archaeologists, architects, philologists, etc.) are provided with a new set of tools and working methods to support the study of the evolution of heritage sites, both to develop hypotheses about the past and to model likely future developments. The capacity to "see" the dynamic evolution of CH assets across different spatial scales (e.g. building, site, city or territory) compressed in a diachronic model affords the possibility of better understanding the present status of CH according to its history. However, there are numerous challenges in carrying out 4D modelling and the requisite multi-data-source integration. It is necessary to identify the specifications, needs and requirements of the CH community to understand the required levels of 4D model information. In this way, it is possible to determine the optimum material and technologies to be utilised at different CH scales, as well as the data management and visualization requirements. This manuscript aims to provide a comprehensive approach for CH time-varying representations, analysis and visualization across different working scales and environments: rural landscape, urban landscape and architectural scales. Within this aim, the different available metric data sources are systemized and evaluated in terms of their suitability.
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang
2016-01-28
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
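One simple way to build a statistically equivalent surrogate from a fine-scale signal is to shuffle wavelet detail coefficients within each level, preserving the per-scale energy while randomizing the fine-scale ordering. The sketch below assumes the PyWavelets package and a placeholder signal; it is a simplified stand-in for the encoding scheme described above, not the authors' code:

# Sketch of a wavelet-based surrogate time series.
import numpy as np
import pywt

rng = np.random.default_rng(5)
signal = np.cumsum(rng.standard_normal(1024))      # placeholder microscale series

coeffs = pywt.wavedec(signal, 'db4', level=6)
surrogate_coeffs = [coeffs[0]] + [rng.permutation(d) for d in coeffs[1:]]
surrogate = pywt.waverec(surrogate_coeffs, 'db4')
# 'surrogate' has the same per-level wavelet energy as 'signal' but a
# randomized fine-scale ordering, the kind of statistically equivalent
# series that could be handed to a continuum-scale simulation.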
Scaling, Similarity, and the Fourth Paradigm for Hydrology
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross
2017-01-01
In this synthesis paper addressing hydrologic scaling and similarity, we posit that roadblocks in the search for universal laws of hydrology are hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations.
Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F; Spoerke, Erik David; Pan, Wei; Zuo, Jian Min
2016-04-13
Atomic-scale phenomena fundamentally influence materials' form and function, which makes the ability to locally probe and study these processes critical to advancing our understanding and development of materials. Atomic-scale chemical imaging by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) is a powerful approach to investigate solid crystal structures. Inefficient X-ray emission and collection, however, require long acquisition times (typically hundreds of seconds), making the technique incompatible with electron-beam-sensitive materials and the study of dynamic material phenomena. Here we describe an atomic-scale STEM-EDS chemical imaging technique that decreases the acquisition time to as little as one second, a reduction of more than 100 times. We demonstrate this new approach using a LaAlO3 single crystal and study dynamic phase transformation in beam-sensitive Li[Li0.2Ni0.2Mn0.6]O2 (LNMO) lithium ion battery cathode material. By capturing a series of time-lapsed chemical maps, we show for the first time clear atomic-scale evidence of preferred Ni mobility in the LNMO transformation, revealing new kinetic mechanisms. These examples highlight the potential of this approach toward temporal, atomic-scale mapping of crystal structure and chemistry for investigating dynamic material phenomena.
Comparative Analysis of the Relative Validity for Subjective Time Rating Scales. Final Report.
ERIC Educational Resources Information Center
Carpenter, James B.; And Others
Since the accuracy and validity of occupational data may vary according to the rating scale format employed, the first phase of the research described in the report employed hypothetical job descriptions from which accurate criterion data could be generated. The second phase of the research required developing an occupational survey instrument…
The MST radar technique: Requirements for operational weather forecasting
NASA Technical Reports Server (NTRS)
Larsen, M. F.
1983-01-01
There is a feeling that the accuracy of mesoscale forecasts for spatial scales of less than 1000 km and time scales of less than 12 hours can be improved significantly if resources are applied to the problem in an intensive effort over the next decade. Since the most dangerous and damaging types of weather occur at these scales, there are major advantages to be gained if such a program is successful. The interest in improving short term forecasting is evident. The technology at the present time is sufficiently developed, both in terms of new observing systems and the computing power to handle the observations, to warrant an intensive effort to improve stormscale forecasting. An assessment of the extent to which the so-called MST radar technique fulfills the requirements for an operational mesoscale observing network is reviewed and the extent to which improvements in various types of forecasting could be expected if such a network is put into operation are delineated.
Fractionally Integrated Flux model and Scaling Laws in Weather and Climate
NASA Astrophysics Data System (ADS)
Schertzer, Daniel; Lovejoy, Shaun
2013-04-01
The Fractionally Integrated Flux model (FIF) has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classical hypothesised 2D and 3D turbulent regimes, respectively for large and small spatial scales. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties, e.g. it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition makes it possible to have, at much larger time scales, scaling space-time climate fluctuations with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
NASA Astrophysics Data System (ADS)
Serra, Romain; Valette, Anne; Taji, Amine; Emsley, Stephen
2017-04-01
Building climate resilience (i.e. climate change adaptation or self-renewal of ecosystems) or planning environmental rehabilitation and nature-based solutions to address vulnerabilities to disturbances has two prerequisites: (1) identify the disorder, i.e. stresses caused by discrete events such as hurricanes, tsunamis, heavy rains, hailstone falls, or smog, or accumulated over time, such as warming, rainfall changes, ocean acidification, and soil salinization, measured by trends; and (2) qualify its impact on the ecosystems, i.e. the resulting strains. Mitigation of threats is accordingly twofold: (i) on local temporal scales for protection, and (ii) on long time scales for prevention and sustainability. Assessment and evaluation prior to designing future scenarios require concomitant acquisition of (a) climate data at global and local spatial scales which describe the changes at the various temporal scales of the phenomena without signal aliasing, and (b) the ecosystems' status at the scales of the forcing and of relaxation times, hysteresis lags, periodicities of orbits in chaotic systems, shifts from one ecosystem attractor to another, etc. Dissociating groups of time scales and spatial scales facilitates the analysis and helps set up monitoring schemes. The Sentinel-2 mission, with a revisit of the Earth every few days and a 10 m on-ground resolution, is a good automatic spectro-analytical monitoring system because it detects changes in numerous optical and IR bands at spatial scales appropriate for describing land parcels. Combined with photo-interpreted VHR data, which describe the environment more crudely but locate land-parcel borders with high precision, it helps relate stresses to strains and so understand their relationship empirically. An example is provided for Tonga, courtesy of ESA support and an ADB request, with a focus on time-series consistency, which requires radiometric and geometric normalisation of the EO data sets. The methodologies have been developed in the frame of ESA programmes and the EC H2020 Co-Resyf programme.
A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory
NASA Astrophysics Data System (ADS)
Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin
2015-09-01
Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND-flash memories have become popular for space-borne high-performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. Thus, a fully run-time configurable, high-performance, dependable storage concept is needed that requires only a minimal set of logic or software. The solution is based on composite erasure coding and can be adjusted for altered mission duration or changing environmental conditions.
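As a point of reference, the simplest form of block-level erasure coding is a single XOR parity block that allows recovery of any one lost data block; this is far weaker than the composite erasure coding scheme referred to above and is shown only to illustrate the principle, with arbitrary page sizes:

# Minimal sketch of block-level erasure coding with one XOR parity page.
import numpy as np

rng = np.random.default_rng(6)
data = [rng.integers(0, 256, 4096, dtype=np.uint8) for _ in range(4)]  # 4 pages
parity = np.bitwise_xor.reduce(data)                                   # parity page

lost_index = 2                        # suppose page 2 is lost to a radiation upset
survivors = [d for i, d in enumerate(data) if i != lost_index]
recovered = np.bitwise_xor.reduce(survivors + [parity])
assert np.array_equal(recovered, data[lost_index])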
Scaling phenomena in fatigue and fracture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenblatt, G.I.
2004-12-01
The general classification of scaling laws will be presented and the basic concepts of modern similarity analysis--intermediate asymptotics, complete and incomplete similarity--will be introduced and discussed. The examples of scaling laws corresponding to complete similarity will be given. The Paris scaling law in fatigue will be discussed as an instructive example of incomplete similarity. It will be emphasized that in the Paris law the powers are not the material constants. Therefore, the evaluation of the life-time of structures using the data obtained from standard fatigue tests requires some precautions.
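For reference, the Paris law mentioned above as the canonical example of incomplete similarity has the standard power-law form (in LaTeX), where, as the abstract stresses, the prefactor C and exponent m are not universal material constants:

% Paris law: fatigue crack growth per cycle as a power of the
% stress-intensity-factor range (incomplete similarity).
\[
  \frac{da}{dN} = C\,(\Delta K)^{m}
\]

Here a is the crack length, N the number of load cycles, and ΔK the stress-intensity-factor range.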
Dynamic ocean management increases the efficiency and efficacy of fisheries management.
Dunn, Daniel C; Maxwell, Sara M; Boustany, Andre M; Halpin, Patrick N
2016-01-19
In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. Here, we address the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism, and social aggregations). Furthermore, it can significantly improve the efficiency of management: as the resolution of the closures used increases (i.e., as the closures become more targeted), the percentage of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. In the scenario examined, coarser scale management measures (annual time-area closures and monthly full-fishery closures) would displace up to four to five times the target catch and require 100-200 times more square kilometer-days of closure than dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.
Crowe, A S; Booty, W G
1995-05-01
A multi-level pesticide assessment methodology has been developed to permit regulatory personnel to undertake a variety of assessments on the potential for pesticides used in agricultural areas to contaminate the groundwater regime at an increasingly detailed geographical scale of investigation. A multi-level approach accounts for a variety of assessment objectives and the detail required in the assessment, the restrictions on the availability and accuracy of data, the time available to undertake the assessment, and the expertise of the decision maker. The level 1: regional scale is designed to prioritize districts having a potentially high risk for groundwater contamination from the application of a specific pesticide for a particular crop. The level 2: local scale is used to identify critical areas for groundwater contamination, at a soil polygon scale, within a district. The level 3: soil profile scale allows the user to evaluate specific factors influencing pesticide leaching and persistence, and to determine the extent and timing of leaching, through the simulation of the migration of a pesticide within a soil profile. Because of the scale of investigation, the limited amount of data required, and the qualitative nature of the assessment results, the level 1 and level 2 assessments are designed primarily for quick and broad guidance related to management practices. A level 3 assessment is more complex, requires considerably more data and expertise on the part of the user, and hence is designed to verify the potential for contamination identified during the level 1 or 2 assessment. The system combines environmental modelling, geographical information systems, extensive databases, data management systems, expert systems, and pesticide assessment models to form an environmental information system for assessing the potential for pesticides to contaminate groundwater.
Report of the panel on earth rotation and reference frames, section 7
NASA Technical Reports Server (NTRS)
Dickey, Jean O.; Dickman, Steven R.; Eubanks, Marshall T.; Feissel, Martine; Herring, Thomas A.; Mueller, Ivan I.; Rosen, Richard D.; Schutz, Robert E.; Wahr, John M.; Wilson, Charles R.
1991-01-01
Objectives and requirements for Earth rotation and reference frame studies in the 1990s are discussed. The objectives are to observe and understand interactions of air and water with the rotational dynamics of the Earth, the effects of the Earth's crust and mantle on the dynamics and excitation of Earth rotation variations over time scales of hours to centuries, and the effects of the Earth's core on the rotational dynamics and the excitation of Earth rotation variations over time scales of a year or longer. Another objective is to establish, refine and maintain terrestrial and celestial reference frames. Requirements include improvements in observations and analysis, improvements in celestial and terrestrial reference frames and reference frame connections, and improved observations of crustal motion and mass redistribution on the Earth.
Woodward, Carol S.; Gardner, David J.; Evans, Katherine J.
2015-01-01
Efficient solutions of global climate models require effectively handling disparate length and time scales. Implicit solution approaches allow time integration of the physical system with a step size governed by accuracy of the processes of interest rather than by stability of the fastest time scales present. Implicit approaches, however, require the solution of nonlinear systems within each time step. Usually, Newton's method is applied to solve these systems. Each iteration of Newton's method, in turn, requires the solution of a linear model of the nonlinear system. This model employs the Jacobian of the problem-defining nonlinear residual, but this Jacobian can be costly to form. If a Krylov linear solver is used for the solution of the linear system, the action of the Jacobian matrix on a given vector is required. In the case of spectral element methods, the Jacobian is not calculated but only implemented through matrix-vector products. The matrix-vector multiply can also be approximated by a finite difference approximation, which may introduce inaccuracy in the overall nonlinear solver. In this paper, we review the advantages and disadvantages of finite difference approximations of these matrix-vector products for climate dynamics within the spectral element shallow water dynamical core of the Community Atmosphere Model.
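The finite difference approximation of the Jacobian-vector product discussed above can be sketched in a few lines. The snippet below is not taken from the paper; it is a generic NumPy illustration with a hypothetical residual function F and a common heuristic for the perturbation size.

```python
import numpy as np

def jacobian_vector_product(residual, u, v, eps=None):
    """Approximate J(u) @ v by a one-sided finite difference of the residual.

    residual : callable returning F(u), the problem-defining nonlinear residual
    u        : current Newton iterate
    v        : Krylov vector whose image under the Jacobian is needed
    """
    if eps is None:
        # A common heuristic for the perturbation size (illustrative choice)
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
    return (residual(u + eps * v) - residual(u)) / eps

# Hypothetical residual F(u) = u**3 - 1, so the exact Jacobian is diag(3*u**2)
F = lambda u: u**3 - 1.0
u0 = np.array([1.0, 2.0, 0.5])
v0 = np.array([0.1, -0.2, 0.3])
print(jacobian_vector_product(F, u0, v0))   # close to 3*u0**2 * v0 = [0.3, -2.4, 0.225]
```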
Manz, Stephanie; Casandruc, Albert; Zhang, Dongfang; Zhong, Yinpeng; Loch, Rolf A; Marx, Alexander; Hasegawa, Taisuke; Liu, Lai Chung; Bayesteh, Shima; Delsim-Hashemi, Hossein; Hoffmann, Matthias; Felber, Matthias; Hachmann, Max; Mayet, Frank; Hirscht, Julian; Keskin, Sercan; Hada, Masaki; Epp, Sascha W; Flöttmann, Klaus; Miller, R J Dwayne
2015-01-01
The long held objective of directly observing atomic motions during the defining moments of chemistry has been achieved based on ultrabright electron sources that have given rise to a new field of atomically resolved structural dynamics. This class of experiments requires not only simultaneous sub-atomic spatial resolution with temporal resolution on the 100 femtosecond time scale but also has brightness requirements approaching single shot atomic resolution conditions. The brightness condition is in recognition that chemistry leads generally to irreversible changes in structure during the experimental conditions and that the nanoscale thin samples needed for electron structural probes pose upper limits to the available sample or "film" for atomic movies. Even in the case of reversible systems, the degree of excitation and thermal effects require the brightest sources possible for a given space-time resolution to observe the structural changes above background. Further progress in the field, particularly to the study of biological systems and solution reaction chemistry, requires increased brightness and spatial coherence, as well as an ability to tune the electron scattering cross-section to meet sample constraints. The electron bunch density or intensity depends directly on the magnitude of the extraction field for photoemitted electron sources and electron energy distribution in the transverse and longitudinal planes of electron propagation. This work examines the fundamental limits to optimizing these parameters based on relativistic electron sources using re-bunching cavity concepts that are now capable of achieving 10 femtosecond time scale resolution to capture the fastest nuclear motions. This analysis is given for both diffraction and real space imaging of structural dynamics in which there are several orders of magnitude higher space-time resolution with diffraction methods. The first experimental results from the Relativistic Electron Gun for Atomic Exploration (REGAE) are given that show the significantly reduced multiple electron scattering problem in this regime, which opens up micron scale systems, notably solution phase chemistry, to atomically resolved structural dynamics.
Temporal scaling and spatial statistical analyses of groundwater level fluctuations
NASA Astrophysics Data System (ADS)
Sun, H.; Yuan, L., Sr.; Zhang, Y.
2017-12-01
Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics requires efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuation at three wells located in the lower Mississippi valley, after removing the seasonal cycle in the temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying the fractal-scaling behavior changing with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrology dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to understand further the nature of complex dynamics in hydrology.
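The paper's TS-LHE estimator is not reproduced here, but the general idea of estimating a Hurst-type scaling exponent from a fluctuation record can be illustrated with standard detrended fluctuation analysis (DFA1). The sketch below uses only NumPy; the synthetic level series and the chosen window sizes are hypothetical.

```python
import numpy as np

def dfa_hurst(x, scales):
    """Estimate a Hurst-type scaling exponent by detrended fluctuation analysis (DFA1)."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # linear detrending in each window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) versus log s is the scaling exponent
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

# Hypothetical daily water-level record; its increments stand in for level fluctuations
rng = np.random.default_rng(0)
levels = np.cumsum(rng.standard_normal(4000))
print(dfa_hurst(np.diff(levels), scales=[16, 32, 64, 128, 256]))  # ~0.5 for white-noise increments
```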
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two units of personal computer were successfully networked together to form a small scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processing cores in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted in order to test the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, and was carried out using a simple MPI Hello program written in C. Additionally, a performance test was done to show that the calculation performance of this cluster is much better than that of a single-CPU computer. In this performance test, four runs were made by executing the same code using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer using common hardware that is capable of higher computing power than a single-CPU processor, and this can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
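The communication test described above was written in C; the following is a hedged Python analogue using mpi4py (assuming an MPI installation such as MPICH2 is available), combining a "hello" check with a simple point-to-point exchange.

```python
# Run with, e.g.: mpiexec -n 8 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print(f"Hello from rank {rank} of {size}")

# Simple point-to-point check: rank 0 sends a message to every other rank
if rank == 0:
    for dest in range(1, size):
        comm.send(f"ping to {dest}", dest=dest, tag=7)
else:
    msg = comm.recv(source=0, tag=7)
    print(f"rank {rank} received: {msg!r}")
```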
Limits on the location of planetesimal formation in self-gravitating protostellar discs
NASA Astrophysics Data System (ADS)
Clarke, C. J.; Lodato, G.
2009-09-01
In this Letter, we show that if planetesimals form in spiral features in self-gravitating discs, as previously suggested by the idealized simulations of Rice et al., then in realistic protostellar discs, this process will be restricted to the outer regions of the disc (i.e. at radii in excess of several tens of au). This restriction relates to the requirement that dust has to be concentrated in spiral features on a time-scale that is less than the (roughly dynamical) lifetime of such features, and that such rapid accumulation requires spiral features whose fractional amplitude is not much less than unity. This in turn requires that the cooling time-scale of the gas is relatively short, which restricts the process to the outer disc. We point out that the efficient conversion of a large fraction of the primordial dust in the disc into planetesimals could rescue this material from the well-known problem of rapid inward migration at an approximate metre-size scale and that in principle the collisional evolution of these objects could help to resupply small dust to the protostellar disc. We also point out the possible implications of this scenario for the location of planetesimal belts inferred in debris discs around main sequence stars, but stress that further dynamical studies are required in order to establish whether the disc retains a memory of the initial site of planetesimal creation.
Decorrelation scales for Arctic Ocean hydrography - Part I: Amerasian Basin
NASA Astrophysics Data System (ADS)
Sumata, Hiroshi; Kauker, Frank; Karcher, Michael; Rabe, Benjamin; Timmermans, Mary-Louise; Behrendt, Axel; Gerdes, Rüdiger; Schauer, Ursula; Shimada, Koji; Cho, Kyoung-Ho; Kikuchi, Takashi
2018-03-01
Any use of observational data for data assimilation requires adequate information of their representativeness in space and time. This is particularly important for sparse, non-synoptic data, which comprise the bulk of oceanic in situ observations in the Arctic. To quantify spatial and temporal scales of temperature and salinity variations, we estimate the autocorrelation function and associated decorrelation scales for the Amerasian Basin of the Arctic Ocean. For this purpose, we compile historical measurements from 1980 to 2015. Assuming spatial and temporal homogeneity of the decorrelation scale in the basin interior (abyssal plain area), we calculate autocorrelations as a function of spatial distance and temporal lag. The examination of the functional form of autocorrelation in each depth range reveals that the autocorrelation is well described by a Gaussian function in space and time. We derive decorrelation scales of 150-200 km in space and 100-300 days in time. These scales are directly applicable to quantify the representation error, which is essential for use of ocean in situ measurements in data assimilation. We also describe how the estimated autocorrelation function and decorrelation scale should be applied for cost function calculation in a data assimilation system.
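A minimal sketch of the kind of fit implied above - estimating a decorrelation scale by fitting a Gaussian autocorrelation model to binned correlation estimates - is given below. The lag-correlation values are hypothetical; only the fitting step is illustrated.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_acf(lag, scale):
    """Gaussian autocorrelation model r(lag) = exp(-(lag/scale)**2)."""
    return np.exp(-(lag / scale) ** 2)

# Hypothetical binned autocorrelation estimates versus spatial separation (km)
lags_km = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 300.0, 400.0])
r_hat   = np.array([1.0, 0.92, 0.71, 0.47, 0.26, 0.05, 0.01])

popt, _ = curve_fit(gaussian_acf, lags_km, r_hat, p0=[150.0])
print(f"decorrelation scale ~ {popt[0]:.0f} km")   # ~170 km for these made-up values
```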
NASA Astrophysics Data System (ADS)
Rajabi, F.; Battiato, I.
2016-12-01
Long term predictions of the impact of anthropogenic stressors on the environment are essential to reduce the risks associated with processes such as CO2 sequestration and nuclear waste storage in the subsurface. On the other hand, transient forcing factors (e.g. time-varying injection or pumping rate) with evolving heterogeneity of time scales spanning from days to years can influence transport phenomena at the pore scale. A comprehensive spatio-temporal prediction of reactive transport in porous media under time-dependent forcing factors for thousands of years requires the formulation of continuum scale models for time-averages. Yet, as with every macroscopic model, time-averaged models can lose predictivity and accuracy when certain conditions are violated. This is true whenever a lack of temporal and spatial scale separation occurs, which makes the continuum scale equation a poor assumption for the processes at the pore scale. In this work, we consider mass transport of a dissolved species undergoing a heterogeneous reaction and subject to time-varying boundary conditions in a periodic porous medium. By means of the homogenization method and an asymptotic expansion technique, we derive a macro-time continuum-scale equation as well as expressions for its effective properties. Our analysis demonstrates that the dynamics at the macro-scale is strongly influenced by the interplay between the signal frequency at the boundary and transport processes at the pore level. In addition, we provide the conditions under which the space-time averaged equations accurately describe pore-scale processes. To validate our theoretical predictions, we consider a thin fracture with reacting walls and transient boundary conditions at the inlet. Our analysis shows good agreement between numerical simulations and theoretical predictions. Furthermore, our numerical experiments show that mixing patterns of the contaminant plumes at the pore level strongly depend on the signal frequency.
de Tormes Eby, Lillian Turner; Laschober, Tanja C.
2013-01-01
In 2008, New York State required substance use disorder treatment organizations to be 100% tobacco-free. This longitudinal study examined clinicians’ perceptions of the implementation extensiveness of the tobacco-free practices approximately 10–12 months (Time 1) and 20–24 months (Time 2) post regulation and investigated whether clinicians’ commitment to change and use of provided resources at Time 1 predicts perceptions of implementation extensiveness at Time 2. Clinicians (N = 287) noted a mean implementation of 5.60 patient practices (0–10 scale), 2.33 visitor practices (0–8 scale), and 6.66 employee practices (0–12 scale) at Time 1. At Time 2, clinicians perceived a mean implementation of 5.95 patient practices (no increase from Time 1), 2.89 visitor practices (increase from Time 1), and 7.12 employee practices (no increase from Time 1). Commitment to change and use of resources positively predicted perceived implementation extensiveness of visitor and employee practices. The use of resources positively predicted implementation for patient practices. PMID:23430285
Singular perturbations and time scales in the design of digital flight control systems
NASA Technical Reports Server (NTRS)
Naidu, Desineni S.; Price, Douglas B.
1988-01-01
Results are presented from applying the methodology of Singular Perturbations and Time Scales (SPATS) to the control of digital flight systems. A block diagonalization method is described to decouple a full order, two time (slow and fast) scale, discrete control system into reduced order slow and fast subsystems. Basic properties and numerical aspects of the method are discussed. A composite, closed-loop, suboptimal control system is constructed as the sum of the slow and fast optimal feedback controls. The application of this technique to an aircraft model shows close agreement between the exact solutions and the decoupled (or composite) solutions. The main advantage of the method is the considerable reduction in the overall computational requirements for the evaluation of optimal guidance and control laws. The significance of the results is that they can be used for real time, onboard simulation. A brief survey is also presented of digital flight systems.
Towards a study of synoptic-scale variability of the California current system
NASA Technical Reports Server (NTRS)
1985-01-01
A West Coast satellite time series advisory group was established to consider the scientific rationale for the development of complete west coast time series of imagery of sea surface temperature (as derived by the Advanced Very High Resolution Radiometer on the NOAA polar orbiter) and near-surface phytoplankton pigment concentrations (as derived by the Coastal Zone Color Scanner on Nimbus 7). The scientific and data processing requirements for such time series are also considered. It is determined that such time series are essential if a number of scientific questions regarding the synoptic-scale dynamics of the California Current System are to be addressed. These questions concern both biological and physical processes.
ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications
NASA Astrophysics Data System (ADS)
Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.
2003-12-01
The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real-time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well off shore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added or removed from the system; provides for real-time processing and monitoring of data streams--detecting events, and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools which must be developed do not exist, although limited prototype systems are available. This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI- Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network) and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.
Multiple Time Series Node Synchronization Utilizing Ambient Reference
2014-12-31
...processing targeted to performance assessment, is the need for fine scale synchronization among communicating nodes and across multiple domains. The severe requirements that Special... research community and it is well documented and characterized. The datasets considered from this project (listed below) were used to derive the
We produced a scientifically defensible methodology to assess whether a regional system is on a sustainable path. The approach required readily available data, metrics applicable to the relevant scale, and results useful to decision makers. We initiated a pilot project to test ...
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large scale real time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we can determine whether the system, as designed, meets the required
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires all state variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at arbitrary points of the TFM with an unlimited number of sensors. Considering the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM. The speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized, with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840
AUTOPLAN: A PC-based automated mission planning tool
NASA Technical Reports Server (NTRS)
Paterra, Frank C.; Allen, Marc S.; Lawrence, George F.
1987-01-01
A PC-based automated mission and resource planning tool, AUTOPLAN, is described, with application to small-scale planning and scheduling systems in the Space Station program. The input is a proposed mission profile, including mission duration, number of allowable slip periods, and requirement profiles for one or more resources as a function of time. A corresponding availability profile is also entered for each resource over the whole time interval under study. AUTOPLAN determines all integrated schedules which do not require more than the available resources.
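A toy version of the feasibility search AUTOPLAN performs might look like the sketch below: for each allowable slip, check whether the mission's resource requirement profile fits under the availability profile. This is an illustrative reconstruction, not the tool's actual algorithm; the single-resource profiles are hypothetical.

```python
import numpy as np

def feasible_start_times(requirement, availability, max_slip):
    """Return all start offsets (0..max_slip) at which the mission's resource
    requirement profile never exceeds the availability profile."""
    req = np.asarray(requirement, dtype=float)
    avail = np.asarray(availability, dtype=float)
    feasible = []
    for slip in range(max_slip + 1):
        window = avail[slip:slip + len(req)]
        if len(window) == len(req) and np.all(req <= window):
            feasible.append(slip)
    return feasible

# Hypothetical single-resource example: power (kW) needed per day vs. available
need  = [2, 3, 3, 1]
avail = [5, 2, 4, 4, 4, 3, 1, 5]
print(feasible_start_times(need, avail, max_slip=4))   # [1, 2, 3]
```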
NASA Astrophysics Data System (ADS)
Pressel, K. G.; Collins, W.; Desai, A. R.
2011-12-01
Deficiencies in the parameterization of boundary layer clouds in global climate models (GCMs) remain one of the greatest sources of uncertainty in climate change predictions. Many GCM cloud parameterizations, which seek to include some representation of subgrid-scale cloud variability, do so by making assumptions regarding the subgrid-scale spatial probability density function (PDF) of total water content. Properly specifying the form and parameters of the total water PDF is an essential step in the formulation of PDF based cloud parameterizations. In the cloud free boundary layer, the PDF of total water mixing ratio is equivalent to the PDF of water vapor mixing ratio. Understanding the PDF of water vapor mixing ratio in the cloud free atmosphere is a necessary step towards understanding the PDF of water vapor in the cloudy atmosphere. A primary challenge in empirically constraining the PDF of water vapor mixing ratio is a distinct lack of a spatially distributed observational dataset at or near cloud scale. However, at meso-beta (20-50 km) and larger scales, there is a wealth of information on the spatial distribution of water vapor contained in the physically retrieved water vapor profiles from the Atmospheric Infrared Sounder onboard NASA's Aqua satellite. The scaling (scale-invariance) of the observed water vapor field has been suggested as a means of using observations at satellite observed (meso-beta) scales to derive information about cloud scale PDFs. However, doing so requires the derivation of a robust climatology of water vapor scaling from in-situ observations across the meso-gamma (2-20 km) and meso-beta scales. In this work, we present the results of the scaling of high frequency (10 Hz) time series of water vapor mixing ratio as observed from the 447 m WLEF tower located near Park Falls, Wisconsin. Observations from a tall tower offer an ideal set of observations with which to investigate scaling at meso-gamma and meso-beta scales, requiring only the assumption of Taylor's hypothesis to convert observed time scales to spatial scales. Furthermore, the WLEF tower holds an instrument suite offering a diverse set of variables at the 396 m, 122 m, and 30 m levels with which to characterize the state of the boundary layer. Three methods are used to compute scaling exponents for the observed time series: poor man's variance spectra, first order structure functions, and detrended fluctuation analysis. In each case scaling exponents are computed by linear regression. The results for each method are compared and used to build a climatology of scaling exponents. In particular, the results for June 2007 are presented, and it is shown that the scaling of water vapor time series at the 396 m level is characterized by two regimes that are determined by the state of the boundary layer. Finally, the results are compared to, and shown to be roughly consistent with, scaling exponents computed from AIRS observations.
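As an illustration of one of the three estimators mentioned above (first order structure functions with a log-log regression), together with the Taylor's hypothesis conversion from time lags to spatial lags, consider the sketch below. The 10 Hz record, lag choices, and mean wind speed are hypothetical stand-ins, not WLEF data.

```python
import numpy as np

def structure_function_exponent(q_series, lags):
    """First-order structure function S1(tau) = <|q(t+tau) - q(t)|>;
    the scaling exponent is the slope of log S1 versus log tau."""
    s1 = np.array([np.mean(np.abs(q_series[lag:] - q_series[:-lag])) for lag in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(s1), 1)
    return slope

# Hypothetical 10 Hz water vapor mixing ratio record (random-walk stand-in signal)
rng = np.random.default_rng(1)
q = np.cumsum(rng.standard_normal(20_000)) * 1e-3

lags = np.array([10, 20, 50, 100, 200, 500, 1000])     # samples at 10 Hz
print("temporal scaling exponent:", structure_function_exponent(q, lags))

# Taylor's hypothesis: convert time lags to spatial lags with the mean wind speed
mean_wind = 5.0                                         # m/s, hypothetical
spatial_lags_m = (lags / 10.0) * mean_wind
print("spatial lags (m):", spatial_lags_m)
```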
Spin pumping driven auto-oscillator for phase-encoded logic—device design and material requirements
NASA Astrophysics Data System (ADS)
Rakheja, S.; Kani, N.
2017-05-01
In this work, we propose a spin nano-oscillator (SNO) device where information is encoded in the phase (time-shift) of the output oscillations. The spin current required to set up the oscillations in the device is generated through spin pumping from an input nanomagnet that is precessing at RF frequencies. We discuss the operation of the SNO device, in which either the in-plane (IP) or out-of-plane (OOP) magnetization oscillations are utilized toward implementing ultra-low-power circuits. Using physical models of the nanomagnet dynamics and the spin transport through non-magnetic channels, we quantify the reliability of the SNO device using a "scaling ratio". Material requirements for the nanomagnet and the channel to ensure correct logic functionality are identified using the scaling ratio metric. SNO devices consume (2-5)× lower energy compared to CMOS devices and other spin-based devices with similar device sizes and material parameters. The analytical models presented in this work can be used to optimize the performance and scaling of SNO devices in comparison to CMOS devices at ultra-scaled technology nodes.
Command and Control for Large-Scale Hybrid Warfare Systems
2014-06-05
...in C2 architectures was proposed using Petri nets (PNs) [10]. Liao in [11] reported an architecture for...arises from the challenging and often-conflicting user requirements, scale, scope, inter-connectivity with different large-scale networked teams and...resources can be easily modelled and reconfigured by the notion of a block matrix. At any time, the various missions of the networked team can be added
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Current characterization requires large-scale testing to obtain steady detonation wave data for analysis, owing to the relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models against available large scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
NASA Astrophysics Data System (ADS)
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall records at fine time scales varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating in the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
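The scheme itself is distributed as the R package HyetosMinute; the Python sketch below only illustrates the simplest possible adjusting step - proportional rescaling of synthetic fine-scale depths so that they sum to the observed coarse-scale depth - and is not the full Bartlett-Lewis-based procedure. All numbers are hypothetical.

```python
import numpy as np

def proportional_adjust(fine_values, coarse_total):
    """Rescale synthetic fine-scale depths so that they sum exactly to the
    observed coarse-scale depth (the simplest 'proportional' adjusting step)."""
    fine = np.asarray(fine_values, dtype=float)
    s = fine.sum()
    if s == 0:
        return fine                     # nothing to adjust in a dry interval
    return fine * (coarse_total / s)

# Hypothetical example: synthetic 5-min depths within one hour vs. observed hourly depth
synthetic_5min = np.array([0.0, 0.4, 1.1, 0.8, 0.3, 0.0, 0.0, 0.2, 0.0, 0.0, 0.1, 0.0])
observed_hour = 2.4                     # mm
adjusted = proportional_adjust(synthetic_5min, observed_hour)
print(adjusted.sum())                   # 2.4, consistent with the hourly record
```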
Late-time cosmological phase transitions
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (Z ≳ 5), structures existing on scales of 100 Mpc as well as velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approximately 100 Mpc for large scale structure as well as approximately 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.
A comparison of small tractors for thinning central hardwoods
N. Huyler; C.B. LeDoux
1991-01-01
Young-growth hardwood forests in the central hardwood region will require intensive management if they are to help meet the Nation's increasing demand for wood. Such management generally will require entries into the stands when the trees are small. Many small-scale machines are available for harvesting small wood. Time and motion studies were conducted on small-...
On the time-scales of magmatism at island-arc volcanoes.
Turner, S P
2002-12-15
Precise information on time-scales and rates of change is fundamental to an understanding of natural processes and the development of quantitative physical models in the Earth sciences. U-series isotope studies are revolutionizing this field by providing time information in the range 10^2-10^4 years, which is similar to that of many modern Earth processes. I review how the application of U-series isotopes has been used to constrain the time-scales of magma formation, ascent and storage beneath island-arc volcanoes. Different elements are distilled off the subducting plate at different times and in different places. Contributions from subducted sediments to island-arc lava sources appear to occur some 350 kyr to 4 Myr prior to eruption. Fluid release from the subducting oceanic crust into the mantle wedge may be a multi-stage process and occurs over a period ranging from a few hundred kyr to less than one kyr prior to eruption. This implies that dehydration commences prior to the initiation of partial melting within the mantle wedge, which is consistent with recent evidence that the onset of melting is controlled by an isotherm and thus the thermal structure within the wedge. U-Pa disequilibria appear to require a component of decompression melting, possibly due to the development of gravitational instabilities. The preservation of large ^226Ra disequilibria permits only a short period of time between fluid addition and eruption. This requires rapid melt segregation, magma ascent by channelled flow and minimal residence time within the lithosphere. The evolution from basalt to basaltic andesite probably occurs rapidly during ascent or in magma reservoirs inferred from some geophysical data to lie within the lithospheric mantle. The flux across the Moho is broadly andesitic, and some magmas subsequently stall in more shallow crustal-level magma chambers, where they evolve to more differentiated compositions on time-scales of a few thousand years or less.
NASA Astrophysics Data System (ADS)
Gat, Amir; Friedman, Yonathan
2017-11-01
The characteristic time of low-Reynolds number fluid-structure interaction scales linearly with the ratio of fluid viscosity to solid Young's modulus. For sufficiently large values of Young's modulus, both time- and length-scales of the viscous-elastic dynamics may be similar to acoustic time- and length-scales. However, the requirement of dominant viscous effects limits the validity of such regimes to micro-configurations. We here study the dynamics of an acoustic plane wave impinging on the surface of a layered sphere, immersed within an inviscid fluid, and composed of an inner elastic sphere, a creeping fluid layer and an external elastic shell. We focus on configurations with similar viscous-elastic and acoustic time- and length-scales, where the viscous-elastic speed of interaction between the creeping layer and the elastic regions is similar to the speed of sound. By expanding the linearized spherical Reynolds equation into the relevant spectral series solution for the hyperbolic elastic regions, a global stiffness matrix of the layered elastic sphere was obtained. This work relates viscous-elastic dynamics to acoustic scattering and may pave the way to the design of novel meta-materials with unique acoustic properties. ISF 818/13.
Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series
NASA Astrophysics Data System (ADS)
Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth
2017-12-01
The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GPs modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
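The authors provide open-source implementations of the method; assuming the Python package celerite2 (a successor implementation of this class of scalable GP solvers) is installed, a single damped-oscillator GP fit to an unevenly sampled series might look like the sketch below. The light curve, kernel parameters, and period are illustrative, not taken from the paper.

```python
import numpy as np
import celerite2
from celerite2 import terms

# Simulated, unevenly sampled time series (hypothetical stellar light curve)
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 100, 400))
yerr = 0.05 * np.ones_like(t)
y = np.sin(2 * np.pi * t / 7.5) + yerr * rng.standard_normal(t.size)

# One stochastically driven, damped harmonic oscillator term
kernel = terms.SHOTerm(S0=1.0, w0=2 * np.pi / 7.5, Q=10.0)
gp = celerite2.GaussianProcess(kernel, mean=0.0)
gp.compute(t, yerr=yerr)                 # factorization that scales linearly with N
print("log-likelihood:", gp.log_likelihood(y))
```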
Towards Remotely Sensed Composite Global Drought Risk Modelling
NASA Astrophysics Data System (ADS)
Dercas, Nicholas; Dalezios, Nicolas
2015-04-01
Drought is a multi-faceted issue and requires a multi-faceted assessment. Droughts may have their origin in precipitation deficits, which, in sequence and depending on the time and space scales considered, may impact soil moisture, plant wilting, stream flow, wildfire, groundwater levels, famine and other social conditions. There is a need to monitor drought even at a global scale. Key variables for monitoring drought include climate data, soil moisture, stream flow, ground water, reservoir and lake levels, snow pack, short-, medium- and long-range forecasts, vegetation health and fire danger. However, there is no single definition of drought, and there are different drought indicators and indices even for each drought type. There are already four operational global drought risk monitoring systems, namely the U.S. Drought Monitor, the European Drought Observatory (EDO), and the African and Australian systems. These systems require further research to improve the level of accuracy and the time and space scales, to consider all types of drought, and eventually to achieve operational efficiency. This paper attempts to contribute to the above mentioned objectives. Based on a similar general methodology, the multi-indicator approach is considered. This has resulted from previous research in the Mediterranean region, an agriculturally vulnerable region, using several drought indices separately, namely RDI and VHI. The proposed scheme attempts to consider different space scaling based on agroclimatic zoning through remotely sensed techniques and several indices. Needless to say, the agroclimatic potential of agricultural areas has to be assessed in order to achieve sustainable and efficient use of natural resources in combination with production maximization. Similarly, the time scale is also considered by addressing drought-related impacts affected by precipitation deficits on time scales ranging from a few days to a few months, such as non-irrigated agriculture, topsoil moisture, wildfire danger, range and pasture conditions and unregulated stream flows. Keywords: Remote sensing; Composite Drought Indicators; Global Drought Risk Monitoring.
Engineering-scale experiments of solar photocatalytic oxidation of trichloroethylene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pacheco, J.; Prairie, M.; Evans, L.
1990-01-01
A photocatalytic process is being developed to destroy organic contaminants in water. Tests with a common water pollutant, trichloroethylene (TCE), were conducted at the Solar Thermal Test Facility at Sandia with trough systems. Tests at this scale provide verification of laboratory studies and allow examination of design and operation issues that only arise in experiments on a realistic scale. The catalyst, titanium dioxide (TiO2), is a harmless material found in paint, cosmetics and even toothpaste. We examined the effect of initial contaminant concentration and the effect of hydrogen peroxide on the photocatalytic decomposition of trichloroethylene (TCE). An aqueous solution of 5000 parts per billion (ppB) TCE with 0.1 weight percent suspended titanium dioxide catalyst required approximately 4.2 minutes of exposure to destroy the TCE to a detection limit of 5 ppB. For a 300 ppB TCE solution, the time required was only 2.5 minutes to reach the same level of destruction. Adding 250 parts per million (ppM) of hydrogen peroxide reduced the time required by about 1 minute. A two-parameter Langmuir-Hinshelwood model was able to describe the data. A simple flow apparatus was built to test four fixed catalyst supports, to measure their pressure drop, and to assess their ability to withstand flow conditions typical of a full-sized system. In this paper, we summarize the engineering-scale testing and results. 16 refs., 5 figs.
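For readers who want to see how a two-parameter Langmuir-Hinshelwood law translates into an exposure time, the integrated form of -dC/dt = kKC/(1+KC) gives t = ln(C0/C)/(kK) + (C0-C)/k. The sketch below evaluates it with purely illustrative rate parameters (not the values fitted in the study, so the printed times do not reproduce the reported 4.2 and 2.5 minutes).

```python
import numpy as np

def lh_time_to_reach(c0, c, k, K):
    """Time required to destroy a contaminant from c0 down to c under the
    two-parameter Langmuir-Hinshelwood rate law  -dC/dt = k*K*C/(1 + K*C)."""
    return np.log(c0 / c) / (k * K) + (c0 - c) / k

# Hypothetical parameter values chosen only to illustrate the calculation
k = 1200.0      # ppB per minute (zero-order limiting rate)
K = 0.002       # 1/ppB (adsorption-like constant)
print(lh_time_to_reach(5000.0, 5.0, k, K))   # ~7.0 minutes with these illustrative values
print(lh_time_to_reach(300.0, 5.0, k, K))    # ~2.0 minutes
```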
A fast, parallel algorithm to solve the basic fluvial erosion/transport equations
NASA Astrophysics Data System (ADS)
Braun, J.
2012-04-01
Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation, which is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and fully parallelizable (the computational cost decreases in direct inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 x 10,000 nodes) while keeping the computational cost reasonable (order 1 sec per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
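The abstract gives no code, but the ordered-node, implicit strategy that makes such an O(n) solver possible can be sketched as below for the linear (slope-exponent one) stream power law: nodes are visited from base level upstream, so each node's receiver has already been updated when the node itself is solved. The array names, tiny test network, and parameter values are illustrative only.

```python
import numpy as np

def implicit_spl_step(h, stack, receiver, area, dx, k=1e-5, m=0.5, dt=1000.0):
    """One implicit time step of the stream power law  dh/dt = -K A^m (dh/dx),
    solved node by node in base-level-to-upstream ('stack') order, O(n) overall.

    h        : node elevations
    stack    : node indices ordered so each receiver appears before its donors
    receiver : receiver[i] is the downstream neighbour of node i (itself at base level)
    area     : drainage area of each node
    dx       : distance to the receiver
    """
    h = h.copy()
    for i in stack:
        r = receiver[i]
        if r == i:                      # base-level node: elevation held fixed
            continue
        f = k * area[i] ** m * dt / dx[i]
        # the receiver elevation is already updated, so the scheme stays implicit
        h[i] = (h[i] + f * h[r]) / (1.0 + f)
    return h

# Tiny hypothetical 4-node chain: node 0 is base level, 3 -> 2 -> 1 -> 0
h = np.array([0.0, 10.0, 20.0, 30.0])
stack = [0, 1, 2, 3]
receiver = np.array([0, 0, 1, 2])
area = np.array([4.0, 3.0, 2.0, 1.0]) * 1e6     # m^2
dx = np.array([1.0, 1e3, 1e3, 1e3])             # m
print(implicit_spl_step(h, stack, receiver, area, dx))
```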
Propulsion engineering study for small-scale Mars missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, J.
1995-09-12
Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a roughly 1-minute time scale, but at a high acceleration. High thrust/weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small high performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale also are feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; earth launch is the only previously-solved propulsion problem requiring similar or greater performance.
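A back-of-the-envelope check of what a landing Δv budget implies for propellant mass follows directly from the Tsiolkovsky rocket equation. The numbers below (lander mass, Δv, specific impulse) are hypothetical and only illustrate the calculation; they are not taken from the study.

```python
import numpy as np

def propellant_mass(m0, delta_v, isp, g0=9.81):
    """Propellant mass needed for a velocity change delta_v (Tsiolkovsky rocket equation)."""
    return m0 * (1.0 - np.exp(-delta_v / (isp * g0)))

# Hypothetical numbers for a small Mars terminal-descent stage
m0 = 100.0          # kg wet mass
delta_v = 600.0     # m/s, illustrative terminal-landing budget
isp = 230.0         # s, representative of a small pressure-fed liquid system
print(f"propellant required: {propellant_mass(m0, delta_v, isp):.1f} kg")   # ~23 kg
```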
NASA Astrophysics Data System (ADS)
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage of the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to be between 1 and 100 GHz. Such a constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology only to the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm^2 for the 10 and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.
Understanding relationships among ecosystem services across spatial scales and over time
NASA Astrophysics Data System (ADS)
Qiu, Jiangxiao; Carpenter, Stephen R.; Booth, Eric G.; Motew, Melissa; Zipper, Samuel C.; Kucharik, Christopher J.; Loheide, Steven P., II; Turner, Monica G.
2018-05-01
Sustaining ecosystem services (ES), mitigating their tradeoffs and avoiding unfavorable future trajectories are pressing social-environmental challenges that require enhanced understanding of their relationships across scales. Current knowledge of ES relationships is often constrained to one spatial scale or one snapshot in time. In this research, we integrated biophysical modeling with future scenarios to examine changes in relationships among eight ES indicators from 2001–2070 across three spatial scales—grid cell, subwatershed, and watershed. We focused on the Yahara Watershed (Wisconsin) in the Midwestern United States—an exemplar for many urbanizing agricultural landscapes. Relationships among ES indicators changed over time; some relationships exhibited high interannual variations (e.g. drainage vs. food production, nitrate leaching vs. net ecosystem exchange) and even reversed signs over time (e.g. perennial grass production vs. phosphorus yield). Robust patterns were detected for relationships among some regulating services (e.g. soil retention vs. water quality) across three spatial scales, but other relationships lacked simple scaling rules. This was especially true for relationships of food production vs. water quality, and drainage vs. number of days with runoff >10 mm, which differed substantially across spatial scales. Our results also showed that local tradeoffs between food production and water quality do not necessarily scale up, so reducing local tradeoffs may be insufficient to mitigate such tradeoffs at the watershed scale. We further synthesized these cross-scale patterns into a typology of factors that could drive changes in ES relationships across scales: (1) effects of biophysical connections, (2) effects of dominant drivers, (3) combined effects of biophysical linkages and dominant drivers, and (4) artificial scale effects, and concluded with management implications. Our study highlights the importance of taking a dynamic perspective and accounting for spatial scales in monitoring and management to sustain future ES.
NASA Astrophysics Data System (ADS)
Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.
2017-12-01
Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by particles' memory of velocity that persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies but nonetheless they have drawbacks. In classical CTRWs each particle makes a spatial transition for which it adopts a random transit time. Consecutive transit times are drawn independently from each other, and this is only valid for sufficiently large spatial transitions. If we want to apply a finer numerical resolution than that, we have to implement memory into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such a transition matrix requires transport data from a fine-scale transport simulation, and the obtained transition matrix is solely valid for this single Péclet regime. The CTRW method we propose overcomes all three drawbacks: 1) We simulate transport without restrictions on transition length. 2) We parameterize our CTRW without requiring a transport simulation. 3) Our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF-axis of velocities with reflection at 0 and 1. The Lévy flight is parametrized only by the correlation length. We explicitly model memory including the evolution and decay of non-Fickianity, so it extends from local via pre-asymptotic to asymptotic scales.
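A sketch of the proposed sampling idea - a Lévy flight on the CDF axis of the velocity distribution with reflection at 0 and 1 - is given below. The mapping of step size to the correlation length (scale = dx/corr_len), the stability index, and the lognormal pore-velocity distribution are assumptions made only for illustration, not the authors' calibration.

```python
import numpy as np
from scipy.stats import levy_stable, lognorm

def correlated_velocity_series(inv_cdf, n_steps, corr_len, dx, alpha=1.5, seed=0):
    """Generate one particle's velocity sequence by a Levy flight on the CDF axis
    of the pore-scale velocity distribution, with reflection at 0 and 1.

    inv_cdf  : inverse CDF (quantile function) of the pore-scale speed distribution
    corr_len : correlation length controlling the step size on the CDF axis (assumed)
    dx       : spatial transition length
    """
    rng = np.random.default_rng(seed)
    u = np.empty(n_steps)
    u[0] = rng.uniform()
    scale = dx / corr_len                   # assumed scaling of the CDF-axis step size
    steps = levy_stable.rvs(alpha, 0.0, scale=scale, size=n_steps - 1, random_state=rng)
    for k in range(1, n_steps):
        v = u[k - 1] + steps[k - 1]
        v = np.remainder(v, 2.0)            # fold, then reflect back into [0, 1]
        u[k] = 2.0 - v if v > 1.0 else v
    return inv_cdf(u)

# Hypothetical lognormal pore velocity distribution
vel_dist = lognorm(s=1.0, scale=1e-4)       # m/s
v_series = correlated_velocity_series(vel_dist.ppf, n_steps=1000, corr_len=0.01, dx=0.001)
transit_times = 0.001 / v_series            # correlated transit times over dx = 1 mm
print(transit_times[:5])
```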
Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Massie, Michael J.; Morris, A. Terry
2010-01-01
Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.
NASA Astrophysics Data System (ADS)
Coppola, A.; Comegna, V.; de Simone, L.
2009-04-01
Non-point source (NPS) pollution in the vadose zone is a global environmental problem. The knowledge and information required to address the problem of NPS pollutants in the vadose zone cross several technological and sub-disciplinary lines: spatial statistics, geographic information systems (GIS), hydrology, soil science, and remote sensing. The main issues encountered in NPS groundwater vulnerability assessment, as discussed by Stewart [2001], are the large spatial scales, the complex processes that govern fluid flow and solute transport in the unsaturated zone, the absence of unsaturated zone measurements of diffuse pesticide concentrations in 3-D regional-scale space (as these are difficult, time consuming, and prohibitively costly to obtain), and the computational effort required for solving the nonlinear equations for physically-based modeling of regional scale, heterogeneous applications. As an alternative solution, an approach based on the coupling of transfer function and GIS modeling is presented here that: a) is capable of solute concentration estimation at a depth of interest within a known error confidence class; b) uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application; c) can dynamically support decision making through thematic mapping and 3D scenarios. This result was pursued through 1) the design and building of a spatial database containing environmental and physical information regarding the study area, 2) the development of the transfer function procedure for layered soils, and 3) the final representation of results through digital mapping and 3D visualization. On one side, GIS modeled environmental data in order to characterize, at the regional scale, soil profile texture and depth, land use, climatic data, water table depth, and potential evapotranspiration; on the other side, such information was implemented in the up-scaling procedure of Jury's TFM, resulting in a set of texture-based travel time probability density functions for layered soils, each describing a characteristic leaching behavior for soil profiles with similar hydraulic properties. Such behavior, in terms of solute travel time to the water table, was then imported back into GIS, and finally the estimated groundwater vulnerability for each soil unit was represented in a map as well as visualized in 3D.
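The travel-time idea behind transfer-function modeling of layered soils can be illustrated with a small Monte Carlo sketch: if each layer's travel time is taken as lognormally distributed (an assumption typical of transfer-function models, not the exact up-scaled procedure described above) and layers are treated as independent, the probability of reaching the water table within a planning horizon is the fraction of summed samples below that horizon. All parameters are hypothetical.

```python
import numpy as np

def leaching_probability(layer_mu, layer_sigma, horizon_years, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the probability that a surface-applied solute reaches
    the water table within a given horizon, when each layer's travel time is
    lognormally distributed and the total travel time is the sum over layers."""
    rng = np.random.default_rng(seed)
    totals = np.zeros(n_samples)
    for mu, sigma in zip(layer_mu, layer_sigma):
        totals += rng.lognormal(mean=mu, sigma=sigma, size=n_samples)
    return np.mean(totals <= horizon_years)

# Hypothetical three-layer profile (parameters of ln travel-time, in years)
mu = [0.5, 1.2, 1.8]
sigma = [0.6, 0.5, 0.7]
print(leaching_probability(mu, sigma, horizon_years=15.0))
```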
NASA Technical Reports Server (NTRS)
Rubinstein, Robert
1999-01-01
In rotating turbulence, stably stratified turbulence, and in rotating stratified turbulence, heuristic arguments concerning the turbulent time scale suggest that the inertial range energy spectrum scales as k^(-2). From the viewpoint of weak turbulence theory, there are three possibilities which might invalidate these arguments: four-wave interactions could dominate three-wave interactions leading to a modified inertial range energy balance, double resonances could alter the time scale, and the energy flux integral might not converge. It is shown that although double resonances exist in all of these problems, they do not influence overall energy transfer. However, the resonance conditions cause the flux integral for rotating turbulence to diverge logarithmically when evaluated for a k^(-2) energy spectrum; therefore, this spectrum requires logarithmic corrections. Finally, the role of four-wave interactions is briefly discussed.
Monitoring vegetation phenology using MODIS
Zhang, Xiaoyang; Friedl, Mark A.; Schaaf, Crystal B.; Strahler, Alan H.; Hodges, John C.F.; Gao, Feng; Reed, Bradley C.; Huete, Alfredo
2003-01-01
Accurate measurements of regional to global scale vegetation dynamics (phenology) are required to improve models and understanding of inter-annual variability in terrestrial ecosystem carbon exchange and climate–biosphere interactions. Since the mid-1980s, satellite data have been used to study these processes. In this paper, a new methodology to monitor global vegetation phenology from time series of satellite data is presented. The method uses a series of piecewise logistic functions, which are fit to remotely sensed vegetation index (VI) data, to represent intra-annual vegetation dynamics. Using this approach, transition dates for vegetation activity within annual time series of VI data can be determined from satellite data. The method allows vegetation dynamics to be monitored at large scales in a fashion that is ecologically meaningful and does not require pre-smoothing of data or the use of user-defined thresholds. Preliminary results based on an annual time series of Moderate Resolution Imaging Spectroradiometer (MODIS) data for the northeastern United States demonstrate that the method is able to monitor vegetation phenology with good success.
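A minimal sketch of the core fitting step, assuming the commonly quoted logistic form VI(t) = c / (1 + exp(a + b t)) + d for a single green-up segment; the synthetic data, initial guesses, and the use of the inflection point as a transition-date proxy are illustrative assumptions, not the paper's full curvature-based procedure.

    # Fit one logistic segment to a synthetic vegetation-index (VI) series.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic_vi(t, a, b, c, d):
        # VI(t) = c / (1 + exp(a + b*t)) + d  for one growth/senescence segment
        return c / (1.0 + np.exp(a + b * t)) + d

    doy = np.arange(1, 181, 8.0)                      # day of year, 8-day composites
    vi = logistic_vi(doy, 12.0, -0.10, 0.45, 0.15)    # synthetic "observations"
    vi += np.random.default_rng(0).normal(0, 0.01, doy.size)

    popt, _ = curve_fit(logistic_vi, doy, vi, p0=(10.0, -0.1, 0.4, 0.1), maxfev=10000)
    a, b, c, d = popt
    print("mid-greenup (inflection) around DOY", -a / b)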
Wave induced density modification in RF sheaths and close to wave launchers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Eester, D., E-mail: d.van.eester@fz-juelich.de; Crombé, K.; Department of Applied Physics, Ghent University, Ghent
2015-12-10
With the return to full metal walls - a necessary step towards viable fusion machines - and due to the high power densities of current-day ICRH (Ion Cyclotron Resonance Heating) or RF (radio frequency) antennas, there is ample renewed interest in exploring the reasons for wave-induced sputtering and the formation of hot spots. Moreover, there is experimental evidence on various machines that RF waves influence the density profile close to the wave launchers, so that waves indirectly influence their own coupling efficiency. The present study returns to first principles and describes the wave-particle interaction using a two-time-scale model involving the equation of motion, the continuity equation and the wave equation on each of the time scales. Through the changing density pattern, the fast time scale dynamics is affected by the slow time scale events. In turn, the slow time scale density and flows are modified by the presence of the RF waves through quasilinear terms. Although finite zero-order flows are identified, the usual cold plasma dielectric tensor - ignoring such flows - is adopted as a first approximation to describe the wave response to the RF driver. The resulting set of equations is composed of linear and nonlinear equations and is tackled in 1D in the present paper. Whereas the former can be solved using standard numerical techniques, the latter require special handling. At the price of multiple iterations, a simple 'derivative switch-on' procedure allows the nonlinear problem to be reformulated as a sequence of linear problems. Analytical expressions allow a first crude assessment - revealing that the ponderomotive potential plays a role similar to that of the electrostatic potential arising from charge separation - but numerical implementation is required to get a feel for the full dynamics. A few tentative examples are provided to illustrate the phenomena involved.
Development of a low energy electron spectrometer for SCOPE
NASA Astrophysics Data System (ADS)
Tominaga, Yuu; Saito, Yoshifumi; Yokota, Shoichiro
We are developing a new low-energy charged particle analyzer for the future satellite mission SCOPE (cross Scale COupling in the Plasma universE). The main purpose of the mission is to understand the cross-scale coupling between macroscopic MHD-scale phenomena and microscopic ion- and electron-scale phenomena. In order to understand the dynamics of plasma at small scales, we need to observe the plasma with an analyzer that has high time resolution. For ion-scale phenomena, the time resolution must be comparable to the ion cyclotron period (~10 s) in Earth's magnetosphere; for electron-scale phenomena, however, it must be comparable to the electron cyclotron period (~1 ms). The GEOTAIL satellite that observes Earth's magnetosphere carries an analyzer whose time resolution is 12 s, so the satellite can observe ion-scale phenomena. In the SCOPE mission, we will go further to observe electron-scale phenomena, so we need analyzers with a time resolution of at least several milliseconds. Besides, we need to make the analyzer as small as possible because of the volume and weight restrictions of the satellite: the diameter of the top-hat analyzer must be smaller than 20 cm. In this study, we are developing an electrostatic analyzer that meets such requirements using numerical simulations. The instrument is a spherical/toroidal top-hat electrostatic analyzer with three nested spherical/toroidal deflectors. Using these deflectors, the analyzer measures charged particles simultaneously in two different energy ranges, so its time resolution can be doubled. With the analyzer, we will measure energies from 10 eV to 22.5 keV. In order to obtain three-dimensional distribution functions of low-energy particles, the analyzer must have a 4-pi sr field of view. Conventional electrostatic analyzers use the spacecraft spin to obtain a 4-pi field of view, so their time resolution depends on the spin frequency of the spacecraft; spin alone cannot secure a time resolution of several milliseconds. In the SCOPE mission, we set 8 pairs of two nested electrostatic analyzers on each side of the spacecraft, which together secure a 4-pi field of view, so the time resolution of the analyzer does not depend on the spacecraft spin. Given that the sampling time of the analyzer is 0.5 ms, the time resolution of the analyzer can be 8 ms. In order to secure a time resolution as high as 10 ms, the geometric factor of the analyzer has to be as high as 8x10^-3 (cm^2 sr eV/eV/22.5deg). A higher geometric factor requires a bigger instrument, yet we have to reduce the volume and weight of the instrument to accommodate it on the satellite. Under these restrictions, we have realized an analyzer with geometric factors of 7.5x10^-3 (cm^2 sr eV/eV/22.5deg) (inner sphere) and 10.0x10^-3 (cm^2 sr eV/eV/22.5deg) (outer sphere) with a diameter of 17.4 cm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ping; Yuan, Renliang; Zuo, Jian Min
2017-02-01
Elemental mapping at the atomic scale by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) provides a powerful real-space approach to the chemical characterization of crystal structures. However, applications of this powerful technique have been limited by inefficient X-ray emission and collection, which require long acquisition times. Recently, using a lattice-vector translation method, we have shown that rapid atomic-scale elemental mapping using STEM-EDS can be achieved. This method provides atomic-scale elemental maps averaged over crystal areas of a few tens of nm^2 with an acquisition time of ~2 s or less. Here we report the details of this method and, in particular, investigate the experimental conditions necessary for achieving it. It is shown that, in addition to the usual conditions required for atomic-scale imaging, a thin specimen is essential for the technique to be successful. Phenomenological modeling shows that the localization of X-ray signals to atomic columns is a key reason. The effect of specimen thickness on the signal delocalization is studied by multislice image simulations. The results show that X-ray localization can be achieved by choosing a thin specimen, and a thickness of less than about 22 nm is preferred for SrTiO3 in [001] projection for 200 keV electrons.
Information transfer across the scales of climate data variability
NASA Astrophysics Data System (ADS)
Palus, Milan; Jajcay, Nikola; Hartman, David; Hlinka, Jaroslav
2015-04-01
The multitude of scales characteristic of climate system variability requires innovative approaches to the analysis of instrumental time series. We present a methodology which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of limited bandwidth, described using their instantaneous phases and amplitudes. Their statistical associations are then tested in order to search for interactions across time scales. In particular, an information-theoretic formulation of the generalized, nonlinear Granger causality is applied together with surrogate data testing methods [1]. The method [2] uncovers causal influence (in the Granger sense) and information transfer from large-scale modes of climate variability, with characteristic time scales from years to almost a decade, to regional temperature variability on short time scales. In analyses of daily mean surface air temperature from various European locations, an information transfer from larger to smaller scales has been observed as the influence of the phase of slow oscillatory phenomena with periods around 7-8 years on the amplitudes of variability characterized by smaller temporal scales, from a few months to annual and quasi-biennial scales [3]. In sea surface temperature data from the tropical Pacific area, an influence of quasi-oscillatory phenomena with periods around 4-6 years on the variability on and near the annual scale has been observed. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [2] M. Palus, Entropy 16(10), 5263-5289 (2014) [3] M. Palus, Phys. Rev. Lett. 112, 078702 (2014)
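A heavily simplified sketch of the phase-amplitude, cross-scale idea follows. It substitutes band-pass filters plus the Hilbert transform for the wavelet decomposition, uses white noise as a placeholder for a daily temperature record, and reports a plain (unconditioned) mutual information by histogram binning; the surrogate testing and conditioning needed for a Granger-type causal statement are omitted, and all band edges are illustrative.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    fs = 1.0                                   # one sample per day
    rng = np.random.default_rng(1)
    x = rng.normal(size=20 * 365)              # placeholder daily series (20 yr)

    slow = bandpass(x, 1 / (9 * 365), 1 / (6 * 365), fs)   # ~7-8 yr band
    fast = bandpass(x, 1 / 365, 1 / 90, fs)                # intra-annual band

    phase_slow = np.angle(hilbert(slow))        # instantaneous phase of slow mode
    amp_fast = np.abs(hilbert(fast))            # instantaneous amplitude of fast mode

    # naive mutual information from a 2-D histogram (nats)
    h, _, _ = np.histogram2d(phase_slow, amp_fast, bins=16)
    p = h / h.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    mi = float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
    print(f"MI(phase_slow; amp_fast) ~ {mi:.4f} nats")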
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
The real-time control of planetary rovers through behavior modification
NASA Technical Reports Server (NTRS)
Miller, David P.
1991-01-01
It is not yet clear what type, and how much, intelligence is needed for a planetary rover to function semi-autonomously on a planetary surface. Current designs assume an advanced AI system that maintains a detailed map of its journeys and the surroundings, and that carefully calculates and tests every move in advance. To achieve these abilities, and because of the limitations of space-qualified electronics, the supporting rover is quite sizable, massing a large fraction of a ton, and requiring technology advances in everything from power to ground operations. An alternative approach is to use a behavior-driven control scheme. Recent research has shown that many complex tasks may be achieved by programming a robot with a set of behaviors and activating or deactivating a subset of those behaviors as required by the specific situation in which the robot finds itself. Behavior control requires much less computation than traditional AI planning techniques. The reduced computation requirements allow the entire rover to be scaled down as appropriate (only down-link communications and payload do not scale under these circumstances). The missions that can be handled by the real-time control and operation of a set of small, semi-autonomous, interacting, behavior-controlled planetary rovers are discussed.
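To make the contrast with map-and-plan architectures concrete, the following toy sketch shows the priority-ordered activate/deactivate pattern the abstract describes. It is purely illustrative: the behaviors, thresholds, and sensor fields are invented for this example and are not taken from any rover design.

    # Behavior-based control: the highest-priority active behavior drives the
    # actuators; no world map or look-ahead planning is involved.
    from dataclasses import dataclass

    @dataclass
    class Sensors:
        obstacle_ahead: bool
        tilt_deg: float
        goal_bearing_deg: float

    def avoid_obstacle(s):      # highest priority
        return ("turn", 45.0) if s.obstacle_ahead else None

    def limit_tilt(s):
        return ("stop", 0.0) if s.tilt_deg > 20.0 else None

    def seek_goal(s):           # default behavior
        return ("drive", s.goal_bearing_deg)

    BEHAVIORS = [avoid_obstacle, limit_tilt, seek_goal]   # priority order

    def control_step(s: Sensors):
        for behavior in BEHAVIORS:
            command = behavior(s)
            if command is not None:
                return command
        return ("idle", 0.0)

    print(control_step(Sensors(obstacle_ahead=False, tilt_deg=5.0, goal_bearing_deg=12.0)))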
Time-dependent Schrödinger equation for molecular core-hole dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picón, A.
2017-02-01
X-ray spectroscopy is an important tool for the investigation of matter. X rays primarily interact with inner-shell electrons, creating core (inner-shell) holes that will decay on the time scale of attoseconds to a few femtoseconds through electron relaxations involving the emission of a photon or an electron. Furthermore, the advent of femtosecond x-ray pulses expands x-ray spectroscopy to the time domain and will eventually allow the control of core-hole population on time scales comparable to core-vacancy lifetimes. For both cases, a theoretical approach that accounts for the x-ray interaction while the electron relaxations occur is required. We describe a time-dependent framework, based on solving the time-dependent Schrödinger equation, that is suitable for describing the induced electron and nuclear dynamics.
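For reference, the generic equation such a framework propagates is the time-dependent Schrödinger equation; the split of the Hamiltonian into a field-free part and an x-ray interaction term below is a standard schematic form, not the specific Hamiltonian used in this work:

    i\hbar\,\frac{\partial}{\partial t}\,\Psi(t) \;=\; \hat{H}(t)\,\Psi(t),
    \qquad
    \hat{H}(t) \;=\; \hat{H}_0 + \hat{V}_{\text{x-ray}}(t)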
International two-way satellite time transfers using INTELSAT space segment and small Earth stations
NASA Technical Reports Server (NTRS)
Veenstra, Lester B.
1990-01-01
The satellite operated by the International Telecommunications Satellite Organization (INTELSAT) provides new and unique capabilities for the coordination of international time scales on a worldwide basis using the two-way technique. A network of coordinated clocks using small earth stations co-located with the time scales is possible. Antennas as small as 1.8 m at K-band and 3 m at C-band, transmitting powers of less than 1 W, will provide signals with time jitter of less than 1 ns using existing spread-spectrum modems. One-way time broadcasting is also possible under the INTELSAT INTELNET system, possibly using existing international data distribution (press and financial) systems that already operate spread-spectrum links. The technical details of the satellite and the requirements on satellite earth stations are given. The resources required for a regular operational international time transfer service are analyzed with respect to the existing international digital service offerings of the INTELSAT Business Service (IBS) and INTELNET. Coverage areas, typical link budgets, and a summary of previous domestic and international work using this technique are provided. Administrative procedures for gaining access to the space segment are outlined. Contact information for local INTELSAT signatories is listed.
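A hedged sketch of the basic two-way measurement combination, with illustrative numbers: each station records the interval between its own clock and the signal received from the other station; to first order the symmetric path delay cancels and the clock offset is half the difference of the two intervals. Sagnac and equipment-delay corrections, which a real service must apply, are ignored here.

    # Two-way time transfer: path delay cancels, clock offset remains.
    path_delay_s = 0.270          # geostationary up+down path, roughly 270 ms
    clock_offset_s = 35e-9        # assumed true offset of clock B relative to clock A

    interval_at_A = path_delay_s + clock_offset_s   # A receives B's signal
    interval_at_B = path_delay_s - clock_offset_s   # B receives A's signal

    estimated_offset = 0.5 * (interval_at_A - interval_at_B)
    print(f"estimated B-A offset: {estimated_offset * 1e9:.1f} ns")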
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. The gaseous, non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant-turbulent-Schmidt-number model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
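The role the turbulent Schmidt number plays in such eddy-viscosity models can be summarized by the standard gradient-diffusion closure for the turbulent scalar flux (a textbook relation quoted here for context, not an expression taken from the thesis); the adaptive approach amounts to letting Sc_t vary in space instead of fixing it:

    \overline{u_i' c'} \;=\; -\,\frac{\nu_t}{Sc_t}\,\frac{\partial \bar{c}}{\partial x_i},
    \qquad
    D_t \;=\; \frac{\nu_t}{Sc_t}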
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
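The "replicated reconstruction object" idea can be illustrated loosely as follows. This is not Trace's code or API: it uses plain unfiltered back-projection instead of an iterative algorithm, Python multiprocessing instead of MPI, and invented sizes and data. The point is only that each worker accumulates into a private copy of the image, so workers never contend for the same memory, and the copies are summed in a final reduction.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    N = 128                                   # image is N x N (illustrative)
    angles = np.linspace(0.0, np.pi, 180, endpoint=False)
    sinogram = np.ones((angles.size, N))      # placeholder data: one row per angle

    def backproject(chunk):
        start, stop = chunk
        replica = np.zeros((N, N))            # this worker's private reconstruction object
        xs = np.arange(N) - N / 2
        X, Y = np.meshgrid(xs, xs)
        for i in range(start, stop):
            theta = angles[i]
            # detector coordinate of every pixel for this view
            s = X * np.cos(theta) + Y * np.sin(theta) + N / 2
            idx = np.clip(s.astype(int), 0, N - 1)
            replica += sinogram[i][idx]
        return replica

    if __name__ == "__main__":
        chunks = [(k, min(k + 45, angles.size)) for k in range(0, angles.size, 45)]
        with ProcessPoolExecutor() as pool:
            image = sum(pool.map(backproject, chunks))   # reduce the replicas
        print(image.shape)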
Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.
Demchak, Barry; Krüger, Ingolf
2012-07-01
The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large-scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
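A hedged sketch of the general idea behind runtime policy injection follows; it is not the actual PDD framework or its API. Workflows expose named decision points, and stakeholder policies can be registered against them after deployment without touching the workflow code. All names and the conjunctive composition rule are assumptions made for illustration.

    from typing import Callable, Dict, List

    _policies: Dict[str, List[Callable[[dict], bool]]] = {}

    def register_policy(decision_point: str, rule: Callable[[dict], bool]) -> None:
        _policies.setdefault(decision_point, []).append(rule)

    def decide(decision_point: str, context: dict) -> bool:
        # every registered policy must allow the action (conjunctive composition)
        return all(rule(context) for rule in _policies.get(decision_point, []))

    def download_dataset(user: dict, dataset: str) -> str:
        if not decide("dataset.download", {"user": user, "dataset": dataset}):
            return "denied"
        return f"{dataset} sent to {user['name']}"

    # a policy added at "runtime", without modifying the workflow above
    register_policy("dataset.download", lambda ctx: ctx["user"]["role"] == "researcher")

    print(download_dataset({"name": "ada", "role": "researcher"}, "survey-42"))
    print(download_dataset({"name": "bob", "role": "guest"}, "survey-42"))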
GEWEX America Prediction Project (GAPP) Science and Implementation Plan
NASA Technical Reports Server (NTRS)
2004-01-01
The purpose of this Science and Implementation Plan is to describe GAPP science objectives and the activities required to meet these objectives, both specifically for the near-term and more generally for the longer-term. The GEWEX Americas Prediction Project (GAPP) is part of the Global Energy and Water Cycle Experiment (GEWEX) initiative that is aimed at observing, understanding and modeling the hydrological cycle and energy fluxes at various time and spatial scales. The mission of GAPP is to demonstrate skill in predicting changes in water resources over intraseasonal-to-interannual time scales, as an integral part of the climate system.
NASA Astrophysics Data System (ADS)
Antonenko, I.; Osinski, G. R.; Battler, M.; Beauchamp, M.; Cupelli, L.; Chanou, A.; Francis, R.; Mader, M. M.; Marion, C.; McCullough, E.; Pickersgill, A. E.; Preston, L. J.; Shankar, B.; Unrau, T.; Veillette, D.
2013-07-01
Remote robotic data provides different information than that obtained from immersion in the field. This significantly affects the geological situational awareness experienced by members of a mission control science team. In order to optimize science return from planetary robotic missions, these limitations must be understood and their effects mitigated to fully leverage the field experience of scientists at mission control. Results from a 13-day analogue deployment at the Mistastin Lake impact structure in Labrador, Canada suggest that scale, relief, geological detail, and time are intertwined issues that affect the mission control science team's effectiveness in interpreting the geology of an area. These issues are evaluated and several mitigation options are suggested. Scale was found to be difficult to interpret without the reference of known objects, even when numerical scale data were available. For this reason, embedding intuitive scale-indicating features into image data is recommended. Since relief is not conveyed in 2D images, both 3D data and observations from multiple angles are required. Furthermore, the 3D data must be observed in animation or as anaglyphs, since without such assistance much of the relief information in 3D data is not communicated. Geological detail may also be missed due to the time required to collect, analyze, and request data. We also suggest that these issues can be addressed, in part, by an improved understanding of the operational time costs and benefits of scientific data collection. Robotic activities operate on inherently slow time scales. This fact needs to be embraced and accommodated. Instead of focusing too quickly on the details of a target of interest, thereby potentially minimizing science return, time should be allocated at first to broader data collection at that target, including preliminary surveys, multiple observations from various vantage points, and a progressively smaller scale of focus. This operational model more closely follows techniques employed by field geologists and is fundamental to the geologic interpretation of an area. Even so, an operational time cost/benefit analysis should be carefully considered in each situation, to determine when such comprehensive data collection would maximize the science return. Finally, it should be recognized that analogue deployments cannot faithfully model the time scales of robotic planetary missions. Analogue missions are limited by the difficulty and expense of fieldwork. Thus, analogue deployments should focus on smaller aspects of robotic missions and test components in a modular way (e.g., dropping communications constraints, limiting mission scope, focusing on a specific problem, spreading the mission over several field seasons, etc.).
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
Petigny, Loïc; Périno, Sandrine; Minuti, Matteo; Visinoni, Francesco; Wajsman, Joël; Chemat, Farid
2014-01-01
Microwave extraction and separation has been used to increase the concentration of the extract compared to the conventional method with the same solid/liquid ratio, reducing extraction time while simultaneously separating volatile organic compounds (VOC) from non-volatile organic compounds (NVOC) of boldo leaves. As a preliminary study, a response surface method was used to optimize the extraction of soluble material and the separation of VOC from the plant at laboratory scale. The results from the statistical analysis revealed that the optimized conditions were: microwave power 200 W, extraction time 56 min and a solid/liquid ratio of 7.5% of plants in water. The lab-scale optimized microwave method is compared to conventional distillation, and requires a power/mass ratio of 0.4 W/g of water engaged. This power/mass ratio is kept in order to upscale from lab to pilot plant. PMID:24776762
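A quick check of the constant power-to-mass scale-up rule quoted above (0.4 W per gram of water engaged); the pilot-scale water mass below is an illustrative assumption, not a figure from the study.

    power_per_mass = 0.4                              # W/g, kept constant across scales
    lab_power_w = 200.0
    lab_water_g = lab_power_w / power_per_mass        # 500 g of water at lab scale
    pilot_water_g = 50_000.0                          # hypothetical pilot batch (50 kg)
    pilot_power_w = power_per_mass * pilot_water_g    # 20 kW needed to keep the ratio
    print(lab_water_g, pilot_power_w)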
A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.
NASA Astrophysics Data System (ADS)
Wehner, M. F.; Oliker, L.; Shalf, J.
2008-12-01
Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, cameras, etc., can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M with a power consumption of 3 MW. This is cheaper, more power efficient and available sooner than any other existing technology.
NASA Astrophysics Data System (ADS)
Takasaki, Koichi
This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).
Tidal dissipation in a viscoelastic planet
NASA Technical Reports Server (NTRS)
Ross, M.; Schubert, G.
1986-01-01
Tidal dissipation is examined using Maxwell, standard linear solid (SLS), and Kelvin-Voigt models, and viscosity parameters are derived from the models that yield the amount of dissipation previously calculated for a moon model with Q = 100 in a hypothetical orbit closer to the earth. The relevance of these models is then assessed for simulating planetary tidal responses. Viscosities of 10^14 and 10^18 Pa s for the Kelvin-Voigt and Maxwell rheologies, respectively, are needed to match the dissipation rate calculated using the Q approach with a quality factor of 100. The SLS model requires a short-time viscosity of 3 x 10^17 Pa s to match the Q = 100 dissipation rate, independent of the model's relaxation strength. Since Q = 100 is considered a representative value for the interiors of terrestrial planets, it is proposed that the derived viscosities should characterize planetary materials. However, it is shown that neither the Kelvin-Voigt nor the SLS model simulates the behavior of real planetary materials on long time scales. The Maxwell model, by contrast, behaves realistically on both long and short time scales. The inferred Maxwell viscosity, corresponding to the time scale of days, is several times smaller than the longer-time-scale (greater than or equal to 10^14 years) viscosity of the earth's mantle.
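For orientation, the Maxwell rheology referred to here is the series spring-dashpot element; its constitutive relation, relaxation (Maxwell) time, and the commonly used high-frequency link between viscosity and quality factor are standard textbook results, not expressions taken from this paper:

    \dot{\varepsilon} \;=\; \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta},
    \qquad
    \tau_M \;=\; \frac{\eta}{E},
    \qquad
    Q \;\simeq\; \omega\,\tau_M \quad (\omega\tau_M \gg 1)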
Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations
Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F.; ...
2016-03-04
Chemical imaging at the atomic scale provides a useful real-space approach to chemically investigate solid crystal structures, and has recently been demonstrated in aberration-corrected scanning transmission electron microscopy (STEM). Atomic-scale chemical imaging by STEM using energy-dispersive X-ray spectroscopy (EDS) offers easy data interpretation with a one-to-one correspondence between image and structure, but has a severe shortcoming due to the poor efficiency of X-ray generation and collection. As a result, it requires long acquisition times, typically more than a few hundred seconds, limiting its potential applications. Here we describe the development of an atomic-scale STEM-EDS chemical imaging technique that cuts the acquisition time to one or a few seconds, reducing it by more than a factor of 100. This method was demonstrated using LaAlO3 (LAO) as a model crystal. Applying this method to the study of a phase transformation induced by electron-beam radiation in a layered lithium transition-metal (TM) oxide, i.e., Li[Li0.2Ni0.2Mn0.6]O2 (LNMO), a cathode material for lithium-ion batteries, we obtained a time series of atomic-scale chemical images, showing the transformation progressing by preferential jumping of Ni atoms from the TM layers into the Li layers. The new capability offers an opportunity for temporal, atomic-scale chemical mapping of crystal structures for the investigation of materials susceptible to electron irradiation, as well as of phase transformations and dynamics at the atomic scale.
NASA Astrophysics Data System (ADS)
Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.
2017-12-01
A better understanding of dynamics in complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. The large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of the underlying mechanisms and is a key to modeling and prediction of such systems. This study introduces and applies information-theoretic diagnostics to phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework we apply here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
Changing Schools from the inside out: Small Wins in Hard Times. Third Edition
ERIC Educational Resources Information Center
Larson, Robert
2011-01-01
At any time, public schools labor under great economic, political, and social pressures that make it difficult to create large-scale, "whole school" change. But current top-down mandates require that schools close achievement gaps while teaching more problem solving, inquiry, and research skills--with fewer resources. Failure to meet test-based…
Solute-defect interactions in Al-Mg alloys from diffusive variational Gaussian calculations
NASA Astrophysics Data System (ADS)
Dontsova, E.; Rottler, J.; Sinclair, C. W.
2014-11-01
Resolving atomic-scale defect topologies and energetics with accurate atomistic interaction models provides access to the nonlinear phenomena inherent at atomic length and time scales. Coarse graining the dynamics of such simulations to look at the migration of, e.g., solute atoms, while retaining the rich atomic-scale detail required to properly describe defects, is a particular challenge. In this paper, we present an adaptation of the recently developed "diffusive molecular dynamics" model to describe the energetics and kinetics of binary alloys on diffusive time scales. The potential of the technique is illustrated by applying it to the classic problems of solute segregation to a planar boundary (stacking fault) and edge dislocation in the Al-Mg system. Our approach provides fully dynamical solutions in situations with an evolving energy landscape in a computationally efficient way, where atomistic kinetic Monte Carlo simulations are difficult or impractical to perform.
Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors
Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal
2014-01-01
Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and for scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region of interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and a divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed outperform those of other related methods. PMID:25393782
Viscous-enstrophy scaling law for Navier-Stokes reconnection
NASA Astrophysics Data System (ADS)
Kerr, Robert M.
2017-11-01
Simulations of perturbed, helical trefoil vortex knots and anti-parallel vortices find a ν-independent collapse of the temporally scaled enstrophy (√ν Z)^(-1/2), where Z is the enstrophy, between when the loops first touch at t_Γ and when reconnection ends at t_x, for viscosities ν varying by a factor of 256. Due to mathematical bounds upon higher-order norms, this collapse requires that the domain increase as ν decreases, possibly to allow large-scale negative helicity to grow as compensation for small-scale positive helicity and enstrophy growth. This mechanism could be a step towards explaining how smooth solutions of the Navier-Stokes equations can generate finite-energy dissipation in a finite time as ν -> 0.
Temporal evolution of continental lithospheric strength in actively deforming regions
Thatcher, W.; Pollitz, F.F.
2008-01-01
It has been agreed for nearly a century that a strong, load-bearing outer layer of the earth is required to support mountain ranges, transmit stresses to deform active regions and store elastic strain to generate earthquakes. However, the depth and extent of this strong layer remain controversial. Here we use a variety of observations to infer the distribution of lithospheric strength in the active western United States from seismic to steady-state time scales. We use evidence from post-seismic transient and earthquake-cycle deformation, reservoir loading, glacio-isostatic adjustment, and lithosphere isostatic adjustment to large surface and subsurface loads. The nearly perfectly elastic behavior of Earth's crust and mantle at the time scale of seismic wave propagation evolves to that of a strong, elastic crust and a weak, ductile upper mantle lithosphere at both earthquake cycle (EC, ~10^0 to 10^3 yr) and glacio-isostatic adjustment (GIA, ~10^3 to 10^4 yr) time scales. Topography and gravity field correlations indicate that lithosphere isostatic adjustment (LIA) on ~10^6-10^7 yr time scales occurs with most lithospheric stress supported by an upper crust overlying a much weaker ductile substrate. These comparisons suggest that the upper mantle lithosphere is weaker than the crust at all time scales longer than seismic. In contrast, the lower crust has a chameleon-like behavior, strong at EC and GIA time scales and weak for LIA and steady-state deformation processes. The lower crust might even take on a third identity in regions of rapid crustal extension or continental collision, where anomalously high temperatures may lead to large-scale ductile flow in a lower crustal layer that is locally weaker than the upper mantle. Modeling of lithospheric processes in active regions thus cannot use a one-size-fits-all prescription of rheological layering (the relation between applied stress and deformation as a function of depth) but must be tailored to the time scale and tectonic setting of the process being investigated.
Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers.
Bell, Winnie; Colaiezzi, Brooke A; Prata, Cathleen S; Coates, Jennifer C
2017-11-01
Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability to improve the quality and time required to collect dietary data in LICs. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including requirements for respondent literacy, battery life, requirements for connectivity, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few technologies reviewed met all the criteria, exhibiting both practical constraints and a lack of demonstrated feasibility for use in LICs, particularly for large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user.
Lindert, Jutta; Bain, Paul A; Kubzansky, Laura D; Stein, Claudia
2015-08-01
Subjective well-being (SWB) contributes to health and mental health. It is a major objective of the new World Health Organization health policy framework, 'Health 2020'. Various approaches to defining and measuring well-being exist. We aimed to identify, map and analyse the contents of self-reported well-being measurement scales for use with individuals more than 15 years of age to help researchers and politicians choose appropriate measurement tools. We conducted a systematic literature search in PubMed for studies published between 2007 and 2012, with additional hand-searching, to identify empirical studies that investigated well-being using a measurement scale. For each eligible study, we identified the measurement tool and reviewed its components, number of items, administration time, validity, reliability, responsiveness and sensitivity. The literature review identified 60 unique measurement scales. Measurement scales were either multidimensional (n = 33) or unidimensional (n = 14) and assessed multiple domains. The most frequently encountered domains were affects (39 scales), social relations (17 scales), life satisfaction (13 scales), physical health (13 scales), meaning/achievement (9 scales) and spirituality (6 scales). The scales included between 1 and 100 items; the administration time varied from 1 to 15 min. Well-being is a higher order construct. Measures seldom reported testing for gender or cultural sensitivity. The content and format of scales varied considerably. Effective monitoring and comparison of SWB over time and across geographic regions will require further work to refine definitions of SWB. We recommend concurrent evaluation of at least three self-reported SWB measurement scales, including evaluation for gender or cultural sensitivity. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test; that is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results, with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time --- one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
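A toy sketch of the Hamming-distance idea from this comparison: match an observed, possibly corrupted and partially missing, test-result vector to the closest row of the test dictionary, counting disagreements only on tests whose results are actually available. The dictionary and vectors below are invented, and test accuracies (detection/false-alarm probabilities) are ignored, unlike in the likelihood-based methods.

    import numpy as np

    # toy dictionary: rows = fault signatures, columns = tests (1 = test fails)
    D = np.array([[1, 0, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 1]])

    observed = np.array([1, 0, 0, 0])              # noisy test outcomes
    available = np.array([1, 1, 0, 1], dtype=bool)  # test 3 did not report

    distances = [(int(np.sum(row[available] != observed[available])), k)
                 for k, row in enumerate(D)]
    best_dist, best_fault = min(distances)
    print(f"diagnosed fault signature {best_fault} (Hamming distance {best_dist})")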
Spicer, Neil; Bhattacharya, Dipankar; Dimka, Ritgak; Fanta, Feleke; Mangham-Jefferies, Lindsay; Schellenberg, Joanna; Tamire-Woldemariam, Addis; Walt, Gill; Wickremasinghe, Deepthi
2014-11-01
Donors and other development partners commonly introduce innovative practices and technologies to improve health in low and middle income countries. Yet many innovations that are effective in improving health and survival are slow to be translated into policy and implemented at scale. Understanding the factors influencing scale-up is important. We conducted a qualitative study involving 150 semi-structured interviews with government, development partners, civil society organisations and externally funded implementers, professional associations and academic institutions in 2012/13 to explore scale-up of innovative interventions targeting mothers and newborns in Ethiopia, the Indian state of Uttar Pradesh and the six states of northeast Nigeria, which are settings with high burdens of maternal and neonatal mortality. Interviews were analysed using a common analytic framework developed for cross-country comparison and themes were coded using Nvivo. We found that programme implementers across the three settings require multiple steps to catalyse scale-up. Advocating for government to adopt and finance health innovations requires: designing scalable innovations; embedding scale-up in programme design and allocating time and resources; building implementer capacity to catalyse scale-up; adopting effective approaches to advocacy; presenting strong evidence to support government decision making; involving government in programme design; invoking policy champions and networks; strengthening harmonisation among external programmes; aligning innovations with health systems and priorities. Other steps include: supporting government to develop policies and programmes and strengthening health systems and staff; promoting community uptake by involving media, community leaders, mobilisation teams and role models. We conclude that scale-up has no magic bullet solution - implementers must embrace multiple activities, and require substantial support from donors and governments in doing so. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Event-driven processing for hardware-efficient neural spike sorting
NASA Astrophysics Data System (ADS)
Liu, Yan; Pereira, João L.; Constandinou, Timothy G.
2018-02-01
Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can here provide a new, efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented in a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
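As a rough illustration of the continuous-time level-crossing idea (not the authors' FPGA implementation), the sketch below converts a sampled waveform into a stream of (time, direction) events emitted whenever the signal moves by one level step; the level spacing standing in for a roughly 7-bit encoder and the synthetic spike waveform are assumptions for the example.

    import numpy as np

    def level_crossing_events(signal, delta):
        """Emit (sample_index, +1/-1) events whenever the signal moves by
        more than `delta` from the level of the last event (LC-ADC style)."""
        events = []
        ref = signal[0]
        for i, x in enumerate(signal[1:], start=1):
            while x - ref >= delta:      # upward crossing(s)
                ref += delta
                events.append((i, +1))
            while ref - x >= delta:      # downward crossing(s)
                ref -= delta
                events.append((i, -1))
        return events

    # Hypothetical spike-like waveform; delta plays the role of the LSB of a ~7-bit encoder.
    t = np.arange(0, 0.01, 1e-5)
    wave = np.exp(-((t - 0.005) / 5e-4) ** 2) - 0.4 * np.exp(-((t - 0.006) / 1e-3) ** 2)
    ev = level_crossing_events(wave, delta=2.0 / 2**7)
    print(len(ev), "events instead of", len(wave), "uniform samples")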
Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).
Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando
2014-07-02
Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Decoupling processes and scales of shoreline morphodynamics
Hapke, Cheryl J.; Plant, Nathaniel G.; Henderson, Rachel E.; Schwab, William C.; Nelson, Timothy R.
2016-01-01
Behavior of coastal systems on time scales ranging from single storm events to years and decades is controlled by both small-scale sediment transport processes and large-scale geologic, oceanographic, and morphologic processes. Improved understanding of coastal behavior at multiple time scales is required for refining models that predict potential erosion hazards and for coastal management planning and decision-making. Here we investigate the primary controls on shoreline response along a geologically-variable barrier island on time scales resolving extreme storms and decadal variations over a period of nearly one century. An empirical orthogonal function analysis is applied to a time series of shoreline positions at Fire Island, NY to identify patterns of shoreline variance along the length of the island. We establish that there are separable patterns of shoreline behavior that represent response to oceanographic forcing as well as patterns that are not explained by this forcing. The dominant shoreline behavior occurs over large length scales in the form of alternating episodes of shoreline retreat and advance, presumably in response to storm cycles. Two secondary responses are identified: one is a long-term response correlated with known geologic variations of the island, and the other reflects geomorphic patterns at medium length scales. Our study also includes the response to Hurricane Sandy and a period of post-storm recovery. It was expected that the impacts from Hurricane Sandy would disrupt long-term trends and spatial patterns. We found that the response to Sandy at Fire Island is not notable or distinguishable from several other large storms of the prior decade.
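For readers unfamiliar with the method, an empirical orthogonal function (EOF) analysis of a shoreline-position matrix can be carried out with a plain singular value decomposition, as in the hedged sketch below; the array shape, demeaning convention, and synthetic data are illustrative and do not reproduce the authors' processing chain.

    import numpy as np

    def eof_analysis(shoreline):
        """shoreline: (n_times, n_alongshore) matrix of shoreline positions.
        Returns spatial EOF patterns, temporal amplitudes, and explained variance."""
        anomaly = shoreline - shoreline.mean(axis=0)        # remove the time-mean position
        u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
        eofs = vt                      # rows: alongshore patterns of variance
        amplitudes = u * s             # columns: time series of each mode
        explained = s**2 / np.sum(s**2)
        return eofs, amplitudes, explained

    # Synthetic example: a large-scale retreat/advance signal plus local noise.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 50e3, 200)                 # 50 km of coast
    times = np.arange(40)                         # 40 surveys
    signal = np.outer(np.sin(times / 5.0), np.ones_like(x)) * 20
    data = signal + rng.normal(scale=5, size=(40, 200))
    eofs, amps, var = eof_analysis(data)
    print("variance explained by first mode: %.2f" % var[0])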
The Atwood machine: Two special cases
NASA Astrophysics Data System (ADS)
West, Joseph O.; Weliver, Barry N.
1999-02-01
The effects of the variation of Earth's gravitational field on a simple Atwood's machine with identical masses are considered. From rest, the time required for one of the masses to reach the ground is independent of the scale of the problem.
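A hedged, back-of-the-envelope version of the scale-independence result (assuming the usual weak-field linearization of gravity with altitude, not necessarily the authors' exact treatment) runs as follows. With two equal masses $m$ at heights $h_1 > h_2$ and $g(h) \simeq g_0 (1 - 2h/R_E)$, the net force on the system comes only from the difference in local gravity, so each mass accelerates with
\[
a = \frac{m\,[\,g(h_2) - g(h_1)\,]}{2m} = \frac{g_0}{R_E}\,(h_1 - h_2).
\]
Writing $\Delta = h_1 - h_2$ and noting $\ddot{\Delta} = 2a$,
\[
\ddot{\Delta} = \frac{2 g_0}{R_E}\,\Delta
\quad\Rightarrow\quad
\Delta(t) = \Delta_0 \cosh\!\Big(t\sqrt{2 g_0 / R_E}\Big).
\]
The lower mass has descended $x(t) = [\Delta(t) - \Delta_0]/2$, so the time to fall a distance $d$ is
\[
t = \sqrt{\frac{R_E}{2 g_0}}\;\operatorname{arccosh}\!\Big(1 + \frac{2d}{\Delta_0}\Big),
\]
which depends only on the ratio $d/\Delta_0$: rescaling every length in the problem by the same factor leaves the fall time unchanged.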
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (~10-150 km, <1° latitudinal width), mesoscale (~150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
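A toy, synchronous-round caricature of gossip-based failure detection and consensus (the paper's three algorithms are more sophisticated and asynchronous) might look like the following; the process count, fanout, and pre-marked failure set are all illustrative assumptions.

    import random

    def gossip_failure_consensus(n_procs, failed, fanout=2, seed=0):
        """Each alive process keeps a local suspicion set; every cycle it gossips
        the set to `fanout` random alive peers, which merge it (logical OR).
        Returns the number of cycles until all alive processes agree on `failed`."""
        random.seed(seed)
        alive = [p for p in range(n_procs) if p not in failed]
        # Initially each failure is suspected by only one arbitrary alive process.
        views = {p: set() for p in alive}
        for f in failed:
            views[random.choice(alive)].add(f)
        cycles = 0
        while any(views[p] != set(failed) for p in alive):
            cycles += 1
            updates = {p: set(views[p]) for p in alive}
            for p in alive:
                for q in random.sample(alive, k=min(fanout, len(alive))):
                    updates[q] |= views[p]
            views = updates
            if cycles > 10_000:          # safety valve for the sketch
                break
        return cycles

    # Consensus time should grow roughly logarithmically with system size.
    for n in (64, 256, 1024):
        print(n, "processes ->", gossip_failure_consensus(n, failed={1, 2, 3}), "gossip cycles")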
Self-acceleration in scalar-bimetric theories
NASA Astrophysics Data System (ADS)
Brax, Philippe; Valageas, Patrick
2018-05-01
We describe scalar-bimetric theories where the dynamics of the Universe are governed by two separate metrics, each with an Einstein-Hilbert term. In this setting, the baryonic and dark matter components of the Universe couple to metrics which are constructed as functions of these two gravitational metrics. More precisely, the two metrics coupled to matter are obtained by a linear combination of their vierbeins, with scalar-dependent coefficients. The scalar field, contrary to dark-energy models, does not have a potential of which the role is to mimic a late-time cosmological constant. The late-time acceleration of the expansion of the Universe can be easily obtained at the background level in these models by appropriately choosing the coupling functions appearing in the decomposition of the vierbeins for the baryonic and dark matter metrics. We explicitly show how the concordance model can be retrieved with negligible scalar kinetic energy. This requires the scalar coupling functions to show variations of order unity during the accelerated expansion era. This leads in turn to deviations of order unity for the effective Newton constants and a fifth force that is of the same order as Newtonian gravity, with peculiar features. The baryonic and dark matter self-gravities are amplified although the gravitational force between baryons and dark matter is reduced and even becomes repulsive at low redshift. This slows down the growth of baryonic density perturbations on cosmological scales, while dark matter perturbations are enhanced. These scalar-bimetric theories have a perturbative cutoff scale of the order of 1 AU, which prevents a precise comparison with Solar System data. On the other hand, we can deduce strong requirements on putative UV completions by analyzing the stringent constraints in the Solar System. Hence, in our local environment, the upper bound on the time evolution of Newton's constant requires an efficient screening mechanism that both damps the fifth force on small scales and decouples the local value of Newton constant from its cosmological value. This cannot be achieved by a quasistatic chameleon mechanism and requires going beyond the quasistatic regime and probably using derivative screenings, such as Kmouflage or Vainshtein screening, on small scales.
NASA Astrophysics Data System (ADS)
Houser, Chris; Wernette, Phil; Weymer, Bradley A.
2018-02-01
The impact of storm surge on a barrier island tends to be considered from a single cross-shore dimension, dependent on the relative elevations of the storm surge and dune crest. However, the foredune is rarely uniform and can exhibit considerable variation in height and width at a range of length scales. In this study, LiDAR data from barrier islands in Texas and Florida are used to explore how shoreline position and dune morphology vary alongshore, and to determine how this variability is altered or reinforced by storms and post-storm recovery. Wavelet analysis reveals that a power law can approximate historical shoreline change across all scales, but that storm-scale shoreline change ( 10 years) and dune height exhibit similar scale-dependent variations at swash and surf zone scales (< 1000 m). The in-phase nature of the relationship between dune height and storm-scale shoreline change indicates that areas of greater storm-scale shoreline retreat are associated with areas of smaller dunes. It is argued that the decoupling of storm-scale and historical shoreline change at swash and surf zone scales is also associated with the alongshore redistribution of sediment and the tendency of shorelines to evolve to a more diffusive (or straight) pattern with time. The wavelet analysis of the data for post-storm dune recovery is also characterized by red noise at the smallest scales characteristic of diffusive systems, suggesting that it is possible that small-scale variations in dune height can be repaired through alongshore recovery and expansion if there is sufficient time between storms. However, the time required for dune recovery exceeds the time between storms capable of eroding and overwashing the dune. Correlation between historical shoreline retreat and the variance of the dune at swash and surf zone scales suggests that the persistence of the dune is an important control on transgression through island migration or shoreline retreat with relative sea-level rise.
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment The authors are grateful to SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
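A stripped-down illustration of the time domain particle tracking idea (a pathway as a fixed sequence of segments, per-segment residence times drawn from a distribution, and arrival times smoothed with a kernel to form a breakthrough curve) is sketched below; the lognormal residence-time model and all parameter values are placeholders, not the site-specific distributions discussed in the abstract.

    import numpy as np

    def simulate_arrivals(n_particles, segment_params, rng):
        """segment_params: list of (mu, sigma) for a lognormal residence time per segment.
        Each particle's arrival time is the sum of its sampled segment residence times."""
        arrivals = np.zeros(n_particles)
        for mu, sigma in segment_params:
            arrivals += rng.lognormal(mean=mu, sigma=sigma, size=n_particles)
        return arrivals

    def breakthrough_curve(arrivals, t_grid, bandwidth):
        """Gaussian-kernel estimate of mass discharge versus time from discrete arrivals."""
        z = (t_grid[:, None] - arrivals[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi) * len(arrivals))

    rng = np.random.default_rng(1)
    segments = [(2.0, 0.5), (1.5, 0.8), (2.5, 0.4)]      # three pathway segments (illustrative)
    t_arr = simulate_arrivals(10_000, segments, rng)
    t_grid = np.linspace(0, t_arr.max(), 500)
    curve = breakthrough_curve(t_arr, t_grid, bandwidth=1.0)
    print("peak discharge at t =", t_grid[np.argmax(curve)])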
Paul, Lorna; Coulter, Elaine H; Miller, Linda; McFadyen, Angus; Dorfman, Joe; Mattison, Paul George G
2014-09-01
To explore the effectiveness and participant experience of web-based physiotherapy for people moderately affected with Multiple Sclerosis (MS) and to provide data to establish the sample size required for a fully powered, definitive randomized controlled study. A randomized controlled pilot study. Rehabilitation centre and participants' homes. Thirty community dwelling adults moderately affected by MS (Expanded Disability Status Scale 5-6.5). Twelve weeks of individualised web-based physiotherapy completed twice per week or usual care (control). Online exercise diaries were monitored; participants were telephoned weekly by the physiotherapist and exercise programmes altered remotely by the physiotherapist as required. The following outcomes were completed at baseline and after 12 weeks: 25 Foot Walk, Berg Balance Scale, Timed Up and Go, Multiple Sclerosis Impact Scale, Leeds MS Quality of Life Scale, MS-Related Symptom Checklist and Hospital Anxiety and Depression Scale. The intervention group also completed a website evaluation questionnaire and interviews. Participants reported that the website was easy to use, convenient and motivating, and that they would be happy to use it in the future. There was no statistically significant difference in the primary outcome measure, the timed 25 Foot Walk, in the intervention group (P=0.170), or other secondary outcome measures, except the Multiple Sclerosis Impact Scale (P=0.048). Effect sizes were generally small to moderate. People with MS were very positive about web-based physiotherapy. The results suggested that 80 participants, 40 in each group, would be sufficient for a fully powered, definitive randomized controlled trial. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors could be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data which consist of a great number of images. Bundle block adjustment of large-scale data with conventional algorithm is very time and space (memory) consuming due to the super large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with the large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm which is more efficient in time and memory than the traditional algorithm without compromising the accuracy. Totally 8 datasets of real data are used to test our proposed method. Preliminary results have shown that the BSMC method can efficiently decrease the time and memory requirement of large-scale data.
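The computational point of the BSMC+PCG combination can be conveyed with a generic sketch: solve the sparse, symmetric positive definite normal equations with a preconditioned conjugate gradient instead of forming and factoring a dense matrix. The diagonal (Jacobi) preconditioner and the random sparse system below are stand-ins; the paper's block-based compressed storage scheme is not reproduced here.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(2)
    n = 5000
    # Stand-in for a bundle-adjustment normal matrix: sparse and positive definite.
    A = sp.random(n, n, density=1e-3, random_state=42, format="csr")
    N = (A @ A.T) + sp.identity(n) * 10.0          # guarantee positive definiteness
    b = rng.normal(size=n)

    # Jacobi (diagonal) preconditioner as a LinearOperator, a simple proxy for a block preconditioner.
    inv_diag = 1.0 / N.diagonal()
    M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

    x, info = cg(N, b, M=M, maxiter=500)
    print("converged" if info == 0 else "stopped early",
          "| residual:", np.linalg.norm(N @ x - b))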
Parametric Study for Increasing On-Station Duration via Unconventional Aircraft Launch Approach
NASA Technical Reports Server (NTRS)
Kuhl, Christopher A.; Moses, Robert W.; Croom, Mark A.; Sandford, Stephen P.
2004-01-01
The need for better atmospheric predictions is causing the atmospheric science community to look for new ways to obtain longer, higher-resolution measurements over several diurnal cycles. The high resolution, in-situ measurements required to study many atmospheric phenomena can be achieved by an Autonomous Aerial Observation System (AAOS); however, meeting the long on-station time requirements with an aerial platform poses many challenges. Inspired by the half-scale drop test of the deployable Aerial Regional-scale Environmental Survey (ARES) Mars airplane, a study was conducted at the NASA Langley Research Center to examine the possibility of increasing on-station time by launching an airplane directly at the desired altitude. The ARES Mars airplane concept was used as a baseline for Earth atmospheric flight, and parametric analyses of fundamental configuration elements were performed to study their impact on achieving desired on-station time with this class of airplane. The concept involved lifting the aircraft from the ground to the target altitude by means of an air balloon, thereby unburdening the airplane of ascent requirements. The parameters varied in the study were aircraft wingspan, payload, fuel quantity, and propulsion system. The results show promising trends for further research into aircraft-payload design using this unconventional balloon-based launch approach.
NASA Astrophysics Data System (ADS)
Rafelski, Susanne M.; Keller, Lani C.; Alberts, Jonathan B.; Marshall, Wallace F.
2011-04-01
The degree to which diffusion contributes to positioning cellular structures is an open question. Here we investigate the question of whether diffusive motion of centrin granules would allow them to interact with the mother centriole. The role of centrin granules in centriole duplication remains unclear, but some proposed functions of these granules, for example, in providing pre-assembled centriole subunits, or by acting as unstable 'pre-centrioles' that need to be captured by the mother centriole (La Terra et al 2005 J. Cell Biol. 168 713-22), require the centrin foci to reach the mother. To test whether diffusive motion could permit such interactions in the necessary time scale, we measured the motion of centrin-containing foci in living human U2OS cells. We found that these centrin foci display apparently diffusive undirected motion. Using the apparent diffusion constant obtained from these measurements, we calculated the time scale required for diffusion to capture by the mother centrioles and found that it would greatly exceed the time available in the cell cycle. We conclude that mechanisms invoking centrin foci capture by the mother, whether as a pre-centriole or as a source of components to support later assembly, would require a form of directed motility of centrin foci that has not yet been observed.
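The order-of-magnitude logic can be reproduced with the standard diffusion-limited capture estimate (a hedged sketch; the numbers below are illustrative placeholders, not the measured values from the U2OS experiments):
\[
\tau \;\approx\; \frac{V_{\mathrm{cell}}}{4\pi D a},
\]
where $D$ is the apparent diffusion constant of a centrin focus, $a$ the effective capture radius of the mother centriole, and $V_{\mathrm{cell}}$ the cytoplasmic volume to be searched. For purely illustrative values $V_{\mathrm{cell}} \sim 4\times10^{3}\ \mu\mathrm{m}^3$, $a \sim 0.2\ \mu\mathrm{m}$, and $D \sim 10^{-3}\ \mu\mathrm{m}^2\,\mathrm{s}^{-1}$, this gives $\tau \sim 1.6\times10^{6}\ \mathrm{s}$, i.e. on the order of weeks, far longer than a typical cell cycle of roughly a day, which is the flavor of the comparison drawn in the abstract.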
Role of the BIPM in UTC Dissemination to the Real Time User
NASA Technical Reports Server (NTRS)
Quinn, T. J.; Thomas, C.
1996-01-01
The generation and dissemination of International Atomic Time (TAI) and Coordinated Universal Time (UTC) are explicitly mentioned in the list of principal tasks of the Bureau International des Poids et Mesures (BIPM) that appears in the Comptes Rendus of the 18th Conférence Générale des Poids et Mesures, in 1987. These time scales are used as the ultimate reference in the most demanding scientific applications and must, therefore, be of the best metrological quality in terms of reliability, long term stability, and conformity of the scale interval with the second, the unit of time of the International System of Units. To meet these requirements, it is necessary that the readings of the atomic clocks, spread all over the world, that are used as basic timing data for TAI and UTC generation be combined in the most efficient way possible. In particular, to take full advantage of the quality of each contributing clock calls for observation of its performance over a sufficiently long time. At present, the computation period treats data in blocks covering two months. TAI and UTC are thus deferred-time scales that cannot be immediately available to real-time users. The BIPM can, nevertheless, be of help to real-time users. The predictability of UTC is a fundamental attribute of the scale for institutions responsible for the dissemination of real-time time scales. It allows them to improve their local representations of UTC and, thus, implement a more thorough steering of the time scales diffused in real-time. With a view to improving the predictability of UTC, the BIPM examines in detail timing techniques and basic theories in order to propose alternative solutions for timing algorithms. This, coupled with a recent improvement of timing data, makes UTC more stable and, thus, more predictable. At a more practical level, effort is being devoted to putting in place automatic procedures for reducing the time needed for data collection and treatment: monthly results are already available ten days earlier than before.
Creation of current filaments in the solar corona
NASA Technical Reports Server (NTRS)
Mikic, Z.; Schnack, D. D.; Van Hoven, G.
1989-01-01
It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.
Semihierarchical quantum repeaters based on moderate lifetime quantum memories
NASA Astrophysics Data System (ADS)
Liu, Xiao; Zhou, Zong-Quan; Hua, Yi-Lin; Li, Chuan-Feng; Guo, Guang-Can
2017-01-01
The construction of large-scale quantum networks relies on the development of practical quantum repeaters. Many approaches have been proposed with the goal of outperforming the direct transmission of photons, but most of them are inefficient or difficult to implement with current technology. Here, we present a protocol that uses a semihierarchical structure to improve the entanglement distribution rate while reducing the requirement of memory time to a range of tens of milliseconds. This protocol can be implemented with a fixed distance of elementary links and fixed requirements on quantum memories, which are independent of the total distance. This configuration is especially suitable for scalable applications in large-scale quantum networks.
Force-Induced Rupture of a DNA Duplex: From Fundamentals to Force Sensors.
Mosayebi, Majid; Louis, Ard A; Doye, Jonathan P K; Ouldridge, Thomas E
2015-12-22
The rupture of double-stranded DNA under stress is a key process in biophysics and nanotechnology. In this article, we consider the shear-induced rupture of short DNA duplexes, a system that has been given new importance by recently designed force sensors and nanotechnological devices. We argue that rupture must be understood as an activated process, where the duplex state is metastable and the strands will separate in a finite time that depends on the duplex length and the force applied. Thus, the critical shearing force required to rupture a duplex depends strongly on the time scale of observation. We use simple models of DNA to show that this approach naturally captures the observed dependence of the force required to rupture a duplex within a given time on duplex length. In particular, this critical force is zero for the shortest duplexes, before rising sharply and then plateauing in the long length limit. The prevailing approach, based on identifying when the presence of each additional base pair within the duplex is thermodynamically unfavorable rather than allowing for metastability, does not predict a time-scale-dependent critical force and does not naturally incorporate a critical force of zero for the shortest duplexes. We demonstrate that our findings have important consequences for the behavior of a new force-sensing nanodevice, which operates in a mixed mode that interpolates between shearing and unzipping. At a fixed time scale and duplex length, the critical force exhibits a sigmoidal dependence on the fraction of the duplex that is subject to shearing.
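The time-scale dependence of the critical force can be captured by a minimal Bell-type sketch of force-activated rupture (a generic model stated here for orientation, not the coarse-grained DNA model calculations reported in the paper):
\[
k(F) \;=\; k_0 \, e^{F\,\Delta x^{\ddagger}/k_{\mathrm{B}}T},
\qquad
P_{\mathrm{rupture}}(t_{\mathrm{obs}}) \;=\; 1 - e^{-k(F)\,t_{\mathrm{obs}}}.
\]
Demanding rupture within the observation time, say $k(F)\,t_{\mathrm{obs}} \simeq 1$, gives a critical force
\[
F_c \;\simeq\; \frac{k_{\mathrm{B}}T}{\Delta x^{\ddagger}}\,
\ln\!\frac{1}{k_0\,t_{\mathrm{obs}}},
\]
which decreases as the observation window grows and drops to zero once the zero-force lifetime $1/k_0$ is itself shorter than $t_{\mathrm{obs}}$, as for the shortest duplexes. Here $k_0$ is the zero-force rupture rate (strongly length dependent) and $\Delta x^{\ddagger}$ the distance to the transition state along the pulling coordinate.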
Parallel multispot smFRET analysis using an 8-pixel SPAD array
NASA Astrophysics Data System (ADS)
Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.
2012-02-01
Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and acceptor dye) on a nanometer scale. This method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantage to not interfere with the molecules and to give access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition time (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales larger than the burst duration and smaller than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate using the same low concentrations required for individual molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescent emission of donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on various DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides identical FRET efficiency values to a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis on freely diffusing molecules.
Parallel methodology to capture cyclic variability in motored engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei
2016-07-28
Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion-gallons/year in the southeastern U.S. and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.
Rime-, mixed- and glaze-ice evaluations of three scaling laws
NASA Technical Reports Server (NTRS)
Anderson, David N.
1994-01-01
This report presents the results of tests at NASA Lewis to evaluate three icing scaling relationships or 'laws' for an unheated model. The laws were LWC x time = constant, one proposed by a Swedish-Russian group and one used at ONERA in France. Icing tests were performed in the NASA Lewis Icing Research Tunnel (IRT) with cylinders ranging from 2.5- to 15.2-cm diameter. Reference conditions were chosen to provide rime, mixed and glaze ice. Scaled conditions were tested for several scenarios of size and velocity scaling, and the resulting ice shapes compared. For rime-ice conditions, all three of the scaling laws provided scaled ice shapes which closely matched reference ice shapes. For mixed ice and for glaze ice none of the scaling laws produced consistently good simulation of the reference ice shapes. Explanations for the observed results are proposed, and scaling issues requiring further study are identified.
Jian, Yun; Silvestri, Sonia; Brown, Jeff; Hickman, Rick; Marani, Marco
2014-01-01
An improved understanding of mosquito population dynamics under natural environmental forcing requires adequate field observations spanning the full range of temporal scales over which mosquito abundance fluctuates in natural conditions. Here we analyze a 9-year daily time series of uninterrupted observations of adult mosquito abundance for multiple mosquito species in North Carolina to identify characteristic scales of temporal variability, the processes generating them, and the representativeness of observations at different sampling resolutions. We focus in particular on Aedes vexans and Culiseta melanura and, using a combination of spectral analysis and modeling, we find significant population fluctuations with characteristic periodicity between 2 days and several years. Population dynamical modelling suggests that the observed fast fluctuations scales (2 days-weeks) are importantly affected by a varying mosquito activity in response to rapid changes in meteorological conditions, a process neglected in most representations of mosquito population dynamics. We further suggest that the range of time scales over which adult mosquito population variability takes place can be divided into three main parts. At small time scales (indicatively 2 days-1 month) observed population fluctuations are mainly driven by behavioral responses to rapid changes in weather conditions. At intermediate scales (1 to several month) environmentally-forced fluctuations in generation times, mortality rates, and density dependence determine the population characteristic response times. At longer scales (annual to multi-annual) mosquito populations follow seasonal and inter-annual environmental changes. We conclude that observations of adult mosquito populations should be based on a sub-weekly sampling frequency and that predictive models of mosquito abundance must include behavioral dynamics to separate the effects of a varying mosquito activity from actual changes in the abundance of the underlying population.
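A minimal version of the spectral step (identifying characteristic periodicities in a daily abundance series) can be done with a standard periodogram, as sketched below on synthetic data; the trap-count model and all parameters are invented for illustration only.

    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(3)
    days = np.arange(9 * 365)                              # ~9 years of daily counts
    counts = (20
              + 15 * np.sin(2 * np.pi * days / 365)        # seasonal cycle
              + 4 * np.sin(2 * np.pi * days / 5)           # fast, weather-driven fluctuations
              + rng.normal(scale=3, size=days.size))

    freqs, power = periodogram(counts, fs=1.0)             # fs = 1 sample per day
    top = np.argsort(power)[::-1][:3]                      # three strongest spectral peaks
    for f, p in zip(freqs[top], power[top]):
        if f > 0:
            print(f"period ~ {1/f:.1f} days, power {p:.1f}")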
Pulsar recoil by large-scale anisotropies in supernova explosions.
Scheck, L; Plewa, T; Janka, H-Th; Kifonidis, K; Müller, E
2004-01-09
Assuming that the neutrino luminosity from the neutron star core is sufficiently high to drive supernova explosions by the neutrino-heating mechanism, we show that low-mode (l=1,2) convection can develop from random seed perturbations behind the shock. A slow onset of the explosion is crucial, requiring the core luminosity to vary slowly with time, in contrast to the burstlike exponential decay assumed in previous work. Gravitational and hydrodynamic forces by the globally asymmetric supernova ejecta were found to accelerate the remnant neutron star on a time scale of more than a second to velocities above 500 km s(-1), in agreement with observed pulsar proper motions.
Consensus time and conformity in the adaptive voter model
NASA Astrophysics Data System (ADS)
Rogers, Tim; Gross, Thilo
2013-09-01
The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
HOLLOTRON switch for megawatt lightweight space inverters
NASA Technical Reports Server (NTRS)
Poeschel, R. L.; Goebel, D. M.; Schumacher, R. W.
1991-01-01
The feasibility of satisfying the switching requirements for a megawatt ultralight inverter system using HOLLOTRON switch technology was determined. The existing experimental switch hardware was modified to investigate a coaxial HOLLOTRON switch configuration and the results were compared with those obtained for a modified linear HOLLOTRON configuration. It was concluded that scaling the HOLLOTRON switch to the current and voltage specifications required for a megawatt converter system is indeed feasible using a modified linear configuration. The experimental HOLLOTRON switch operated at parameters comparable to the scaled coaxial HOLLOTRON. However, the linear HOLLOTRON data verified the capability for meeting all the design objectives simultaneously including current density (greater than 2 A/sq cm), voltage (5 kV), switching frequency (20 kHz), switching time (300 ns), and forward voltage drop (less than or equal to 20 V). Scaling relations were determined and a preliminary design was completed for an engineering model linear HOLLOTRON switch to meet the megawatt converter system specifications.
NASA Astrophysics Data System (ADS)
Boozer, Allen H.
2017-05-01
The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive-gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. The physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.
Percolation transport theory and relevance to soil formation, vegetation growth, and productivity
NASA Astrophysics Data System (ADS)
Hunt, A. G.; Ghanbarian, B.
2016-12-01
Scaling laws of percolation theory have been applied to generate the time dependence of vegetation growth rates (both intensively managed and natural) and soil formation rates. The soil depth is thus equal to the solute vertical transport distance; the soil production function, chemical weathering rates, and C and N storage rates are all given by the time derivative of the soil depth. Approximate numerical coefficients based on the maximum flow rates in soils have been proposed, leading to a broad understanding of such processes. What is now required is an accurate understanding of the variability of the coefficients in the scaling relationships. The present abstract focuses on the scaling relationship for solute transport and soil formation. A soil formation rate relates length, x, and time, t, scales, meaning that the missing coefficient must include information about fundamental space and time scales, x0 and t0. x0 is proposed to be a fundamental mineral heterogeneity scale, i.e. a median particle diameter. t0 is then found from the ratio of x0 and a fundamental flow rate, v0, which is identified with the net infiltration rate. The net infiltration rate is equal to precipitation P less evapotranspiration, ET, plus run-on less run-off. Using this hypothesis, it is possible to predict soil depths and formation rates as functions of time and P - ET, and the formation rate as a function of depth, soil calcic and gypsic horizon depths as functions of P - ET. It is also possible to determine when soils are in equilibrium, and predict relationships of erosion rates and soil formation rates.
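In equation form, the scaling relationship described above can be written as a hedged sketch (the percolation exponent is left symbolic, since the abstract does not quote a value):
\[
\frac{x}{x_0} \;=\; \left(\frac{t}{t_0}\right)^{1/D_b},
\qquad
t_0 \;=\; \frac{x_0}{v_0},
\qquad
v_0 \;=\; P - ET + \text{run-on} - \text{run-off},
\]
so that the soil production (formation) rate follows by differentiation,
\[
R(t) \;=\; \frac{dx}{dt} \;=\; \frac{x_0}{D_b\, t_0}\left(\frac{t}{t_0}\right)^{1/D_b - 1},
\]
with $x_0$ a median particle diameter and $D_b$ the relevant percolation (backbone) exponent; for $D_b > 1$ the depth grows sublinearly in time and the formation rate declines as the soil deepens.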
NASA Astrophysics Data System (ADS)
Kendon, Vivien M.; Cates, Michael E.; Pagonabarraga, Ignacio; Desplat, J.-C.; Bladon, Peter
2001-08-01
The late-stage demixing following spinodal decomposition of a three-dimensional symmetric binary fluid mixture is studied numerically, using a thermodynamically consistent lattice Boltzmann method. We combine results from simulations with different numerical parameters to obtain an unprecedented range of length and time scales when expressed in reduced physical units. (These are the length and time units derived from fluid density, viscosity, and interfacial tension.) Using eight large (256^3) runs, the resulting composite graph of reduced domain size l against reduced time t covers 1 ≲ l ≲ 10^5, 10 ≲ t ≲ 10^8. Our data are consistent with the dynamical scaling hypothesis that l(t) is a universal scaling curve. We give the first detailed statistical analysis of fluid motion, rather than just domain evolution, in simulations of this kind, and introduce scaling plots for several quantities derived from the fluid velocity and velocity gradient fields. Using the conventional definition of Reynolds number for this problem, Re_φ = l dl/dt, we attain values approaching 350. At Re_φ ≳ 100 (which requires t ≳ 10^6) we find clear evidence of Furukawa's inertial scaling (l ~ t^(2/3)), although the crossover from the viscous regime (l ~ t) is both broad and late (10^2 ≲ t ≲ 10^6). Though it cannot be ruled out, we find no indication that Re_φ is self-limiting (l ~ t^(1/2)) at late times, as recently proposed by Grant & Elder. Detailed study of the velocity fields confirms that, for our most inertial runs, the RMS ratio of nonlinear to viscous terms in the Navier-Stokes equation, R2, is of order 10, with the fluid mixture showing incipient turbulent characteristics. However, we cannot go far enough into the inertial regime to obtain a clear length separation of domain size, Taylor microscale, and Kolmogorov scale, as would be needed to test a recent ‘extended’ scaling theory of Kendon (in which R2 is self-limiting but Re_φ not). Obtaining our results has required careful steering of several numerical control parameters so as to maintain adequate algorithmic stability, efficiency and isotropy, while eliminating unwanted residual diffusion. (We argue that the latter affects some studies in the literature which report l ~ t^(2/3) for t ≲ 10^4.) We analyse the various sources of error and find them just within acceptable levels (a few percent each) in most of our datasets. To bring these under significantly better control, or to go much further into the inertial regime, would require much larger computational resources and/or a breakthrough in algorithm design.
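For reference, the reduced "physical" units alluded to above are conventionally built from the fluid density ρ, shear viscosity η, and interfacial tension σ; the combinations below are the standard choice for this problem and are stated here as an aid to the reader rather than quoted from the abstract:
\[
L_0 \;=\; \frac{\eta^2}{\rho\,\sigma},
\qquad
T_0 \;=\; \frac{\eta^3}{\rho\,\sigma^2},
\qquad
l \;=\; \frac{L}{L_0},
\qquad
t \;=\; \frac{T}{T_0}
\]
(possibly up to a nonuniversal time offset), so that data from runs with different viscosities and surface tensions can be collapsed onto a single l(t) curve.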
Ducharme, Scott W; Liddy, Joshua J; Haddad, Jeffrey M; Busa, Michael A; Claxton, Laura J; van Emmerik, Richard E A
2018-04-01
Human locomotion is an inherently complex activity that requires the coordination and control of neurophysiological and biomechanical degrees of freedom across various spatiotemporal scales. Locomotor patterns must constantly be altered in the face of changing environmental or task demands, such as heterogeneous terrains or obstacles. Variability in stride times occurring at short time scales (e.g., 5-10 strides) is statistically correlated to larger fluctuations occurring over longer time scales (e.g., 50-100 strides). This relationship, known as fractal dynamics, is thought to represent the adaptive capacity of the locomotor system. However, this has not been tested empirically. Thus, the purpose of this study was to determine if stride time fractality during steady state walking associated with the ability of individuals to adapt their gait patterns when locomotor speed and symmetry are altered. Fifteen healthy adults walked on a split-belt treadmill at preferred speed, half of preferred speed, and with one leg at preferred speed and the other at half speed (2:1 ratio asymmetric walking). The asymmetric belt speed condition induced gait asymmetries that required adaptation of locomotor patterns. The slow speed manipulation was chosen in order to determine the impact of gait speed on stride time fractal dynamics. Detrended fluctuation analysis was used to quantify the correlation structure, i.e., fractality, of stride times. Cross-correlation analysis was used to measure the deviation from intended anti-phasing between legs as a measure of gait adaptation. Results revealed no association between unperturbed walking fractal dynamics and gait adaptability performance. However, there was a quadratic relationship between perturbed, asymmetric walking fractal dynamics and adaptive performance during split-belt walking, whereby individuals who exhibited fractal scaling exponents that deviated from 1/f performed the poorest. Compared to steady state preferred walking speed, fractal dynamics increased closer to 1/f when participants were exposed to asymmetric walking. These findings suggest there may not be a relationship between unperturbed preferred or slow speed walking fractal dynamics and gait adaptability. However, the emergent relationship between asymmetric walking fractal dynamics and limb phase adaptation may represent a functional reorganization of the locomotor system (i.e., improved interactivity between degrees of freedom within the system) to be better suited to attenuate externally generated perturbations at various spatiotemporal scales. Copyright © 2018 Elsevier B.V. All rights reserved.
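For readers unfamiliar with the technique, a bare-bones detrended fluctuation analysis (DFA) of a stride-time series looks like the sketch below; the window sizes and the synthetic input are illustrative, and the published analysis of course used the measured stride intervals.

    import numpy as np

    def dfa_alpha(series, scales):
        """Return the DFA scaling exponent alpha of a 1-D series.
        alpha ~ 0.5 for white noise, ~1.0 for 1/f (fractal) fluctuations."""
        y = np.cumsum(series - np.mean(series))        # integrated profile
        flucts = []
        for n in scales:
            n_windows = len(y) // n
            rms = []
            for w in range(n_windows):
                seg = y[w * n:(w + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            flucts.append(np.mean(rms))
        alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
        return alpha

    # Synthetic stride-time series: white noise should give alpha near 0.5.
    rng = np.random.default_rng(4)
    strides = 1.05 + 0.02 * rng.normal(size=512)       # ~1.05 s strides with noise
    print("DFA alpha:", round(dfa_alpha(strides, scales=[8, 16, 32, 64, 128]), 2))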
NASA Astrophysics Data System (ADS)
Pierrehumbert, R. T.; Eshel, G.
2015-08-01
An analysis of the climate impact of various forms of beef production is carried out, with a particular eye to the comparison between systems relying primarily on grasses grown in pasture (‘grass-fed’ or ‘pastured’ beef) and systems involving substantial use of manufactured feed requiring significant external inputs in the form of synthetic fertilizer and mechanized agriculture (‘feedlot’ beef). The climate impact is evaluated without employing metrics such as CO2e or global warming potentials. The analysis evaluates the impact at all time scales out to 1000 years. It is concluded that certain forms of pastured beef production have substantially lower climate impact than feedlot systems. However, pastured systems that require significant synthetic fertilization, inputs from supplemental feed, or deforestation to create pasture, have substantially greater climate impact at all time scales than the feedlot and dairy-associated systems analyzed. Even the best pastured system analyzed has enough climate impact to justify efforts to limit future growth of beef production, which in any event would be necessary if climate and other ecological concerns were met by a transition to primarily pasture-based systems. Alternate mitigation options are discussed, but barring unforeseen technological breakthroughs, worldwide consumption at current North American per capita rates appears incompatible with a 2 °C warming target.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
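The appeal of solvers that need only matrix-vector products can be illustrated with a simple iterative soft-thresholding (ISTA) sketch for the same l1-regularized least-squares problem that GPSR targets; this is a generic stand-in, not the GPSR algorithm itself nor the EIT forward model from the paper, and the matrix, step size, and regularization weight are assumptions.

    import numpy as np

    def ista(matvec, rmatvec, y, n_voxels, tau, step, n_iter=200):
        """Minimize 0.5*||A x - y||^2 + tau*||x||_1 using only the products
        A @ x (matvec) and A.T @ r (rmatvec); no explicit matrix storage is needed."""
        x = np.zeros(n_voxels)
        for _ in range(n_iter):
            grad = rmatvec(matvec(x) - y)              # gradient of the data-fit term
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)   # soft threshold
        return x

    # Small synthetic example standing in for a time-difference EIT problem.
    rng = np.random.default_rng(5)
    A = rng.normal(size=(300, 2000)) / np.sqrt(300)    # forward (sensitivity) matrix
    x_true = np.zeros(2000)
    x_true[rng.choice(2000, 10, replace=False)] = 1.0  # sparse conductivity change
    y = A @ x_true + 0.01 * rng.normal(size=300)
    x_hat = ista(lambda v: A @ v, lambda r: A.T @ r, y, 2000, tau=0.05, step=0.05)
    print("recovered support size:", int(np.sum(np.abs(x_hat) > 0.1)))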
User requirements for project-oriented remote sensing
NASA Technical Reports Server (NTRS)
Hitchcock, H. C.; Baxter, F. P.; Cox, T. L.
1975-01-01
Registration of remotely sensed data to geodetic coordinates provides for overlay analysis of land use data. For aerial photographs of a large area, differences in scales, dates, and film types are reconciled, and multispectral scanner data are machine registered at the time of acquisition.
EPIDEMIOLOGY IN RISK ASSESSMENT FOR REGULATORY POLICY
Epidemiology and risk assessment have several of the features needed to make the difficult decisions required in setting standards for levels of toxic agents in the workplace and environment. They differ in their aims, orientation, and time scale. While the distribution of disease...
76 FR 50881 - Required Scale Tests
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... required scale tests. Those documents defined "limited seasonal basis" incorrectly. This document... 20, 2011 (76 FR 3485) and on April 4, 2011 (76 FR 18348), concerning required scale tests. Those...
NASA Technical Reports Server (NTRS)
Griffin, P. R.; Motakef, S.
1989-01-01
Consideration is given to the influence of temporal variations in the magnitude of gravity on natural convection during unidirectional solidification of semiconductors. It is shown that the response time to step changes in g at low Rayleigh numbers is controlled by the momentum diffusive time scale. At higher Rayleigh numbers, the response time to increases in g is reduced because of inertial effects. The degree of perturbation of flow fields by transients in the gravitational acceleration on the Space Shuttle and the Space Station is determined. The analysis is used to derive the requirements for crystal growth experiments conducted on low duration low-g vehicles. Also, the effectiveness of sounding rockets and KC-135 aircraft for microgravity experiments is examined.
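The "momentum diffusive time scale" invoked above is, in the usual convention, just the viscous diffusion time; a minimal statement (with symbols defined here for the reader, not taken from the paper) is
\[
\tau_\nu \;=\; \frac{L^2}{\nu},
\]
where $L$ is a characteristic dimension of the melt (for example, the ampoule radius) and $\nu$ the kinematic viscosity. At low Rayleigh number the flow relaxes toward a new steady state after a step change in $g$ on roughly this time scale, whereas at higher Rayleigh number inertial effects shorten the response to increases in $g$, as noted in the abstract.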
An evaluation of the precision of fin ray, otolith, and scale age determinations for brook trout
Stolarski, J.T.; Hartman, K.J.
2008-01-01
The ages of brook trout Salvelinus fontinalis are typically estimated using scales despite a lack of research documenting the effectiveness of this technique. The use of scales is often preferred because it is nonlethal and is believed to require less effort than alternative methods. To evaluate the relative effectiveness of different age estimation methodologies for brook trout, we measured the precision and processing times of scale, sagittal otolith, and pectoral fin ray age estimation techniques. Three independent readers, age bias plots, coefficients of variation (CV = 100 x SD/mean), and percent agreement (PA) were used to measure within-reader, among-structure bias and within-structure, among-reader precision. Bias was generally minimal; however, the age estimates derived from scales tended to be lower than those derived from otoliths within older (age > 2) cohorts. Otolith, fin ray, and scale age estimates were within 1 year of each other for 95% of the comparisons. The measures of precision for scales (CV = 6.59; PA = 82.30) and otoliths (CV = 7.45; PA = 81.48) suggest higher agreement between these structures than with fin rays (CV = 11.30; PA = 65.84). The mean per-sample processing times were lower for scale (13.88 min) and otolith techniques (12.23 min) than for fin ray techniques (22.68 min). The comparable processing times of scales and otoliths contradict popular belief and are probably a result of the high proportion of regenerated scales within samples and the ability to infer age from whole (as opposed to sectioned) otoliths. This research suggests that while scales produce age estimates rivaling those of otoliths for younger (age > 3) cohorts, they may be biased within older cohorts and therefore should be used with caution. © Copyright by the American Fisheries Society 2008.
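A small sketch (not the authors' code) of the precision statistics named in the abstract: the coefficient of variation CV = 100 x SD/mean computed per fish across readers and then averaged, and percent agreement (PA) taken here as the share of fish on which all readers assign the same age.

```python
import numpy as np

def precision_stats(ages):
    """ages: array of shape (n_fish, n_readers) of age estimates for one structure."""
    ages = np.asarray(ages, dtype=float)
    mean = ages.mean(axis=1)
    sd = ages.std(axis=1, ddof=1)
    cv = 100.0 * np.where(mean > 0, sd / mean, 0.0)            # per-fish CV
    pa = 100.0 * np.mean([len(set(row)) == 1 for row in ages])  # all readers agree
    return cv.mean(), pa

# Example: three readers ageing five fish from scales (hypothetical data)
scale_ages = [[2, 2, 2], [3, 3, 2], [1, 1, 1], [4, 3, 4], [2, 2, 2]]
mean_cv, pa = precision_stats(scale_ages)
print(f"mean CV = {mean_cv:.2f}, PA = {pa:.1f}%")
```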
Linkages between terrestrial ecosystems and the atmosphere
NASA Technical Reports Server (NTRS)
Bretherton, Francis; Dickinson, Robert E.; Fung, Inez; Moore, Berrien, III; Prather, Michael; Running, Steven W.; Tiessen, Holm
1992-01-01
The primary research issue in understanding the role of terrestrial ecosystems in global change is analyzing the coupling between processes with vastly differing rates of change, from photosynthesis to community change. Representing this coupling in models is the central challenge to modeling the terrestrial biosphere as part of the earth system. Terrestrial ecosystems participate in climate and in the biogeochemical cycles on several temporal scales. Some of the carbon fixed by photosynthesis is incorporated into plant tissue and is delayed from returning to the atmosphere until it is oxidized by decomposition or fire. This slower (i.e., days to months) carbon loop through the terrestrial component of the carbon cycle, which is matched by cycles of nutrients required by plants and decomposers, affects the increasing trend in atmospheric CO2 concentration and imposes a seasonal cycle on that trend. Moreover, this cycle includes key controls over biogenic trace gas production. The structure of terrestrial ecosystems, which responds on even longer time scales (annual to century), is the integrated response to the biogeochemical and environmental constraints that develop over the intermediate time scale. The loop is closed back to the climate system since it is the structure of ecosystems, including species composition, that sets the terrestrial boundary condition in the climate system through modification of surface roughness, albedo, and, to a great extent, latent heat exchange. These separate temporal scales contain explicit feedback loops which may modify ecosystem dynamics and linkages between ecosystems and the atmosphere. The long-term change in climate, resulting from increased atmospheric concentrations of greenhouse gases (e.g., CO2, CH4, and nitrous oxide (N2O)) will further modify the global environment and potentially induce further ecosystem change. Modeling these interactions requires coupling successional models to biogeochemical models to physiological models that describe the exchange of water, energy, and biogenic trace gases between the vegetation and the atmosphere at fine time scales. There does not appear to be any obvious way to allow direct reciprocal coupling of atmospheric general circulation models (GCM's), which inherently run with fine time steps, to ecosystem or successional models, which have coarse temporal resolution, without the interposition of physiological canopy models. This is equally true for biogeochemical models of the exchange of carbon dioxide and trace gases. This coupling across time scales is nontrivial and sets the focus for the modeling strategy.
Ocean OSSEs: recent developments and future challenges
NASA Astrophysics Data System (ADS)
Kourafalou, V. H.
2012-12-01
Atmospheric OSSEs have had a much longer history of applications than OSSEs (and OSEs) in oceanography. Long-standing challenges include the presence of coastlines and steep bathymetric changes, which require the superposition of a wide variety of space and time scales, leading to difficulties in ocean observation and prediction. For instance, remote sensing is critical for providing a quasi-synoptic oceanographic view, but its coverage is limited to the ocean surface. Conversely, in situ measurements are capable of monitoring the entire water column, but at a single location and usually for a specific, short time. Despite these challenges, substantial progress has been made in recent years and international initiatives have provided successful OSSE/OSE examples and formed appropriate forums that helped define the future roadmap. These will be discussed, together with various challenges that require a community effort. Examples include: integrated (remote and in situ) observing system requirements for monitoring large scale and climatic changes, vs. short term variability that is particularly important on the regional and coastal spatial scales; satisfying the needs of both global and regional/coastal nature runs, from development to rigorous evaluation and under a clear definition of metrics; data assimilation in the presence of tides; estimation of real-time river discharges for Earth system modeling. An overview of oceanographic efforts that complement the standard OSSE methodology will also be given. These include ocean array design methods, such as representer-based analysis and adaptive sampling. Exciting new opportunities for both global and regional ocean OSSE/OSE studies have recently become possible with targeted periods of comprehensive data sets, such as the existing Gulf of Mexico observations from multiple sources in the aftermath of the DeepWater Horizon incident and the upcoming airborne AirSWOT, in preparation for the SWOT (Surface Water and Ocean Topography) mission.
Reconstructions of solar irradiance on centennial time scales
NASA Astrophysics Data System (ADS)
Krivova, Natalie; Solanki, Sami K.; Dasi Espuig, Maria; Kok Leng, Yeo
Solar irradiance is the main external source of energy to Earth's climate system. The record of direct measurements covering less than 40 years is too short to study solar influence on Earth's climate, which calls for reconstructions of solar irradiance into the past with the help of appropriate models. An obvious requirement for a competitive model is its ability to reproduce observed irradiance changes, and a successful example of such a model is presented by the SATIRE family of models. Like most state-of-the-art models, SATIRE assumes that irradiance changes on time scales longer than approximately a day are caused by the evolving distribution of dark and bright magnetic features on the solar surface. The surface coverage by such features as a function of time is derived from solar observations. The choice of these depends on the time scale in question. Most accurate is the version of the model that employs full-disc spatially-resolved solar magnetograms and reproduces over 90% of the measured irradiance variation, including the overall decreasing trend in the total solar irradiance over the last four cycles. Since such magnetograms are only available for about four decades, reconstructions on time scales of centuries have to rely on disc-integrated proxies of solar magnetic activity, such as sunspot areas and numbers. Employing a surface flux transport model and sunspot observations as input, we have been able to produce synthetic magnetograms since 1700. This improves the temporal resolution of the irradiance reconstructions on centennial time scales. The most critical aspect of such reconstructions remains the uncertainty in the magnitude of the secular change.
Brown, Karen A; Fenn, JoAnn P; Freeman, Vicki S; Fisher, Patrick B; Genzen, Jonathan R; Goodyear, Nancy; Houston, Mary Lunz; O'Brien, Mary Elizabeth; Tanabe, Patricia A
2015-01-01
Research in several professional fields has demonstrated that delays (time lapse) in taking certification examinations may result in poorer performance by examinees. Thirteen states and/or territories require licensure for laboratory personnel. A core component of licensure is passing a certification exam. Also, many facilities in states that do not require licensure require certification for employment or preferentially hire certified individuals. To analyze examinee performance on the American Society for Clinical Pathology (ASCP) Board of Certification (BOC) Medical Laboratory Scientist (MLS) and Medical Laboratory Technician (MLT) certification examinations to determine whether delays in taking the examination from the time of program completion are associated with poorer performance. We obtained examination data from April 2013 through December 2014 to look for changes in mean (SD) exam scaled scores and overall pass/fail rates. First-time examinees (MLS: n = 6037; MLT, n = 3920) were divided into 3-month categories based on the interval of time between date of program completion and taking the certification exam. We observed significant decreases in mean (SD) scaled scores and pass rates after the first quarter in MLS and MLT examinations for applicants who delayed taking their examination until the second, third, and fourth quarter after completing their training programs. Those who take the ASCP BOC MLS and MLT examinations are encouraged to do so shortly after completion of their educational training programs. Delays in taking an exam are generally not beneficial to the examinee and result in poorer performance on the exam. Copyright© by the American Society for Clinical Pathology (ASCP).
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
NASA Astrophysics Data System (ADS)
Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.
2018-05-01
Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
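As a simplified illustration (not the paper's variable-sized circular-area Monte Carlo method; the synthetic transect, grid spacing, and fitting range below are assumptions), a scaling exponent of the kind quoted above can be estimated by fitting a power law S(k) ~ k^beta to the spatial power spectrum of a one-dimensional transect of T or q:

```python
import numpy as np

def scaling_exponent(field, dx_km, kmin=None, kmax=None):
    """Return beta from a log-log fit to the periodogram of a 1-D transect."""
    field = np.asarray(field, dtype=float) - np.mean(field)
    n = field.size
    spec = np.abs(np.fft.rfft(field))**2 / n          # one-sided periodogram
    k = np.fft.rfftfreq(n, d=dx_km)                   # cycles per km
    keep = (k > 0)
    if kmin is not None: keep &= (k >= kmin)
    if kmax is not None: keep &= (k <= kmax)
    beta, _ = np.polyfit(np.log(k[keep]), np.log(spec[keep]), 1)
    return beta                                       # negative, e.g. near -2

# Synthetic red-noise transect sampled every 13.5 km; a random walk has a
# spectral slope close to -2, so the estimate should land near that value.
rng = np.random.default_rng(1)
transect = np.cumsum(rng.standard_normal(2048))
print(scaling_exponent(transect, dx_km=13.5))
```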
Energy-Efficient Scheduling for Hybrid Tasks in Control Devices for the Internet of Things
Gao, Zhigang; Wu, Yifan; Dai, Guojun; Xia, Haixia
2012-01-01
In control devices for the Internet of Things (IoT), energy is one of the critical restriction factors. Dynamic voltage scaling (DVS) has been proved to be an effective method for reducing the energy consumption of processors. This paper proposes an energy-efficient scheduling algorithm for IoT control devices with hard real-time control tasks (HRCTs) and soft real-time tasks (SRTs). The main contribution of this paper includes two parts. First, it builds the Hybrid tasks with multi-subtasks of different function Weight (HoW) task model for IoT control devices. HoW describes the structure of HRCTs and SRTs and their properties, such as deadlines, execution time, preemption properties, and energy-saving goals. Second, it presents the Hybrid Tasks' Dynamic Voltage Scaling (HTDVS) algorithm. HTDVS first sets the slowdown factors of subtasks while meeting the different real-time requirements of HRCTs and SRTs, and then dynamically reclaims, reserves, and reuses the slack time of the subtasks to meet their ideal energy-saving goals. Experimental results show HTDVS can reduce energy consumption by about 10%–80% while meeting the real-time requirements of HRCTs, HRCTs help to reduce the deadline miss ratio (DMR) of systems, and HTDVS has comparable performance with the greedy algorithm and is more effective at keeping the subtasks' ideal speeds. PMID:23112659
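For context only (this is not the HTDVS algorithm; the task set, the uniform-slowdown policy, and the quadratic energy-versus-frequency model are textbook simplifications), the sketch below shows how a slowdown factor trades processor speed for energy in a periodic hard real-time task set scheduled by EDF:

```python
# Static DVS baseline: run all tasks at f = U * fmax, where U is the total
# utilization; EDF remains feasible because utilization at the reduced speed
# becomes exactly 1. Dynamic power is modeled as ~ V^2 * f with V roughly
# proportional to f, so energy per cycle scales with the slowdown squared.
def static_slowdown(tasks):
    """tasks: list of (wcet_at_fmax, period). Returns the uniform slowdown factor."""
    utilization = sum(c / t for c, t in tasks)
    if utilization > 1.0:
        raise ValueError("task set not schedulable even at full speed")
    return utilization

tasks = [(2.0, 10.0), (1.0, 20.0), (3.0, 40.0)]   # hypothetical control tasks
s = static_slowdown(tasks)                        # 0.325 for this task set
energy_ratio = s**2                               # relative dynamic energy per cycle
print(f"slowdown = {s:.3f}, dynamic energy ~ {100 * energy_ratio:.1f}% of full speed")
```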
Direct measurement of local material properties within living embryonic tissues
NASA Astrophysics Data System (ADS)
Serwane, Friedhelm; Mongera, Alessandro; Rowghanian, Payam; Kealhofer, David; Lucio, Adam; Hockenbery, Zachary; Campàs, Otger
The shaping of biological matter requires the control of its mechanical properties across multiple scales, ranging from single molecules to cells and tissues. Despite their relevance, measurements of the mechanical properties of sub-cellular, cellular and supra-cellular structures within living embryos pose severe challenges to existing techniques. We have developed a technique that uses magnetic droplets to measure the mechanical properties of complex fluids, including in situ and in vivo measurements within living embryos, across multiple length and time scales. By actuating the droplets with magnetic fields and recording their deformation we probe the local mechanical properties, at any length scale we choose by varying the droplets' diameter. We use the technique to determine the subcellular mechanics of individual blastomeres of zebrafish embryos, and bridge the gap to the tissue scale by measuring the local viscosity and elasticity of zebrafish embryonic tissues. Using this technique, we show that embryonic zebrafish tissues are viscoelastic with a fluid-like behavior at long time scales. This technique will enable mechanobiology and mechano-transduction studies in vivo, including the study of diseases correlated with tissue stiffness, such as cancer.
Intelligent control of neurosurgical robot MM-3 using dynamic motion scaling.
Ko, Sunho; Nakazawa, Atsushi; Kurose, Yusuke; Harada, Kanako; Mitsuishi, Mamoru; Sora, Shigeo; Shono, Naoyuki; Nakatomi, Hirofumi; Saito, Nobuhito; Morita, Akio
2017-05-01
OBJECTIVE Advanced and intelligent robotic control is necessary for neurosurgical robots, which require great accuracy and precision. In this article, the authors propose methods for dynamically and automatically controlling the motion-scaling ratio of a master-slave neurosurgical robotic system to reduce the task completion time. METHODS Three dynamic motion-scaling modes were proposed and compared with the conventional fixed motion-scaling mode. These 3 modes were defined as follows: 1) the distance between a target point and the tip of the slave manipulator, 2) the distance between the tips of the slave manipulators, and 3) the velocity of the master manipulator. Five test subjects, 2 of whom were neurosurgeons, sutured 0.3-mm artificial blood vessels using the MM-3 neurosurgical robot in each mode. RESULTS The task time, total path length, and helpfulness score were evaluated. Although no statistically significant differences were observed, the mode using the distance between the tips of the slave manipulators improves the suturing performance. CONCLUSIONS Dynamic motion scaling has great potential for the intelligent and accurate control of neurosurgical robots.
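Illustrative sketch only: the abstract does not give the mapping functions, so the linear interpolation below and its thresholds are hypothetical. The idea of dynamic motion scaling is that master motion is scaled down more aggressively (finer slave motion) as the instrument tip nears the target or as the surgeon moves slowly, and scaled less when the tip is far away or the master moves fast.

```python
import numpy as np

def scaling_ratio(metric, lo, hi, ratio_fine=0.05, ratio_coarse=0.3):
    """Map a control metric (e.g., tip-to-target distance in mm, or master
    velocity in mm/s) linearly onto a motion-scaling ratio in [fine, coarse]."""
    t = np.clip((metric - lo) / (hi - lo), 0.0, 1.0)
    return ratio_fine + t * (ratio_coarse - ratio_fine)

# Mode 1 analogue: distance between the target point and the slave tip (mm)
print(scaling_ratio(0.5, lo=0.0, hi=10.0))   # close to target -> fine scaling
print(scaling_ratio(9.0, lo=0.0, hi=10.0))   # far from target -> coarser scaling
```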
NASA Astrophysics Data System (ADS)
Kelling, S.
2017-12-01
The goal of Biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on Earth's natural systems, and for developing science-based environmental policies. To gain insight about species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers with NASA Earth imagery, using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales—from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.
NASA Technical Reports Server (NTRS)
Megie, G.; Chanin, M.-L.; Ehhalt, D.; Fraser, P.; Frederick, J. F.; Gille, J. C.; Mccormick, M. P.; Schoebert, M.; Bishop, L.; Bojkov, R. D.
1990-01-01
Measuring trends in ozone, and most other geophysical variables, requires that a small systematic change with time be determined from signals that have large periodic and aperiodic variations. Their time scales range from the day-to-day changes due to atmospheric motions, through seasonal and annual variations, to 11 year cycles resulting from changes in the sun's UV output. Because the magnitude of all of these variations is not well known and is highly variable, it is necessary to measure over more than one period of the variations to remove their effects; in practice this means a record spanning at least two 11 year sunspot cycles. Thus, the first requirement is for a long term data record. The second, related requirement is that the record be consistent. A third requirement is for reasonable global sampling, to ensure that the effects are representative of the entire Earth. The various observational methods relevant to trend detection are reviewed to characterize their quality and time and space coverage. Available data are then examined for long term trends or recent changes in ozone total content and vertical distribution, as well as related parameters such as stratospheric temperature, source gases and aerosols.
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.
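To make the scaling argument concrete, here is a minimal numerical sketch (the gamma-variate AIF shape, the parameter values, and the discretized convolution are assumptions, not taken from the study) of the standard FXL Tofts model, Ct(t) = Ktrans * integral of Cp(u) exp(-kep (t-u)) du with kep = Ktrans/ve. Multiplying the AIF amplitude by a constant rescales the fitted Ktrans and ve together, while kep, which is pinned by the shape of the tissue curve, stays essentially unchanged:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 6, 361)                     # minutes
dt = t[1] - t[0]
aif = 5.0 * (t**2) * np.exp(-t / 0.35)         # synthetic AIF (mM), hypothetical shape

def tofts(t, ktrans, ve, cp):
    """Discretized FXL Tofts convolution Ct = Ktrans * (Cp * exp(-kep t))."""
    kep = ktrans / ve
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: t.size] * dt

ct = tofts(t, 0.25, 0.30, aif)                 # "measured" tissue curve

for scale in (1.0, 2.0):                       # fit with true and amplitude-scaled AIF
    model = lambda tt, ktrans, ve: tofts(tt, ktrans, ve, scale * aif)
    (ktrans_fit, ve_fit), _ = curve_fit(model, t, ct, p0=(0.1, 0.2),
                                        bounds=([1e-3, 1e-3], [2.0, 1.0]))
    print(scale, round(ktrans_fit, 4), round(ve_fit, 4),
          "kep =", round(ktrans_fit / ve_fit, 4))   # kep identical for both scales
```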
Lift and Power Required for Flapping Wing Hovering Flight on Mars
NASA Astrophysics Data System (ADS)
Pohly, Jeremy; Sridhar, Madhu; Bluman, James; Kang, Chang-Kwon; Landrum, D. Brian; Fahimi, Farbod; Aono, Hikaru; Liu, Hao
2017-11-01
Achieving flight on Mars is challenging due to the ultra-low density atmosphere. Bio-inspired flapping motion can generate sufficient lift if bumblebee-inspired wings are scaled up between 2 and 4 times their nominal size. However, due to this scaling, the inertial power required to sustain hover increases and dominates over the aerodynamic power. Our results show that a torsional spring placed at the wing root can reduce the flapping power required for hover by efficiently storing and releasing energy while operating at its resonance frequency. The spring assisted reduction in flapping power is demonstrated with a well-validated, coupled Navier-Stokes and flight dynamics solver. The total power is reduced by 79%, whereas the flapping power is reduced by 98%. Such a reduction in power paves the way for an efficient, realizable micro air vehicle capable of vertical takeoff and landing as well as sustained flight on Mars. Alabama Space Grant Consortium Fellowship.
Simulated quantum computation of molecular energies.
Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin
2005-09-09
The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.
Precision agriculture in large-scale mechanized farming
USDA-ARS?s Scientific Manuscript database
Precision agriculture involves a great deal of technologies and requires additional investments of money and time, but it can be practiced at different levels depending on the specific field and crop conditions and the resources and technology services available to the farmer. If practiced properly,...
Agroforestry, climate change, and food security
USDA-ARS?s Scientific Manuscript database
Successfully addressing global climate change effects on agriculture will require a holistic, sustained approach incorporating a suite of strategies at multiple spatial scales and time horizons. In the USA of the 1930’s, bold and innovative leadership at high levels of government was needed to enact...
Where does fitness fit in theories of perception?
Anderson, Barton L
2015-12-01
Interface theory asserts that neither our perceptual experience of the world nor the scientific constructs used to describe the world are veridical. The primary argument used to uphold this claim is that (1) evolution is driven by a process of natural selection that favors fitness over veridicality, and (2) payoffs do not vary monotonically with truth. I argue that both the arguments used to bolster this claim and the conclusions derived from it are flawed. Interface theory assumes that perception evolved to directly track fitness but fails to consider the role of adaptation on ontogenetic time scales. I argue that the ubiquity of nonmonotonic payoff functions requires that (1) perception tracks "truth" for species that adapt on ontogenetic time scales and (2) that perception should be distinct from utility. These conditions are required to pursue an adaptive strategy to mitigate homeostatic imbalances. I also discuss issues with the interface metaphor, the particular formulation of veridicality that is considered, and the relationship of interface theory to the history of ideas on these topics.
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Clarke, Leon E.; Edmonds, James A.
Electrification plays a crucial role in cost-effective greenhouse gas emissions mitigation strategies. Such strategies in turn carry implications for financial capital markets. This paper explores the implication of climate mitigation policy for capital investment demands by the electric power sector on decade to century time scales. We go further to explore the implications of technology performance and the stringency of climate policy for capital investment demands by the power sector. Finally, we discuss the regional distribution of investment demands. We find that stabilizing GHG emissions will require additional investment in the electricity generation sector over and above investments that would be needed in the absence of climate policy, in the range of 16 to 29 Trillion US$ (60-110%) depending on the stringency of climate policy during the period 2015 to 2095 under default technology assumptions. This increase reflects the higher capital intensity of power systems that control emissions. Limits on the penetration of nuclear and carbon capture and storage technology could increase costs substantially. Energy efficiency improvements can reduce the investment requirement by 8 to 21 Trillion US$ (default technology assumptions), depending on climate policy scenario, with higher savings being obtained under the most stringent climate policy. The heaviest investments in power generation were observed in the China, India, SE Asia and Africa regions, with the latter three regions dominating in the second half of the 21st century.
2015-11-24
[Presentation slide fragments: spatial concerns (how well are gradients captured? resolution requirement) and spatial/temporal concerns (dispersion and dissipation error); single-mode and multiple-mode FFT convergence tests of the solution and its derivative for f(x) = sin(x), x ∈ [0, 2π], comparing CD02, CD04, and CD06 central-difference schemes.]
LIDAR and acoustics applications to ocean productivity
NASA Technical Reports Server (NTRS)
Collins, D. J.
1982-01-01
The requirements for the submersible, the instrumentation necessary to perform these measurements, and the optical and acoustical technology required to develop the ocean color scanner instrumentation are described. The development of a second generation ocean color scanner produced the need for coincident in situ scientific measurements which examine the primary productivity of the upper ocean on time and space scales which are large compared to the environmental scales. The vertical and horizontal variability of the biota, including the relationship between chlorophyll and primary productivity, the productivity of zooplankton, and the dynamic interaction between phytoplankton and zooplankton, and between these populations and the physical environment are investigated. A towed submersible will be constructed which accommodates both an underwater LIDAR instrument and a multifrequency sonar.
Shoreline Position Dynamics: Measurement and Analysis
NASA Astrophysics Data System (ADS)
Barton, C. C.; Rigling, B.; Hunter, N.; Tebbens, S. F.
2012-12-01
The dynamics of sandy shoreline position is a fundamental property of complex beach face processes and is characterized by the power scaling exponent. Spectral analysis was performed on the temporal position of four sandy shorelines extracted from four shore-perpendicular profiles, each resurveyed approximately seven times per year over twenty-seven years at the Field Research Facility (FRF) by the U.S. Army Corps of Engineers, located at Kitty Hawk, NC. The four shorelines we studied are mean-higher-high-water (MHHW), mean-high-water (MHW), mean-low-water (MLW), and mean-lower-low-water (MLLW), with elevations of 0.75 m, 0.65 m, -0.33 m, and -0.37 m respectively, relative to the NGVD29 geodetic datum. Spectral analysis used to quantify scaling exponents requires data evenly spaced in time. Our previous studies of shoreline dynamics used the Lomb Periodogram method for spectral analysis, which we now show does not return the correct scaling exponent for unevenly spaced data. New to this study is the use of slotted resampling and a linear predictor to construct an evenly spaced data set from an unevenly spaced one, which has been shown with synthetic data to return correct values of the scaling exponents. A periodogram linear regression (PLR) estimate is used to determine the scaling exponent β of the constructed evenly spaced time series. This study shows that sandy shoreline position exhibits nonlinear self-affine dynamics through time. The time series of each of the four shorelines has scaling exponents ranging as follows: MHHW, β = 1.3-2.2; MHW, β = 1.3-2.1; MLW, β = 1.2-1.6; and MLLW, β = 1.2-1.6. Time series with β greater than 1 are non-stationary (mean and standard deviation are not constant through time) and are increasingly internally correlated with increasing β. The range of scaling exponents of the MLW and MLLW shorelines, near β = 1.5, is indicative of a diffusion process. The range of scaling exponents for the MHW and MHHW shorelines indicates spatially variable dynamics higher on the beach face.
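As a rough illustration of the overall workflow (not the authors' procedure: plain linear interpolation stands in for their slotted resampling with a linear predictor, and the synthetic survey times and shoreline positions are assumptions), the sketch below resamples an unevenly surveyed series onto an even grid and estimates β from a log-log periodogram fit:

```python
import numpy as np

def plr_beta(t_irregular, position, n_even=512):
    """Periodogram linear regression on a series resampled to even spacing."""
    t_even = np.linspace(t_irregular.min(), t_irregular.max(), n_even)
    x = np.interp(t_even, t_irregular, position)        # stand-in resampling step
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))**2 / n_even
    f = np.fft.rfftfreq(n_even, d=t_even[1] - t_even[0])
    slope, _ = np.polyfit(np.log(f[1:]), np.log(spec[1:]), 1)
    return -slope        # convention here: S(f) ~ f**(-beta), so beta is positive

# Synthetic example: ~7 surveys per year over 27 years of a random-walk-like shoreline
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 27, 7 * 27))
pos = np.cumsum(rng.standard_normal(t.size))
print(plr_beta(t, pos))      # beta > 1 indicates a non-stationary, correlated series
```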
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
Diffuse pollution of soil and water: Long term trends at large scales?
NASA Astrophysics Data System (ADS)
Grathwohl, P.
2012-04-01
Industrialization and urbanization, which have increased pressure on the environment and degraded soil and water quality for more than a century, are still ongoing. The number of potential environmental contaminants detected in surface water and groundwater is continuously increasing, from classical industrial and agricultural chemicals to flame retardants, pharmaceuticals, and personal care products. While point sources of pollution can be managed in principle, diffuse pollution is only reversible at very long time scales, if at all. Compounds which were phased out many decades ago, such as PCBs or DDT, are still abundant in soils, sediments and biota. How diffuse pollution is processed at large scales in space (e.g. catchments) and time (centuries) is unknown. The field-scale relevance of processes well investigated at the laboratory scale (e.g. sorption/desorption and (bio)degradation kinetics) is not clear. Transport of compounds is often coupled to the water cycle, and in order to assess trends in diffuse pollution, detailed knowledge about the hydrology and the solute fluxes at the catchment scale is required (e.g. input/output fluxes, transformation rates at the field scale). This is also a prerequisite for assessing management options for reversal of adverse trends.
Consistency of near-death experience accounts over two decades: are reports embellished over time?
Greyson, Bruce
2007-06-01
"Near-death experiences," commonly reported after clinical death and resuscitation, may require intervention and, if reliable, may elucidate altered brain functioning under extreme stress. It has been speculated that accounts of near-death experiences are exaggerated over the years. The objective of this study was to test the reliability over two decades of accounts of near-death experiences. Seventy-two patients with near-death experience who had completed the NDE scale in the 1980s (63% of the original cohort still alive) completed the scale a second time, without reference to the original scale administration. The primary outcome was differences in NDE scale scores on the two administrations. The secondary outcome was the statistical association between differences in scores and years elapsed between the two administrations. Mean scores did not change significantly on the total NDE scale, its 4 factors, or its 16 items. Correlation coefficients between scores on the two administrations were significant at P<0.001 for the total NDE scale, for its 4 factors, and for its 16 items. Correlation coefficients between score changes and time elapsed between the two administrations were not significant for the total NDE scale, for its 4 factors, or for its 16 items. Contrary to expectation, accounts of near-death experiences, and particularly reports of their positive affect, were not embellished over a period of almost two decades. These data support the reliability of near-death experience accounts.
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Covering Resilience: A Recent Development for Binomial Checkpointing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walther, Andrea; Narayanan, Sri Hari Krishna
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss first numerical results.
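For background, the classical binomial checkpointing bound that the resilient extension builds on (Griewank's revolve result) is easy to state: with c checkpoint slots and at most r repeated forward sweeps, the largest number of time steps whose adjoint can be reversed is binomial(c + r, c). The sketch below illustrates only that bound; the failure-tolerant variant described in the paper is not reproduced here.

```python
from math import comb

def max_reversible_steps(checkpoints, repetitions):
    """Maximum number of forward steps reversible with the given resources."""
    return comb(checkpoints + repetitions, checkpoints)

def min_repetitions(checkpoints, n_steps):
    """Smallest number of forward-sweep repetitions needed to reverse n_steps."""
    r = 0
    while max_reversible_steps(checkpoints, r) < n_steps:
        r += 1
    return r

print(max_reversible_steps(10, 3))   # 286 steps with 10 checkpoints, 3 repetitions
print(min_repetitions(10, 10_000))   # repetitions needed for a 10,000-step adjoint
```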
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
NASA Astrophysics Data System (ADS)
Philipp, Andy; Kerl, Florian; Büttner, Uwe; Metzkes, Christine; Singer, Thomas; Wagner, Michael; Schütze, Niels
2016-05-01
In recent years, the Free State of Saxony (Eastern Germany) was repeatedly hit both by extensive riverine flooding and by flash flood events, emerging foremost from convective heavy rainfall. Especially after a couple of small-scale, yet disastrous events in 2010, preconditions, drivers, and methods for deriving flash flood related early warning products are investigated. This is to clarify the feasibility and the limits of envisaged early warning procedures for small catchments hit by flashy heavy rain events. Early warning about potentially flash flood prone situations (i.e., with a lead time suited to the reaction-time needs of the stakeholders involved in flood risk management) needs to take into account not only hydrological but also meteorological and communication issues. Therefore, we propose a threefold methodology to identify potential benefits and limitations in a real-world warning/reaction context. First, the user demands (with respect to desired/required warning products, preparation times, etc.) are investigated. Second, focusing on small catchments of some hundred square kilometers, two quantitative precipitation forecast (QPF) products are verified. Third, considering the user needs, as well as the input parameter uncertainty (i.e., foremost emerging from an uncertain QPF), a feasible, yet robust hydrological modeling approach is proposed on the basis of pilot studies, employing deterministic, data-driven, and simple scoring methods.
Medical physics staffing for radiation oncology: a decade of experience in Ontario, Canada.
Battista, Jerry J; Clark, Brenda G; Patterson, Michael S; Beaulieu, Luc; Sharpe, Michael B; Schreiner, L John; MacPherson, Miller S; Van Dyk, Jacob
2012-01-05
The January 2010 articles in The New York Times generated intense focus on patient safety in radiation treatment, with physics staffing identified frequently as a critical factor for consistent quality assurance. The purpose of this work is to review our experience with medical physics staffing, and to propose a transparent and flexible staffing algorithm for general use. Guided by documented times required per routine procedure, we have developed a robust algorithm to estimate physics staffing needs according to center-specific workload for medical physicists and associated support staff, in a manner we believe is adaptable to an evolving radiotherapy practice. We calculate requirements for each staffing type based on caseload, equipment inventory, quality assurance, educational programs, and administration. Average per-case staffing ratios were also determined for larger-scale human resource planning and used to model staffing needs for Ontario, Canada over the next 10 years. The workload specific algorithm was tested through a survey of Canadian cancer centers. For center-specific human resource planning, we propose a grid of coefficients addressing specific workload factors for each staff group. For larger scale forecasting of human resource requirements, values of 260, 700, 300, 600, 1200, and 2000 treated cases per full-time equivalent (FTE) were determined for medical physicists, physics assistants, dosimetrists, electronics technologists, mechanical technologists, and information technology specialists, respectively.
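The province-level ratios quoted above translate directly into a first-cut staffing estimate. The sketch below uses only those published cases-per-FTE values (the 3000-case program is an arbitrary example) and does not reproduce the paper's center-specific grid of workload coefficients.

```python
# Treated cases per full-time equivalent (FTE), as reported in the abstract.
CASES_PER_FTE = {
    "medical physicist": 260,
    "physics assistant": 700,
    "dosimetrist": 300,
    "electronics technologist": 600,
    "mechanical technologist": 1200,
    "information technology specialist": 2000,
}

def staffing_estimate(annual_treated_cases):
    """Return the FTE requirement for each staff group at the given caseload."""
    return {role: annual_treated_cases / ratio for role, ratio in CASES_PER_FTE.items()}

for role, fte in staffing_estimate(3000).items():   # e.g., a 3000-case program
    print(f"{role}: {fte:.1f} FTE")
```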
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
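As an intuition aid only (this toy push-gossip simulation is not one of the paper's three algorithms, and the spreading model is a deliberate simplification), the sketch below shows why the number of gossip cycles needed to propagate a piece of information, such as the identity of a failed process, grows roughly logarithmically with the number of processes:

```python
import random

def gossip_cycles_to_full_spread(n_procs, seed=0):
    """Count push-gossip cycles until every process knows about the failure."""
    rng = random.Random(seed)
    informed = {0}                      # process 0 detects the failure first
    cycles = 0
    while len(informed) < n_procs:
        new = set()
        for _ in informed:              # each informed process pushes to one random peer
            new.add(rng.randrange(n_procs))
        informed |= new
        cycles += 1
    return cycles

for n in (128, 1024, 8192, 65536):
    print(n, gossip_cycles_to_full_spread(n))   # cycle count grows ~ log(n)
```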
Improved Strength and Damage Modeling of Geologic Materials
NASA Astrophysics Data System (ADS)
Stewart, Sarah; Senft, Laurel
2007-06-01
Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The `ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large scale impact events. [1] Senft, L. E., Stewart, S. T. Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.
Dynamic Ocean Management Increases the Efficiency and Efficacy of Fisheries Management
NASA Astrophysics Data System (ADS)
Dunn, D. C.; Maxwell, S.; Boustany, A. M.; Halpin, P. N.
2016-12-01
In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. In this presentation, we attempt to address both the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism and social aggregations). Further, it can significantly improve the efficiency of management: as the resolution of the individual closures used increases (i.e., as the closures become more targeted) the percent of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. The coarser management measures (annual time-area closures and monthly full fishery closures) affected up to 4-5x the target catch and required 100-200x the time-area of the dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.
Floris, Patrick; Curtin, Sean; Kaisermayer, Christian; Lindeberg, Anna; Bones, Jonathan
2018-07-01
The compatibility of CHO cell culture medium formulations with all stages of the bioprocess must be evaluated through small-scale studies prior to scale-up for commercial manufacturing operations. Here, we describe the development of a bespoke small-scale device for assessing the compatibility of culture media with a widely implemented upstream viral clearance strategy, high-temperature short-time (HTST) treatment. The thermal stability of undefined medium formulations supplemented with soy hydrolysates was evaluated upon variations in critical HTST processing parameters, namely, holding times and temperatures. Prolonged holding times of 43 s at temperatures of 110 °C did not adversely impact medium quality while significant degradation was observed upon treatment at elevated temperatures (200 °C) for shorter time periods (11 s). The performance of the device was benchmarked against a commercially available mini-pilot HTST system upon treatment of identical formulations on both platforms. Processed medium samples were analyzed by untargeted LC-MS/MS for compositional profiling followed by chemometric evaluation, which confirmed the observed degradation effects caused by elevated holding temperatures but revealed comparable performance of our developed device with the commercial mini-pilot setup. The developed device can assist medium optimization activities by reducing volume requirements relative to commercially available mini-pilot instrumentation and by facilitating fast throughput evaluation of heat-induced effects on multiple medium lots.
Time dependent turbulence modeling and analytical theories of turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, R.
1993-01-01
By simplifying the direct interaction approximation (DIA) for turbulent shear flow, time dependent formulas are derived for the Reynolds stresses which can be included in two equation models. The Green's function is treated phenomenologically, however, following Smith and Yakhot, we insist on the short and long time limits required by DIA. For small strain rates, perturbative evaluation of the correlation function yields a time dependent theory which includes normal stress effects in simple shear flows. From this standpoint, the phenomenological Launder-Reece-Rodi model is obtained by replacing the Green's function by its long time limit. Eddy damping corrections to short time behavior initiate too quickly in this model; in contrast, the present theory exhibits strong suppression of eddy damping at short times. A time dependent theory for large strain rates is proposed in which large scales are governed by rapid distortion theory while small scales are governed by Kolmogorov inertial range dynamics. At short times and large strain rates, the theory closely matches rapid distortion theory, but at long times it relaxes to an eddy damping model.
Catchment dynamics and social response during flash floods
NASA Astrophysics Data System (ADS)
Creutin, J. D.; Lutoff, C.; Ruin, I.; Scolobig, A.; Créton-Cazanave, L.
2009-04-01
The objective of this study is to examine how the current techniques for flash-flood monitoring and forecasting can meet the requirements of the population at risk to evaluate the severity of the flood and anticipate its danger. To this end, we identify the social response time for different social actions in the course of two well studied flash flood events which occurred in France and Italy. We introduce a broad characterization of the event management activities into three types according to their main objective (information, organisation and protection). The activities are also classified into three other types according to the scale and nature of the human group involved (individuals, communities and institutions). The conclusions reached relate to i) the characterisation of the social responses according to watershed scale and to the information available, and ii) to the appropriateness of the existing surveillance and forecasting tools to support the social responses. Our results suggest that representing the dynamics of the social response with just one number representing the average time for warning a population is an oversimplification. It appears that the social response time exhibits a parallel with the hydrological response time, by diminishing in time with decreasing size of the relevant watershed. A second result is that the human groups have different capabilities of anticipation apparently based on the nature of information they use. Comparing watershed response times and social response times shows clearly that at scales of less than 100 km2, a number of actions were taken with response times comparable to the catchment response time. The implications for adapting the warning processes to social scales (individual or organisational scales) are considerable. At small scales and for the implied anticipation times, the reliable and high-resolution description of the actual rainfall field becomes the major source of information for decision-making processes such as deciding between evacuations or advising to stay home. This points to the need to improve the accuracy and quality control of real time radar rainfall data, especially for extreme flash flood generating storms.
An integrated assessment of location-dependent scaling for microalgae biofuel production facilities
Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...
2014-06-19
Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.
Environmental Games To Teach Concepts and Issues.
ERIC Educational Resources Information Center
Bromley, Gail
2000-01-01
Describes several games from various sources which can help in teaching about photosynthesis, pollution, pollination, plant parts, Earth history time-scale, biodiversity conservation, values, and communication. Requires little equipment and games are easy to organize and effective with various age groups ranging from primary to adult. (Author/YDS)
ERIC Educational Resources Information Center
Ackoff, Russell L.
1974-01-01
The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…
Strategic Planning Tools for Large-Scale Technology-Based Assessments
ERIC Educational Resources Information Center
Koomen, Marten; Zoanetti, Nathan
2018-01-01
Education systems are increasingly being called upon to implement new technology-based assessment systems that generate efficiencies, better meet changing stakeholder expectations, or fulfil new assessment purposes. These assessment systems require coordinated organisational effort to implement and can be expensive in time, skill and other…
Nutritional Systems Biology Modeling: From Molecular Mechanisms to Physiology
de Graaf, Albert A.; Freidig, Andreas P.; De Roos, Baukje; Jamshidi, Neema; Heinemann, Matthias; Rullmann, Johan A.C.; Hall, Kevin D.; Adiels, Martin; van Ommen, Ben
2009-01-01
The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales. PMID:19956660
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
NASA Astrophysics Data System (ADS)
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth in addition to fundamental physical and chemical constants.
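As a quick check of the arithmetic implied by the abstract above, a relative speed of about ten body lengths per second corresponds directly to the quoted locomotor time scale,

\[ \tau \approx \frac{L}{v_{\max}} = \frac{L}{10\,L\ \mathrm{s}^{-1}} = 0.1\ \mathrm{s}, \]

independent of the body length L, which is why the time scale comes out size-invariant.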
Isotope Mass Scaling of Turbulence and Transport
NASA Astrophysics Data System (ADS)
McKee, George; Yan, Zheng; Gohil, Punit; Luce, Tim; Rhodes, Terry
2017-10-01
The dependence of turbulence characteristics and transport scaling on the fuel ion mass has been investigated in a set of hydrogen (A = 1) and deuterium (A = 2) plasmas on DIII-D. The normalized energy confinement time (B·τE) is two times lower in hydrogen (H) plasmas compared to similar deuterium (D) plasmas. Dimensionless parameters other than ion mass (A), including ρ*, q95, Te/Ti, βN, ν*, and Mach number were maintained nearly fixed. Profiles of electron density, electron and ion temperature, and toroidal rotation were well matched. The normalized turbulence amplitude (ñ/n) is approximately twice as large in H as in D, which may partially explain the increased transport and reduced energy confinement time. Radial correlation lengths of low-wavenumber density turbulence in hydrogen are similar to or slightly larger than correlation lengths in the deuterium plasmas and generally scale with the ion gyroradius, which was maintained nearly fixed in this dimensionless scan. Predicting energy confinement in D-T burning plasmas requires an understanding of the large and beneficial isotope scaling of transport. Supported by USDOE under DE-FG02-08ER54999 and DE-FC02-04ER54698.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
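A minimal numerical sketch of the uncorrelated-samples argument above (function name and target values are illustrative assumptions, not taken from the paper): if successive samples are effectively uncorrelated, the relative standard error of the time-averaged discharge falls as 1/sqrt(N), so the exposure time needed for a target error can be estimated directly.

    def exposure_time_for_target(rel_std_instantaneous, dt_sample, target_rel_error):
        # number of effectively independent samples needed: (sigma_rel / target)^2
        n_required = (rel_std_instantaneous / target_rel_error) ** 2
        return n_required * dt_sample  # seconds of exposure, assuming uncorrelated samples

    # e.g. 10% instantaneous variability, 1 s sampling, 2% target error -> 25 samples -> 25 s
    print(exposure_time_for_target(0.10, 1.0, 0.02))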
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch; Institute of Geophysics, ETH Zurich; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
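A minimal sketch of the time-step bookkeeping behind LTS, under simplifying assumptions (1D element sizes, a scalar wave speed, illustrative names; not the authors' Newmark scheme): the CFL condition ties a single global explicit step to the smallest element, whereas LTS assigns each element a power-of-two refinement level so only the small elements take extra sub-steps.

    import math

    def global_cfl_dt(element_sizes, wave_speed, cfl=0.5):
        # one global step limited by the smallest element
        return cfl * min(element_sizes) / wave_speed

    def lts_levels(element_sizes, wave_speed, cfl=0.5):
        # element i takes 2**level_i sub-steps per coarse step of the largest element
        dt_coarse = cfl * max(element_sizes) / wave_speed
        return [max(0, math.ceil(math.log2(dt_coarse / (cfl * h / wave_speed))))
                for h in element_sizes]

    sizes = [1.0, 1.0, 0.01, 1.0]          # one locally refined element
    print(global_cfl_dt(sizes, 340.0))     # tiny global step forced by the 0.01 element
    print(lts_levels(sizes, 340.0))        # [0, 0, 7, 0]: only the small element sub-steps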
Modeling residence-time distribution in horizontal screw hydrolysis reactors
Sievers, David A.; Stickel, Jonathan J.
2017-10-12
The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
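A minimal sketch of how a Peclet number can be extracted from a measured RTD under the axial-dispersion picture (a common textbook approximation, not necessarily the method used in this paper; all values are illustrative): normalize the tracer curve, compute its mean and dimensionless variance, and use the near-plug-flow relation sigma_theta^2 ~ 2/Pe.

    import numpy as np

    def rtd_peclet(t, e_curve):
        dt = t[1] - t[0]
        e = e_curve / (e_curve.sum() * dt)                # normalize E(t)
        t_mean = (t * e).sum() * dt                       # mean residence time
        var = ((t - t_mean) ** 2 * e).sum() * dt          # variance of the RTD
        sigma_theta2 = var / t_mean ** 2                  # dimensionless variance
        return t_mean, 2.0 / sigma_theta2                 # Pe from sigma_theta^2 ~ 2/Pe

    t = np.linspace(0.01, 30.0, 600)                      # minutes
    e_curve = np.exp(-(t - 10.0) ** 2 / (2 * 1.5 ** 2))   # synthetic near-Gaussian tracer pulse
    print(rtd_peclet(t, e_curve))                         # mean ~10 min, Pe ~ 2*(10/1.5)^2 ~ 89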
Efficient hemodynamic event detection utilizing relational databases and wavelet analysis
NASA Technical Reports Server (NTRS)
Saeed, M.; Mark, R. G.
2001-01-01
Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
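A minimal sketch of the general idea described above (hypothetical table layout and a hand-rolled Haar transform; the paper's actual MySQL schema and wavelet basis are not reproduced here): store a few levels of wavelet coefficients per parameter trend so that multi-scale queries operate on compact descriptors rather than raw samples.

    import sqlite3
    import numpy as np

    def haar_coeffs(x, levels=3):
        """Return [detail_1, ..., detail_L, approx_L] for a length-2**k signal."""
        x = np.asarray(x, dtype=float)
        bands = []
        for _ in range(levels):
            approx = (x[0::2] + x[1::2]) / 2.0
            bands.append((x[0::2] - x[1::2]) / 2.0)   # detail coefficients at this scale
            x = approx
        bands.append(x)                               # coarsest approximation
        return bands

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE coeffs (trend_id TEXT, level INTEGER, idx INTEGER, value REAL)")
    trend = np.cumsum(np.random.randn(64))            # stand-in for a hemodynamic trend segment
    for level, band in enumerate(haar_coeffs(trend)):
        db.executemany("INSERT INTO coeffs VALUES (?, ?, ?, ?)",
                       [("hr-0001", level, i, float(v)) for i, v in enumerate(band)])
    # a query can now work on coarse levels only, without touching the raw series
    print(db.execute("SELECT COUNT(*) FROM coeffs WHERE level >= 2").fetchone())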
NASA Astrophysics Data System (ADS)
Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.
2012-07-01
Baseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km²). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ~10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index, likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MATLAB program to automate the calculations, which requires only daily rainfall and daily flow data as inputs.
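A minimal sketch in the spirit of sliding-minimum baseflow filters such as UKIH, using the daily time step mentioned for SARR; the rain-record checks that distinguish SARR are not reproduced here, and the block length is an illustrative assumption.

    import numpy as np

    def sliding_min_baseflow(daily_flow, block=5):
        """Lower envelope of a daily flow series from block minima joined by interpolation."""
        q = np.asarray(daily_flow, dtype=float)
        n_blocks = len(q) // block
        idx = [b * block + int(np.argmin(q[b * block:(b + 1) * block])) for b in range(n_blocks)]
        baseflow = np.interp(np.arange(len(q)), idx, q[idx])
        return np.minimum(baseflow, q)        # baseflow can never exceed total streamflow

    flow = np.array([1.0, 1.1, 4.0, 8.0, 3.0, 1.5, 1.2, 1.1, 1.0, 1.0])
    bf = sliding_min_baseflow(flow, block=5)
    print(bf.sum() / flow.sum())              # a simple baseflow index for the record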
Influence of spasticity on mobility and balance in persons with multiple sclerosis.
Sosnoff, Jacob J; Gappmaier, Eduard; Frame, Amy; Motl, Robert W
2011-09-01
Spasticity is a motor disorder characterized by a velocity-dependent increase in tonic stretch reflexes that presumably affects mobility and balance. This investigation examined the hypothesis that persons with multiple sclerosis (MS) who have spasticity of the lower legs would have more impairment of mobility and balance compared to those without spasticity. Participants were 34 ambulatory persons with a definite diagnosis of MS. The expanded disability status scale (EDSS) was used to characterize disability in the study sample. All participants underwent measurements of spasticity in the gastroc-soleus muscles of both legs (modified Ashworth scale), walking speed (timed 25-foot walk), mobility (Timed Up and Go), walking endurance (6-minute walk test), self-reported impact of MS on walking ability (Multiple Sclerosis Walking Scale-12), and balance (Berg Balance Test and Activities-specific Balance Confidence Scale). Fifteen participants had spasticity of the gastroc-soleus muscles based on modified Ashworth scale scores. The spasticity group had higher median EDSS scores, indicating greater disability (P=0.03). Mobility and balance were significantly more impaired in the group with spasticity compared to the group without spasticity: timed 25-foot walk (P = 0.02, d = -0.74), Timed Up and Go (P = 0.01, d = -0.84), 6-minute walk test (P < 0.01, d = 1.03), Multiple Sclerosis Walking Scale-12 (P = 0.04, d = -0.76), Berg Balance Test (P = 0.02, d = -0.84) and Activities-specific Balance Confidence Scale (P = 0.04, d = -0.59). Spasticity in the gastroc-soleus muscles appears to have a negative effect on mobility and balance in persons with MS. The relationship between spasticity and disability in persons with MS requires further exploration.
Yu, Wenya; Lv, Yipeng; Hu, Chaoqun; Liu, Xu; Chen, Haiping; Xue, Chen; Zhang, Lulu
2018-01-01
Emergency medical systems for mass casualty incidents (EMS-MCIs) are a global issue. However, such studies are extremely scarce in China, which cannot meet the requirement for a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed as adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor and the prehospital time was too long. Mortality declined significantly when increasing ambulances and improving the efficiency of organization and command; triage and on-site first-aid times were shortened by increasing the availability of emergency medical staff. The effect was the most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core to decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the premise.
Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stenclik, Derek; Denholm, Paul; Chalamala, Babu
For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
Population dynamics in an intermittent refuge
NASA Astrophysics Data System (ADS)
Colombo, E. H.; Anteneodo, C.
2016-10-01
Population dynamics is constrained by the environment, which needs to obey certain conditions to support population growth. We consider a standard model for the evolution of a single species population density, which includes reproduction, competition for resources, and spatial spreading, while subject to an external harmful effect. The habitat is spatially heterogeneous, with a refuge where the population can be protected. Temporal variability is introduced by the intermittent character of the refuge. This scenario can apply to a wide range of situations, from a laboratory setting where bacteria can be protected by a blinking mask from ultraviolet radiation, to large-scale ecosystems, like a marine reserve where there can be seasonal fishing prohibitions. Using analytical and numerical tools, we investigate the asymptotic behavior of the total population as a function of the size and characteristic time scales of the refuge. We obtain expressions for the minimal size required for population survival, in the slow and fast time scale limits.
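One plausible concrete form of the "standard model" sketched in the abstract above (an assumed illustration, not an equation quoted from the paper) is a Fisher-KPP-type equation with a harmful term that vanishes inside the refuge whenever the refuge is active,

\[ \partial_t u = D\,\partial_x^2 u + a\,u - b\,u^2 - \gamma(x,t)\,u, \]

where u(x,t) is the population density, D the spreading coefficient, a and b the reproduction and competition rates, and \gamma(x,t) the external harmful effect, switched off inside the refuge according to its intermittent on/off cycle.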
Scale models: A proven cost-effective tool for outage planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, R.; Segroves, R.
1995-03-01
As generation costs for operating nuclear stations have risen, more nuclear utilities have initiated efforts to improve cost effectiveness. Nuclear plant owners are also being challenged with lower radiation exposure limits and newly revised radiation protection regulations (10 CFR 20), which place further stress on their budgets. As source term reduction activities continue to lower radiation fields, reducing the amount of time spent in radiation fields becomes one of the most cost-effective ways of reducing radiation exposure. An effective approach for minimizing time spent in radiation areas is to use a physical scale model for worker orientation planning and monitoring maintenance, modifications, and outage activities. To meet the challenge of continued reduction in the annual cumulative radiation exposures, new cost-effective tools are required. One field-tested and proven tool is the physical scale model.
On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains
NASA Astrophysics Data System (ADS)
de Wekker, Stephan F. J.; Whiteman, C. David
2006-06-01
Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
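In equation form, the exponential fit described above can be written as

\[ \Delta\theta(t) = \Delta\theta_{\mathrm{night}}\left(1 - e^{-t/\tau}\right), \qquad \Delta\theta(\tau) = \left(1 - e^{-1}\right)\Delta\theta_{\mathrm{night}} \approx 0.632\,\Delta\theta_{\mathrm{night}}, \]

so the time constant \tau (3-8 h in these soundings) is the time at which 63.2% of the total nighttime cooling has been reached; the symbol \Delta\theta is used here only for illustration.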
Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R
2017-01-21
The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
NASA Astrophysics Data System (ADS)
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
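A minimal sketch of the Z-order (Morton) key idea mentioned above, under the usual bit-interleaving convention (function name and payload are illustrative): each octree cell gets an integer key, so nodes can live in a hash table with no parent/child pointers.

    def morton3d(x, y, z, bits=10):
        """Interleave the bits of integer cell coordinates into a single Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)        # x bits go to positions 0, 3, 6, ...
            key |= ((y >> i) & 1) << (3 * i + 1)    # y bits to 1, 4, 7, ...
            key |= ((z >> i) & 1) << (3 * i + 2)    # z bits to 2, 5, 8, ...
        return key

    # pointerless octree: a hash table keyed by Morton code
    octree = {morton3d(3, 5, 1): "leaf payload"}
    print(morton3d(3, 5, 1))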
Video-Game-Like Engine for Depicting Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Upchurch, Paul R.
2009-01-01
GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OsX, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations, which are obscured by turbulence, making it impossible to identify them. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and it requires less measurement time compared to other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed such as parametric studies.
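A minimal sketch of the basic operation behind the ETF method, non-causal (zero-phase) low-pass filtering of a point velocity record; the filter order, cut-off frequency, and sampling frequency below are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 1000.0                                    # sampling frequency of the record, Hz (assumed)
    fc = 20.0                                      # cut-off separating large scales from turbulence (assumed)
    b, a = butter(4, fc / (fs / 2.0))              # 4th-order Butterworth low-pass
    t = np.arange(0.0, 2.0, 1.0 / fs)
    u = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)   # slow oscillation + "turbulence"
    u_large_scale = filtfilt(b, a, u)              # forward-backward filtering: no phase lag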
Collection and processing of data from a phase-coherent meteor radar
NASA Technical Reports Server (NTRS)
Backof, C. A., Jr.; Bowhill, S. A.
1974-01-01
An analysis of the measurement accuracy requirement of a high resolution meteor radar for observing short period, atmospheric waves is presented, and a system which satisfies the requirements is described. A medium scale, real time computer is programmed to perform all echo recognition and coordinate measurement functions. The measurement algorithms are exercised on noisy data generated by a program which simulates the hardware system, in order to find the effects of noise on the measurement accuracies.
A computational theory of visual receptive fields.
Lindeberg, Tony
2013-12-01
A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
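A minimal sketch of one of the idealized receptive-field families referred to above, a first-order Gaussian derivative at spatial scale sigma (parameter values are illustrative); convolving an image row with this kernel gives the kind of edge-sensitive, simple-cell-like response discussed in the paper.

    import numpy as np

    def gaussian_derivative_1d(sigma, radius=None):
        """First derivative of a 1D Gaussian kernel at scale sigma."""
        radius = int(4 * sigma) if radius is None else radius
        x = np.arange(-radius, radius + 1, dtype=float)
        g = np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
        return (-x / sigma ** 2) * g               # d/dx of the Gaussian

    kernel = gaussian_derivative_1d(sigma=2.0)
    row_response = np.convolve(np.linspace(0, 1, 64), kernel, mode="same")  # response to a ramp edge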
A simple approximation for larval retention around reefs
NASA Astrophysics Data System (ADS)
Cetina-Heredia, Paulina; Connolly, Sean R.
2011-09-01
Estimating larval retention at individual reefs by local scale three-dimensional flows is a significant problem for understanding, and predicting, larval dispersal. Determining larval dispersal commonly involves the use of computationally demanding hydrodynamic models that are expensive to calibrate and validate and that must resolve reef wake eddies. This study models variation in larval retention times for a range of reef shapes and circulation regimes, using a reef-scale three-dimensional hydrodynamic model. It also explores how well larval retention time can be estimated based on the "Island Wake Parameter", a measure of the degree of flow turbulence in the wake of reefs that is a simple function of flow speed, reef dimension, and vertical diffusion. The mean residence times found in the present study (0.48-5.64 days) indicate substantial potential for self-recruitment of species whose larvae are passive, or weak swimmers, for the first several days after release. Results also reveal strong and significant relationships between the Island Wake Parameter and mean residence time, explaining 81-92% of the variability in retention among reefs across a range of unidirectional flow speeds and tidal regimes. These findings suggest that good estimates of larval retention may be obtained from relatively coarse-scale characteristics of the flow, and basic features of reef geomorphology. Such approximations may be a valuable tool for modeling connectivity and meta-population dynamics over large spatial scales, where explicitly characterizing fine-scale flows around reefs requires a prohibitive amount of computation and extensive model calibration.
NASA Astrophysics Data System (ADS)
Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu
Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. DRM systems use watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach is designed from a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize a human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate upon video processing attacks that commonly occur in HD quality videos to display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame rate change.
Short-Time Nonlinear Effects in the Exciton-Polariton System
NASA Astrophysics Data System (ADS)
Guevara, Cristi D.; Shipman, Stephen P.
2018-04-01
In the exciton-polariton system, a linear dispersive photon field is coupled to a nonlinear exciton field. Short-time analysis of the lossless system shows that, when the photon field is excited, the time required for that field to exhibit nonlinear effects is longer than the time required for the nonlinear Schrödinger equation, in which the photon field itself is nonlinear. When the initial condition is scaled by ε^α, it is found that the relative error committed by omitting the nonlinear term in the exciton-polariton system remains within ε for all times up to t = Cε^β, where β = (1 - α(p-1))/(p+2). This is in contrast to β = 1 - α(p-1) for the nonlinear Schrödinger equation. The result is proved for solutions in H^s(R^n) for s > n/2. Numerical computations indicate that the results are sharp and also hold in L^2(R^n).
USDA-ARS?s Scientific Manuscript database
Understanding agricultural effects on water quality in rivers and estuaries requires understanding of hydrometeorology and geochemical cycling at various scales over time. The USDA-ARS initiated a hydrologic research program at the Mahantango Creek Watershed (MCW) in 1968, a research watershed at t...
Understanding the stopover of migratory birds: a scale dependent approach
Frank R. Moore; Mark S. Woodrey; Jeffrey J. Buler; Stefan Woltmann; Ted R. Simons
2005-01-01
The development of comprehensive conservation strategies and management plans for migratory birds depends on understanding migrant-habitat relations throughout the annual cycle, including the time when migrants stopover en route. Yet, the complexity of migration makes the assessment of habitat requirements and development of a comprehensive...
DEMONSTRATION OF A MULTI-SCALE INTEGRATED MONITORING AND ASSESSMENT IN NY/NJ HARBOR
The Clean Water Act (CWA) requires states and tribes to assess the overall quality of their waters (Sec 305(b)), determine whether that quality is changing over time, identify problem areas and management actions necessary to resolve those problems, and evaluate the effectiveness...
Making Molecular Borromean Rings
ERIC Educational Resources Information Center
Pentecost, Cari D.; Tangchaivang, Nichol; Cantrill, Stuart J.; Chichak, Kelly S.; Peters, Andrea J.; Stoddart, Fraser J.
2007-01-01
A procedure that requires seven 4-hour blocks of time to allow undergraduate students to prepare the molecular Borromean rings (BRs) on a gram-scale in 90% yield is described. The experiment would serve as a nice capstone project to culminate any comprehensive organic laboratory course and expose students to fundamental concepts, symmetry point…
High resolution pollutant measurements in complex urban environments using mobile monitoring
Measuring air pollution in real-time using an instrumented vehicle platform has been an emerging strategy to resolve air pollution trends at a very fine spatial scale (10s of meters). Achieving second-by-second data representative of urban air quality trends requires advanced in...
Microscale and Compact Scale Chemistry in South Africa
ERIC Educational Resources Information Center
Taylor, Warwick
2011-01-01
Reduced costs and greater time efficiency are often quoted among the main benefits of microscale chemistry. Do these benefits outweigh some of the limitations and difficulties faced in terms of students needing to develop new manipulation skills, and teachers requiring training in terms of implementation and management? This article describes a…
Using Longitudinal Scales Assessment for Instrumental Music Students
ERIC Educational Resources Information Center
Simon, Samuel H.
2014-01-01
In music education, current assessment trends emphasize student reflection, tracking progress over time, and formative as well as summative measures. This view of assessment requires instrumental music educators to modernize their approaches without interfering with methods that have proven to be successful. To this end, the Longitudinal Scales…
ERIC Educational Resources Information Center
Guth, Douglas J.
2017-01-01
A community college's success hinges in large part on the effectiveness of its teaching faculty, no more so than in times of major organizational change. However, any large-scale foundational shift requires institutional buy-in, with the onus on leadership to create an environment where everyone is working together toward the same endpoint.…
Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in hig...
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move back to the original positions from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated to the network scale size.
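A minimal sketch of the spring-model idea described above, not the full LASM algorithm (the patch mechanisms and anchor handling are omitted, and all names are illustrative): each node is nudged by virtual spring forces toward positions consistent with the measured distances to its neighbors.

    import numpy as np

    def spring_relax(pos, neighbors, dists, k=0.1, iters=200):
        """pos: {node: 2D array}; neighbors[i], dists[i]: neighbor ids and measured ranges."""
        for _ in range(iters):
            for i in pos:
                force = np.zeros(2)
                for j, d_ij in zip(neighbors[i], dists[i]):
                    vec = pos[j] - pos[i]
                    r = np.linalg.norm(vec) + 1e-12
                    force += k * (r - d_ij) * (vec / r)   # stretched spring pulls, compressed spring pushes
                pos[i] = pos[i] + force
        return pos

    # two nodes measured to be 1.0 apart, started at the wrong separation
    pos = {0: np.array([0.0, 0.0]), 1: np.array([3.0, 0.0])}
    nbrs, d = {0: [1], 1: [0]}, {0: [1.0], 1: [1.0]}
    print(spring_relax(pos, nbrs, d))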
Structural Similitude and Scaling Laws
NASA Technical Reports Server (NTRS)
Simitses, George J.
1998-01-01
Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time consuming, and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large component testing is necessary in other industries as well. Shipbuilding, automobile and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built, which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
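Written schematically in notation chosen here for illustration, the relation derived in this paper and the earlier approximation it refines read

\[ C_{\mathrm{delay}}(t) = \frac{C_{\tau^{2}}(t)}{C_{\tau}(t)} \qquad \text{versus} \qquad C_{\mathrm{delay}}(t) \propto C_{\tau^{2}}(t), \]

where C_delay is the autocorrelation function of the photoexcitation-to-emission delay time, C_{\tau^2} that of the squared fluorescence lifetime, and C_\tau that of the fluorescence lifetime itself.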
NASA Astrophysics Data System (ADS)
Petruk, O.; Kopytko, B.
2016-11-01
Three approaches are considered to solve the equation which describes the time-dependent diffusive shock acceleration of test particles at non-relativistic shocks. At first, the solution of Drury for the particle distribution function at the shock is generalized to any relation between the acceleration time-scales upstream and downstream and to a time-dependent injection efficiency. Three alternative solutions for the spatial dependence of the distribution function are derived. Then, the two other approaches to solve the time-dependent equation are presented, one of which does not require the Laplace transform. At the end, our more general solution is discussed, with particular attention to the time-dependent injection in supernova remnants. It is shown that, compared to the case with the dominant upstream acceleration time-scale, the maximum momentum of accelerated particles shifts towards smaller momenta as the downstream acceleration time-scale increases. The time-dependent injection affects the shape of the particle spectrum. In particular, (I) the power-law index is not solely determined by the shock compression, in contrast to the stationary solution; (II) the larger the injection efficiency during the first decades after the supernova explosion, the harder the particle spectrum around the high-energy cutoff at later times. This is important, in particular, for the interpretation of the radio and gamma-ray observations of supernova remnants, as demonstrated on a number of examples.
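For reference, the stationary test-particle result alluded to above (a standard relation, not taken from this paper) ties the spectral index to the shock compression ratio r alone,

\[ f(p) \propto p^{-q}, \qquad q = \frac{3r}{r-1}, \]

so point (I) of the abstract is that, once the injection efficiency is time dependent, the index is no longer fixed by r alone.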
Extended-Range Forecasts at Climate Prediction Center: Current Status and Future Plans
NASA Astrophysics Data System (ADS)
Kumar, A.
2016-12-01
Motivated by a user need to provide forecast information on extended-range time-scales (i.e., weeks 2-4), in recent years Climate Prediction Center (CPC) has made considerable efforts towards developing and testing the feasibility for developing the required forecasts. The forecasts targeting this particular time-scale face a unique challenge in that while the forecast skill due to atmospheric initial conditions is small (because of rapid decay in the memory associated with the atmospheric initial conditions), short time averages for which forecasts are made do not benefit from skill associated with anomalous boundary conditions either. Despite these challenges, CPC has embarked on providing an experimental outlook for weeks 3-4 average. The talk will summarize the current status of CPC's current suite of extended-range forecast products, and further, will discuss some future plans.
Exploring Cloud Computing for Large-scale Scientific Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Guang; Han, Binh; Yin, Jian
This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.
The Work Disability Functional Assessment Battery (WD-FAB): Feasibility and Psychometric Properties
Meterko, Mark; Marfeo, Elizabeth E.; McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Rasch, Elizabeth K; Brandt, Diane E.; Chan, Leighton
2015-01-01
Objectives To assess the feasibility and psychometric properties of eight scales covering two domains of the newly developed Work Disability Functional Assessment Battery (WD-FAB): physical function (PF) and behavioral health (BH) function. Design Cross-sectional. Setting Community. Participants Adults unable to work due to a physical (n=497) or mental (n=476) disability. Interventions None. Main Outcome Measures Each disability group responded to a survey consisting of the relevant WD-FAB scales and existing measures of established validity. The WD-FAB scales were evaluated with regard to data quality (score distribution; percent “I don’t know” responses), efficiency of administration (number of items required to achieve reliability criterion; time required to complete the scale) by computerized adaptive testing (CAT), and measurement accuracy as tested by person fit. Construct validity was assessed by examining both convergent and discriminant correlations between the WD-FAB scales and scores on same-domain and cross-domain established measures. Results Data quality was good and CAT efficiency was high across both WD-FAB domains. Measurement accuracy was very good for the PF scales; BH scales demonstrated more variability. Construct validity correlations, both convergent and divergent, between all WD-FAB scales and established measures were in the expected direction and range of magnitude. Conclusions The data quality, CAT efficacy, person fit and construct validity of the WD-FAB scales were well supported and suggest that the WD-FAB could be used to assess physical and behavioral health function related to work disability. Variation in scale performance suggests the need for future work on item replenishment and refinement, particularly regarding the Self-Efficacy scale. PMID:25528263
Computational aerodynamics development and outlook /Dryden Lecture in Research for 1979/
NASA Technical Reports Server (NTRS)
Chapman, D. R.
1979-01-01
Some past developments and current examples of computational aerodynamics are briefly reviewed. An assessment is made of the requirements on future computer memory and speed imposed by advanced numerical simulations, giving emphasis to the Reynolds averaged Navier-Stokes equations and to turbulent eddy simulations. Experimental scales of turbulence structure are used to determine the mesh spacings required to adequately resolve turbulent energy and shear. Assessment also is made of the changing market environment for developing future large computers, and of the projections of micro-electronics memory and logic technology that affect future computer capability. From the two assessments, estimates are formed of the future time scale in which various advanced types of aerodynamic flow simulations could become feasible. Areas of research judged especially relevant to future developments are noted.
NASA Astrophysics Data System (ADS)
Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia
2014-05-01
One of the achievements of recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals, and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question whether using 1D (traverse) or 2D (surface) datasets exploited from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated on randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration is a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on time-scales retrieved from such models may thus be significant. Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e., selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
Zhang, Duan Z.; Padrino, Juan C.
2017-06-01
The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x t^(-1/4) rather than x t^(-1/2) as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.
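To make the contrast between the two similarity variables concrete, here is a minimal sketch of the scaling argument (the notation and the erfc form are introduced for this note, not quoted from the paper): classical Fickian transport collapses profiles in x t^(-1/2), while channel-limited transfer into the pockets at early times gives subdiffusive spreading and a collapse in x t^(-1/4).

```latex
\[
  \text{Fickian (late times):}\qquad
  c(x,t) = c_0\,\operatorname{erfc}\!\left(\frac{x}{2\sqrt{D_{\mathrm{eff}}\,t}}\right)
  \;\Longrightarrow\; c = F\!\left(x\,t^{-1/2}\right),
\]
\[
  \text{channel-limited (early times):}\qquad
  \langle x^{2} \rangle \propto t^{1/2}
  \;\Longrightarrow\; c = G\!\left(x\,t^{-1/4}\right).
\]
```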
On the energy budget in the current disruption region. [of geomagnetic tail
NASA Technical Reports Server (NTRS)
Hesse, Michael; Birn, Joachim
1993-01-01
This study investigates the energy budget in the current disruption region of the magnetotail, coincident with a pre-onset thin current sheet, around substorm onset time, using published observational data and theoretical estimates. We find that the current disruption/dipolarization process typically requires energy inflow into the primary disruption region. The disruption/dipolarization process is therefore endoenergetic, i.e., it requires energy input to operate. We therefore argue that some other, simultaneously operating process, possibly a large-scale magnetotail instability, is required to provide the necessary energy input into the current disruption region.
Zan, Pengfei; Wu, Zhong; Yu, Xiao; Fan, Lin; Xu, Tianyang; Li, Guodong
2016-03-01
During total knee arthroplasty (TKA), surgical exposure requires mobilization of the patella. With this trial, we intended to investigate the effect of patella eversion on clinical outcome measures in simultaneous bilateral TKA. We prospectively enrolled 44 patients (88 knees) from April 2008 to June 2014. One knee was operated with patella eversion (group A) and the other with patella lateral retraction (group B), assigned randomly. Follow-up results, including the operation time, complications, and the time of achieving straight leg raise (SLR) and 90° knee flexion, were recorded. The data of range of motion (ROM) and Visual Analogue Scale score were collected separately at 7 days, 3 months, 6 months, and 1 year postoperatively. The time of achieving SLR was 2.7 ± 0.8 days in group A and 2.1 ± 0.7 days in group B, which were significantly different (P = .032). A significant difference was found in active and passive ROM during the follow-up times between groups A and B, except the passive ROM at 6 months postoperatively. No significant difference was found in operation time, complications, patella baja or tilt, time of achieving 90° knee flexion, or Visual Analogue Scale score during the follow-up times. Patellar eversion was adverse to early knee function recovery after TKA; it would delay the time of achieving SLR and decrease the passive and active ROM. In addition, more carefully and scientifically designed randomized controlled trials are still required to further prove the claim. Copyright © 2016 Elsevier Inc. All rights reserved.
Garra, Gregory; Singer, Adam J; Bamber, Danny; Chohan, Jasmine; Troxell, Regina; Thode, Henry C
2009-04-01
Ingestion of diatrizoate meglumine before abdominal computed tomography (CT) is time-consuming. We hypothesized that pretreatment with metoclopramide or ondansetron would result in faster ingestion of diatrizoate meglumine than placebo. The study was a double-blind, randomized controlled trial on adults requiring oral contrast abdominal CT. Patients were randomized to placebo, metoclopramide 10 mg, or ondansetron 4 mg intravenously 15 minutes before ingesting 2 L of diatrizoate meglumine. The primary outcome was time to complete diatrizoate meglumine ingestion. Secondary outcome measures included volume of diatrizoate meglumine ingested, 100-mm visual analog scale for nausea at 15-minute intervals, time to CT, vomiting, and use of rescue antiemetics. The study was powered to detect a 60-minute difference in diatrizoate meglumine ingestion time between saline and medication groups. One hundred six patients were randomized: placebo (36), metoclopramide (35), and ondansetron (35). Groups were similar in baseline characteristics. Median (interquartile range) times for diatrizoate meglumine ingestion were placebo 109 minutes (82 to 135 minutes); metoclopramide 105 minutes (75 to 135 minutes); and ondansetron 110 minutes (79 to 140 minutes) (P=.67). Vomiting was less frequent with metoclopramide (3%) than placebo (18%) or ondansetron (9%) (P=.11). The visual analog scale for nausea at each point was not significantly different between groups (P=.11). The need for rescue antiemetics was lowest for metoclopramide (3%) compared with placebo (27%) and ondansetron (12%) (P=.02). Pretreatment with ondansetron or metoclopramide does not reduce oral contrast solution ingestion time.
Solving large scale structure in ten easy steps with COLA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
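A compact way to see the idea, written here as a hedged sketch (notation introduced for this note, not quoted from the paper): each particle's trajectory is split into its LPT part plus a residual, and the N-body integrator only has to evolve the residual.

```latex
\[
  \mathbf{x}(t) \;=\; \mathbf{x}_{\mathrm{LPT}}(t) \;+\; \delta\mathbf{x}(t),
  \qquad
  \frac{d^{2}\,\delta\mathbf{x}}{dt^{2}}
  \;=\; -\nabla\Phi\!\left(\mathbf{x}\right)
        \;-\; \frac{d^{2}\mathbf{x}_{\mathrm{LPT}}}{dt^{2}} .
\]
```

Because the LPT trajectory already carries the large-scale growth, only a handful of time steps are needed for the residual, which is consistent with the 10-timestep example quoted above.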
City-scale expansion of human thermoregulatory costs.
Hill, Richard W; Muhich, Timothy E; Humphries, Murray M
2013-01-01
The physiological maintenance of a stable internal temperature by mammals and birds - the phenomenon termed homeothermy - is well known to be energetically expensive. The annual energy requirements of free-living mammals and birds are estimated to be 15-30 times higher than those of similar-size ectothermic vertebrates like lizards. Contemporary humans also use energy to accomplish thermoregulation. They are unique, however, in having shifted thermoregulatory control from the body to the occupied environment, with most people living in cities in dwellings that are temperature-regulated by furnaces and air conditioners powered by exogenous energy sources. The energetic implications of this strategy remain poorly defined. Here we comparatively quantify energy costs in cities, dwellings, and individual human bodies. Thermoregulation persists as a major driver of energy expenditure across these three scales, resulting in energy-versus-ambient-temperature relationships remarkably similar in shape. Incredibly, despite the many and diversified uses of network-delivered energy in modern societies, the energy requirements of six North American cities are as temperature-dependent as the energy requirements of isolated, individual homeotherms. However, the annual per-person energy cost of exogenously powered thermoregulation in cities and dwellings is 9-28 times higher than the cost of endogenous, metabolic thermoregulation of the human body. Shifting the locus of thermoregulatory control from the body to the dwelling achieves climate-independent thermal comfort. However, in an era of amplifying climate change driven by the carbon footprint of humanity, we must acknowledge the energetic extravagance of contemporary, city-scale thermoregulation, which prioritizes heat production over heat conservation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin
Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1% both in real and redshift space for k = 1 h Mpc^(-1), while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc^(-1)). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
76 FR 3485 - Required Scale Tests
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
...-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration, USDA. ACTION... rule requires that regulated entities complete the first of the two scale tests between January 1 and June 30 of the calendar year. The remaining scale test must be completed between July 1 and December 31...
Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1998-01-01
The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.
Volatile element chemistry in the solar nebula - Na, K, F, Cl, Br, and P
NASA Technical Reports Server (NTRS)
Fegley, B., Jr.; Lewis, J. S.
1980-01-01
The results of the most extensive set to date of thermodynamic calculations on the equilibrium chemistry of several hundred compounds of the elements Na, K, F, Cl, Br, and P in a solar composition system are reported. Two extreme models of accretion are investigated. In one extreme, complete chemical equilibrium between condensates and gases is maintained because the time scale for accretion is long compared to the time scale for cooling or dissipation of the nebula. Condensates formed in this homogeneous accretion model include several phases, such as whitlockite, alkali feldspars, and apatite minerals, which are found in chondrites. In the other extreme, complete isolation of newly formed condensates from prior condensates and gases occurs because the time scale for accretion is short relative to the time required for nebular cooling or dissipation. The condensates produced in this heterogeneous accretion model include alkali sulfides, ammonium halides, and ammonium phosphates. None of these phases are found in chondrites. Available observations of the Na, K, F, Cl, Br, and P elemental abundances in the terrestrial planets are found to be compatible with the predictions of the homogeneous accretion model.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.
Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J
2016-05-26
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.
Stable functional networks exhibit consistent timing in the human brain.
Chapeton, Julio I; Inati, Sara K; Zaghloul, Kareem A
2017-03-01
Despite many advances in the study of large-scale human functional networks, the question of timing, stability, and direction of communication between cortical regions has not been fully addressed. At the cellular level, neuronal communication occurs through axons and dendrites, and the time required for such communication is well defined and preserved. At larger spatial scales, however, the relationship between timing, direction, and communication between brain regions is less clear. Here, we use a measure of effective connectivity to identify connections between brain regions that exhibit communication with consistent timing. We hypothesized that if two brain regions are communicating, then knowledge of the activity in one region should allow an external observer to better predict activity in the other region, and that such communication involves a consistent time delay. We examine this question using intracranial electroencephalography captured from nine human participants with medically refractory epilepsy. We use a coupling measure based on time-lagged mutual information to identify effective connections between brain regions that exhibit a statistically significant increase in average mutual information at a consistent time delay. These identified connections result in sparse, directed functional networks that are stable over minutes, hours, and days. Notably, the time delays associated with these connections are also highly preserved over multiple time scales. We characterize the anatomic locations of these connections, and find that the propagation of activity exhibits a preferred posterior to anterior temporal lobe direction, consistent across participants. Moreover, networks constructed from connections that reliably exhibit consistent timing between anatomic regions demonstrate features of a small-world architecture, with many reliable connections between anatomically neighbouring regions and few long range connections. Together, our results demonstrate that cortical regions exhibit functional relationships with well-defined and consistent timing, and the stability of these relationships over multiple time scales suggests that these stable pathways may be reliably and repeatedly used for large-scale cortical communication. Published by Oxford University Press on behalf of the Guarantors of Brain 2017. This work is written by US Government employees and is in the public domain in the United States.
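To make the coupling measure concrete, here is a minimal, hedged sketch (not the authors' code; the histogram estimator, bin count, and toy signals are illustrative assumptions) of a time-lagged mutual information scan of the kind described above: compute MI between one signal and lagged copies of another, and take the lag that maximizes it as the candidate communication delay.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def lagged_mi(x, y, max_lag):
    """MI between x(t) and y(t + lag) for lag = 0 .. max_lag samples."""
    return np.array([mutual_information(x[:len(x) - lag], y[lag:])
                     for lag in range(max_lag + 1)])

# Toy usage: y is a noisy, delayed copy of x; the MI peak recovers the delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
delay = 7
y = np.roll(x, delay) + 0.5 * rng.standard_normal(5000)
mi = lagged_mi(x, y, max_lag=20)
print("estimated delay (samples):", int(np.argmax(mi)))
```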
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences in deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means of providing early warning of rock or coal dynamical disasters, and its first step is microseismic event detection, although low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper presents permutation entropy and a support vector machine to detect low-SNR microseismic events. First, an extraction method for signal features based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by performing a multi-scale permutation entropy calculation for the collected vibration signals and constructing a feature vector set for the signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment proves that the different characteristics of the two can be fully expressed by using multi-scale permutation entropy. The detection model of microseismic events combined with the support vector machine, which has the features of high classification accuracy and a fast real-time algorithm, can meet the requirements of online, real-time extraction of microseismic events.
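The feature-extraction step can be illustrated with a short, hedged sketch (not the authors' implementation; the embedding order, scale range, and toy signals are assumptions, and the LS-SVM classifier they train on these features is omitted): coarse-grain the signal at several scales and compute the normalized permutation entropy at each scale to form the feature vector.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalized permutation entropy of a 1-D signal (Bandt-Pompe ordinal patterns)."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i:i + order * delay:delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(order)))

def multiscale_pe(x, max_scale=6, order=4):
    """Coarse-grain the signal at scales 1..max_scale and compute PE at each scale."""
    feats = []
    for s in range(1, max_scale + 1):
        m = len(x) // s
        coarse = x[:m * s].reshape(m, s).mean(axis=1)  # non-overlapping averages
        feats.append(permutation_entropy(coarse, order=order))
    return np.array(feats)

# Toy usage: white noise keeps PE high at all scales; a low-SNR "event" (noise plus
# a slow decaying oscillation) lowers PE at the coarser scales, which is the kind
# of contrast the feature vector exploits.
rng = np.random.default_rng(1)
t = np.arange(4096)
noise = rng.standard_normal(4096)
event = noise + 4.0 * np.exp(-t / 1500.0) * np.sin(2 * np.pi * t / 80.0)
print("noise PE by scale:", np.round(multiscale_pe(noise), 3))
print("event PE by scale:", np.round(multiscale_pe(event), 3))
```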
SIMSAT: An object oriented architecture for real-time satellite simulation
NASA Technical Reports Server (NTRS)
Williams, Adam P.
1993-01-01
Real-time satellite simulators are vital tools in the support of satellite missions. They are used in the testing of ground control systems, the training of operators, the validation of operational procedures, and the development of contingency plans. The simulators must provide high-fidelity modeling of the satellite, which requires detailed system information, much of which is not available until relatively near launch. The short time-scales and resulting high productivity required of such simulator developments culminate in the need for a reusable infrastructure which can be used as a basis for each simulator. This paper describes a major new simulation infrastructure package, the Software Infrastructure for Modelling Satellites (SIMSAT). It outlines the object oriented design methodology used, describes the resulting design, and discusses the advantages and disadvantages experienced in applying the methodology.
Corrections to the Shapiro Equation used to Predict Sweating and Water Requirements
2008-01-01
Nishi, Y., and A. P. Gagge. Effective temperature scale useful for hypobaric and hyperbaric environments. Aviat. Space Environ. Med. 48: 97-107, 1977. ... time series predictions of specific variables (35). Comparison of the original Shapiro equation predicting sweat loss and water requirements was ... [Figure 2: residual values plotted as "% off" (+, model underpredicts; -, model overpredicts)] It is clear from Figure 2's plot of the residual values (comparison ...
Method for determining how to operate and control wind turbine arrays in utility systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Javid, S.H.; Hauth, R.L.; Younkins, T.D.
1984-01-01
A method for determining how utility wind turbine arrays should be controlled and operated on the load frequency control time-scale is presented. Initial considerations for setting wind turbine control requirements are followed by a description of open loop operation and of closed loop and feed-forward wind turbine array control concepts. The impact of variations in array output on meeting minimum criteria is developed. The method for determining the required control functions is then presented and results are tabulated. (LEW)
The Algorithm Theoretical Basis Document for Level 1A Processing
NASA Technical Reports Server (NTRS)
Jester, Peggy L.; Hancock, David W., III
2012-01-01
The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
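For readers unfamiliar with this kind of Level 1A step, a minimal, hedged sketch of a counts-to-engineering-units conversion follows. The function, coefficient names, and numeric values are placeholders introduced here for illustration; the actual GLAS scale factors, biases, and coefficients are the ones defined in the ATBD itself.

```python
# Hedged illustration of the kind of conversion a Level 1A process performs.
import numpy as np

def counts_to_engineering(counts, scale, bias, coeffs=None):
    """Convert raw instrument counts to engineering units.

    counts : raw telemetry counts (integer array)
    scale, bias : linear calibration, eng = scale * counts + bias
    coeffs : optional polynomial applied afterwards (e.g., a thermistor curve)
    """
    eng = scale * np.asarray(counts, dtype=float) + bias
    if coeffs is not None:
        eng = np.polyval(coeffs, eng)  # highest-order coefficient first
    return eng

# Toy usage with made-up values: a 12-bit housekeeping temperature channel.
raw = np.array([1023, 2047, 3071])
temp_c = counts_to_engineering(raw, scale=0.05, bias=-50.0)
print(temp_c)
```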
Satellite voice broadcast. Volume 2: System study
NASA Technical Reports Server (NTRS)
Bachtell, E. E.; Bettadapur, S. S.; Coyner, J. V.; Farrell, C. E.
1985-01-01
The Technical Volume of the Satellite Broadcast System Study is presented. Designs are synthesized for direct sound broadcast satellite systems for HF-, VHF-, L-, and Ku-bands. Methods are developed and used to predict satellite weight, volume, and RF performance for the various concepts considered. Cost and schedule risk assessments are performed to predict time and cost required to implement selected concepts. Technology assessments and tradeoffs are made to identify critical enabling technologies that require development to bring technical risk to acceptable levels for full scale development.
Motor control by precisely timed spike patterns
Srivastava, Kyle H.; Holmes, Caroline M.; Vellema, Michiel; Pack, Andrea R.; Elemans, Coen P. H.; Nemenman, Ilya; Sober, Samuel J.
2017-01-01
A fundamental problem in neuroscience is understanding how sequences of action potentials (“spikes”) encode information about sensory signals and motor outputs. Although traditional theories assume that this information is conveyed by the total number of spikes fired within a specified time interval (spike rate), recent studies have shown that additional information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information in spike timing actually plays a role in brain function. By examining the activity of individual motor units (the muscle fibers innervated by a single motor neuron) and manipulating patterns of activation of these neurons, we provide both correlative and causal evidence that the nervous system uses millisecond-scale variations in the timing of spikes within multispike patterns to control a vertebrate behavior—namely, respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision. PMID:28100491
Wavelet Analysis of Turbulent Spots and Other Coherent Structures in Unsteady Transition
NASA Technical Reports Server (NTRS)
Lewalle, Jacques
1998-01-01
This is a secondary analysis of a portion of the Halstead data. The hot-film traces from an embedded stage of a low pressure turbine have been extensively analyzed by Halstead et al. In this project, wavelet analysis is used to develop the quantitative characterization of individual coherent structures in terms of size, amplitude, phase, convection speed, etc., as well as phase-averaged time scales. The purposes of the study are (1) to extract information about turbulent time scales for comparison with unsteady model results (e.g. k/epsilon). Phase-averaged maps of dominant time scales will be presented; and (2) to evaluate any differences between wake-induced and natural spots that might affect model performance. Preliminary results, subject to verification with data at higher frequency resolution, indicate that spot properties are independent of their phase relative to the wake footprints: therefore requirements for the physical content of models are kept relatively simple. Incidentally, we also observed that spot substructures can be traced over several stations; further study will examine their possible impact.
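As a rough companion to the wavelet-based time-scale extraction described above, here is a hedged sketch (not the study's code; the sampling rate, test trace, Morlet wavelet choice, and scale range are all assumptions) of pulling a locally dominant time scale out of a hot-film-like trace with a continuous wavelet transform.

```python
import numpy as np
import pywt

fs = 10_000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1.0 / fs)
# toy trace: background noise plus a short "spot-like" burst at 500 Hz
x = 0.2 * np.random.default_rng(2).standard_normal(t.size)
burst = (t > 0.08) & (t < 0.12)
x[burst] += np.sin(2 * np.pi * 500.0 * t[burst])

scales = np.arange(2, 128)
coef, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)

# dominant frequency (inverse time scale) at each instant, from the energy ridge
ridge = np.abs(coef).argmax(axis=0)
dominant_freq = freqs[ridge]
print("median dominant frequency in the burst: %.0f Hz"
      % np.median(dominant_freq[burst]))
```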
Dust Destruction in the ISM: A Re-Evaluation of Dust Lifetimes
NASA Technical Reports Server (NTRS)
Jones, A. P.; Nuth, J. A., III
2011-01-01
There is a long-standing conundrum in interstellar dust studies relating to the discrepancy between the time-scales for dust formation from evolved stars and the apparently more rapid destruction in supernova-generated shock waves. Aims. We re-examine some of the key issues relating to dust evolution and processing in the interstellar medium. Methods. We use recent and new constraints from observations, experiments, modelling and theory to re-evaluate dust formation in the interstellar medium (ISM). Results. We find that the discrepancy between the dust formation and destruction time-scales may not be as significant as has previously been assumed because of the very large uncertainties involved. Conclusions. The derived silicate dust lifetime could be compatible with its injection time-scale, given the inherent uncertainties in the dust lifetime calculation. The apparent need to re-form significant quantities of silicate dust in the tenuous interstellar medium may therefore not be a strong requirement. Carbonaceous matter, on the other hand, appears to be rapidly recycled in the ISM and, in contrast to silicates, there are viable mechanisms for its re-formation in the ISM.
Short and long-term energy intake patterns and their implications for human body weight regulation.
Chow, Carson C; Hall, Kevin D
2014-07-01
Adults consume millions of kilocalories over the course of a few years, but the typical weight gain amounts to only a few thousand kilocalories of stored energy. Furthermore, food intake is highly variable from day to day and yet body weight is remarkably stable. These facts have been used as evidence to support the hypothesis that human body weight is regulated by active control of food intake operating on both short and long time scales. Here, we demonstrate that active control of human food intake on short time scales is not required for body weight stability and that the current evidence for long term control of food intake is equivocal. To provide more data on this issue, we emphasize the urgent need for developing new methods for accurately measuring energy intake changes over long time scales. We propose that repeated body weight measurements can be used along with mathematical modeling to calculate long-term changes in energy intake and thereby quantify adherence to a diet intervention and provide dynamic feedback to individuals that seek to control their body weight. Published by Elsevier Inc.
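To illustrate the proposal of back-calculating intake from weight measurements, here is a minimal, hedged sketch assuming a linearized energy-balance model; the parameter values (an energy density rho and an expenditure slope epsilon) and the toy weight trajectory are illustrative assumptions, not the authors' published coefficients.

```python
import numpy as np

RHO = 7700.0      # kcal per kg of body-weight change (assumed energy density)
EPSILON = 22.0    # kcal/day per kg: expenditure change per kg gained (assumed)

def intake_change(weights_kg, days):
    """Average change in daily intake implied by a weight trajectory.

    Linearized energy balance about baseline: rho * dW/dt = dI - epsilon * (W - W0),
    so averaging over the interval gives
        dI ≈ rho * (W_end - W_start) / T + epsilon * mean(W - W_start).
    """
    w = np.asarray(weights_kg, dtype=float)
    T = days[-1] - days[0]
    dW = w[-1] - w[0]
    return RHO * dW / T + EPSILON * np.mean(w - w[0])

# Toy usage: a slow ~2 kg gain over 200 days implies only a small sustained surplus.
days = np.linspace(0, 200, 21)
weights = 80.0 + 2.0 * (1 - np.exp(-days / 120.0))
print("estimated intake change: %.0f kcal/day" % intake_change(weights, days))
```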
Time-Resolved Small-Angle X-ray Scattering Reveals Millisecond Transitions of a DNA Origami Switch.
Bruetzel, Linda K; Walker, Philipp U; Gerling, Thomas; Dietz, Hendrik; Lipfert, Jan
2018-04-11
Self-assembled DNA structures enable creation of specific shapes at the nanometer-micrometer scale with molecular resolution. The construction of functional DNA assemblies will likely require dynamic structures that can undergo controllable conformational changes. DNA devices based on shape complementary stacking interactions have been demonstrated to undergo reversible conformational changes triggered by changes in ionic environment or temperature. An experimentally unexplored aspect is how quickly conformational transitions of large synthetic DNA origami structures can actually occur. Here, we use time-resolved small-angle X-ray scattering to monitor large-scale conformational transitions of a two-state DNA origami switch in free solution. We show that the DNA device switches from its open to its closed conformation upon addition of MgCl2 in milliseconds, which is close to the theoretical diffusive speed limit. In contrast, measurements of the dimerization of DNA origami bricks reveal much slower and concentration-dependent assembly kinetics. DNA brick dimerization occurs on a time scale of minutes to hours, suggesting that the kinetics depend on local concentration and molecular alignment.
Towards a Millennial Time-scale Vertical Deformation Field in Taiwan
NASA Astrophysics Data System (ADS)
Bordovaos, P. A.; Johnson, K. M.
2015-12-01
To better understand the feedbacks between erosion and deformation in Taiwan, we need constraints on the millennial time-scale vertical field. Dense GPS and leveling data sets in Taiwan provide measurements of the present-day vertical deformation field over the entire island. However, it is unclear how much of this vertical field is transient (varying over the earthquake cycle) or steady (over millennial time scales). A deformation model is required to decouple transient from steady deformation. This study examines how the 82 mm/yr of convergence between the Eurasian plate and the Philippine Sea plate is distributed across the faults on Taiwan. We build a plate flexure model that consists of all known active faults and subduction zones cutting through an elastic plate supported by buoyancy. We use horizontal and vertical GPS data, leveling data, and geologic surface uplift rates with a Monte Carlo probabilistic inversion method to infer fault slip rates and locking depths on all faults. Using our model we examine how different fault geometries influence the estimates of the distribution of slip along faults and deformation patterns.
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
NASA Technical Reports Server (NTRS)
1976-01-01
A million-gallon-a-day sewage treatment plant in Huntington Beach, CA converts solid sewage to activated carbon, which then treats incoming waste water. The plant is scaled up 100 times from a mobile unit NASA installed a year ago; another 100-fold scale-up will be required if the technique is employed for widespread urban sewage treatment. This unique sewage plant employed a serendipitous outgrowth of a need to manufacture activated carbon for rocket engine insulation. The process already exceeds new Environmental Protection Agency standards, and its capital costs are about 25% lower than those of conventional secondary treatment plants.
Versatile, High Quality and Scalable Continuous Flow Production of Metal-Organic Frameworks
Rubio-Martinez, Marta; Batten, Michael P.; Polyzos, Anastasios; Carey, Keri-Constanti; Mardel, James I.; Lim, Kok-Seng; Hill, Matthew R.
2014-01-01
Further deployment of Metal-Organic Frameworks in applied settings requires their ready preparation at scale. Expansion of typical batch processes can lead to unsuccessful or low quality synthesis for some systems. Here we report how continuous flow chemistry can be adapted as a versatile route to a range of MOFs, by emulating conditions of lab-scale batch synthesis. This delivers ready synthesis of three different MOFs, with surface areas that closely match theoretical maxima, with production rates of 60 g/h at extremely high space-time yields. PMID:24962145
Late time neutrino masses, the LSND experiment, and the cosmic microwave background.
Chacko, Z; Hall, Lawrence J; Oliver, Steven J; Perelstein, Maxim
2005-03-25
Models with low-scale breaking of global symmetries in the neutrino sector provide an alternative to the seesaw mechanism for understanding why neutrinos are light. Such models can easily incorporate light sterile neutrinos required by the Liquid Scintillator Neutrino Detector experiment. Furthermore, the constraints on the sterile neutrino properties from nucleosynthesis and large-scale structure can be removed due to the nonconventional cosmological evolution of neutrino masses and densities. We present explicit, fully realistic supersymmetric models, and discuss the characteristic signatures predicted in the angular distributions of the cosmic microwave background.
Numerical Simulation of a High Mach Number Jet Flow
NASA Technical Reports Server (NTRS)
Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.
1993-01-01
The recent efforts to develop accurate numerical schemes for transition and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing a high-speed civil transport (HSCT) plane is contingent upon our understanding and suppression of jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill's) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large-scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about a million. Our numerical model is based on the 2-4 scheme by Gottlieb & Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. These computations were made for the near nozzle exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.
NASA Astrophysics Data System (ADS)
Ocampo, Carlos J.; Oldham, Carolyn E.; Sivapalan, Murugesu; Turner, Jeffrey V.
2006-12-01
Deciphering the connection between streamflows and nitrate (NO3-) discharge requires identification of the various water flow pathways within a catchment, and the different time-scales at which hydrological and biogeochemical processes occur. Despite the complexity of the processes involved, many catchments around the world present a characteristic flushing response of NO3- export. Yet the controls on the flushing response, and how they vary across space and time, are still not clearly understood. In this paper, the flushing response of NO3- export from a rural catchment in Western Australia was investigated using isotopic (deuterium), chemical (chloride, NO3-), and hydrometric data across different antecedent conditions and time-scales. The catchment streamflow was at all time-scales dominated by a pre-event water source, and the NO3- discharge was correlated with the magnitude of areas contributing to saturation overland flow. The NO3- discharge also appeared related to the shallow groundwater dynamics. Thus, the antecedent moisture condition of the catchment at seasonal and interannual time-scales had a major impact on the NO3- flushing response. In particular, the dynamics of the shallow ephemeral perched aquifer drove a shift from hydrological controls on NO3- discharge during the early flushing stage to an apparent biogeochemical control on NO3- discharge during the steady decline stage of the flushing response. This temporally variable control hypothesis provides a new and alternative description of the mechanisms behind the commonly seen flushing response.
Berghmans, Johan M; Poley, Marten J; van der Ende, Jan; Weber, Frank; Van de Velde, Marc; Adriaenssens, Peter; Himpe, Dirk; Verhulst, Frank C; Utens, Elisabeth
2017-09-01
The modified Yale Preoperative Anxiety Scale is widely used to assess children's anxiety during induction of anesthesia, but requires training and its administration is time-consuming. A Visual Analog Scale, in contrast, requires no training, is easy-to-use and quickly completed. The aim of this study was to evaluate a Visual Analog Scale as a tool to assess anxiety during induction of anesthesia and to determine cut-offs to distinguish between anxious and nonanxious children. Four hundred and one children (1.5-16 years) scheduled for daytime surgery were included. Children's anxiety during induction was rated by parents and anesthesiologists on a Visual Analog Scale and by a trained observer on the modified Yale Preoperative Anxiety Scale. Psychometric properties assessed were: (i) concurrent validity (correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores); (ii) construct validity (differences between subgroups according to the children's age and the parents' anxiety as assessed by the State-Trait Anxiety Inventory); (iii) cross-informant agreement using Bland-Altman analysis; (iv) cut-offs to distinguish between anxious and nonanxious children (reference: modified Yale Preoperative Anxiety Scale ≥30). Correlations between parents' and anesthesiologists' Visual Analog Scale and modified Yale Preoperative Anxiety Scale scores were strong (0.68 and 0.73, respectively). Visual Analog Scale scores were higher for children ≤5 years compared to children aged ≥6. Visual Analog Scale scores of children of high-anxious parents were higher than those of low-anxious parents. The mean difference between parents' and anesthesiologists' Visual Analog Scale scores was 3.6, with 95% limits of agreement (-56.1 to 63.3). To classify anxious children, cut-offs for parents (≥37 mm) and anesthesiologists (≥30 mm) were established. The present data provide preliminary data for the validity of a Visual Analog Scale to assess children's anxiety during induction. © 2017 John Wiley & Sons Ltd.
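The cross-informant agreement and cut-off steps are simple enough to sketch; the following is a hedged illustration (not the study's code, and all the data below are synthetic) of a Bland-Altman limits-of-agreement calculation and a classification check against the reported cut-offs.

```python
import numpy as np

rng = np.random.default_rng(3)
parent_vas = np.clip(rng.normal(35, 20, 200), 0, 100)
anesth_vas = np.clip(parent_vas + rng.normal(3.6, 30, 200), 0, 100)

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = parent_vas - anesth_vas
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print("mean difference: %.1f mm, 95%% limits of agreement: %.1f to %.1f mm"
      % (bias, loa[0], loa[1]))

# Classifying "anxious" children with the reported cut-offs
# (>= 37 mm for parents, >= 30 mm for anesthesiologists)
anxious_parent = parent_vas >= 37
anxious_anesth = anesth_vas >= 30
print("agreement between raters' classifications: %.0f%%"
      % (100 * np.mean(anxious_parent == anxious_anesth)))
```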
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Penna, Daniele; Brocca, Luca; Zuecco, Giulia; Romano, Nunzio
2018-02-01
Indirect measurements of field-scale (hectometer grid-size) spatial-average near-surface soil moisture are becoming increasingly available by exploiting new-generation ground-based and satellite sensors. Nonetheless, modeling applications for water resources management require knowledge of plot-scale (1-5 m grid-size) soil moisture by using measurements through spatially-distributed sensor network systems. Since efforts to fulfill such requirements are not always possible due to time and budget constraints, alternative approaches are desirable. In this study, we explore the feasibility of determining spatial-average soil moisture and soil moisture patterns given the knowledge of long-term records of climate forcing data and topographic attributes. A downscaling approach is proposed that couples two different models: the Eco-Hydrological Bucket and Equilibrium Moisture from Topography. This approach helps identify the relative importance of two compound topographic indexes in explaining the spatial variation of soil moisture patterns, indicating valley- and hillslope-dependence controlled by lateral flow and radiative processes, respectively. The integrated model also detects temporal instability if the dominant type of topographic dependence changes with spatial-average soil moisture. Model application was carried out at three sites in different parts of Italy, each characterized by different environmental conditions. Prior calibration was performed by using sparse and sporadic soil moisture values measured by portable time domain reflectometry devices. Cross-site comparisons offer different interpretations in the explained spatial variation of soil moisture patterns, with time-invariant valley-dependence (site in northern Italy) and hillslope-dependence (site in southern Italy). The sources of soil moisture spatial variation at the site in central Italy are time-variant within the year and the seasonal change of topographic dependence can be conveniently correlated to a climate indicator such as the aridity index.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
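The core trick can be illustrated with a short, hedged sketch (not the paper's implementation; the Lanczos depth, matrix size, and the test matrix are arbitrary choices made here): build a Krylov basis for the diffusion matrix D with the random vector z as the starting vector, take the square root of the small tridiagonal projection, and map back to approximate D^(1/2) z.

```python
import numpy as np

def krylov_sqrt_mv(D, z, m=30):
    """Approximate D^{1/2} z using an m-dimensional Krylov (Lanczos) subspace."""
    n = len(z)
    beta0 = np.linalg.norm(z)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 1))
    V[:, 0] = z / beta0
    w = D @ V[:, 0]
    alpha[0] = V[:, 0] @ w
    w = w - alpha[0] * V[:, 0]
    k = m
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] < 1e-12:          # invariant subspace found early
            k = j
            break
        V[:, j] = w / beta[j - 1]
        w = D @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, Q = np.linalg.eigh(T)          # small k x k eigenproblem
    sqrtT = Q @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ Q.T
    return beta0 * V[:, :k] @ sqrtT[:, 0]

# Toy check against the exact matrix square root on a small SPD matrix.
rng = np.random.default_rng(6)
A = rng.standard_normal((200, 200))
D = A @ A.T + 200 * np.eye(200)           # well-conditioned SPD stand-in
z = rng.standard_normal(200)
evals, Q = np.linalg.eigh(D)
exact = Q @ (np.sqrt(evals) * (Q.T @ z))
approx = krylov_sqrt_mv(D, z, m=30)
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```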
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts. Success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but commonly used data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale, and the techniques used to extract predictor variables are often time-consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index, and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.
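For orientation, a hedged sketch of what an objective model-selection step over grid-cell predictors can look like (not the authors' analysis; the predictor names, data, and AIC-based ranking are illustrative assumptions): fit a small set of a priori candidate occupancy models and rank them by an information criterion.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
oak_scrub = rng.uniform(0, 1, n)      # fraction of cell in oak scrub (assumed predictor)
open_sand = rng.uniform(0, 1, n)      # fraction of bare sand (assumed predictor)
tall_pine = rng.uniform(0, 1, n)      # fraction of tall pine canopy (assumed predictor)
logit = -2 + 4 * oak_scrub + 2 * open_sand - 3 * tall_pine
occupied = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic occupancy

# a priori candidate models, ranked by AIC (lower is better)
candidates = {
    "scrub only": [oak_scrub],
    "scrub + sand": [oak_scrub, open_sand],
    "scrub + sand + pine": [oak_scrub, open_sand, tall_pine],
}
for name, cols in candidates.items():
    X = sm.add_constant(np.column_stack(cols))
    fit = sm.Logit(occupied, X).fit(disp=0)
    print("%-22s AIC = %.1f" % (name, fit.aic))
```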
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
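As a concrete check on the L²/D heuristic mentioned above, the toy script below time-steps a 1D homogeneous confined-aquifer problem with an explicit finite-difference scheme and reports when the head field is within 1% of steady state. All parameter values are hypothetical and this brute-force transient solve is exactly the expensive route the paper's moment-based method avoids; it is shown only to make the order-of-magnitude comparison tangible.

```python
import numpy as np

# Hypothetical 1D confined aquifer: h_t = D h_xx on [0, L], h(0)=1, h(L)=0,
# starting from h=0 in the interior; compare the simulated "effective" response
# time with the heuristic estimate L**2 / D discussed above.
L, D, nx = 100.0, 10.0, 201            # m, m^2/day, grid points
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                   # explicit-scheme stability limit
h = np.zeros(nx)
h[0] = 1.0
h_ss = 1.0 - x / L                     # linear steady-state head profile
t, tol = 0.0, 1e-2
while np.max(np.abs(h - h_ss)) > tol:
    h[1:-1] += dt * D * (h[2:] - 2 * h[1:-1] + h[:-2]) / dx**2
    t += dt
print(f"simulated response time ~ {t:.0f} days; heuristic L^2/D = {L**2 / D:.0f} days")
```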
Managing high-bandwidth real-time data storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bigelow, David D.; Brandt, Scott A; Bent, John M
2009-09-23
There exist certain systems which generate real-time data at high bandwidth, but do not necessarily require the long-term retention of that data in normal conditions. In some cases, the data may not actually be useful, and in others, there may be too much data to permanently retain in long-term storage whether it is useful or not. However, certain portions of the data may be identified as being vitally important from time to time, and must therefore be retained for further analysis or permanent storage without interrupting the ongoing collection of new data. We have developed a system, Mahanaxar, intended to address this problem. It provides quality of service guarantees for incoming real-time data streams and simultaneous access to already-recorded data on a best-effort basis utilizing any spare bandwidth. It has built-in mechanisms for reliability and indexing, can scale upwards to meet increasing bandwidth requirements, and handles both small and large data elements equally well. We will show that a prototype version of this system provides better performance than a flat file (traditional filesystem) based version, particularly with regard to quality of service guarantees and hard real-time requirements.
St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.
2015-04-02
The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. However, how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer 'how far is far enough,' we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25–2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
NASA Astrophysics Data System (ADS)
St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.
2015-04-01
The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
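A minimal sketch of the analysis loop described in these two records — high-pass filter each site's series at a chosen time constant, correlate all site pairs, and fit a correlation length — is given below. The Butterworth filter order, the exponential decay model, and all variable names are assumptions for illustration, not the authors' exact processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.optimize import curve_fit

def correlation_length(speeds, distances_km, dt_hours, tau_hours):
    """Fit a correlation length (km) after high-pass filtering.

    speeds: (n_sites, n_times) array of detrended wind speeds;
    distances_km: (n_sites, n_sites) pairwise separation matrix;
    tau_hours: high-pass filter time constant (cut-off period)."""
    fc = 1.0 / tau_hours                        # cut-off frequency, 1/hours
    nyq = 0.5 / dt_hours
    b, a = butter(4, fc / nyq, btype="high")
    filt = filtfilt(b, a, speeds, axis=1)
    r, d = [], []
    n_sites = filt.shape[0]
    for i in range(n_sites):
        for j in range(i + 1, n_sites):
            r.append(np.corrcoef(filt[i], filt[j])[0, 1])
            d.append(distances_km[i, j])
    decay = lambda x, xi: np.exp(-x / xi)       # simple exponential decay model
    xi, _ = curve_fit(decay, np.array(d), np.array(r), p0=[100.0])
    return xi[0]
```

Sweeping tau_hours and plotting the fitted ξ against it would reproduce the kind of ξ(τ) relationship discussed in the abstract.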
Gibson, D.J.; Middleton, B.A.; Foster, K.; Honu, Y.A.K.; Hoyer, E.W.; Mathis, M.
2005-01-01
Question: Can patterns of species frequency in an old-field be explained within the context of a metapopulation model? Are the patterns observed related to time, spatial scale, disturbance, and nutrient availability? Location: Upland and lowland old-fields in Illinois, USA. Method: Species richness was recorded annually for seven years following plowing of an upland and lowland old-field subject to crossed fertilizer and disturbance treatments (mowing and rototilling). Species occupancy distributions were assessed with respect to the numbers of core and satellite species. Results: In both fields, species richness became higher in disturbed plots than in undisturbed plots over time, and decreased in fertilized plots irrespective of time. A bimodal pattern of species richness consistent with the Core-satellite species (CSS) hypothesis occurred in the initial seed bank and through the course of early succession. The identity of native and exotic core species (those present in > 90% of blocks) changed with time. Some core species from the seed bank became core species in the vegetation, albeit after several years. At the scale of individual plots, a bimodal fit consistent with the CSS hypothesis applied only in year 1 and rarely thereafter. Conclusions: The CSS hypothesis provides a metapopulation perspective for understanding patterns of species richness but requires the assessment of spatial and temporal scaling effects. Regional processes (e.g., propagule availability) at the largest scale have the greatest impact influencing community structure during early secondary succession. Local processes (e.g., disturbance and soil nutrients) are more important at smaller scales and place constraints on species establishment and community structure of both native and exotic species. Under the highest intensity of disturbance, exotic species may be able to use resources unavailable to, or unused by, native species. © IAVS; Opulus Press.
Liu, Xu; Chen, Haiping; Xue, Chen
2018-01-01
Objectives Emergency medical system for mass casualty incidents (EMS-MCIs) is a global issue. However, such studies are extremely scarce in China, which cannot meet the need for a rapid decision-support system. This study aims to model EMS-MCIs in Shanghai, to improve mass casualty incident (MCI) rescue efficiency in China, and to provide a possible method for making rapid rescue decisions during MCIs. Methods This study established a system dynamics (SD) model of EMS-MCIs using the Vensim DSS program. Intervention scenarios were designed by adjusting the scale of MCIs, the allocation of ambulances, the allocation of emergency medical staff, and the efficiency of organization and command. Results Mortality increased with the increasing scale of MCIs; the medical rescue capability of hospitals was relatively good, but the efficiency of organization and command was poor, and the prehospital time was too long. Mortality declined significantly when the number of ambulances was increased and the efficiency of organization and command was improved; triage and on-site first-aid times were shortened when the availability of emergency medical staff was increased. The effect was the most evident when 2,000 people were involved in MCIs; however, the influence was very small at the scale of 5,000 people. Conclusion The keys to decreasing the mortality of MCIs were shortening the prehospital time and improving the efficiency of organization and command. For small-scale MCIs, improving the utilization rate of health resources was important in decreasing the mortality. For large-scale MCIs, increasing the number of ambulances and emergency medical professionals was the core to decreasing prehospital time and mortality. For super-large-scale MCIs, increasing health resources was the prerequisite. PMID:29440876
The Generic Resolution Advisor and Conflict Evaluator (GRACE) for Detect-And-Avoid (DAA) Systems
NASA Technical Reports Server (NTRS)
Abramson, Michael; Refai, Mohamad; Santiago, Confesor
2017-01-01
The paper describes the Generic Resolution Advisor and Conflict Evaluator (GRACE), a novel alerting and guidance algorithm that combines flexibility, robustness, and computational efficiency. GRACE is "generic" in that it makes no assumptions regarding temporal or spatial scales, aircraft performance, or its sensor and communication systems. Accordingly, GRACE is well suited to research applications where alerting and guidance is a central feature and requirements are fluid, involving a wide range of aviation technologies. GRACE has been used at NASA in a number of real-time and fast-time experiments supporting evolving requirements of DAA research, including parametric studies, NAS-wide simulations, human-in-the-loop experiments, and live flight tests.
Experimental quantum computing to solve systems of linear equations.
Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei
2013-06-07
Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
Moscoso del Prado Martín, Fermín
2013-12-01
I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.
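For readers unfamiliar with what a "scaling regime" in a behavioral series looks like in practice, the snippet below estimates a spectral scaling exponent β from the slope of the log-log periodogram. This is a conventional point estimate shown only for context; it is not the BAS procedure itself, and all names and test signals are illustrative.

```python
import numpy as np

def spectral_scaling_exponent(x, fs=1.0):
    """Estimate beta in S(f) ~ f**(-beta) from the log-log periodogram slope.
    Shown for context only; BAS instead contrasts hypotheses on scaling indices
    rather than producing a single point estimate like this."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)[1:]   # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope

# white noise should give beta ~ 0; a random walk (integrated noise) beta ~ 2
rng = np.random.default_rng(1)
print(spectral_scaling_exponent(rng.standard_normal(4096)))
print(spectral_scaling_exponent(np.cumsum(rng.standard_normal(4096))))
```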
High-scale axions without isocurvature from inflationary dynamics
Kearney, John; Orlofsky, Nicholas; Pierce, Aaron
2016-05-31
Observable primordial tensor modes in the cosmic microwave background (CMB) would point to a high scale of inflation H_I. If the scale of Peccei-Quinn (PQ) breaking f_a is greater than H_I/2π, CMB constraints on isocurvature naively rule out QCD axion dark matter. This assumes the potential of the axion is unmodified during inflation. We revisit models where inflationary dynamics modify the axion potential and discuss how isocurvature bounds can be relaxed. We find that models that rely solely on a larger PQ-breaking scale during inflation f_I require either late-time dilution of the axion abundance or highly super-Planckian f_I that somehow does not dominate the inflationary energy density. Models that have enhanced explicit breaking of the PQ symmetry during inflation may allow f_a close to the Planck scale. Lastly, avoiding disruption of inflationary dynamics provides important limits on the parameter space.
Inquiry-Based Experiments for Large-Scale Introduction to PCR and Restriction Enzyme Digests
ERIC Educational Resources Information Center
Johanson, Kelly E.; Watt, Terry J.
2015-01-01
Polymerase chain reaction and restriction endonuclease digest are important techniques that should be included in all Biochemistry and Molecular Biology laboratory curriculums. These techniques are frequently taught at an advanced level, requiring many hours of student and faculty time. Here we present two inquiry-based experiments that are…
ERIC Educational Resources Information Center
Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell
2017-01-01
Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…
ERIC Educational Resources Information Center
Wells, Robert D.; And Others
Prenatal appointment keeping is an important predictor of birth outcomes, yet many pregnant adolescents miss an excessive number of appointments. Since effective strategies for increasing appointment keeping require costly staff time, methods to predict relative risk for noncompliance with appointments might help delineate a circumscribed…
Stress and the Workplace: A Comparison of Occupational Fields.
ERIC Educational Resources Information Center
Matthews, Doris B.; Casteel, Jim Frank
Stress in various occupations is of interest to managers, counselors, and personnel workers. A study was undertaken to examine, through the use of self-report scales, stress-related characteristics of workers in occupations which require many and varied human interactions. Subjects were 244 full-time employees in six professions: health services,…
Comparing an annual and daily time-step model for predicting field-scale P loss
USDA-ARS?s Scientific Manuscript database
Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...
USDA-ARS?s Scientific Manuscript database
1. Resilience-based approaches are increasingly being called upon to inform ecosystem management, particularly in arid and semi-arid regions. This requires management frameworks that can assess ecosystem dynamics, both within and between alternative states, at relevant time scales. 2. We analysed l...
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
Chemical Variations in a Granitic Pluton and Its Surrounding Rocks.
Baird, A K; McIntyre, D B; Welday, E E; Madlem, K W
1964-10-09
New techniques of x-ray fluorescence spectrography have provided, for the first time, abundant data regarding chemical variability of granitic rocks on different scales. The results suggest that current designs of sampling plans for trend surface analysis should be modified; in particular several specimens, preferably drillcores, may be required at each locality.
A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
ERIC Educational Resources Information Center
Kharabe, Amol T.
2012-01-01
Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…
ERIC Educational Resources Information Center
Wilson, Sue
2013-01-01
Engaging successfully in the modern technological society requires a command of mathematics. Hence, successfully engaging with mathematics has social, economic and political implications. There has been a history over a long period of time of significant numbers of people not forming productive relationships with learning mathematics. Failure in…
Twenty-First Century Literacy: A Matter of Scale from Micro to Mega
ERIC Educational Resources Information Center
Brown, Abbie; Slagter van Tryon, Patricia J.
2010-01-01
Twenty-first century technologies require educators to look for new ways to teach literacy skills. Current communication methods are combinations of traditional and newer, network-driven forms. This article describes the changes twenty-first century technologies cause in the perception of time, size, distance, audience, and available data, and…
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method, better suited for the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and multiplication of vectors by Hessian matrices that are required by Møller's algorithm; the (re)initialization of weights with simulated annealing required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a possibly local minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented together with results from running SAGRAD on two examples of training data.
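The training loop described here alternates gradient-based optimization with simulated-annealing-style (re)initialization. The sketch below captures that alternation in miniature, using SciPy's conjugate-gradient minimizer in place of Møller's scaled conjugate gradient; it illustrates the hybrid strategy only and is not the SAGRAD Fortran implementation, with the restart schedule and all names being assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_train(loss, grad, w0, n_restarts=5, temperature=1.0, seed=0):
    """Minimal sketch of the hybrid strategy (not SAGRAD itself): run a
    conjugate-gradient phase, then perturb the best weights with an
    annealing-style random jump whose size shrinks at each restart."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, float)
    best_w, best_f = w.copy(), loss(w)
    for k in range(n_restarts):
        res = minimize(loss, w, jac=grad, method="CG")
        if res.fun < best_f:
            best_f, best_w = res.fun, res.x
        # simulated-annealing-style reinitialization around the current best
        w = best_w + temperature / (k + 1) * rng.standard_normal(len(best_w))
    return best_w, best_f

# toy usage on a quadratic bowl (stand-in for a neural-network loss surface)
loss = lambda w: float(np.sum((w - 3.0) ** 2))
grad = lambda w: 2.0 * (w - 3.0)
print(hybrid_train(loss, grad, np.zeros(5)))
```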
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)
2002-01-01
Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
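A small sketch of the SVD idea is given below: instead of filtering every source through its own HRTF, the HRTF matrix is truncated to its k dominant singular triplets, so the per-sample work scales with k rather than with the number of sources. The frequency-domain formulation, array shapes, and the value of k are illustrative assumptions, and the ESR step is not shown.

```python
import numpy as np

def reduced_ear_spectrum(H, S, k=8):
    """H: (n_freq, n_sources) HRTFs for one ear; S: (n_sources, n_freq) source
    spectra. Returns the exact ear spectrum and a rank-k SVD approximation.
    How well the reduction works depends on the effective rank of H."""
    exact = np.sum(H * S.T, axis=1)                  # sum_j H[:, j] * S[j, :]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    W = Vt[:k] @ S                                   # k mixed "virtual" sources
    reduced = np.sum(U[:, :k] * (s[:k] * W.T), axis=1)
    return exact, reduced

rng = np.random.default_rng(2)
H = rng.standard_normal((256, 40))                   # hypothetical sizes
S = rng.standard_normal((40, 256))
exact, reduced = reduced_ear_spectrum(H, S, k=8)
print(np.linalg.norm(exact - reduced) / np.linalg.norm(exact))
```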
Viscous Dissipation and Heat Conduction in Binary Neutron-Star Mergers.
Alford, Mark G; Bovard, Luke; Hanauske, Matthias; Rezzolla, Luciano; Schwenzer, Kai
2018-01-26
Inferring the properties of dense matter is one of the most exciting prospects from the measurement of gravitational waves from neutron star mergers. However, it requires reliable numerical simulations that incorporate viscous dissipation and energy transport as these can play a significant role in the survival time of the post-merger object. We calculate time scales for typical forms of dissipation and find that thermal transport and shear viscosity will not be important unless neutrino trapping occurs, which requires temperatures above 10 MeV and gradients over length scales of 0.1 km or less. On the other hand, if direct-Urca processes remain suppressed, leaving modified-Urca processes to establish flavor equilibrium, then bulk viscous dissipation could provide significant damping to density oscillations right after merger. When comparing with data from state-of-the-art merger simulations, we find that the bulk viscosity takes values close to its resonant maximum in a typical merger, motivating a more careful assessment of the role of bulk viscous dissipation in the gravitational-wave signal from merging neutron stars.
Viscous Dissipation and Heat Conduction in Binary Neutron-Star Mergers
NASA Astrophysics Data System (ADS)
Alford, Mark G.; Bovard, Luke; Hanauske, Matthias; Rezzolla, Luciano; Schwenzer, Kai
2018-01-01
Inferring the properties of dense matter is one of the most exciting prospects from the measurement of gravitational waves from neutron star mergers. However, it requires reliable numerical simulations that incorporate viscous dissipation and energy transport as these can play a significant role in the survival time of the post-merger object. We calculate time scales for typical forms of dissipation and find that thermal transport and shear viscosity will not be important unless neutrino trapping occurs, which requires temperatures above 10 MeV and gradients over length scales of 0.1 km or less. On the other hand, if direct-Urca processes remain suppressed, leaving modified-Urca processes to establish flavor equilibrium, then bulk viscous dissipation could provide significant damping to density oscillations right after merger. When comparing with data from state-of-the-art merger simulations, we find that the bulk viscosity takes values close to its resonant maximum in a typical merger, motivating a more careful assessment of the role of bulk viscous dissipation in the gravitational-wave signal from merging neutron stars.
NASA Astrophysics Data System (ADS)
Takeda, Shun; Kumagai, Hiroshi
2018-02-01
Hyperpolarized (HP) noble gas has attracted attention in NMR/MRI. In an ultra-low magnetic field, signal enhancement by HP noble gas is especially needed because the loss of signal intensity is severe. One method of generating HP noble gas is spin exchange optical pumping (SEOP), which selectively excites the electrons of an alkali metal vapor and transfers spin polarization to the noble-gas nuclei through collisions. Although SEOP does not require extreme cooling or a strong magnetic field, it generally requires large-scale equipment, including a high-power light source, to generate HP noble gas with high efficiency. In this study, we construct a simple generation system for HP xenon-129 by SEOP with an ultra-low magnetic field (up to 1 mT) and a small-scale light source (about 1 W). In addition, we measure the in situ NMR signal at the same time and examine efficient conditions for SEOP in ultra-low magnetic fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs
NASA Astrophysics Data System (ADS)
Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes
2017-06-01
Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulation is a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, an industrial-scale simulation consisting of tens of millions of particles can take months to complete on large CPU clusters, making the Discrete Element Method (DEM) infeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes or a few million particles with often oversimplified particle shapes. However, a number of applications require accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU based DEM code, BlazeDEM3D-GPU, that can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.
Upscaling of Hydraulic Conductivity using the Double Constraint Method
NASA Astrophysics Data System (ADS)
El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke
2013-04-01
The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured on the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity to a coarser description. In this work we investigate a relatively simple, but instructive approach: the Double Constraint Method (DCM) to identify the coarse-scale conductivities and decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. Exemplification will be presented for the Kleine Nete catchment, Belgium. As a result we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run costs less computation time and requires less memory space. In addition, ranking of models was investigated.
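In its simplest one-dimensional form, the double-constraint idea reduces to asking which single coarse conductivity honors both an imposed flux (first run) and the head drop that flux produces across the fine-scale cells (second constraint); in 1D this recovers the harmonic mean. The toy function below shows only that elementary case, with hypothetical values, and is not the paper's full DCM workflow.

```python
import numpy as np

def coarse_k_1d(k_fine, dx, q):
    """Coarse conductivity of a 1D column of fine cells that reproduces both the
    imposed Darcy flux q and the resulting head drop (toy illustration only)."""
    L = dx * len(k_fine)
    dh = np.sum(q * dx / np.asarray(k_fine, float))   # head drop from the flux run
    return q * L / dh                                 # equals the harmonic mean

k_fine = [5.0, 0.5, 2.0, 10.0]                        # m/day, hypothetical values
print(coarse_k_1d(k_fine, dx=25.0, q=0.1))
```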
Link calibration against receiver calibration: an assessment of GPS time transfer uncertainties
NASA Astrophysics Data System (ADS)
Rovera, G. D.; Torre, J.-M.; Sherwood, R.; Abgrall, M.; Courde, C.; Laas-Bourez, M.; Uhrich, P.
2014-10-01
We present a direct comparison between two different techniques for the relative calibration of time transfer between remote time scales when using the signals transmitted by the Global Positioning System (GPS). Relative calibration estimates the delay of equipment or the delay of a time transfer link with respect to reference equipment. It is based on the circulation of some travelling GPS equipment between the stations in the network, against which the local equipment is measured. Two techniques can be considered: first a station calibration by the computation of the hardware delays of the local GPS equipment; second the computation of a global hardware delay offset for the time transfer between the reference points of two remote time scales. This last technique is called a ‘link’ calibration, with respect to the other one, which is a ‘receiver’ calibration. The two techniques require different measurements on site, which change the uncertainty budgets, and we discuss this and related issues. We report on one calibration campaign organized during Autumn 2013 between Observatoire de Paris (OP), Paris, France, Observatoire de la Côte d'Azur (OCA), Calern, France, and NERC Space Geodesy Facility (SGF), Herstmonceux, United Kingdom. The travelling equipment comprised two GPS receivers of different types, along with the required signal generator and distribution amplifier, and one time interval counter. We show the different ways to compute uncertainty budgets, leading to improvement factors of 1.2 to 1.5 on the hardware delay uncertainties when comparing the relative link calibration to the relative receiver calibration.
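The uncertainty budgets compared in this record are, at their simplest, root-sum-of-squares combinations of independent hardware-delay uncertainty components. The snippet below shows only that elementary combination step; the component values are placeholders, not figures from the 2013 campaign.

```python
import numpy as np

def combined_uncertainty(components_ns):
    """Root-sum-of-squares combination of independent uncertainty components (ns)."""
    return float(np.sqrt(np.sum(np.square(components_ns))))

# e.g. travelling-receiver repeatability, cable delays, counter resolution (placeholders)
print(combined_uncertainty([1.2, 0.8, 0.5]))
```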
Variable Generation Power Forecasting as a Big Data Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haupt, Sue Ellen; Kosovic, Branko
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
Variable Generation Power Forecasting as a Big Data Problem
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day ahead planning and real-time operations, the power from the wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using the physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real-time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
A new approach to data management and its impact on frequency control requirements
NASA Technical Reports Server (NTRS)
Blanchard, D. L.; Fuchs, A. J.; Chi, A. R.
1979-01-01
A new approach to data management consisting of spacecraft and data/information autonomy and its impact on frequency control requirements is presented. An autonomous spacecraft is capable of functioning without external intervention for up to 72 hr by enabling the sensors to make observations, maintaining its health and safety, and by using logical safety modes when anomalies occur. Data/information are made autonomous by associating all relevant ancillary data such as time, position, attitude, and sensor identification with the data/information record of an event onboard the spacecraft. This record is so constructed that the record of the event can be physically identified in a complete and self-contained record that is independent of all other data. All data within a packet will be time tagged to the needed accuracy, and the time markings from packet to packet will be coherent to a UTC time scale.
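The self-contained record described here is essentially a data structure that carries its own ancillary data. A minimal sketch of such a packet is shown below; the field names and types are illustrative, not a flight data format specification.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SelfContainedPacket:
    """One event record that can be interpreted without any other data."""
    utc_seconds: float                                  # time tag traceable to a UTC time scale
    position_km: Tuple[float, float, float]             # spacecraft position at the event
    attitude_quat: Tuple[float, float, float, float]    # spacecraft attitude at the event
    sensor_id: str                                      # instrument that produced the samples
    samples: List[float] = field(default_factory=list)  # the science data themselves

# illustrative construction
pkt = SelfContainedPacket(715305600.0, (6771.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0),
                          "IMAGER-A", [0.12, 0.15, 0.11])
print(pkt.sensor_id, pkt.utc_seconds)
```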
Toward a comprehensive landscape vegetation monitoring framework
NASA Astrophysics Data System (ADS)
Kennedy, Robert; Hughes, Joseph; Neeti, Neeti; Larrue, Tara; Gregory, Matthew; Roberts, Heather; Ohmann, Janet; Kane, Van; Kane, Jonathan; Hooper, Sam; Nelson, Peder; Cohen, Warren; Yang, Zhiqiang
2016-04-01
Blossoming Earth observation resources provide great opportunity to better understand land vegetation dynamics, but also require new techniques and frameworks to exploit their potential. Here, I describe several parallel projects that leverage time-series Landsat imagery to describe vegetation dynamics at regional and continental scales. At the core of these projects are the LandTrendr algorithms, which distill time-series earth observation data into periods of consistent long or short-duration dynamics. In one approach, we built an integrated, empirical framework to blend these algorithmically-processed time-series data with field data and lidar data to ascribe yearly change in forest biomass across the US states of Washington, Oregon, and California. In a separate project, we expanded from forest-only monitoring to full landscape land cover monitoring over the same regional scale, including both categorical class labels and continuous-field estimates. In these and other projects, we apply machine-learning approaches to ascribe all changes in vegetation to driving processes such as harvest, fire, urbanization, etc., allowing full description of both disturbance and recovery processes and drivers. Finally, we are moving toward extension of these same techniques to continental and eventually global scales using Google Earth Engine. Taken together, these approaches provide one framework for describing and understanding processes of change in vegetation communities at broad scales.
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Muller, Christoff
2015-01-01
Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).
Global hydrodynamic modelling of flood inundation in continental rivers: How can we achieve it?
NASA Astrophysics Data System (ADS)
Yamazaki, D.
2016-12-01
Global-scale modelling of river hydrodynamics is essential for understanding the global hydrological cycle, and is also required in interdisciplinary research fields. Global river models have been developed continuously for more than two decades, but modelling river flow at a global scale is still a challenging topic because surface water movement in continental rivers is a multi-spatial-scale phenomenon. We have to consider the basin-wide water balance (>1000 km scale), while hydrodynamics in river channels and floodplains is regulated by much smaller-scale topography (<100 m scale). For example, heavy precipitation in upstream regions may later cause flooding in the farthest downstream reaches. In order to realistically simulate the timing and amplitude of flood wave propagation over a long distance, consideration of detailed local topography is unavoidable. I have developed the global hydrodynamic model CaMa-Flood to overcome this scale discrepancy of continental river flow. The CaMa-Flood divides river basins into multiple "unit-catchments", and assumes the water level is uniform within each unit-catchment. One unit-catchment is assigned to each grid-box defined at the typical spatial resolution of global climate models (10-100 km scale). Adopting a uniform water level in a >10 km river segment seems to be a big assumption, but it is actually a good approximation for hydrodynamic modelling of continental rivers. The number of grid points required for global hydrodynamic simulations is largely reduced by this "unit-catchment assumption". As an alternative to calculating 2-dimensional floodplain flows as in regional flood models, the CaMa-Flood treats floodplain inundation in a unit-catchment as sub-grid physics. The water level and inundated area in each unit-catchment are diagnosed from water volume using topography parameters derived from high-resolution digital elevation models. Thus, the CaMa-Flood is at least 1000 times computationally more efficient than regional flood inundation models while the reality of simulated flood dynamics is kept. I will explain in detail how the CaMa-Flood model has been constructed from high-resolution topography datasets, and how the model can be used for various interdisciplinary applications.
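To make the sub-grid diagnosis concrete, the toy function below fills a sorted floodplain elevation profile from the bottom until the stored water volume is exhausted and returns the diagnosed water level and inundated area. The equal-area discretization, units, and names are illustrative assumptions; this is not the CaMa-Flood source code.

```python
import numpy as np

def diagnose_inundation(volume_m3, elev_profile_m, cell_area_m2):
    """Diagnose water level (m) and inundated area (m^2) from storage volume.
    elev_profile_m lists floodplain elevations, one per equal-area sub-grid cell."""
    z = np.sort(np.asarray(elev_profile_m, float))
    filled = 0.0
    for i in range(1, len(z)):
        dv = i * cell_area_m2 * (z[i] - z[i - 1])   # volume needed to raise level to z[i]
        if filled + dv >= volume_m3:
            level = z[i - 1] + (volume_m3 - filled) / (i * cell_area_m2)
            return level, i * cell_area_m2
        filled += dv
    # storage exceeds the tabulated profile: water stands above the highest cell
    level = z[-1] + (volume_m3 - filled) / (len(z) * cell_area_m2)
    return level, len(z) * cell_area_m2

# illustrative call: 10 sub-grid cells of 1 km^2 each, 5e6 m^3 of flood storage
print(diagnose_inundation(5.0e6, [2, 3, 3.5, 4, 5, 6, 8, 9, 12, 15], 1.0e6))
```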
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
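The sensitivity claim in this abstract — that K(trans) and ve track the AIF amplitude while kep does not — follows directly from the standard Tofts model and can be checked with a few lines of simulation. The sketch below fits the FXL Tofts model to a synthetic tissue curve under two AIF amplitude scalings; the AIF shape, parameter values, and fitting setup are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 6, 121)                        # minutes
aif = 5.0 * (t / 0.5) * np.exp(1 - t / 0.5)       # hypothetical AIF shape (mM)

def tofts(tt, ktrans, ve, cp):
    """FXL Tofts model: Ct(t) = Ktrans * integral of cp(u) exp(-kep (t-u)) du."""
    kep = ktrans / ve
    dt = tt[1] - tt[0]
    kernel = np.exp(-kep * tt)
    return ktrans * np.convolve(cp, kernel)[: len(tt)] * dt

ct = tofts(t, 0.25, 0.3, aif)                     # synthetic tissue curve

for s in (1.0, 2.0):                              # two assumed AIF amplitude scalings
    f = lambda tt, ktrans, ve: tofts(tt, ktrans, ve, s * aif)
    (ktrans, ve), _ = curve_fit(f, t, ct, p0=[0.1, 0.2],
                                bounds=([1e-3, 1e-3], [5.0, 1.0]))
    print(f"AIF x{s}: Ktrans={ktrans:.3f}, ve={ve:.3f}, kep={ktrans / ve:.3f}")
```

Doubling the assumed AIF amplitude halves the fitted K(trans) and ve but leaves kep = K(trans)/ve unchanged, mirroring the scaling behavior described above.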
Effects of Langmuir Turbulence on Reactive Tracers in the Upper Ocean
NASA Astrophysics Data System (ADS)
Smith, K.; Hamlington, P.; Niemeyer, K.; Fox-Kemper, B.; Lovenduski, N. S.
2017-12-01
Reactive tracers such as carbonate chemical species play important roles in the oceanic carbon cycle, allowing the ocean to hold 60 times more carbon than the atmosphere. However, uncertainties in regional ocean sinks for anthropogenic CO2 are still relatively high. Many carbonate species are non-conserved, flux across the air-sea interface, and react on time scales similar to those of ocean turbulent processes, such as small-scale wave-driven Langmuir turbulence. All of this complexity gives rise to heterogeneous tracer distributions that are not fully understood and can greatly affect the rate at which CO2 fluxes across the air-sea interface. In order to more accurately model the biogeochemistry of the ocean in Earth system models (ESMs), a better understanding of the fundamental interactions between these reactive tracers and relevant turbulent processes is required. Research on reacting flows in other contexts has shown that the most significant tracer-flow couplings occur when coherent structures in the flow have timescales that rival reaction time scales. Langmuir turbulence, a 3D, small-scale, wave-driven process, has length and time scales on the order of O(1-100m) and O(1-10min), respectively. Once CO2 transfers across the air-sea interface, it reacts with seawater in a series of reactions whose rate limiting steps have time scales of 10-25s. This similarity in scales warrants further examination into interactions between these small-scale physical and chemical processes. In this presentation, large eddy simulations are used to examine the evolution of reactive tracers in the presence of realistic upper ocean wave- and shear-driven turbulence. The reactive tracers examined are those specifically involved in non-biological carbonate chemistry. The strength of Langmuir turbulence is varied in order to determine a relationship between the degree of enhancement (or reduction) of carbon that is fluxed across the air-sea interface due to the presence of Langmuir turbulence. By examining different reaction chemistry and surface forcing scenarios, the coupled turbulence-reactive tracer dynamics are connected with spatial and statistical properties of the resulting tracer fields. These results, along with implications for development of reduced order reactive tracer models, are discussed.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
Two Coincidence Detectors for Spike Timing-Dependent Plasticity in Somatosensory Cortex
Bender, Vanessa A.; Bender, Kevin J.; Brasier, Daniel J.; Feldman, Daniel E.
2011-01-01
Many cortical synapses exhibit spike timing-dependent plasticity (STDP) in which the precise timing of presynaptic and postsynaptic spikes induces synaptic strengthening [long-term potentiation (LTP)] or weakening [long-term depression (LTD)]. Standard models posit a single, postsynaptic, NMDA receptor-based coincidence detector for LTP and LTD components of STDP. We show instead that STDP at layer 4 to layer 2/3 synapses in somatosensory (S1) cortex involves separate calcium sources and coincidence detection mechanisms for LTP and LTD. LTP showed classical NMDA receptor dependence. LTD was independent of postsynaptic NMDA receptors and instead required group I metabotropic glutamate receptors and calcium from voltage-sensitive channels and IP3 receptor-gated stores. Downstream of postsynaptic calcium, LTD required retrograde endocannabinoid signaling, leading to presynaptic LTD expression, and also required activation of apparently presynaptic NMDA receptors. These LTP and LTD mechanisms detected firing coincidence on ~25 and ~125 ms time scales, respectively, and combined to implement the overall STDP rule. These findings indicate that STDP is not a unitary process and suggest that endocannabinoid-dependent LTD may be relevant to cortical map plasticity. PMID:16624937
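A phenomenological way to read the last sentence is as a pair of exponential plasticity windows with different widths: an LTP window of roughly 25 ms for pre-before-post pairings and an LTD window of roughly 125 ms for post-before-pre pairings. The sketch below encodes that overall rule; the amplitudes are illustrative, not fitted to the paper's data.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_ltp=1.0, tau_ltp=25.0, a_ltd=0.5, tau_ltd=125.0):
    """Overall STDP rule: LTP for pre-before-post (dt > 0) with a ~25 ms window,
    LTD for post-before-pre (dt < 0) with a ~125 ms window."""
    dt_ms = np.asarray(dt_ms, float)
    return np.where(dt_ms > 0,
                    a_ltp * np.exp(-dt_ms / tau_ltp),
                    -a_ltd * np.exp(dt_ms / tau_ltd))

# pairings at +10 ms (LTP), -10 ms (LTD) and -100 ms (weak LTD)
print(stdp_weight_change([10.0, -10.0, -100.0]))
```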
System analysis for technology transfer readiness assessment of horticultural postharvest
NASA Astrophysics Data System (ADS)
Hayuningtyas, M.; Djatna, T.
2018-04-01
Postharvest technologies are becoming abundant, but only a few are applicable and useful for wider community purposes. This problem calls for a technology-transfer readiness assessment approach. The proposed system assesses the readiness of a technology on a level scale from 1 to 9 and minimizes the time needed for technology transfer at every level, so that the time required from the selection process onward can be kept to a minimum. The problem was addressed by using the Relief method to rank postharvest technologies by weighting feasible criteria at each level, and PERT (Program Evaluation Review Technique) for scheduling. The ranking results show that the assessed horticultural postharvest technology is able to pass level 7, meaning the technology can be developed to pilot scale; the PERT schedule gives an optimistic time of 7.9 years to reach technological readiness. Readiness level 9 indicates that the technology has been tested under actual conditions and is also tied to an estimated production price compared with competitors. This system can be used to determine the readiness of technology innovations derived from agricultural raw materials that pass the required stages.
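PERT scheduling, mentioned above, rests on a simple three-point estimate per activity. The helper below implements that standard formula; the example durations are placeholders, and the 7.9-year figure in the abstract comes from the study's own activity network, not from this single call.

```python
def pert_expected_time(optimistic, most_likely, pessimistic):
    """Standard PERT activity estimate: t_e = (o + 4m + p) / 6,
    with variance ((p - o) / 6)**2."""
    te = (optimistic + 4.0 * most_likely + pessimistic) / 6.0
    var = ((pessimistic - optimistic) / 6.0) ** 2
    return te, var

# placeholder durations in years for one readiness-level activity
print(pert_expected_time(6.0, 8.0, 10.0))
```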
A theoretical study of hydrodynamic cavitation.
Arrojo, S; Benito, Y
2008-03-01
The optimization of hydrodynamic cavitation as an AOP (advanced oxidation process) requires identifying the key parameters and studying their effects on the process. Specific simulations of hydrodynamic bubbles reveal that time scales play a major role in the process. Rarefaction/compression periods generate a number of opposing effects which have been shown to be quantitatively different from those found in ultrasonic cavitation. Hydrodynamic cavitation can be upscaled and offers an energy efficient way of generating cavitation. On the other hand, the large characteristic time scales hinder bubble collapse and generate a low number of cavitation cycles per unit time. By controlling the pressure pulse through a flexible cavitation chamber design, these limitations can be partially compensated for. The chemical processes promoted by this technique are also different from those found in ultrasonic cavitation. Properties such as volatility or hydrophobicity determine the potential applicability of HC and therefore have to be taken into account.
Solute segregation kinetics and dislocation depinning in a binary alloy
NASA Astrophysics Data System (ADS)
Dontsova, E.; Rottler, J.; Sinclair, C. W.
2015-06-01
Static strain aging, a phenomenon caused by diffusion of solute atoms to dislocations, is an important contributor to the strength of substitutional alloys. Accurate modeling of this complex process requires both atomic spatial resolution and diffusional time scales, which is very challenging to achieve with commonly used atomistic computational methods. In this paper, we use the recently developed "diffusive molecular dynamics" (DMD) method that is capable of describing the kinetics of the solute segregation process at the atomic level while operating on diffusive time scales in a computationally efficient way. We study static strain aging in the Al-Mg system and calculate the depinning shear stress between edge and screw dislocations and their solute atmospheres formed for various waiting times with different solute content and for a range of temperatures. A simple phenomenological model is also proposed that describes the observed behavior of the critical shear stress as a function of segregation level.
Dynamical tuning for MPC using population games: A water supply network application.
Barreiro-Gomez, Julian; Ocampo-Martinez, Carlos; Quijano, Nicanor
2017-07-01
Model predictive control (MPC) is a suitable strategy for the control of large-scale systems that have multiple design requirements, e.g., multiple physical and operational constraints. Moreover, an MPC controller can handle multiple control objectives within the cost function, which requires determining a proper prioritization for each objective. Furthermore, when the system has time-varying parameters and/or disturbances, the appropriate prioritization may vary over time as well. This situation leads to the need for a dynamical tuning methodology. This paper addresses the dynamical tuning issue using evolutionary game theory. The advantages of the proposed method are highlighted and tested on a large-scale water supply network with periodic time-varying disturbances. Finally, results are compared against a multi-objective MPC controller that uses static tuning. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
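As an illustration of how game-theoretic dynamical tuning can work, the following is a minimal sketch in which the weights of a multi-objective MPC cost are updated by replicator dynamics so that objectives performing worse than average gain priority. The objective set, cost signals, and update rate are hypothetical assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def replicator_weight_update(weights, costs, dt=0.1):
    """One replicator-dynamics step for MPC objective weights.

    weights : current prioritization (non-negative, sums to 1)
    costs   : current normalized cost of each objective (higher = worse)
    A weight grows when its objective is doing worse than the weighted
    average, shifting priority toward that objective (an assumed rule).
    """
    avg = weights @ costs
    weights = weights + dt * weights * (costs - avg)
    weights = np.clip(weights, 1e-6, None)
    return weights / weights.sum()

# Hypothetical example: three objectives (economic cost, control smoothness,
# safety volumes) whose normalized costs drift with a periodic disturbance.
w = np.array([1 / 3, 1 / 3, 1 / 3])
for k in range(48):
    demand = 0.5 + 0.4 * np.sin(2 * np.pi * k / 24)   # periodic disturbance
    costs = np.array([0.3 + demand, 0.4, 0.8 * demand])
    w = replicator_weight_update(w, costs)
print("weights after one simulated day:", np.round(w, 3))
```

The returned weights would then be plugged into the MPC cost at the next sampling instant, so the prioritization tracks the disturbance cycle instead of staying fixed.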
Development and Applications of a Mobile Ecogenomic Sensor
NASA Astrophysics Data System (ADS)
Yamahara, K.; Preston, C. M.; Pargett, D.; Jensen, S.; Roman, B.; Walz, K.; Birch, J. M.; Hobson, B.; Kieft, B.; Zhang, Y.; Ryan, J. P.; Chavez, F.; Scholin, C. A.
2016-12-01
Modern molecular biological analytical methods have revolutionized our understanding of organism diversity in the ocean. Such advancements have profound implications for use in environmental research and resource management. However, the application of such technology to comprehensively document biodiversity and understand ecosystem processes in an ocean setting will require repeated observations over vast space and time scales. A fundamental challenge associated with meeting that requirement is acquiring discrete samples over spatial scales and frequencies necessary to document cause-and-effect relationships that link biological processes to variable physical and chemical gradients in rapidly changing water masses. Accomplishing that objective using ships alone is not practical. We are working to overcome this fundamental challenge by developing a new generation of biological instrumentation, the third generation ESP (3G ESP). The 3G ESP is a robotic device that automates sample collection, preservation, and/or in situ processing for real-time target molecule detection. Here we present the development of the 3G ESP and its integration with a Tethys-class Long Range AUV (LRAUV), and demonstrate its ability to collect and preserve material for subsequent metagenomic and quantitative PCR (qPCR) analyses. Further, we elucidate the potential of employing multiple mobile ecogenomic sensors to monitor ocean biodiversity, as well as following ecosystems over time to reveal time/space relationships of biological processes in response to changing environmental conditions.
System-level view of geospace dynamics: Challenges for high-latitude ground-based observations
NASA Astrophysics Data System (ADS)
Donovan, E.
2014-12-01
Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful well-instrumented ground-based observatories organized around, for example, Incoherent Scatter Radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments, and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities such as indices that capture important aspects of the system level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes at key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, combined extent of coverage and resolution that falls short of what we need, and inadequate conjugate observations. In this talk, I present an overview of what we need for taking system level research to its next level, and how high-latitude ground-based observations can address these challenges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlburg, Jill; Corones, James; Batchelor, Donald
Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range in temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality which makes it impossible to parallelize over time, makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world's energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.
NASA Astrophysics Data System (ADS)
Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.
2017-12-01
Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling under various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. A local desktop with 14 cores (28 threads) was used to test the framework for Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the user-defined number of CPU threads divides the EPIC simulation into jobs. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data are passed to the EPIC simulation jobs. Then, 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
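The parallel pattern described above can be sketched with Python's standard multiprocessing pool. This is a minimal illustration only: the scenario structure follows the text (seven slopes crossed with twenty-four fertilizer ranges), but the working directories, the EPIC invocation, and the formatter/analyzer functions are placeholders rather than the authors' actual framework.

```python
import itertools
from multiprocessing import Pool

N_WORKERS = 28  # number of threads used on the 14-core test desktop

def build_scenarios():
    """Cross seven slope classes with twenty-four fertilizer ranges (as in the text)."""
    slopes = range(7)
    fertilizer = range(24)
    return list(itertools.product(slopes, fertilizer))

def run_epic(job):
    """Format inputs, run one EPIC simulation, and parse only the outputs of interest.

    The steps are shown as comments because the file layout and executable
    name are placeholders (e.g., subprocess.run(["epic"], cwd=workdir)).
    """
    slope, fert = job
    workdir = f"runs/slope{slope}_fert{fert}"
    # 1) write formatted EPIC input files for this grid cell / scenario
    # 2) execute the EPIC model in `workdir`
    # 3) parse only the result files of interest and return the summary
    return (slope, fert, workdir)

if __name__ == "__main__":
    jobs = build_scenarios()
    with Pool(N_WORKERS) as pool:
        results = pool.map(run_epic, jobs)   # 28 jobs execute concurrently
    print(f"completed {len(results)} scenario runs")
```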
Bloem, Bastiaan R; Marinus, Johan; Almeida, Quincy; Dibble, Lee; Nieuwboer, Alice; Post, Bart; Ruzicka, Evzen; Goetz, Christopher; Stebbins, Glenn; Martinez-Martin, Pablo; Schrag, Anette
2016-09-01
Disorders of posture, gait, and balance in Parkinson's disease (PD) are common and debilitating. This MDS-commissioned task force assessed clinimetric properties of existing rating scales, questionnaires, and timed tests that assess these features in PD. A literature review was conducted. Identified instruments were evaluated systematically and classified as "recommended," "suggested," or "listed." Inclusion of rating scales was restricted to those that could be used readily in clinical research and practice. One rating scale was classified as "recommended" (UPDRS-derived Postural Instability and Gait Difficulty score) and 2 as "suggested" (Tinetti Balance Scale, Rating Scale for Gait Evaluation). Three scales requiring equipment (Berg Balance Scale, Mini-BESTest, Dynamic Gait Index) also fulfilled criteria for "recommended" and 2 for "suggested" (FOG score, Gait and Balance Scale). Four questionnaires were "recommended" (Freezing of Gait Questionnaire, Activities-specific Balance Confidence Scale, Falls Efficacy Scale, Survey of Activities and Fear of Falling in the Elderly-Modified). Four tests were classified as "recommended" (6-minute and 10-m walk tests, Timed Up-and-Go, Functional Reach). We identified several questionnaires that adequately assess freezing of gait and balance confidence in PD and a number of useful clinical tests. However, most clinical rating scales for gait, balance, and posture perform suboptimally or have been evaluated insufficiently. No instrument comprehensively and separately evaluates all relevant PD-specific gait characteristics with good clinimetric properties, and none provides separate balance and gait scores with adequate content validity for PD. We therefore recommend the development of such a PD-specific, easily administered, comprehensive gait and balance scale that separately assesses all relevant constructs. © 2016 International Parkinson and Movement Disorder Society.
On the sensitivity of annual streamflow to air temperature
Milly, Paul C.D.; Kam, Jonghun; Dunne, Krista A.
2018-01-01
Although interannual streamflow variability is primarily a result of precipitation variability, temperature also plays a role. The relative weakness of the temperature effect at the annual time scale hinders understanding, but may belie substantial importance on climatic time scales. Here we develop and evaluate a simple theory relating variations of streamflow and evapotranspiration (E) to those of precipitation (P) and temperature. The theory is based on extensions of the Budyko water-balance hypothesis, the Priestley-Taylor theory for potential evapotranspiration (Ep), and a linear model of interannual basin storage. The theory implies that temperature affects streamflow by modifying evapotranspiration through a Clausius-Clapeyron-like relation and through the sensitivity of net radiation to temperature. We apply and test (1) a previously introduced "strong" extension of the Budyko hypothesis, which requires that the function linking temporal variations of the evapotranspiration ratio (E/P) and the index of dryness (Ep/P) at an annual time scale is identical to that linking interbasin variations of the corresponding long-term means, and (2) a "weak" extension, which requires only that the annual evapotranspiration ratio depends uniquely on the annual index of dryness, and that the form of that dependence need not be known a priori nor be identical across basins. In application of the weak extension, the readily observed sensitivity of streamflow to precipitation contains crucial information about the sensitivity to potential evapotranspiration and, thence, to temperature. Implementation of the strong extension is problematic, whereas the weak extension appears to capture essential controls of the temperature effect efficiently.
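For orientation, the pieces of the theory named above can be summarized schematically in standard notation; this is a compact paraphrase under common conventions (Q streamflow, S basin storage, Rn net radiation, G ground heat flux, Δ the saturation vapor-pressure slope, γ the psychrometric constant, λ the latent heat of vaporization), not the paper's exact formulation.

```latex
\begin{align}
Q &= P - E - \frac{\mathrm{d}S}{\mathrm{d}t}, &&\text{(basin water balance)}\\
\frac{E}{P} &= F\!\left(\frac{E_p}{P}\right), &&\text{(Budyko-type hypothesis)}\\
E_p &\approx \alpha\,\frac{\Delta(T)}{\Delta(T)+\gamma}\,\frac{R_n - G}{\lambda}. &&\text{(Priestley--Taylor)}
\end{align}
```

The temperature dependence enters through Δ(T) (a Clausius-Clapeyron-like effect) and through the sensitivity of Rn to temperature, which is how the theory links interannual temperature anomalies to streamflow anomalies.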
Boozer, Allen H.
2017-03-24
The potential for damage, the magnitude of the extrapolation, and the importance of the atypical—incidents that occur once in a thousand shots—make theory and simulation essential for ensuring that relativistic runaway electrons will not prevent ITER from achieving its mission. Most of the theoretical literature on electron runaway assumes magnetic surfaces exist. ITER planning for the avoidance of halo and runaway currents is focused on massive gas or shattered-pellet injection of impurities. In simulations of experiments, such injections lead to a rapid large-scale magnetic-surface breakup. Surface breakup, which is a magnetic reconnection, can occur on a quasi-ideal Alfvénic time scale when the resistance is sufficiently small. Nevertheless, the removal of the bulk of the poloidal flux, as in halo-current mitigation, is on a resistive time scale. The acceleration of electrons to relativistic energies requires the confinement of some tubes of magnetic flux within the plasma and a resistive time scale. The interpretation of experiments on existing tokamaks and their extrapolation to ITER should carefully distinguish confined versus unconfined magnetic field lines and quasi-ideal versus resistive evolution. The separation of quasi-ideal from resistive evolution is extremely challenging numerically, but is greatly simplified by constraints of Maxwell’s equations, and in particular those associated with magnetic helicity. Thus, the physics of electron runaway along confined magnetic field lines is clarified by relations among the poloidal flux change required for an e-fold in the number of electrons, the energy distribution of the relativistic electrons, and the number of relativistic electron strikes that can be expected in a single disruption event.
Analysis of Thermal and Reaction Times for Hydrogen Reduction of Lunar Regolith
NASA Technical Reports Server (NTRS)
Hegde, U.; Balasubramaniam, R.; Gokoglu, S.
2008-01-01
System analysis of oxygen production by hydrogen reduction of lunar regolith has shown the importance of the relative time scales for regolith heating and chemical reaction to overall performance. These values determine the sizing and power requirements of the system and also impact the number and operational phasing of reaction chambers. In this paper, a Nusselt number correlation analysis is performed to determine the heat transfer rates and regolith heat up times in a fluidized bed reactor heated by a central heating element (e.g., a resistively heated rod, or a solar concentrator heat pipe). A coupled chemical and transport model has also been developed for the chemical reduction of regolith by a continuous flow of hydrogen. The regolith conversion occurs on the surfaces of and within the regolith particles. Several important quantities are identified as a result of the above analyses. Reactor scale parameters include the void fraction (i.e., the fraction of the reactor volume not occupied by the regolith particles) and the residence time of hydrogen in the reactor. Particle scale quantities include the particle Reynolds number, the Archimedes number, and the time needed for hydrogen to diffuse into the pores of the regolith particles. The analysis is used to determine the heat up and reaction times and its application to NASA's oxygen production system modeling tool is noted.
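The reactor- and particle-scale quantities named above are standard dimensionless groups and characteristic times. The sketch below evaluates them for illustrative, assumed property values, not the values used in the paper.

```python
# Illustrative property values (assumptions, not from the paper)
g      = 1.62      # lunar gravity, m/s^2
d_p    = 100e-6    # regolith particle diameter, m
rho_p  = 3000.0    # particle density, kg/m^3
rho_g  = 0.05      # hydrogen density at reactor conditions, kg/m^3
mu_g   = 2.0e-5    # hydrogen dynamic viscosity, Pa s
u_g    = 0.1       # superficial gas velocity, m/s
eps    = 0.5       # reactor void fraction
V_bed  = 0.05      # reactor volume, m^3
Q_gas  = 1.0e-3    # volumetric hydrogen flow, m^3/s
D_eff  = 1.0e-9    # effective diffusivity of H2 in particle pores, m^2/s

# Particle Reynolds number (inertial vs viscous forces on a particle)
Re_p = rho_g * u_g * d_p / mu_g

# Archimedes number (gravity/buoyancy vs viscous forces; governs fluidization)
Ar = g * d_p**3 * rho_g * (rho_p - rho_g) / mu_g**2

# Hydrogen residence time in the void volume of the bed
t_res = eps * V_bed / Q_gas

# Characteristic time for hydrogen to diffuse into a particle's pores
t_diff = (d_p / 2) ** 2 / D_eff

print(f"Re_p = {Re_p:.3f}, Ar = {Ar:.4f}")
print(f"residence time = {t_res:.1f} s, pore diffusion time = {t_diff:.2f} s")
```

Comparing the residence, diffusion, and heat-up times in this way is what fixes the number and phasing of reaction chambers in the system analysis.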
Taffarel, Marilda Onghero; Luna, Stelio Pacca Loureiro; de Oliveira, Flavia Augusta; Cardoso, Guilherme Schiess; Alonso, Juliana de Moura; Pantoja, Jose Carlos; Brondani, Juliana Tabarelli; Love, Emma; Taylor, Polly; White, Kate; Murrell, Joanna C
2015-04-01
Quantification of pain plays a vital role in the diagnosis and management of pain in animals. In order to refine and validate an acute pain scale for horses, a prospective, randomized, blinded study was conducted. Twenty-four client-owned adult horses were recruited and allocated to one of the four following groups: anaesthesia only (GA); pre-emptive analgesia and anaesthesia (GAA); anaesthesia, castration and postoperative analgesia (GC); or pre-emptive analgesia, anaesthesia and castration (GCA). One investigator, unaware of the treatment group, assessed all horses at time-points before and after intervention and completed the pain scale. Videos were also obtained at these time-points and were evaluated by a further four blinded evaluators who also completed the scale. The data were used to investigate the relevance, specificity, criterion validity and inter- and intra-observer reliability of each item on the pain scale, and to evaluate construct validity and responsiveness of the scale. Construct validity was demonstrated by the observed differences in scores between the groups, four hours after anaesthetic recovery and before administration of systemic analgesia in the GC group. Inter- and intra-observer reliability for the items was only satisfactory. Subsequently, the pain scale was refined, based on results for relevance, specificity and total item correlation. Scale refinement and exclusion of items that did not meet predefined requirements generated a selection of relevant pain behaviours in horses. After further validation for reliability, these may be used to evaluate pain under clinical and experimental conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, James K.B.
Prediction of the substantial biologically mediated carbon flows in a rapidly changing and acidifying ocean requires model simulations informed by observations of key carbon cycle processes on the appropriate space and time scales. From 2000 to 2004, the National Oceanographic Partnership Program (NOPP) supported the development of the first low-cost fully-autonomous ocean profiling Carbon Explorers that demonstrated that year-round real-time observations of particulate organic carbon (POC) concentration and sedimentation could be achieved in the world's ocean. NOPP also initiated the development of a sensor for particulate inorganic carbon (PIC) suitable for operational deployment across all oceanographic platforms. As a result, PIC profile characterization that once required shipboard sample collection and shipboard or shore based laboratory analysis, is now possible to full ocean depth in real time using a 0.2W sensor operating at 24 Hz. NOPP developments further spawned US DOE support to develop the Carbon Flux Explorer, a free-vehicle capable of following hourly variations of particulate inorganic and organic carbon sedimentation from near surface to kilometer depths for seasons to years and capable of relaying contemporaneous observations via satellite. We have demonstrated the feasibility of real time - low cost carbon observations which are of fundamental value to carbon prediction and when further developed, will lead to a fully enhanced global carbon observatory capable of real time assessment of the ocean carbon sink, a needed constraint for assessment of carbon management policies on a global scale.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Goldie, Fraser C; Fulton, Rachael L; Dawson, Jesse; Bluhmki, Erich; Lees, Kennedy R
2014-08-01
Clinical trials for acute ischemic stroke treatment require large numbers of participants and are expensive to conduct. Methods that enhance statistical power are therefore desirable. We explored whether this can be achieved by a measure incorporating both early and late measures of outcome (e.g. seven-day NIH Stroke Scale combined with 90-day modified Rankin scale). We analyzed sensitivity to treatment effect, using proportional odds logistic regression for ordinal scales and the generalized estimating equation method for global outcomes, with all analyses adjusted for baseline severity and age. We ran simulations to assess relations between sample size and power for ordinal scales and corresponding global outcomes. We used R version 2·12·1 (R Development Core Team. R Foundation for Statistical Computing, Vienna, Austria) for simulations and SAS 9·2 (SAS Institute Inc., Cary, NC, USA) for all other analyses. Each scale considered for combination was sensitive to treatment effect in isolation. The mRS90 and NIHSS90 had adjusted odds ratios of 1·56 and 1·62, respectively. Adjusted odds ratios for the global outcomes combining mRS90 with NIHSS7 and NIHSS90 with NIHSS7 were 1·69 and 1·73, respectively. The smallest sample sizes required to generate statistical power ≥80% for mRS90, NIHSS7, and global outcomes of mRS90 and NIHSS7 combined and NIHSS90 and NIHSS7 combined were 500, 490, 400, and 380, respectively. When data concerning both early and late outcomes are combined into a global measure, there is increased sensitivity to treatment effect compared with solitary ordinal scales. This delivers a 20% reduction in required sample size at 80% power. Combining early with late outcomes merits further consideration. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.
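The power gains reported above come from simulation. The following is a simplified Monte Carlo sketch of such a power calculation for a single ordinal endpoint, using a proportional-odds shift of the control distribution and an unadjusted Mann-Whitney test as a stand-in for the covariate-adjusted ordinal and GEE models the authors used; the outcome categories, probabilities, and odds ratio are illustrative assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def simulate_power(n_per_arm, control_probs, odds_ratio, n_sims=2000, alpha=0.05):
    """Monte Carlo power for detecting a common-odds-ratio shift on an ordinal scale."""
    levels = np.arange(len(control_probs))
    cum = np.cumsum(control_probs)[:-1]          # control cumulative probabilities
    ctrl_odds = cum / (1.0 - cum)
    treat_cum = (odds_ratio * ctrl_odds) / (1.0 + odds_ratio * ctrl_odds)
    treat_probs = np.diff(np.concatenate(([0.0], treat_cum, [1.0])))
    hits = 0
    for _ in range(n_sims):
        ctrl = rng.choice(levels, size=n_per_arm, p=control_probs)
        trt = rng.choice(levels, size=n_per_arm, p=treat_probs)
        p = mannwhitneyu(trt, ctrl, alternative="two-sided").pvalue
        hits += p < alpha
    return hits / n_sims

# Illustrative 7-level ordinal outcome and a treatment odds ratio of about 1.6
probs = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])
for n in (200, 250, 300):
    print(n, "per arm -> power", simulate_power(n, probs, odds_ratio=1.6))
```

A global-outcome analysis would repeat this with correlated early and late endpoints per patient and a joint test, which is where the reported sample-size reduction arises.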
Catastrophic ice lake collapse in Aram Chaos, Mars
NASA Astrophysics Data System (ADS)
Roda, Manuel; Kleinhans, Maarten G.; Zegers, Tanja E.; Oosthoek, Jelmer H. P.
2014-07-01
Hesperian chaotic terrains have been recognized as the source of outflow channels formed by catastrophic outflows. Four main scenarios have been proposed for the formation of chaotic terrains that involve different amounts of water and single or multiple outflow events. Here, we test these scenarios with morphological and structural analyses of imagery and elevation data for Aram Chaos in conjunction with numerical modeling of the morphological evolution of the catastrophic carving of the outflow valley. The morphological and geological analyses of Aram Chaos suggest large-scale collapse and subsidence (1500 m) of the entire area, which is consistent with a massive expulsion of liquid water from the subsurface in one single event. The combined observations suggest a complex process starting with the outflow of water from two small channels, followed by continuous groundwater sapping and headward erosion and ending with a catastrophic lake rim collapse and carving of the Aram Valley, which is synchronous with the 2.5 Ga stage of the Ares Vallis formation. The water volume and formative time scale required to carve the Aram channels indicate that a single, rapid (maximum tens of days) and catastrophic (flood volume of 9.3 × 104 km3) event carved the outflow channel. We conclude that a sub-ice lake collapse model can best explain the features of the Aram Chaos Valley system as well as the time scale required for its formation.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
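A minimal sketch of the core compression step is given below, using scikit-learn's randomized SVD on a synthetic stand-in dictionary; real MRF dictionaries are simulated from the pulse sequence, and the sizes and rank here are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(1)

# Synthetic stand-in for an MRF dictionary: rows = fingerprints (time courses),
# columns = time points; built to have approximately low rank.
n_atoms, n_timepoints, true_rank = 10_000, 500, 30
D = rng.standard_normal((n_atoms, true_rank)) @ rng.standard_normal((true_rank, n_timepoints))

# Randomized SVD yields a rank-k factorization without a full SVD of D,
# which is where the memory and compute savings for large dictionaries come from.
rank = 50
U, s, Vt = randomized_svd(D, n_components=rank, random_state=0)

# Compressed dictionary: project fingerprints onto the rank-k temporal subspace.
D_compressed = D @ Vt.T                      # shape (n_atoms, rank)

# Matching a measured signal is then done in the compressed space.
signal = D[1234] + 0.01 * rng.standard_normal(n_timepoints)
coeffs = signal @ Vt.T
scores = (D_compressed @ coeffs) / (
    np.linalg.norm(D_compressed, axis=1) * np.linalg.norm(coeffs)
)
print("matched dictionary entry:", int(np.argmax(scores)))
```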
Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J
2017-07-14
In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench-scale and clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), comparable product quality results at all scales make this tool an appropriate scale model for purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Barrett, Charles A.
1999-01-01
Power systems with operating temperatures in the range of 815 to 982 C (1500 to 1800 F) frequently require alloys that can operate for long times at these temperatures. A critical requirement is that these alloys have adequate oxidation resistance. The alloys used in these power systems require thousands of hours of operating life with intermittent shutdown to room temperature. Intermittent power plant shutdowns, however, offer the possibility that the protective scale will tend to spall (i.e., crack and flake off) upon cooling, increasing the rate of oxidative attack in subsequent heating cycles. Thus, it is critical that candidate alloys be evaluated for cyclic oxidation behavior. It was determined that exposing test alloys to ten 1000-hr cycles in static air at 982 C (1800 F) could give a reasonable simulation of long-time power plant operation. Iron- (Fe-), nickel- (Ni-), and cobalt- (Co-) based high-temperature alloys with sufficient chromium (Cr) and/or aluminum (Al) content can exhibit excellent oxidation resistance. The protective oxides formed by these classes of alloys are typically Cr2O3 and/or Al2O3, and are usually influenced by their Cr, or Cr and Al, content. Sixty-eight Co-, Fe-, and Ni-base high-temperature alloys, typical of those used at this temperature or higher, were used in this study. At the NASA Lewis Research Center, the alloys were tested and compared on the basis of their weight change as a function of time, x-ray diffraction of the protective scale composition, and the physical appearance of the exposed samples. Although final appearance and x-ray diffraction of the final scale products were two factors used to evaluate the oxidation resistance of each alloy, the main criterion was the oxidation kinetics inferred from the specific weight change versus time data. These data indicated a range of oxidation behavior including parabolic (typical of isothermal oxidation), paralinear, linear, and mixed-linear kinetics.
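The oxidation kinetics mentioned above are usually classified by fitting the specific-weight-change data; the minimal sketch below fits a parabolic rate law to synthetic data (the rate constant and noise level are illustrative, not values from the study).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic specific weight-change data (mg/cm^2) over ten 1000-h cycles,
# generated from a parabolic law with a little noise -- illustrative only.
t = np.arange(1, 11) * 1000.0          # cumulative exposure time, h
k_p_true = 4.0e-4                      # parabolic rate constant, mg^2 cm^-4 h^-1
dw = np.sqrt(k_p_true * t) + rng.normal(0.0, 0.02, t.size)

# Parabolic kinetics: (delta w)^2 = k_p * t, so regress dw^2 on t through the origin.
k_p_fit = np.sum(t * dw**2) / np.sum(t**2)
print(f"fitted k_p = {k_p_fit:.2e} mg^2 cm^-4 h^-1")

# A spalling alloy would instead show paralinear or linear behavior, i.e., the
# weight-change curve departing from the sqrt(t) shape (often turning negative).
```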
Extending the length and time scales of Gram-Schmidt Lyapunov vector computations
NASA Astrophysics Data System (ADS)
Costa, Anthony B.; Green, Jason R.
2013-08-01
Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with the particle count N. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
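For readers unfamiliar with the procedure, this is a minimal CPU-only sketch of the Gram-Schmidt (QR) reorthonormalization step, demonstrated on the two-dimensional Hénon map rather than a Lennard-Jones fluid. For an N-particle fluid the tangent-space matrices have dimension proportional to N (so their size grows as N²), and the repeated QR factorizations below are the expensive step that the ScaLAPACK and MAGMA implementations accelerate.

```python
import numpy as np

# Benettin/Gram-Schmidt (QR) procedure for Lyapunov exponents on the Henon map.
a, b = 1.4, 0.3

def step(x, y):
    return 1.0 - a * x**2 + y, b * x

def jacobian(x, y):
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

x, y = 0.1, 0.1
Q = np.eye(2)                    # orthonormal tangent vectors
log_r = np.zeros(2)
n_steps = 100_000
for _ in range(n_steps):
    J = jacobian(x, y)
    x, y = step(x, y)
    Q, R = np.linalg.qr(J @ Q)   # evolve tangent vectors, then reorthonormalize
    log_r += np.log(np.abs(np.diag(R)))

lyapunov = log_r / n_steps
print("Lyapunov exponents:", lyapunov)   # roughly (0.42, -1.62) for these parameters
```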
Selective attention to temporal features on nested time scales.
Henry, Molly J; Herrmann, Björn; Obleser, Jonas
2015-02-01
Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Hackl, Jason F.
The relative dispersion of one fluid particle with respect to another is fundamentally related to the transport and mixing of contaminant species in turbulent flows. The most basic consequence of Kolmogorov's 1941 similarity hypotheses for relative dispersion is the Richardson-Obukhov law, which states that the mean-square pair separation distance grows with the cube of time in the inertial subrange.
NASA Astrophysics Data System (ADS)
Wang, B.; Bauer, S.; Pfeiffer, W. T.
2015-12-01
Large-scale energy storage will be required to mitigate offsets between electric energy demand and the fluctuating electric energy production from renewable sources like wind farms, if renewables dominate energy supply. Porous formations in the subsurface could provide the large storage capacities required if chemical energy carriers, such as hydrogen gas produced during phases of energy surplus, are stored. This work assesses the behavior of a porous media hydrogen storage operation through numerical scenario simulation of a synthetic, heterogeneous sandstone formation formed by an anticlinal structure. The structural model is parameterized using data available for the North German Basin as well as data given for formations with similar characteristics. Based on the geological setting at the storage site, a total of 15 facies distributions are generated and the hydrological parameters are assigned accordingly. Hydraulic parameters are spatially distributed according to the facies present and include permeability, porosity, relative permeability, and capillary pressure. The storage is designed to supply energy in times of deficiency on the order of seven days, which represents the typical time span of weather conditions with no wind. It is found that using five injection/extraction wells, 21.3 million sm³ of hydrogen gas can be stored and retrieved to supply 62,688 MWh of energy within 7 days. This requires a ratio of working to cushion gas of 0.59. The retrievable energy within this time represents the demand of about 450,000 people. Furthermore, it is found that for longer storage times larger gas volumes have to be used, while for higher delivery rates the number of wells additionally has to be increased. The formation investigated here thus seems to offer sufficient capacity and deliverability to be used for a large-scale hydrogen gas storage operation.
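As a rough consistency check of the figures quoted above, the stored working-gas volume can be converted to chemical energy using the lower heating value of hydrogen (about 3 kWh per standard cubic metre, an assumed round number); the result is close to the quoted 62,688 MWh.

```python
# Rough consistency check of the storage figures quoted above (approximate).
volume_sm3 = 21.3e6          # stored working gas, standard cubic metres
lhv_kwh_per_sm3 = 3.0        # approximate lower heating value of hydrogen
energy_mwh = volume_sm3 * lhv_kwh_per_sm3 / 1000.0
print(f"~{energy_mwh:,.0f} MWh of chemical energy")      # ~63,900 MWh vs 62,688 MWh quoted
print(f"average delivery over 7 days: {energy_mwh / (7 * 24):,.0f} MW")
```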
Salkin, J A; Stuchin, S A; Kummer, F J; Reininger, R
1995-11-01
Five types of commercial glove liners (within double latex gloves) were compared to single and double latex gloves for cut and puncture resistance and for relative manual dexterity and degree of sensibility. An apparatus was constructed to test glove-pseudofinger constructs in either a cutting or puncture mode. Cutting forces, cutting speed, and type of blade (serrated or scalpel blade) were varied and the time to cut-through was measured by an electrical conductivity circuit. Penetration forces were similarly determined with a scalpel blade and a suture needle using a spring scale loading apparatus. Dexterity was measured with an object placement task among a group of orthopedic surgeons. Sensibility was assessed with Semmes-Weinstein monofilaments, two-point discrimination, and vibrametry using standard techniques and rating scales. A subjective evaluation was performed at the end of testing. Time to cut-through for the liners ranged from 2 to 30 seconds for a rapid oscillating scalpel and 4 to 40 seconds for a rapid oscillating serrated knife under minimal loads. When a 1 kg load was added, times to cut-through ranged from 0.4 to 1.0 second. In most cases, the liners were superior to double latex. On average, 100% more force was required to penetrate the liners with a scalpel and 50% more force was required to penetrate the liners with a suture needle compared to double latex. Object placement task times were not significantly different for the liners compared to double latex gloves. Semmes-Weinstein monofilaments, two-point discrimination, and vibrametry showed no difference in sensibility among the various liners and double latex gloves. Subjects felt that the liners were minimally to moderately impairing. An acclimation period may be required for their effective use.
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow.
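Schematically, and in the notation commonly used for this family of stochastic transport models, the resulting stochastic Lagrangian particle dynamics takes a form along the following lines (this is a paraphrase for orientation only; the precise construction is in the cited papers):

```latex
\mathrm{d}\mathbf{x}_t
  \;=\; \bar{\mathbf{u}}(\mathbf{x}_t,t)\,\mathrm{d}t
  \;+\; \sum_i \boldsymbol{\xi}_i(\mathbf{x}_t)\circ \mathrm{d}W_t^{\,i},
```

where the slow resolved mean velocity is denoted by the barred field, the spatial modes multiplying the noise encode the fast small-scale fluctuations, and the circle denotes Stratonovich integration.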
76 FR 18348 - Required Scale Tests
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-04
... RIN 0580-AB10 Required Scale Tests AGENCY: Grain Inspection, Packers and Stockyards Administration... published a document in the Federal Register on January 20, 2011 (76 FR 3485), defining required scale tests...-month period following each test. * * * * * Alan R. Christian, Acting Administrator, Grain Inspection...
LOSS OF BLOOD AT OPERATION—A Method for Continuous Measurement
Borden, Fred W.
1957-01-01
A method for continuous measurement of surgical blood loss has been devised and has been used clinically in some 400 cases. The method combines volumetric measure of the suction loss and gravimetric measure of the sponge loss. The volumetric device automatically deducts the volume of rinse water used and thus measures the amount of blood collected in a metering cylinder. The suction loss scale shows continuously the amount of blood in the metering cylinder. The gravimetric device requires counting sponges into the weighing pan, and turning a dial scale to deduct the initial weight of the sponges. The volume of blood in the sponges is then read directly on the dial scale. Use of the instrument, which is under the supervision of the anesthesiologist, adds about two minutes per hour to the time normally required for counting the sponges; and about three minutes per hour is required for tending the volumetric instrument. In clinical use, knowing constantly the amount of blood loss permits the starting of transfusion before serious deficit develops, and then maintaining the patient's blood volume at a predetermined optimum level. In some 400 cases the continuous measurement of the blood loss served as a reliable guide for carrying out the loss-replacement plan within close limits of accuracy. PMID:13446754
NASA Technical Reports Server (NTRS)
Shidner, Jeremy D.; Davis, Jody L.; Cianciolo, Alicia D.; Samareh, Jamshid A.; Powell, Richard W.
2010-01-01
Landing on Mars has been a challenging task. Past NASA missions have shown resilience to increases in spacecraft mass by scaling back requirements such as landing site altitude, landing site location and arrival time. Knowledge of the partials relating requirements to mass is critical for mission designers to understand so that the project can retain margin throughout the process. Looking forward to new missions that will land 1.5 metric tons or greater, the current level of technology is insufficient, and new technologies will need to be developed. Understanding the sensitivity of these new technologies to requirements is the purpose of this paper.
Design and application of a web-based real-time personal PM2.5 exposure monitoring system.
Sun, Qinghua; Zhuang, Jia; Du, Yanjun; Xu, Dandan; Li, Tiantian
2018-06-15
The growing demand from public health research to conduct large-scale epidemiological studies exploring the health effects of PM2.5 is well documented. To address this need, we designed a web-based real-time personal PM2.5 exposure monitoring system (RPPM2.5 system) that helps researchers obtain large volumes of personal PM2.5 exposure data with low cost, low labor requirements, and low operating technical requirements. The RPPM2.5 system provides relatively accurate real-time personal exposure data for individuals, researchers, and decision makers. The system has been used in a survey of personal PM2.5 exposure levels conducted in 5 cities of China and has provided a wealth of valuable data for epidemiological research. Copyright © 2018 Elsevier B.V. All rights reserved.
Rapid time-resolved diffraction studies of protein structures using synchrotron radiation
NASA Astrophysics Data System (ADS)
Bartunik, Hans D.; Bartunik, Lesley J.
1992-07-01
The crystal structure of intermediate states in biological reactions of proteins or multi-protein complexes may be studied by time-resolved X-ray diffraction techniques which make use of the high spectral brilliance, continuous wavelength distribution and pulsed time structure of synchrotron radiation. Laue diffraction methods provide a means of investigating intermediate structures with lifetimes in the millisecond time range at presently operational facilities. Third-generation storage rings which are under construction may permit one to reach a time resolution of one microsecond for non-cyclic and one nanosecond for cyclic reactions. The number of individual exposures required for exploring reciprocal space and hence the total time scale strongly depend on the lattice order that may be affected, e.g., by conformational changes. Time-resolved experiments require high population of a specific intermediate which has to be homogeneous over the crystal volume. A number of external excitation techniques have been developed including in situ liberation of active metabolites by laser pulse photolysis of photolabile inactive precursors. First applications to crystal structure analysis of catalytic intermediates of enzymes demonstrate the potential of time-resolved protein crystallography.
Real-time detection of antibiotic activity by measuring nanometer-scale bacterial deformation
NASA Astrophysics Data System (ADS)
Iriya, Rafael; Syal, Karan; Jing, Wenwen; Mo, Manni; Yu, Hui; Haydel, Shelley E.; Wang, Shaopeng; Tao, Nongjian
2017-12-01
Diagnosing antibiotic-resistant bacteria currently requires sensitive detection of phenotypic changes associated with antibiotic action on bacteria. Here, we present an optical imaging-based approach to quantify bacterial membrane deformation as a phenotypic feature in real-time with a nanometer scale (˜9 nm) detection limit. Using this approach, we found two types of antibiotic-induced membrane deformations in different bacterial strains: polymyxin B induced relatively uniform spatial deformation of Escherichia coli O157:H7 cells leading to change in cellular volume and ampicillin-induced localized spatial deformation leading to the formation of bulges or protrusions on uropathogenic E. coli CFT073 cells. We anticipate that the approach will contribute to understanding of antibiotic phenotypic effects on bacteria with a potential for applications in rapid antibiotic susceptibility testing.
Identifying and correcting non-Markov states in peptide conformational dynamics
NASA Astrophysics Data System (ADS)
Nerukh, Dmitry; Jensen, Christian H.; Glen, Robert C.
2010-02-01
Conformational transitions in proteins define their biological activity and can be investigated in detail using the Markov state model. The fundamental assumption on the transitions between the states, their Markov property, is critical in this framework. We test this assumption by analyzing the transitions obtained directly from the dynamics of a molecular dynamics simulated peptide valine-proline-alanine-leucine and states defined phenomenologically using clustering in dihedral space. We find that the transitions are Markovian at the time scale of ≈50 ps and longer. However, at the time scale of 30-40 ps the dynamics loses its Markov property. Our methodology reveals the mechanism that leads to non-Markov behavior. It also provides a way of regrouping the conformations into new states that now possess the required Markov property of their dynamics.
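One standard way to test the Markov property described above is a Chapman-Kolmogorov check: estimate the transition matrix at lag τ and compare T(2τ) with T(τ)². The sketch below does this on a synthetic three-state trajectory standing in for the clustered dihedral-space states; the state count, lag, and transition probabilities are illustrative assumptions, not the peptide's actual states.

```python
import numpy as np

rng = np.random.default_rng(3)

def transition_matrix(traj, n_states, lag):
    """Row-normalized transition matrix estimated from a discrete state trajectory."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

# Synthetic 3-state trajectory generated from a known Markov chain.
T_true = np.array([[0.95, 0.04, 0.01],
                   [0.03, 0.90, 0.07],
                   [0.02, 0.08, 0.90]])
states = [0]
for _ in range(200_000):
    states.append(rng.choice(3, p=T_true[states[-1]]))
traj = np.array(states)

lag = 5                                   # in trajectory frames (e.g., ~50 ps of dynamics)
T_tau = transition_matrix(traj, 3, lag)
T_2tau = transition_matrix(traj, 3, 2 * lag)

# Chapman-Kolmogorov check: for Markovian dynamics, T(2*tau) should equal T(tau)^2.
print("max |T(2tau) - T(tau)^2| =", np.abs(T_2tau - T_tau @ T_tau).max())
```

A large discrepancy at short lags, shrinking as the lag grows, is the signature of the loss of Markovianity reported above, and regrouping states aims to remove it.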
Coherent spin control of a nanocavity-enhanced qubit in diamond
Li, Luozhou; Lu, Ming; Schroder, Tim; ...
2015-01-28
A central aim of quantum information processing is the efficient entanglement of multiple stationary quantum memories via photons. Among solid-state systems, the nitrogen-vacancy centre in diamond has emerged as an excellent optically addressable memory with second-scale electron spin coherence times. Recently, quantum entanglement and teleportation have been shown between two nitrogen-vacancy memories, but scaling to larger networks requires more efficient spin-photon interfaces such as optical resonators. Here we report such nitrogen-vacancy nanocavity systems in the strong Purcell regime with optical quality factors approaching 10,000 and electron spin coherence times exceeding 200 µs using a silicon hard-mask fabrication process. This spin-photon interface is integrated with on-chip microwave striplines for coherent spin control, providing an efficient quantum memory for quantum networks.
Minimizing the area required for time constants in integrated circuits
NASA Technical Reports Server (NTRS)
Lyons, J. C.
1972-01-01
When a medium- or large-scale integrated circuit is designed, efforts are usually made to avoid the use of resistor-capacitor time constant generators. The capacitor needed for this circuit usually takes up more surface area on the chip than several resistors and transistors. When the use of this network is unavoidable, the designer usually makes an effort to see that the choice of resistor and capacitor combinations is such that a minimum amount of surface area is consumed. The optimum ratio of resistance to capacitance that will result in this minimum area is equal to the ratio of resistance to capacitance which may be obtained from a unit of surface area for the particular process being used. The minimum area required is a function of the square root of the reciprocal of the products of the resistance and capacitance per unit area. This minimum occurs when the area required by the resistor is equal to the area required by the capacitor.
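The optimization sketched in words above can be written out explicitly. Assuming r and c denote the resistance and the capacitance obtainable per unit chip area, and τ = RC is the required time constant, the total area and its minimum are:

```latex
A(R) \;=\; \frac{R}{r} + \frac{C}{c} \;=\; \frac{R}{r} + \frac{\tau}{cR},
\qquad
\frac{\mathrm{d}A}{\mathrm{d}R} = \frac{1}{r} - \frac{\tau}{cR^{2}} = 0
\;\Rightarrow\;
\frac{R}{C} = \frac{R^{2}}{\tau} = \frac{r}{c},
\qquad
A_{\min} = 2\sqrt{\frac{\tau}{rc}} \;\propto\; \sqrt{\frac{1}{rc}} .
```

This reproduces both statements: the optimum ratio of resistance to capacitance equals the ratio obtainable from a unit of surface area, and the minimum area scales as the square root of the reciprocal of the product of resistance and capacitance per unit area (at fixed τ, with the resistor and capacitor occupying equal areas).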
24 CFR 3280.207 - Requirements for foam plastic thermal insulating materials.
Code of Federal Regulations, 2012 CFR
2012-04-01
... include intensity of cavity fire (temperature-time) and post-test damage. (iii) Post-test damage... Technology Research Institute (IIT) Report, “Development of Mobile Home Fire Test Methods to Judge the Fire... Project J-6461, 1979” or other full-scale fire tests accepted by HUD, and it is installed in a manner...
24 CFR 3280.207 - Requirements for foam plastic thermal insulating materials.
Code of Federal Regulations, 2011 CFR
2011-04-01
... include intensity of cavity fire (temperature-time) and post-test damage. (iii) Post-test damage... Technology Research Institute (IIT) Report, “Development of Mobile Home Fire Test Methods to Judge the Fire... Project J-6461, 1979” or other full-scale fire tests accepted by HUD, and it is installed in a manner...
24 CFR 3280.207 - Requirements for foam plastic thermal insulating materials.
Code of Federal Regulations, 2010 CFR
2010-04-01
... include intensity of cavity fire (temperature-time) and post-test damage. (iii) Post-test damage... Technology Research Institute (IIT) Report, “Development of Mobile Home Fire Test Methods to Judge the Fire... Project J-6461, 1979” or other full-scale fire tests accepted by HUD, and it is installed in a manner...
24 CFR 3280.207 - Requirements for foam plastic thermal insulating materials.
Code of Federal Regulations, 2014 CFR
2014-04-01
... include intensity of cavity fire (temperature-time) and post-test damage. (iii) Post-test damage... Technology Research Institute (IIT) Report, “Development of Mobile Home Fire Test Methods to Judge the Fire... Project J-6461, 1979” or other full-scale fire tests accepted by HUD, and it is installed in a manner...
24 CFR 3280.207 - Requirements for foam plastic thermal insulating materials.
Code of Federal Regulations, 2013 CFR
2013-04-01
... include intensity of cavity fire (temperature-time) and post-test damage. (iii) Post-test damage... Technology Research Institute (IIT) Report, “Development of Mobile Home Fire Test Methods to Judge the Fire... Project J-6461, 1979” or other full-scale fire tests accepted by HUD, and it is installed in a manner...
Impact of a Faith-Based Social Justice Course on Pre-Service Teachers
ERIC Educational Resources Information Center
Critchfield, Meredith
2018-01-01
Rapid demographic shifts are occurring around the country. United States' public schools are more diverse than any time in history. To help prepare pre-service teachers for these shifts, this small-scale qualitative case study explored the impact of a required social justice course for pre-service educators at a large private Christian university…
Developing Symbolic Capacity One Step at a Time
ERIC Educational Resources Information Center
Huttenlocher, Janellen; Vasilyeva, Marina; Newcombe, Nora; Duffy, Sean
2008-01-01
The present research examines the ability of children as young as 4 years to use models in tasks that require scaling of distance along a single dimension. In Experiment 1, we found that tasks involving models are similar in difficulty to those involving maps that we studied earlier (Huttenlocher, J., Newcombe, N., & Vasilyeva, M. (1999). Spatial…
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
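A minimal scikit-learn sketch of the NMF-plus-clustering idea for unmixing groundwater signatures is given below. The synthetic mixing, the number of sources, and the plain k-means step are illustrative stand-ins for the customized algorithm and the Julia/MADS implementation described above.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Synthetic "observed" hydrogeochemical data: concentrations of 8 species at 50
# wells, produced by mixing 3 non-negative source signatures plus small noise.
n_wells, n_species, n_sources = 50, 8, 3
sources = rng.uniform(0.0, 1.0, size=(n_sources, n_species))     # source chemistry
mixing = rng.dirichlet(np.ones(n_sources), size=n_wells)         # per-well mixing ratios
X = mixing @ sources + 0.01 * rng.uniform(size=(n_wells, n_species))

# Blind source separation: factor X ~ W H with non-negativity constraints.
model = NMF(n_components=n_sources, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)          # estimated per-well source contributions
H = model.components_               # estimated source signatures

# Cluster wells by their reconstructed contributions (a simplified stand-in for
# the customized k-means step used to identify groundwater types).
labels = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit_predict(W)
print("well cluster labels:", labels)
```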
Cognitive performance in women with fibromyalgia: A case-control study.
Pérez de Heredia-Torres, Marta; Huertas-Hoyas, Elisabet; Máximo-Bocanegra, Nuria; Palacios-Ceña, Domingo; Fernández-De-Las-Peñas, César
2016-10-01
This study aimed to evaluate the differences in cognitive skills between women with fibromyalgia and healthy women, and the correlations between functional independence and cognitive limitations. A cross-sectional study was performed. Twenty women with fibromyalgia and 20 matched controls participated. Outcomes included the Numerical Pain Rating Scale, the Functional Independence Measure, the Fibromyalgia Impact Questionnaire and Gradior© software. The Student's t-test and the Spearman's rho test were applied to the data. The affected women required a greater mean time (P < 0.020) and maximum time (P < 0.015) during the attention test than the healthy controls. In the memory test they displayed greater execution errors (P < 0.001), minimal time (P < 0.001) and mean time (P < 0.001) whereas, in the perception tests, they displayed a greater mean time (P < 0.009) and maximum time (P < 0.048). Correlations were found between the domains of the functional independence measure and the cognitive abilities assessed. Women with fibromyalgia exhibited a decreased cognitive ability compared to healthy controls, which negatively affected the performance of daily activities, such as upper limb dressing, feeding and personal hygiene. Patients required more time to perform activities requiring both attention and perception, decreasing their functional independence. Also, they displayed greater errors when performing activities requiring the use of memory. Occupational therapists treating women with fibromyalgia should consider the negative impact of possible cognitive deficits on the performance of daily activities and offer targeted support strategies. © 2016 Occupational Therapy Australia.
Getting the Big Picture: Design Considerations for a ngVLA Short Spacing Array
NASA Astrophysics Data System (ADS)
Mason, Brian Scott; Cotton, William; Condon, James; Kepley, Amanda; Selina, Rob; Murphy, Eric Joseph
2018-01-01
The Next Generation VLA (ngVLA) aims to provide a revolutionary increase in cm-wavelength collecting area and sensitivity while at the same time providing excellent image fidelity for a broad spectrum of science cases. Likely ngVLA configurations currently envisioned provide sensitivity over a very wide range of spatial scales. The antenna diameter (notionally 18 meters) fundamentally limits the largest angular scales that can be reached. One simple and powerful way to image larger angular scales is to build a complementary interferometer comprising a smaller number of smaller-diameter dishes. We have investigated the requirements that such an array would need to meet in order to usefully scientifically complement the ngVLA; this poster presents the results of our investigation.
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because their properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware that is able to handle the disparate temporal and spatial scale sizes. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude of computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should decrease the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented, and used to provide new insight into radiation belt dynamics.
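The abstract does not name the particle integrator; as an illustration of the particle-tracking half of the problem, here is a minimal single-particle sketch using the Boris scheme (a common choice for test-particle tracking in magnetospheric fields) in an ideal dipole field. The dipole strength, initial conditions, and step size are hypothetical, and a GPU implementation would push many such particles in parallel.

```python
# Minimal sketch of charged-particle tracking in a static dipole-like field
# using the Boris scheme, a common pusher for radiation-belt test particles.
# The field model, particle, and step size are illustrative assumptions.
import numpy as np

Q, M = 1.602e-19, 1.672e-27          # proton charge [C] and mass [kg]
B0_RE3 = 8.0e15                      # ~ Earth dipole moment term B0*RE^3 [T m^3]

def dipole_B(r):
    """Ideal magnetic dipole field at position r (meters, dipole along z)."""
    rr = np.linalg.norm(r)
    m = np.array([0.0, 0.0, -B0_RE3])
    return (3.0 * r * np.dot(m, r) / rr**5) - m / rr**3

def boris_step(r, v, dt):
    """Advance position/velocity by dt with the Boris rotation (E = 0 assumed)."""
    B = dipole_B(r)
    t = (Q * dt / (2.0 * M)) * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    v_new = v + np.cross(v_prime, s)
    return r + v_new * dt, v_new

r = np.array([4.0 * 6.371e6, 0.0, 0.0])      # start at 4 Earth radii
v = np.array([0.0, 1.0e6, 2.0e5])            # m/s, hypothetical energetic proton
dt = 1.0e-4
for _ in range(10000):
    r, v = boris_step(r, v, dt)
print("final position [RE]:", r / 6.371e6)
```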
Mistry, Punam; Batchelor, Hannah
2017-06-01
Regulatory guidelines require that any new medicine designed for a pediatric population must be demonstrated as being acceptable to that population. There is currently no guidance on how to conduct or report on acceptability testing. Our objective was to undertake a review of the methods used to assess the acceptability of medicines within a pediatric population and use this review to propose the most appropriate methodology. We used a defined search strategy to identify literature reports of acceptability assessments of medicines conducted within pediatric populations and extracted information about the tools used in these studies for comparison across studies. In total, 61 articles were included in the analysis. Palatability was the most common (54/61) attribute measured when evaluating acceptability. Simple scale methods were most commonly used, with visual analog scales (VAS) and hedonic scales used both separately and in combination in 34 of the 61 studies. Hedonic scales alone were used in 14 studies and VAS alone in just five studies. Other tools included Likert scales; forced choice or preference; surveys or questionnaires; observations of facial expressions during administration, ease of swallowing, or ability to swallow the dosage; prevalence of complaints or refusal to take the medicine; and time taken for a nurse to administer the medicine. The best scale in terms of validity, reliability, feasibility, and preference to use when assessing acceptability remains unclear. Further work is required to select the most appropriate method to justify whether a medicine is acceptable to a pediatric population.
NASA Astrophysics Data System (ADS)
Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.
2017-12-01
Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore-scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore-volume-normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high-Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field-scale geophysical datasets. Two observations favor the implementation of the σ"-based model over the τ-based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency), whereas τ requires broadband spectral measurements that are extremely challenging and time-consuming to acquire accurately in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
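To make the comparison concrete, the sketch below regresses a hypothetical permeability dataset against the two predictors named above, each normalized by F. The power-law form, prefactors, and all sample values are illustrative assumptions; the exact coefficients of the cited empirical and mechanistic models are not reproduced here.

```python
# Illustrative comparison of the two complex-conductivity predictors
# (sigma''/c_p and tau*D+), each divided by the formation factor F, against
# a synthetic "measured" permeability. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 58                                   # number of sandstone samples, as in the abstract
F = rng.uniform(8, 40, n)                # formation factors, hypothetical
sigma_im = 10 ** rng.uniform(-5, -3, n)  # imaginary conductivity sigma'' [S/m], hypothetical
c_p = 1.0e-2                             # specific polarizability, hypothetical constant
tau = 10 ** rng.uniform(-2, 1, n)        # time constants [s], hypothetical
D_plus = 3.8e-12                         # counter-ion diffusion coefficient [m^2/s], hypothetical

x_sigma = (sigma_im / c_p) / F
x_tau = (tau * D_plus) / F
# Hypothetical "measured" permeability loosely tied to the first predictor plus scatter
k_true = 1e-10 * x_sigma ** 1.5 * 10 ** rng.normal(0.0, 0.3, n)

def powerlaw_fit(x, k):
    """Fit log10(k) = a + b*log10(x) and return the predicted k."""
    b, a = np.polyfit(np.log10(x), np.log10(k), 1)
    return 10 ** (a + b * np.log10(x))

for name, x in [("sigma''-based", x_sigma), ("tau-based", x_tau)]:
    pred = powerlaw_fit(x, k_true)
    within_decade = np.mean(np.abs(np.log10(pred / k_true)) < 1.0)
    print(f"{name}: fraction within one order of magnitude = {within_decade:.2f}")
```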
Computational Aerothermodynamics in Aeroassist Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2001-01-01
Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.
A 'digital' technique for manual extraction of data from aerial photography
NASA Technical Reports Server (NTRS)
Istvan, L. B.; Bondy, M. T.
1977-01-01
The interpretation procedure described uses a grid cell approach. In addition, a random point is located in each cell. The procedure required that the cell/point grid be established on a base map, and identical grids be made to precisely match the scale of the photographic frames. The grid is then positioned on the photography by visual alignment to obvious features. Several alignments on one frame are sometimes required to make a precise match of all points to be interpreted. This system inherently corrects for distortions in the photography. Interpretation is then done cell by cell. In order to meet the time constraints, first order interpretation should be maintained. The data is put onto coding forms, along with other appropriate data, if desired. This 'digital' manual interpretation technique has proven to be efficient, and time and cost effective, while meeting strict requirements for data format and accuracy.
Mira: Argonne's 10-petaflops supercomputer
Papka, Michael; Coghlan, Susan; Isaacs, Eric; Peters, Mark; Messina, Paul
2018-02-13
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
Mira: Argonne's 10-petaflops supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papka, Michael; Coghlan, Susan; Isaacs, Eric
2013-07-03
Mira, Argonne's petascale IBM Blue Gene/Q system, ushers in a new era of scientific supercomputing at the Argonne Leadership Computing Facility. An engineering marvel, the 10-petaflops supercomputer is capable of carrying out 10 quadrillion calculations per second. As a machine for open science, any researcher with a question that requires large-scale computing resources can submit a proposal for time on Mira, typically in allocations of millions of core-hours, to run programs for their experiments. This adds up to billions of hours of computing time per year.
The Physical Origin of Long Gas Depletion Times in Galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, Vadim A.; Kravtsov, Andrey V.; Gnedin, Nickolay Y.
2017-08-18
We present a model that elucidates why gas depletion times in galaxies are long compared to the time scales of the processes driving the evolution of the interstellar medium. We show that global depletion times are not set by any "bottleneck" in the process of gas evolution towards the star-forming state. Instead, depletion times are long because star-forming gas converts only a small fraction of its mass into stars before it is dispersed by dynamical and feedback processes. Thus, complete depletion requires that gas transitions between star-forming and non-star-forming states multiple times. Our model does not rely on the assumption of equilibrium and can be used to interpret trends of depletion times with the properties of observed galaxies and the parameters of star formation and feedback recipes in galaxy simulations. In particular, the model explains the mechanism by which feedback self-regulates star formation rate in simulations and makes it insensitive to the local star formation efficiency. We illustrate our model using the results of an isolated L*-sized disk galaxy simulation that reproduces the observed Kennicutt-Schmidt relation for both molecular and atomic gas. Interestingly, the relation for molecular gas is close to linear on kiloparsec scales, even though a non-linear relation is adopted in simulation cells. This difference is due to stellar feedback, which breaks the self-similar scaling of the gas density PDF with the average gas surface density.
Development of a Reactor Model for Chemical Conversion of Lunar Regolith
NASA Technical Reports Server (NTRS)
Hegde, U.; Balasubramaniam, R.; Gokoglu, S.
2009-01-01
Lunar regolith will be used for a variety of purposes such as oxygen and propellant production and manufacture of various materials. The design and development of chemical conversion reactors for processing lunar regolith will require an understanding of the coupling among the chemical, mass and energy transport processes occurring at the length and time scales of the overall reactor with those occurring at the corresponding scales of the regolith particles. To this end, a coupled transport model is developed using, as an example, the reduction of ilmenite-containing regolith by a continuous flow of hydrogen in a flow-through reactor. The ilmenite conversion occurs on the surface and within the regolith particles. As the ilmenite reduction proceeds, the hydrogen in the reactor is consumed, and this, in turn, affects the conversion rate of the ilmenite in the particles. Several important quantities are identified as a result of the analysis. Reactor scale parameters include the void fraction (i.e., the fraction of the reactor volume not occupied by the regolith particles) and the residence time of hydrogen in the reactor. Particle scale quantities include the time for hydrogen to diffuse into the pores of the regolith particles and the chemical reaction time. The paper investigates the relationships between these quantities and their impact on the regolith conversion. Application of the model to various chemical reactor types, such as fluidized-bed, packed-bed, and rotary-bed configurations, are discussed.
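A back-of-envelope sketch of the time-scale comparison described above, with hypothetical values for the reactor volume, void fraction, hydrogen feed rate, particle radius, pore diffusivity, and reaction rate constant; none of these numbers come from the cited model.

```python
# Compare the reactor- and particle-scale quantities named in the abstract:
# hydrogen residence time in the reactor versus pore-diffusion and reaction
# times in a regolith particle. All numbers are hypothetical placeholders.
V_reactor = 0.05          # reactor volume [m^3]
void_fraction = 0.45      # fraction of reactor volume not occupied by particles
Q_h2 = 2.0e-4             # volumetric hydrogen feed rate [m^3/s]

R_particle = 5.0e-5       # regolith particle radius [m]
D_pore = 1.0e-7           # effective pore diffusivity of H2 in a particle [m^2/s]
k_rxn = 5.0e-3            # first-order ilmenite reduction rate constant [1/s]

t_residence = void_fraction * V_reactor / Q_h2     # gas residence time [s]
t_diffusion = R_particle**2 / D_pore               # pore diffusion time [s]
t_reaction = 1.0 / k_rxn                           # chemical reaction time [s]

print(f"residence {t_residence:.1f} s, diffusion {t_diffusion:.2e} s, reaction {t_reaction:.1f} s")
# Ratios of these times (e.g., a Damkohler-like number t_residence/t_reaction)
# indicate whether conversion is limited by gas supply, intra-particle
# diffusion, or surface kinetics.
print("residence/reaction:", t_residence / t_reaction)
```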
Development of a Reactor Model for Chemical Conversion of Lunar Regolith
NASA Technical Reports Server (NTRS)
Hegde, Uday; Balasubramaniam, R.; Gokoglu, S.
2007-01-01
Lunar regolith will be used for a variety of purposes such as oxygen and propellant production and manufacture of various materials. The design and development of chemical conversion reactors for processing lunar regolith will require an understanding of the coupling among the chemical, mass and energy transport processes occurring at the length and time scales of the overall reactor with those occurring at the corresponding scales of the regolith particles. To this end, a coupled transport model is developed using, as an example, the reduction of ilmenite-containing regolith by a continuous flow of hydrogen in a flow-through reactor. The ilmenite conversion occurs on the surface and within the regolith particles. As the ilmenite reduction proceeds, the hydrogen in the reactor is consumed, and this, in turn, affects the conversion rate of the ilmenite in the particles. Several important quantities are identified as a result of the analysis. Reactor scale parameters include the void fraction (i.e., the fraction of the reactor volume not occupied by the regolith particles) and the residence time of hydrogen in the reactor. Particle scale quantities include the time for hydrogen to diffuse into the pores of the regolith particles and the chemical reaction time. The paper investigates the relationships between these quantities and their impact on the regolith conversion. Application of the model to various chemical reactor types, such as fluidized-bed, packed-bed, and rotary-bed configurations, are discussed.
Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks
Nadarajah, Nandakumaran; Wang, Kan; Choudhury, Mazher
2018-01-01
Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network. PMID:29614040
Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.
Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G
2018-04-03
Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.
The chip-scale atomic clock : prototype evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mescher, Mark; Varghese, Mathew; Lutwak, Robert
2007-12-01
The authors have developed a chip-scale atomic clock (CSAC) for applications requiring atomic timing accuracy in portable battery-powered applications. At PTTI/FCS 2005, they reported on the demonstration of a prototype CSAC, with an overall size of 10 cm^3, power consumption < 150 mW, and short-term stability σy(τ) < 1 x 10^-9 τ^-1/2. Since that report, they have completed the development of the CSAC, including provision for autonomous lock acquisition and a calibrated output at 10.0 MHz, in addition to modifications to the physics package and system architecture to improve performance and manufacturability.
Friction-Stir Welding of Large Scale Cryogenic Fuel Tanks for Aerospace Applications
NASA Technical Reports Server (NTRS)
Jones, Clyde S., III; Venable, Richard A.
1998-01-01
The Marshall Space Flight Center has established a facility for the joining of large-scale aluminum-lithium alloy 2195 cryogenic fuel tanks using the friction-stir welding process. Longitudinal welds, approximately five meters in length, were made possible by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and the spindle travel system will be described in this paper. Process controls and real-time data acquisition will also be described, and were critical elements contributing to successful weld operation.
Long working distance incoherent interference microscope
Sinclair, Michael B [Albuquerque, NM; De Boer, Maarten P [Albuquerque, NM
2006-04-25
A full-field imaging, long working distance, incoherent interference microscope suitable for three-dimensional imaging and metrology of MEMS devices and test structures on a standard microelectronics probe station. A long working distance greater than 10 mm allows standard probes or probe cards to be used. This enables nanometer-scale 3-dimensional height profiles of MEMS test structures to be acquired across an entire wafer while being actively probed, and, optionally, through a transparent window. An optically identical pair of sample and reference arm objectives is not required, which reduces the overall system cost, and also the cost and time required to change sample magnifications. Using an LED source, high magnification (e.g., 50×) can be obtained having excellent image quality, straight fringes, and high fringe contrast.
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme for data partitioning and reduction is designed in our method, in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
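A small-scale sketch of the two-stage approach, assuming a shared-memory worker pool for the similarity matrix and scikit-learn's AffinityPropagation with precomputed similarities; the published implementation targets distributed-memory clusters and far larger inputs, and the data here are hypothetical.

```python
# Build the pairwise similarity matrix in parallel (shared-memory workers),
# then run affinity propagation on the precomputed similarities.
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))        # hypothetical expression profiles

def similarity_row(i):
    # Negative squared Euclidean distance, the usual AP similarity
    return -np.sum((X - X[i]) ** 2, axis=1)

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        S = np.array(pool.map(similarity_row, range(X.shape[0])))
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(S)
    print("clusters found:", len(ap.cluster_centers_indices_))
```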
The role of internal climate variability for interpreting climate change scenarios
NASA Astrophysics Data System (ADS)
Maraun, Douglas
2013-04-01
When communicating information on climate change, the use of multi-model ensembles has been advocated to sample uncertainties over a range as wide as possible. To meet the demand for easily accessible results, the ensemble is often summarised by its multi-model mean signal. In rare cases, additional uncertainty measures are given to avoid losing all information on the ensemble spread, e.g., the highest and lowest projected values. Such approaches, however, disregard the fundamentally different nature of the different types of uncertainties and might cause wrong interpretations and subsequently wrong decisions for adaptation. Whereas scenario and climate model uncertainties are of epistemic nature, i.e., caused by an in principle reducible lack of knowledge, uncertainties due to internal climate variability are aleatory, i.e., inherently stochastic and irreducible. As wisely stated in the proverb "climate is what you expect, weather is what you get", a specific region will experience one stochastic realisation of the climate system, but never exactly the expected climate change signal as given by a multi-model mean. Depending on the meteorological variable, region and lead time, the signal might be strong or weak compared to the stochastic component. In cases of a low signal-to-noise ratio, even if the climate change signal is a well-defined trend, no trends or even opposite trends might be experienced. Here I propose to use the time of emergence (TOE) to quantify and communicate when climate change trends will exceed the internal variability. The TOE provides a useful measure for end users to assess the time horizon for implementing adaptation measures. Furthermore, internal variability is scale dependent - the more local the scale, the stronger the influence of internal climate variability. Thus investigating the TOE as a function of spatial scale could help to assess the required spatial scale for implementing adaptation measures. I exemplify this proposal with a recently published study on the TOE for mean and heavy precipitation trends in Europe. In some regions trends emerge only late in the 21st century or even later, suggesting that in these regions adaptation to internal variability rather than to climate change is required. Yet in other regions the climate change signal is strong, urging for timely adaptation. Douglas Maraun, When at what scale will trends in European mean and heavy precipitation emerge? Env. Res. Lett., in press, 2013.
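A minimal sketch of the time-of-emergence idea: the year when a prescribed forced trend first exceeds a noise band set by internal variability, and how a smaller noise level (as with spatial averaging) moves that year earlier. The trend, variability, and threshold choice are hypothetical illustrations, not values from the cited study.

```python
# Time of emergence (TOE): first year the forced signal exceeds a noise band
# set by internal variability. All numbers are hypothetical.
import numpy as np

years = np.arange(2000, 2101)
trend_per_year = 0.02        # forced signal [units/yr], hypothetical
sigma_internal = 0.5         # std. dev. of internal variability [units], hypothetical

signal = trend_per_year * (years - years[0])
threshold = 2.0 * sigma_internal          # common emergence criterion: |signal| > 2*sigma

exceed = signal > threshold
toe = years[exceed][0] if exceed.any() else None
print("time of emergence:", toe)

# Internal variability is scale dependent: averaging over larger regions
# reduces sigma_internal and moves the TOE earlier, e.g.
for scale_factor in (1.0, 0.5, 0.25):     # hypothetical noise reduction with spatial averaging
    thr = 2.0 * sigma_internal * scale_factor
    idx = np.argmax(signal > thr)
    print(f"noise x{scale_factor}: TOE = {years[idx]}")
```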
Simulating recurrent event data with hazard functions defined on a total time scale.
Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald
2015-03-08
In medical studies with recurrent event data a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
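A minimal sketch of the simulation idea, assuming a Weibull baseline hazard on the total time scale with a fixed covariate and a gamma frailty; the cited paper's R script handles more general hazards and risk-free intervals, which are omitted here.

```python
# Simulate recurrent events whose hazard is defined on a total time scale
# (Andersen-Gill type): given the previous event at t_prev, the next event
# time solves H(T) - H(t_prev) = -log(U) with U ~ Uniform(0,1), where H is the
# cumulative hazard. Weibull baseline, covariate, and frailty are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
lam, rho, beta = 0.05, 1.3, 0.7          # hypothetical Weibull baseline and covariate effect
censoring_time = 20.0

def simulate_subject(x, frailty):
    """Return event times for one subject on the total time scale."""
    scale = lam * np.exp(beta * x) * frailty     # multiplies the cumulative hazard
    events, t_prev = [], 0.0
    while True:
        u = rng.uniform()
        # Invert H(T) = H(t_prev) - log(u), with H(t) = scale * t**rho
        t_next = ((scale * t_prev**rho - np.log(u)) / scale) ** (1.0 / rho)
        if t_next > censoring_time:
            return events
        events.append(t_next)
        t_prev = t_next

n_subjects = 200
counts = []
for _ in range(n_subjects):
    x = rng.integers(0, 2)                          # binary covariate (e.g., treatment arm)
    frailty = rng.gamma(shape=2.0, scale=0.5)       # intra-patient correlation via frailty
    counts.append(len(simulate_subject(x, frailty)))
print("mean events per subject:", np.mean(counts))
```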
NASA Astrophysics Data System (ADS)
Duveiller, G.; Donatelli, M.; Fumagalli, D.; Zucchini, A.; Nelson, R.; Baruth, B.
2017-02-01
Coupled atmosphere-ocean general circulation models (GCMs) simulate different realizations of possible future climates at global scale under contrasting scenarios of land-use and greenhouse gas emissions. Such data require several additional processing steps before they can be used to drive impact models. Spatial downscaling, typically by regional climate models (RCM), and bias-correction are two such steps that have already been addressed for Europe. Yet, the errors in resulting daily meteorological variables may be too large for specific model applications. Crop simulation models are particularly sensitive to these inconsistencies and thus require further processing of GCM-RCM outputs. Moreover, crop models are often run in a stochastic manner by using various plausible weather time series (often generated using stochastic weather generators) to represent the climate of a period of interest (e.g., 2000 ± 15 years), while GCM simulations typically provide a single time series for a given emission scenario. To inform agricultural policy-making, data on near- and medium-term decadal time scales are mostly requested, e.g., 2020 or 2030. Taking a sample of multiple years from these unique time series to represent time horizons in the near future is particularly problematic because selecting overlapping years may lead to spurious trends, creating artefacts in the results of the impact model simulations. This paper presents a database of consolidated and coherent future daily weather data for Europe that addresses these problems. Input data consist of daily temperature and precipitation from three dynamically downscaled and bias-corrected regional climate simulations of the IPCC A1B emission scenario created within the ENSEMBLES project. Solar radiation is estimated from temperature based on an auto-calibration procedure. Wind speed and relative air humidity are collected from historical series. From these variables, reference evapotranspiration and vapour pressure deficit are estimated ensuring consistency within daily records. The weather generator ClimGen is then used to create 30 synthetic years of all variables to characterize the time horizons of 2000, 2020 and 2030, which can readily be used for crop modelling studies.
The Nested Regional Climate Model: An Approach Toward Prediction Across Scales
NASA Astrophysics Data System (ADS)
Hurrell, J. W.; Holland, G. J.; Large, W. G.
2008-12-01
The reality of global climate change has become accepted and society is rapidly moving to questions of consequences on space and time scales that are relevant to proper planning and development of adaptation strategies. There are a number of urgent challenges for the scientific community related to improved and more detailed predictions of regional climate change on decadal time scales. Two important examples are potential impacts of climate change on North Atlantic hurricane activity and on water resources over the intermountain West. The latter is dominated by complex topography, so that accurate simulations of regional climate variability and change require much finer spatial resolution than is provided with state-of-the-art climate models. Climate models also do not explicitly resolve tropical cyclones, even though these storms have dramatic societal impacts and play an important role in regulating climate. Moreover, the debate over the impact of global warming on tropical cyclones has at times been acrimonious, and the lack of hard evidence has left open opportunities for misinterpretation and justification of pre-existing beliefs. These and similar topics are being assessed at NCAR, in partnership with university colleagues, through the development of a Nested Regional Climate Model (NRCM). This is an ambitious effort to combine a state of the science mesoscale weather model (WRF), a high resolution regional ocean modeling system (ROMS), and a climate model (CCSM) to better simulate the complex, multi-scale interactions intrinsic to atmospheric and oceanic fluid motions that are limiting our ability to predict likely future changes in regional weather statistics and climate. The NRCM effort is attracting a large base of earth system scientists together with societal groups as diverse as the Western Governor's Association and the offshore oil industry. All of these groups require climate data on scales of a few kilometers (or less), so that the NRCM program is producing unique data sets of climate change scenarios of immense interest. In addition, all simulations are archived in a form that will be readily accessible to other researchers, thus enabling a wider group to investigate these important issues.
Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)
NASA Astrophysics Data System (ADS)
Jerolmack, D. J.
2010-12-01
The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws - time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) The integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing; and (2) In systems exhibiting fractal behavior, an integral scale does not exist - e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics, and also help to formalize the applicability of geomorphic transport laws.
[Figure: Flooding on an experimental alluvial fan. Intensity is related to the cumulative amount of time flow has visited an area of the fan over the experiment. Dark areas represent an emergent channel network resulting from stochastic migration of river channels.]
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping to detect the causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
Pore scale study of multiphase multicomponent reactive transport during CO2 dissolution trapping
NASA Astrophysics Data System (ADS)
Chen, Li; Wang, Mengyi; Kang, Qinjun; Tao, Wenquan
2018-06-01
Solubility trapping is crucial for permanent CO2 sequestration in deep saline aquifers. For the first time, a pore-scale numerical method is developed to investigate coupled scCO2-water two-phase flow, multicomponent (CO2(aq), H+, HCO3-, CO32- and OH-) mass transport, heterogeneous interfacial dissolution reaction, and homogeneous dissociation reactions. Pore-scale details of the evolution of multiphase distributions and concentration fields are presented and discussed. Time evolutions of several variables including averaged CO2(aq) concentration, scCO2 saturation, and pH value are analyzed. Specific interfacial length, an important variable which cannot be determined by continuum models themselves but is required by them, is investigated in detail. The mass transport coefficient, or effective dissolution rate, is also evaluated. The pore-scale results show strong non-equilibrium characteristics during solubility trapping due to non-uniform phase distributions as well as the slow mass transport process. Complicated coupling mechanisms between multiphase flow, mass transport and chemical reactions are also revealed. Finally, effects of wettability are also studied. The pore-scale studies provide a deeper understanding of the non-linear, non-equilibrium multiple physicochemical processes during CO2 solubility trapping, and also allow quantitative prediction of some important empirical relationships, such as saturation-interfacial surface area, for continuum models.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant
Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.
2016-01-01
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant’s components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.
Yaeli, Steve; Meir, Ron
2010-01-01
Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
Absolute Position Encoders With Vertical Image Binning
NASA Technical Reports Server (NTRS)
Leviton, Douglas B.
2005-01-01
Improved optoelectronic pattern-recognition encoders that measure rotary and linear 1-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates below 15 Hz, too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by use of vertically compressible or binnable (as described below) scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.
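A minimal sketch of the vertical-binning idea: collapse the 2-D image of a 1-D scale pattern into a single row by column sums, then locate the absolute position by correlating the binned profile against the known pattern. The pattern, frame geometry, and correlation search below are hypothetical illustrations, not the patented encoder's actual pattern or readout sequence.

```python
# Vertical binning of a simulated CCD frame followed by a coarse absolute
# position search by normalized cross-correlation. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.integers(0, 2, size=512).astype(float)   # hypothetical 1-D binary scale pattern

def camera_frame(offset, height=64, noise=0.05):
    """Simulate a CCD frame viewing the pattern shifted by 'offset' pixels."""
    window = np.roll(pattern, -offset)[:256]
    frame = np.tile(window, (height, 1))
    return frame + rng.normal(0.0, noise, size=frame.shape)

frame = camera_frame(offset=137)
profile = frame.sum(axis=0)           # vertical binning: one row instead of 64

# Coarse absolute position by normalized cross-correlation against the pattern
best, best_score = 0, -np.inf
p = (profile - profile.mean()) / profile.std()
for shift in range(len(pattern) - 256):
    w = np.roll(pattern, -shift)[:256]
    score = np.dot(p, (w - w.mean()) / (w.std() + 1e-12))
    if score > best_score:
        best, best_score = shift, score
print("recovered offset:", best)       # expected: 137
```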
Unravelling the mysteries of sub-second biochemical processes using time-resolved mass spectrometry.
Lento, Cristina; Wilson, Derek J
2017-05-21
Many important chemical and biochemical phenomena proceed on sub-second time scales before entering equilibrium. In this mini-review, we explore the history and recent advancements of time-resolved mass spectrometry (TRMS) for the characterization of millisecond time-scale chemical reactions and biochemical processes. TRMS allows for the simultaneous tracking of multiple reactants, intermediates and products with no chromophoric species required, high sensitivity and temporal resolution. The method has most recently been used for the characterization of several short-lived reaction intermediates in rapid chemical reactions. Most of the reactions that occur in living organisms are accelerated by enzymes, with pre-steady state kinetics only attainable using time-resolved methods. TRMS has been increasingly used to monitor the conversion of substrates to products and the resulting changes to the enzyme during catalytic turnover. Early events in protein folding systems have also been elucidated, along with the characterization of dynamics and transient secondary structures in intrinsically disordered proteins. In this review, we will highlight representative examples where TRMS has been applied to study these phenomena.
Timed activity performance in persons with upper limb amputation: A preliminary study.
Resnik, Linda; Borgia, Mathew; Acluche, Frantzy
55 subjects with upper limb amputation were administered the T-MAP twice within one week. To develop a timed measure of activity performance for persons with upper limb amputation (T-MAP); examine the measure's internal consistency, test-retest reliability and validity; and compare scores by prosthesis use. Measures of activity performance for persons with upper limb amputation are needed. The time required to perform daily activities is a meaningful metric that has implications for participation in life roles. Internal consistency and test-retest reliability were evaluated. Construct validity was examined by comparing scores by amputation level. Exploratory analyses compared sub-group scores and examined correlations with other measures. Scale alpha was 0.77, ICC was 0.93. Timed scores differed by amputation level. Subjects using a prosthesis took longer to perform all tasks. T-MAP was not correlated with other measures of dexterity or activity, but was correlated with pain for non-prosthesis users. The timed scale had adequate internal consistency and excellent test-retest reliability. Analyses support the reliability and construct validity of the T-MAP. 2c "outcomes" research. Published by Elsevier Inc.
Lynch, Michael S; Slenkamp, Karla M; Cheng, Mark; Khalil, Munira
2012-07-05
Obtaining a detailed description of photochemical reactions in solution requires measuring time-evolving structural dynamics of transient chemical species on ultrafast time scales. Time-resolved vibrational spectroscopies are sensitive probes of molecular structure and dynamics in solution. In this work, we develop doubly resonant fifth-order nonlinear visible-infrared spectroscopies to probe nonequilibrium vibrational dynamics among coupled high-frequency vibrations during an ultrafast charge transfer process using a heterodyne detection scheme. The method enables the simultaneous collection of third- and fifth-order signals, which respectively measure vibrational dynamics occurring on electronic ground and excited states on a femtosecond time scale. Our data collection and analysis strategy allows transient dispersed vibrational echo (t-DVE) and dispersed pump-probe (t-DPP) spectra to be extracted as a function of electronic and vibrational population periods with high signal-to-noise ratio (S/N > 25). We discuss how fifth-order experiments can measure (i) time-dependent anharmonic vibrational couplings, (ii) nonequilibrium frequency-frequency correlation functions, (iii) incoherent and coherent vibrational relaxation and transfer dynamics, and (iv) coherent vibrational and electronic (vibronic) coupling as a function of a photochemical reaction.
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
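A toy sketch of the MILP formulation described above, using PuLP with its bundled CBC solver: binary indicators count active reactions, big-M constraints couple them to fluxes, and steady state plus a minimum growth flux are imposed. The 3x5 stoichiometric matrix, bounds, and biomass reaction are hypothetical; genome-scale instances would use the same formulation with a stronger solver.

```python
# Minimum subnetwork MILP: minimize the number of active reactions subject to
# steady state (S v = 0), flux bounds, and a minimum growth requirement.
import numpy as np
import pulp

# Toy network: 3 metabolites x 5 reactions (columns), hypothetical
S = np.array([[ 1, -1,  0, -1,  0],
              [ 0,  1, -1,  0,  0],
              [ 0,  0,  1,  1, -1]])
n_rxn = S.shape[1]
lb, ub = -100.0, 100.0
biomass_idx, v_min_growth = 4, 1.0        # reaction 4 treated as biomass, require v >= 1

prob = pulp.LpProblem("minimum_subnetwork", pulp.LpMinimize)
v = [pulp.LpVariable(f"v{j}", lb, ub) for j in range(n_rxn)]
y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_rxn)]

prob += pulp.lpSum(y)                                   # objective: fewest active reactions
for i in range(S.shape[0]):                             # steady state: S v = 0
    prob += pulp.lpSum(S[i, j] * v[j] for j in range(n_rxn)) == 0
for j in range(n_rxn):                                  # |v_j| <= M * y_j (big-M coupling)
    prob += v[j] <= ub * y[j]
    prob += v[j] >= lb * y[j]
prob += v[biomass_idx] >= v_min_growth                  # biological requirement: growth

prob.solve(pulp.PULP_CBC_CMD(msg=False))
active = [j for j in range(n_rxn) if y[j].value() > 0.5]
print("active reactions:", active)
```

Enumerating all minimum subnetworks, as the paper describes, can be done by re-solving after adding a cut that excludes each previously found set of active reactions.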
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
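A minimal sketch of coarse projective integration, assuming a stochastic birth process (approximate logistic growth of a cell count) as a hypothetical stand-in for the cellular-automaton simulator: short fine-scale bursts estimate the coarse time derivative, which is then projected forward over a larger step.

```python
# Coarse projective integration: run the fine-scale simulator for a short
# burst, estimate the time derivative of a coarse variable, then project that
# variable forward over a larger step. All model parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
r_div, K = 0.05, 10000.0        # hypothetical division rate and carrying capacity

def micro_step(n_cells):
    """One step of a stochastic birth process approximating logistic growth."""
    p_div = r_div * (1.0 - n_cells / K)
    births = rng.binomial(int(n_cells), max(p_div, 0.0))
    return n_cells + births

def coarse_projective_integration(n0, burst_steps=5, project_steps=20, n_rounds=40):
    n, t, history = float(n0), 0.0, []
    for _ in range(n_rounds):
        burst = [n]
        for _ in range(burst_steps):                 # short fine-scale burst
            n = micro_step(n)
            burst.append(n)
        slope = (burst[-1] - burst[0]) / burst_steps # coarse time derivative estimate
        n = n + slope * project_steps                # forward projection (Euler)
        t += burst_steps + project_steps
        history.append((t, n))
    return history

for t, n in coarse_projective_integration(100)[-3:]:
    print(f"t = {t:5.0f}  coarse cell count ~ {n:8.1f}")
```

Increasing project_steps relative to burst_steps increases the computational savings, at the cost of larger projection error, which mirrors the trade-off described in the abstract.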
Vial, Flavie; Tedder, Andrew
2017-01-01
Food-animal production businesses are part of a data-driven ecosystem shaped by stringent requirements for traceability along the value chain and the expanding capabilities of connected products. Within this sector, the generation of animal health intelligence, in particular, in terms of antimicrobial usage, is hindered by the lack of a centralized framework for data storage and usage. In this Perspective, we delimit the 11 processes required for evidence-based decisions and explore processes 3 (digital data acquisition) to 10 (communication to decision-makers) in more depth. We argue that small agribusinesses disproportionally face challenges related to economies of scale given the high price of equipment and services. There are two main areas of concern regarding the collection and usage of digital farm data. First, recording platforms must be developed with the needs and constraints of small businesses in mind and move away from local data storage, which hinders data accessibility and interoperability. Second, such data are unstructured and exhibit properties that can prove challenging to its near real-time preprocessing and analysis in a sector that is largely lagging behind others in terms of computing infrastructure and buying into digital technologies. To complete the digital transformation of this sector, investment in rural digital infrastructure is required alongside the development of new business models to empower small businesses to commit to near real-time data capture. This approach will deliver critical information to fill gaps in our understanding of emerging diseases and antimicrobial resistance in production animals, eventually leading to effective evidence-based policies.
Long-term movement patterns of a coral reef predator
NASA Astrophysics Data System (ADS)
Heupel, M. R.; Simpfendorfer, C. A.
2015-06-01
Long-term monitoring is required to fully define periodicity and patterns in animal movement. This is particularly relevant for defining what factors are driving the presence, location, and movements of individuals. The long-term movement and space use patterns of grey reef sharks, Carcharhinus amblyrhynchos, were examined on a whole-of-reef scale in the southern Great Barrier Reef to define whether movement and activity space varied through time. Twenty-nine C. amblyrhynchos were tracked for over 2 years to define movement patterns. All individuals showed high residency within the study site, but also had high roaming indices. This indicated that individuals remained in the region and used all of the monitored habitat (i.e., the entire reef perimeter). Use of space was consistent through time with high reuse of areas most of the year. Therefore, individuals maintained discrete home ranges, but undertook broader movements around the reef at times. Mature males showed the greatest variation in movement, with larger activity spaces and movement into new regions during the mating season (August-September). Depth use patterns also differed, suggesting behaviour or resource requirements varied between sexes. Examination of the long-term, reef-scale movements of C. amblyrhynchos has revealed that reproductive activity may play a key role in space use and activity patterns. It was unclear whether mating behaviour or an increased need for food to sustain reproductive activity and development played a greater role in these patterns. Reef shark movement patterns are becoming more clearly defined, but research is still required to fully understand the biological drivers for the observed patterns.
Postoperative Intravenous Acetaminophen for Craniotomy Patients: A Randomized Controlled Trial.
Greenberg, Steven; Murphy, Glenn S; Avram, Michael J; Shear, Torin; Benson, Jessica; Parikh, Kruti N; Patel, Aashka; Newmark, Rebecca; Patel, Vimal; Bailes, Julian; Szokol, Joseph W
2018-01-01
To determine whether opioid requirements during the first 24 postoperative hours were significantly altered in patients receiving intravenous (IV) acetaminophen during that time compared with those receiving placebo (normal saline). One hundred forty patients undergoing any type of craniotomy were randomly assigned to receive either 1 g of IV acetaminophen or placebo upon surgical closure, and every 6 hours thereafter, up to 18 hours postoperatively. Analgesic requirements for the first 24 postoperative hours were recorded. Time to rescue medications in the postanesthesia care unit (PACU)/intensive care unit (ICU), amount of rescue medication, ICU and hospital lengths of stay, number of successful neurological examinations, sedation, delirium, satisfaction, and visual analog scale pain scores were also recorded. Compared with the placebo group, more patients in the IV acetaminophen group (10/66 [15.2%] vs. 4/65 [6.2%] in the placebo group) did not require opioids within the first 24 postoperative hours, but this did not reach significance (odds ratio, -9.0%, 95% confidence interval -20.5% to 1.8%; P = 0.166). Both groups had similar times to rescue medications, amounts of rescue medications, ICU and hospital lengths of stay, numbers of successful neurological examinations, sedation, delirium, satisfaction scores, visual analog scale pain scores, and temperatures within the first 24 postoperative hours. The opioid requirements within the first 24 postoperative hours were similar in the placebo and acetaminophen groups. This study is informative for the design and planning of future studies investigating the management of postoperative pain in patients undergoing craniotomies. Copyright © 2017 Elsevier Inc. All rights reserved.
15 CFR 970.514 - Scale requiring application procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Scale requiring application procedures. 970.514 Section 970.514 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade... § 970.514 Scale requiring application procedures. (a) A proposal by the Administrator to modify a term...
Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, M
2006-12-12
ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.
Bridges to sustainable tropical health
Singer, Burton H.; de Castro, Marcia Caldas
2007-01-01
Ensuring sustainable health in the tropics will require bridge building between communities that currently have a limited track record of interaction. It will also require new organizational innovation if many of the negative health consequences of large-scale economic development projects are to be equitably mitigated, if not prevented. We focus attention on three specific contexts: (i) forging linkages between the engineering and health communities to implement clean water and sanitation on a broad scale, to prevent reworming of people by diverse intestinal parasites after the current deworming-only programs; (ii) building integrated human and animal disease surveillance infrastructure and technical capacity in tropical countries to meet the reporting and scientific evidence requirements of the sanitary and phytosanitary agreement under the World Trade Organization; and (iii) developing an independent and equitable organizational structure for health impact assessments as well as monitoring and mitigation of health consequences of economic development projects. Effective global disease surveillance and timely early warning of new outbreaks will require a far closer integration of veterinary and human medicine than heretofore. Many of the necessary surveillance components exist within separate animal- and human-oriented organizations. The challenge is to build the necessary bridges between them. PMID:17913894
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. For some problems there can be benefits to using a Lagrangian framework, while for others an Eulerian one might have advantages. Here we propose and test a novel hybrid model which attempts to leverage benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
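The Lagrangian transport step underlying a Spatial Markov model can be sketched as below, assuming a hypothetical three-class transition matrix and illustrative mean transit times (not the LATERS closure or any fitted parameters from the paper): particles cross fixed-length cells, and the travel-time class of each crossing depends on the class of the previous crossing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3-class Spatial Markov model for transport only (no reaction step).
class_times = np.array([0.5, 2.0, 10.0])   # mean transit time per class (placeholder)
P = np.array([[0.7, 0.2, 0.1],             # correlation between successive crossings
              [0.2, 0.6, 0.2],             # (placeholder values, not fitted to data)
              [0.1, 0.2, 0.7]])

def spatial_markov_breakthrough(n_particles=10_000, n_cells=50):
    """Arrival times after n_cells crossings for an ensemble of particles."""
    states = rng.integers(0, 3, size=n_particles)        # initial transit-time class
    t = rng.exponential(class_times[states])
    for _ in range(n_cells - 1):
        # sample the next class from the row of P for each particle (Markov step)
        u = rng.random(n_particles)
        cum = P[states].cumsum(axis=1)
        states = (u[:, None] > cum).sum(axis=1)
        t += rng.exponential(class_times[states])
    return t

bt = spatial_markov_breakthrough()
print("mean and 95th-percentile arrival time:", bt.mean(), np.quantile(bt, 0.95))
```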
Motion Planning of Two Stacker Cranes in a Large-Scale Automated Storage/Retrieval System
NASA Astrophysics Data System (ADS)
Kung, Yiheng; Kobayashi, Yoshimasa; Higashi, Toshimitsu; Ota, Jun
We propose a method for reducing the computational time of motion planning for stacker cranes. Most automated storage/retrieval systems (AS/RSs) are equipped with only one stacker crane. However, this limits throughput, and greater work efficiency in warehouses, for example by using two stacker cranes, is required. In this paper, a warehouse with two stacker cranes working simultaneously is considered. Unlike warehouses with only one crane, trajectory planning in those with two cranes is very difficult. Since the two cranes work together, a proper trajectory must be planned to avoid collisions. However, verifying collisions is complicated and requires a considerable amount of computational time. As transport work in AS/RSs occurs randomly, motion planning cannot be conducted in advance, and planning an appropriate trajectory within a restricted duration is a difficult task. We therefore address the problem of motion planning requiring extensive calculation time. As one part of the solution, we propose a “free-step” to simplify the procedure of collision verification and reduce the computational time. In addition, we propose a method to reschedule the order of collision verification in order to find an appropriate trajectory in less time. With the proposed method, we reduce the calculation time to less than 1/300 of that achieved in former research.
Recent Trends in Local-Scale Marine Biodiversity Reflect Community Structure and Human Impacts.
Elahi, Robin; O'Connor, Mary I; Byrnes, Jarrett E K; Dunic, Jillian; Eriksson, Britas Klemens; Hensel, Marc J S; Kearns, Patrick J
2015-07-20
The modern biodiversity crisis reflects global extinctions and local introductions. Human activities have dramatically altered rates and scales of processes that regulate biodiversity at local scales. Reconciling the threat of global biodiversity loss with recent evidence of stability at fine spatial scales is a major challenge and requires a nuanced approach to biodiversity change that integrates ecological understanding. With a new dataset of 471 diversity time series spanning from 1962 to 2015 from marine coastal ecosystems, we tested (1) whether biodiversity changed at local scales in recent decades, and (2) whether we can ignore ecological context (e.g., proximate human impacts, trophic level, spatial scale) and still make informative inferences regarding local change. We detected a predominant signal of increasing species richness in coastal systems since 1962 in our dataset, though net species loss was associated with localized effects of anthropogenic impacts. Our geographically extensive dataset is unlikely to be a random sample of marine coastal habitats; impacted sites (3% of our time series) were underrepresented relative to their global presence. These local-scale patterns do not contradict the prospect of accelerating global extinctions but are consistent with local species loss in areas with direct human impacts and increases in diversity due to invasions and range expansions in lower impact areas. Attempts to detect and understand local biodiversity trends are incomplete without information on local human activities and ecological context. Copyright © 2015 Elsevier Ltd. All rights reserved.
Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks
NASA Astrophysics Data System (ADS)
Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.
2017-12-01
In recent years, the limitations of scalings in power-law form that are obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, robustness of the obtained models in guaranteeing reasonable extrapolability is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws to model the available databases without the restriction of their having to be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fractions as regressor, have been found; they are not in power-law form, and are significantly better than the traditional scalings. These scaling laws, including radiated fractions, extrapolate quite differently to ITER, and therefore they require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on the confinement in metallic machines and in the next generation of devices.
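As a rough illustration of symbolic regression (not the authors' implementation), the sketch below uses the third-party gplearn package on synthetic stand-in data; the regressors and the target relation are placeholders, and the error-bar-aware geodesic-distance extension described in the abstract is not included.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# X: engineering parameters (e.g., current, density, power, radiated fraction);
# y: measured energy confinement time. Synthetic stand-in data, illustration only.
rng = np.random.default_rng(2)
X = rng.uniform(0.5, 5.0, size=(500, 4))
y = 0.1 * X[:, 0] * X[:, 1] ** 0.4 / (X[:, 2] ** 0.7 * (1 + X[:, 3]))

# The function set is not restricted to multiplication and powers, so the fitted
# expression is free to depart from a pure power law.
est = SymbolicRegressor(population_size=2000, generations=10,
                        function_set=('add', 'sub', 'mul', 'div', 'log', 'sqrt'),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, y)
print(est._program)   # best symbolic expression found
```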
All-fibre photonic signal generator for attosecond timing and ultralow-noise microwave
Jung, Kwangyun; Kim, Jungwon
2015-01-01
High-impact frequency comb applications that are critically dependent on precise pulse timing (i.e., repetition rate) have recently emerged and include the synchronization of X-ray free-electron lasers, photonic analogue-to-digital conversion and photonic radar systems. These applications have used attosecond-level timing jitter of free-running mode-locked lasers on a fast time scale within ~100 μs. Maintaining attosecond-level absolute jitter over a significantly longer time scale can dramatically improve many high-precision comb applications. To date, ultrahigh quality-factor (Q) optical resonators have been used to achieve the highest-level repetition-rate stabilization of mode-locked lasers. However, ultrahigh-Q optical-resonator-based methods are often fragile, alignment sensitive and complex, which limits their widespread use. Here we demonstrate a fibre-delay line-based repetition-rate stabilization method that enables the all-fibre photonic generation of optical pulse trains with 980-as (20-fs) absolute r.m.s. timing jitter accumulated over 0.01 s (1 s). This simple approach is based on standard off-the-shelf fibre components and can therefore be readily used in various comb applications that require ultra-stable microwave frequency and attosecond optical timing. PMID:26531777
NASA Astrophysics Data System (ADS)
Bylaska, E. J.; Kowalski, K.; Apra, E.; Govind, N.; Valiev, M.
2017-12-01
Methods of directly simulating the behavior of complex strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biogeochemical systems, mineral/fluid systems, actinide systems and geofluids. The main obstacle to even wider application of these methods is the difficulty of developing accurate potential interactions in these systems at the molecular level that capture their complex chemistry. The well-developed tools of quantum chemistry and physics have been shown to approach the accuracy required. However, despite the continuous effort being put into improving their accuracy and efficiency, these tools will be of little value to condensed matter problems without continued improvements in techniques to traverse and sample the high-dimensional phase space needed to span the ~10^12 time scale differences between molecular simulation and chemical events. In recent years, we have made considerable progress in developing electronic structure and AIMD methods tailored to treat biochemical and geochemical problems, including very efficient implementations of many-body methods, fast exact exchange methods, electron-transfer methods, excited state methods, QM/MM, and new parallel algorithms that scale to more than 100,000 cores. The poster will focus on the fundamentals of these methods and the realities in terms of system size, computational requirements and simulation times that are required for their application to complex biogeochemical systems.
Imaging the Subsurface of the Thuringian Basin (Germany) on Different Spatial Scales
NASA Astrophysics Data System (ADS)
Goepel, A.; Krause, M.; Methe, P.; Kukowski, N.
2014-12-01
Understanding the coupled dynamics of near surface and deep fluid flow patterns is essential to characterize the properties of sedimentary basins and to identify the processes of compaction, diagenesis, and transport of mass and energy. The multidisciplinary project INFLUINS (Integrated FLUid dynamics IN Sedimentary basins) aims to investigate the behavior of fluids in the Thuringian Basin, a small intra-continental sedimentary basin in Germany, at different spatial scales, ranging from the pore scale to the extent of the entire basin. As hydraulic properties often vary significantly with spatial scale, seismic data at different frequencies are, for example, required to gain information about the spatial variability of elastic and hydraulic subsurface properties. For the Thuringian Basin, we use seismic and borehole data acquired in the framework of INFLUINS. Basin-wide structural imaging data are available from 2D reflection seismic profiles as well as 2.5D and 3D seismic travel time tomography. Further, core material from a 1,179 m deep drill hole completed in 2013 is available for laboratory seismic experiments on mm- to cm-scale. The data are complemented with logging data along the entire drill hole. This campaign yielded, for example, sonic and density logs, allowing the estimation of in-situ P-velocity and acoustic impedance with a spatial resolution on the cm-scale, and provides improved information about petrologic and stratigraphic variability at different scales. Joint interpretation of basin scale structural and elastic properties data with laboratory scale data from ultrasound experiments using core samples enables a detailed and realistic imaging of the subsurface properties on different spatial scales. Combining seismic travel time tomography with stratigraphic interpretation provides useful information on variations in the elastic properties for certain geological units and therefore gives indications for changes in hydraulic properties.
Event Horizon Telescope observations as probes for quantum structure of astrophysical black holes
NASA Astrophysics Data System (ADS)
Giddings, Steven B.; Psaltis, Dimitrios
2018-04-01
The need for a consistent quantum evolution for black holes has led to proposals that their semiclassical description is modified not just near the singularity, but at horizon or larger scales. If such modifications extend beyond the horizon, they influence regions accessible to distant observation. Natural candidates for these modifications behave like metric fluctuations, with characteristic length scales and timescales set by the horizon radius. We investigate the possibility of using the Event Horizon Telescope to observe these effects, if they have a strength sufficient to make quantum evolution consistent with unitarity, without introducing new scales. We find that such quantum fluctuations can introduce a strong time dependence for the shape and size of the shadow that a black hole casts on its surrounding emission. For the black hole in the center of the Milky Way, detecting the rapid time variability of its shadow will require nonimaging timing techniques. However, for the much larger black hole in the center of the M87 galaxy, a variable black-hole shadow, if present with these parameters, would be readily observable in the individual snapshots that will be obtained by the Event Horizon Telescope.
Unsupervised learning on scientific ocean drilling datasets from the South China Sea
NASA Astrophysics Data System (ADS)
Tse, Kevin C.; Chiu, Hon-Chim; Tsang, Man-Yin; Li, Yiliang; Lam, Edmund Y.
2018-06-01
Unsupervised learning methods were applied to explore data patterns in multivariate geophysical datasets collected from ocean floor sediment core samples coming from scientific ocean drilling in the South China Sea. Compared to studies on similar datasets, but using supervised learning methods which are designed to make predictions based on sample training data, unsupervised learning methods require no a priori information and focus only on the input data. In this study, popular unsupervised learning methods including K-means, self-organizing maps, hierarchical clustering and random forest were coupled with different distance metrics to form exploratory data clusters. The resulting data clusters were externally validated with lithologic units and geologic time scales assigned to the datasets by conventional methods. Compact and connected data clusters displayed varying degrees of correspondence with existing classification by lithologic units and geologic time scales. K-means and self-organizing maps were observed to perform better with lithologic units while random forest corresponded best with geologic time scales. This study sets a pioneering example of how unsupervised machine learning methods can be used as an automatic processing tool for the increasingly high volume of scientific ocean drilling data.
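A minimal sketch of this kind of workflow, assuming hypothetical downhole measurements and placeholder lithologic labels rather than the actual drilling data, is: standardize the multivariate logs, cluster them with K-means, and then externally validate the clusters against the classification assigned by conventional methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import adjusted_rand_score

# Hypothetical multivariate core measurements (e.g., density, magnetic
# susceptibility, natural gamma) and lithologic-unit labels from conventional logging.
rng = np.random.default_rng(3)
features = rng.normal(size=(1200, 3))
litho_units = rng.integers(0, 4, size=1200)        # placeholder labels

X = StandardScaler().fit_transform(features)       # put variables on a common scale
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# External validation: how well do the unsupervised clusters correspond to the
# lithologic units assigned by conventional methods?
print("adjusted Rand index vs. lithologic units:",
      adjusted_rand_score(litho_units, clusters))
```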
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
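A much-simplified sketch of wavelet-based transition detection (not the authors' Lipschitz-exponent algorithm) is shown below: flag samples whose finest-scale wavelet detail coefficients are anomalously large, which is where an abrupt state shift first leaves a signature even in a noisy trace.

```python
import numpy as np
import pywt

def flag_transitions(signal, wavelet='db4', level=4, z_thresh=5.0):
    """Flag candidate abrupt state shifts from the finest-scale wavelet detail
    coefficients (a crude stand-in for the Lipschitz-exponent test)."""
    details = pywt.wavedec(signal, wavelet, level=level)[-1]   # finest detail band
    z = np.abs(details - np.median(details)) / (np.std(details) + 1e-12)
    # map flagged coefficient indices back to approximate sample positions
    return np.flatnonzero(z > z_thresh) * 2

# Synthetic bistable-looking trace: a noisy low state that jumps to a high state.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.2, 0.02, 2000), rng.normal(0.8, 0.02, 2000)])
print("candidate shift locations near samples:", flag_transitions(x))
```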
A new paradigm for atomically detailed simulations of kinetics in biophysical systems.
Elber, Ron
2017-01-01
The kinetics of biochemical and biophysical events determined the course of life processes and attracted considerable interest and research. For example, modeling of biological networks and cellular responses relies on the availability of information on rate coefficients. Atomically detailed simulations hold the promise of supplementing experimental data to obtain a more complete kinetic picture. However, simulations at biological time scales are challenging. Typical computer resources are insufficient to provide the ensemble of trajectories at the correct length that is required for straightforward calculations of time scales. In the last years, new technologies emerged that make atomically detailed simulations of rate coefficients possible. Instead of computing complete trajectories from reactants to products, these approaches launch a large number of short trajectories at different positions. Since the trajectories are short, they are computed trivially in parallel on modern computer architecture. The starting and termination positions of the short trajectories are chosen, following statistical mechanics theory, to enhance efficiency. These trajectories are analyzed. The analysis produces accurate estimates of time scales as long as hours. The theory of Milestoning that exploits the use of short trajectories is discussed, and several applications are described.
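The post-processing step of Milestoning can be sketched as follows, assuming a transition kernel K and mean milestone lifetimes have already been estimated from the short trajectories (the numbers below are illustrative, not taken from any simulation): the mean first passage time follows from the expected number of visits to each milestone before absorption at the product.

```python
import numpy as np

def milestoning_mfpt(K, lifetimes, p0, product):
    """Mean first passage time to the product milestone.

    K         : (n, n) transition kernel between milestones, estimated from the
                termination statistics of the short trajectories
    lifetimes : mean incubation time spent at each milestone before a transition
    p0        : initial distribution over milestones
    product   : index of the absorbing (product) milestone
    """
    transient = [i for i in range(len(lifetimes)) if i != product]
    Kt = K[np.ix_(transient, transient)]
    # expected number of visits to each transient milestone before absorption
    visits = np.linalg.solve((np.eye(len(transient)) - Kt).T, p0[transient])
    return visits @ lifetimes[transient]

# Toy 4-milestone example with illustrative (not measured) numbers.
K = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])          # milestone 3 is the product
lifetimes = np.array([1.0, 2.0, 2.0, 0.0])    # mean lifetimes, in ns say
p0 = np.array([1.0, 0.0, 0.0, 0.0])
print("MFPT:", milestoning_mfpt(K, lifetimes, p0, product=3), "ns")
```

Because the short trajectories only need to supply K and the lifetimes, the expensive sampling parallelizes trivially, while the time-scale estimate itself reduces to the small linear solve above.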
RNA–protein binding kinetics in an automated microfluidic reactor
Ridgeway, William K.; Seitaridou, Effrosyni; Phillips, Rob; Williamson, James R.
2009-01-01
Microfluidic chips can automate biochemical assays on the nanoliter scale, which is of considerable utility for RNA–protein binding reactions that would otherwise require large quantities of proteins. Unfortunately, complex reactions involving multiple reactants cannot be prepared in current microfluidic mixer designs, nor is investigation of long-time scale reactions possible. Here, a microfluidic ‘Riboreactor’ has been designed and constructed to facilitate the study of kinetics of RNA–protein complex formation over long time scales. With computer automation, the reactor can prepare binding reactions from any combination of eight reagents, and is optimized to monitor long reaction times. By integrating a two-photon microscope into the microfluidic platform, 5-nl reactions can be observed for longer than 1000 s with single-molecule sensitivity and negligible photobleaching. Using the Riboreactor, RNA–protein binding reactions with a fragment of the bacterial 30S ribosome were prepared in a fully automated fashion and binding rates were consistent with rates obtained from conventional assays. The microfluidic chip successfully combines automation, low sample consumption, ultra-sensitive fluorescence detection and a high degree of reproducibility. The chip should be able to probe complex reaction networks describing the assembly of large multicomponent RNPs such as the ribosome. PMID:19759214
Use of ruthenium dyes for subnanosecond detector fidelity testing in real time transient absorption
NASA Astrophysics Data System (ADS)
Byrdin, Martin; Thiagarajan, Viruthachalam; Villette, Sandrine; Espagne, Agathe; Brettel, Klaus
2009-04-01
Transient absorption spectroscopy is a powerful tool for the study of photoreactions on time scales from femtoseconds to seconds. Typically, reactions slower than ~1 ns are recorded by the "classical" technique; the reaction is triggered by an excitation flash, and absorption changes accompanying the reaction are recorded in real time using a continuous monitoring light beam and a detection system with sufficiently fast response. The pico- and femtosecond region can be accessed by the more recent "pump-probe" technique, which circumvents the difficulties of real time detection on a subnanosecond time scale. This is paid for by accumulation of an excessively large number of shots to sample the reaction kinetics. Hence, it is of interest to extend the classical real time technique as far as possible to the subnanosecond range. In order to identify and minimize detection artifacts common on a subnanosecond scale, like overshoot, ringing, and signal reflections, rigorous testing is required of how the detection system responds to fast changes of the monitoring light intensity. Here, we introduce a novel method to create standard signals for detector fidelity testing on a time scale from a few picoseconds to tens of nanoseconds. The signals result from polarized measurements of absorption changes upon excitation of ruthenium complexes {[Ru(bpy)3]2+ and a less symmetric derivative} by a short laser flash. Two types of signals can be created depending on the polarization of the monitoring light with respect to that of the excitation flash: a fast steplike bleaching at magic angle and a monoexponentially decaying bleaching for parallel polarizations. The lifetime of the decay can be easily varied via temperature and viscosity of the solvent. The method is applied to test the performance of a newly developed real time transient absorption setup with 300 ps time resolution and high sensitivity.
Ascione, Marco; Bargigli, Silvia; Campanella, Luigi; Ulgiati, Sergio
2011-05-23
The material, energy and environmental flows supporting the growth and welfare of the city of Rome during a recent forty-year period (from 1962 to 2002) were investigated in order to understand the resource basis of its present welfare and lifestyle. The study focused on the local scale of the urban system (resources actually used within the system's boundary) as well as on the larger regional and national scales where resources come from. Assessing the resource use change over time allowed us to understand the main drivers of lifestyle changes of the local population. In particular, while the direct, local-scale use of the main material and energy resources exhibits a quadratic growth over time, the total (direct+indirect) consumption on the scale of the global economy is always 3-4 times higher, thus highlighting how much of a city's growth depends on economic and production activities that develop outside of its boundaries. Water use shows an even more alarming trend, in that the indirect consumption grows much faster, suggesting a shift from the use of a less water-intensive mix of products to a different mix that requires much more water in its industrial production. Such a trend calls for increased awareness of the water footprint of goods used as well as increased efficiency in water management by both industries and households. The evolution of resource use and standard of living also affects the release of airborne emissions, an issue that is becoming crucial due to concerns for climate change and urban air pollution. The extent of such additional environmental burden is also explored in the present paper. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Li, Yihan; Kuse, Naoya; Fermann, Martin
2017-08-07
A high-speed ultra-wideband microwave spectral scanning system is proposed and experimentally demonstrated. Utilizing coherent dual electro-optical frequency combs and a recirculating optical frequency shifter, the proposed system realizes wavelength- and time-division multiplexing at the same time, offering flexibility between scan speed and size, weight and power requirements (SWaP). High-speed spectral scanning spanning from ~1 to 8 GHz with ~1.2 MHz spectral resolution is achieved experimentally within 14 µs. The system can be easily scaled to higher bandwidth coverage, faster scanning speed or finer spectral resolution with suitable hardware.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
NASA Astrophysics Data System (ADS)
Mishra, Hiranmaya; Mohanty, Subhendra; Nautiyal, Akhilesh
2012-04-01
In warm inflation models there is the requirement of generating large dissipative couplings of the inflaton with radiation, while at the same time, not de-stabilising the flatness of the inflaton potential due to radiative corrections. One way to achieve this without fine tuning unrelated couplings is by supersymmetry. In this Letter we show that if the inflaton and other light fields are pseudo-Nambu-Goldstone bosons then the radiative corrections to the potential are suppressed and the thermal corrections are small as long as the temperature is below the symmetry breaking scale. In such models it is possible to fulfil the contrary requirements of an inflaton potential which is stable under radiative corrections and the generation of a large dissipative coupling of the inflaton field with other light fields. We construct a warm inflation model which gives the observed CMB-anisotropy amplitude and spectral index where the symmetry breaking is at the GUT scale.
Results of Long Term Life Tests of Large Scale Lithium-Ion Cells
NASA Astrophysics Data System (ADS)
Inoue, Takefumi; Imamura, Nobutaka; Miyanaga, Naozumi; Yoshida, Hiroaki; Komada, Kanemi
2008-09-01
High energy density Li-ion cells have been introduced on recent satellites and in other space applications. We started development of large scale Li-ion cells for space applications in 1997, and the chemical design was fixed in 1999. Confirming life performance is essential for satellite applications, which require long mission lives such as 15 years for GEO and 5 to 7 years for LEO. We therefore started life tests under various conditions, which have now reached 8 to 9 years of actual calendar time. Semi-accelerated GEO tests, which impose both calendar and cycle loss, have reached 42 seasons, corresponding to 21 years in orbit. The specific energy range is 120-130 Wh/kg at EOL. According to the test results, we have confirmed that our Li-ion cells meet the general requirements for space applications such as GEO and LEO with quite high specific energy.
Verification of watershed vegetation restoration policies, arid China
Zhang, Chengqi; Li, Yu
2016-01-01
Verification of restoration policies that have been implemented is of significance to simultaneously reduce global environmental risks while also meeting economic development goals. This paper proposes a novel method based on the idea of multiple time scales to verify ecological restoration policies in the Shiyang River drainage basin, arid China. We integrated modern pollen transport characteristics of the entire basin and pollen records from 8 Holocene sedimentary sections, and quantitatively reconstructed the millennial-scale changes of watershed vegetation zones by defining a new pollen-precipitation index. Meanwhile, the Empirical Orthogonal Function method was used to quantitatively analyze spatial and temporal variations of the Normalized Difference Vegetation Index in summer (June to August) of 2000–2014. By contrasting the vegetation changes that are mainly controlled by millennial-scale natural ecological evolution with those under conditions of modern ecological restoration measures, we found that vegetation changes of the entire Shiyang River drainage basin are synchronous on both time scales, and that the current ecological restoration policies meet the requirements of long-term restoration objectives and show promising early results on ecological environmental restoration. Our findings present an innovative method to verify river ecological restoration policies, and also provide the scientific basis for proposing future emphases of ecological restoration strategies. PMID:27470948
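The Empirical Orthogonal Function step can be sketched with a standard SVD-based decomposition, assuming a (time x grid cell) NDVI anomaly matrix; the array shapes and values below are illustrative placeholders, not the Shiyang River summer composites.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """Empirical Orthogonal Function analysis of a (time, space) field,
    e.g. summer NDVI composites flattened over grid cells."""
    anomalies = field - field.mean(axis=0)         # remove the time mean at each cell
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)            # variance explained per mode
    pcs = U[:, :n_modes] * s[:n_modes]             # temporal principal components
    eofs = Vt[:n_modes]                            # spatial patterns
    return eofs, pcs, variance_frac[:n_modes]

# Illustrative stand-in for 15 summers (2000-2014) over 5000 grid cells.
rng = np.random.default_rng(5)
ndvi = rng.normal(0.3, 0.05, size=(15, 5000))
eofs, pcs, var = eof_analysis(ndvi)
print("variance explained by leading modes:", np.round(var, 3))
```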
A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.
Halloran, John T; Rocke, David M
2018-05-04
Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
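Conceptually, one Percolator-style rescoring pass looks like the hedged sketch below, which uses scikit-learn's LinearSVC on synthetic target/decoy features rather than the paper's l2-SVM-MFN or TRON solvers; the feature set and the single training pass are simplifications of Percolator's cross-validated, iterative procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)

# Hypothetical PSM feature matrix (search-engine score, delta score, charge, ...)
# and labels: +1 for target PSMs, -1 for decoys. Synthetic stand-in data only.
X = np.vstack([rng.normal(0.5, 1.0, size=(5000, 8)),    # targets (shifted)
               rng.normal(0.0, 1.0, size=(5000, 8))])   # decoys
y = np.concatenate([np.ones(5000), -np.ones(5000)])

# One rescoring pass: learn a linear decision boundary between targets and
# decoys, then use the signed distance as the recalibrated PSM score.
svm = LinearSVC(C=1.0, dual=False, max_iter=10000).fit(X, y)
new_scores = svm.decision_function(X)
order = np.argsort(-new_scores)          # PSMs re-ranked by the recalibrated score
print("top recalibrated scores:", np.round(new_scores[order[:5]], 2))
```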
NASA Astrophysics Data System (ADS)
Mandt, Kathleen; Waite, J. Hunter, Jr.; Bell, Jared; Mousis, Olivier
2010-04-01
Current isotopic ratios in planetary atmospheres have played an important role in determining how those atmospheres have evolved over geologic time scales (e.g. Donahue et al. 1997, Lunine et al. 1999). The current 12C/13C ratio in methane is a particularly useful indicator of Titan's atmospheric evolutionary history (Mandt et al. 2009). Primordial 12C/13C ratios throughout the solar system are limited to 89.01 (+4.45/-2.67) (Alexander et al. 2007, Martins et al. 2008), while the methane 12C/13C ratios measured by GCMS and CIRS are 82.3 +/- 1.0 and 76.6 +/- 2.7, respectively (Niemann et al. 2005, Nixon et al. 2008). This is well below the primordial range, suggesting fractionation of the isotopes by atmospheric processes. A number of atmospheric mass loss processes can fractionate the isotopes over geologic time scales. Photochemistry and escape are of particular importance (Donahue et al. 1997, Mandt et al. 2009). Measurements of the 12C/13C ratios in C2 hydrocarbons show evidence of fractionation due to photochemistry (Nixon et al. 2008) that is most likely due to a kinetic isotope effect (KIE). A KIE is a mildly efficient fractionating process in which reactions involving 12C occur 1.04 times faster than reactions involving 13C. A moderate time scale, on the order of 50 to 400 million years, is required to change the 12C/13C ratio of the atmospheric methane inventory. The exact length of this time scale depends directly on the methane photochemical loss rate. Titan's photochemistry is extremely complex, and although the total photochemical loss rate is photon-limited (Lorenz et al. 1997), the literature provides a range of loss rates between 4.9 x 10^9 cm^-2 s^-1 (Wilson and Atreya 2004) and 3.4 x 10^10 cm^-2 s^-1 (Lebonnois et al. 2003). This range can alter the time scale for fractionation in the carbon isotopes by as much as a factor of 8. INMS measurements of the methane 12C/13C ratio in the upper atmosphere show that atmospheric escape is a more efficient fractionating process than photochemistry (Mandt et al. 2009). The literature provides a range of possible values for the methane escape rates that depend on the input parameters to upper atmospheric models (Bell et al. 2010). The escape rate of methane could be as little as 2.75 x 10^7 cm^-2 s^-1 (de la Haye et al. 2007) or as great as 3.0 x 10^9 cm^-2 s^-1 (Yelle et al. 2008). This range of loss rates can alter the time scale for fractionation by as much as a factor of 5. Although the photochemical fractionation is less efficient than the escape rate, variance in its value has a greater impact on the time required to fractionate the isotopes because the magnitude of the photochemical loss is much greater than that of the escape rate. Thus, a better quantification of both mass loss rates is key to understanding the evolutionary history of Titan's atmosphere.
System-level power optimization for real-time distributed embedded systems
NASA Astrophysics Data System (ADS)
Luo, Jiong
Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
[A large-scale accident in Alpine terrain].
Wildner, M; Paal, P
2015-02-01
Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, compounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.
NASA Astrophysics Data System (ADS)
Darmenova, K.; Higgins, G.; Kiley, H.; Apling, D.
2010-12-01
Current General Circulation Models (GCMs) provide a valuable estimate of both natural and anthropogenic climate changes and variability on global scales. At the same time, future climate projections calculated with GCMs are not of sufficient spatial resolution to address regional needs. Many climate impact models require information at scales of 50 km or less, so dynamical downscaling is often used to estimate the smaller-scale information based on larger scale GCM output. To address current deficiencies in local planning and decision making with respect to regional climate change, our research is focused on performing a dynamical downscaling with the Weather Research and Forecasting (WRF) model and developing decision aids that translate the regional climate data into actionable information for users. Our methodology involves development of climatological indices of extreme weather and heating/cooling degree days based on WRF ensemble runs initialized with the NCEP-NCAR reanalysis and the European Center/Hamburg Model (ECHAM5). Results indicate that the downscaled simulations provide the necessary detailed output required by state and local governments and the private sector to develop climate adaptation plans. In addition, we evaluated WRF performance in long-term climate simulations over the Southwestern US and validated it against observational datasets.
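Heating and cooling degree days, one of the indices mentioned above, can be computed from downscaled daily mean temperatures as in the sketch below; the 18 degree C base temperature is a common convention and an assumption here, not necessarily the threshold used in the study.

```python
import numpy as np

def degree_days(daily_mean_temp_c, base_c=18.0):
    """Heating and cooling degree days from daily mean temperatures.

    HDD accumulates how far each day falls below the base temperature and
    CDD how far it rises above it; the 18 C base is an illustrative choice."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    hdd = np.clip(base_c - t, 0.0, None).sum()
    cdd = np.clip(t - base_c, 0.0, None).sum()
    return hdd, cdd

# Example: one illustrative month of downscaled daily mean temperatures.
rng = np.random.default_rng(7)
print(degree_days(rng.normal(24.0, 4.0, size=30)))
```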
Generalization Technique for 2D+SCALE Dhe Data Model
NASA Astrophysics Data System (ADS)
Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
Different users or applications need different scale models, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension such as scale and/or time into a 3D model, but the implementation of a scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting a scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with a scale dimension. The description of the concept and implementation of generating 3D-scale models (2D spatial + scale dimension) with the DHE data structure forms the major discussion of this paper. We strongly believe that some advantages, such as local modification and topological elements (navigation, query and semantic information) in the scale dimension, could be used for future 3D-scale applications.
Scaling of size distributions of C60 and C70 fullerene surface islands
NASA Astrophysics Data System (ADS)
Dubrovskii, V. G.; Berdnikov, Y.; Olyanich, D. A.; Mararov, V. V.; Utas, T. V.; Zotov, A. V.; Saranin, A. A.
2017-06-01
We present experimental data and a theoretical analysis for the size distributions of C60 and C70 surface islands deposited onto In-modified Si(111)√3 × √3-Au surface under different conditions. We show that both fullerene islands feature an analytic Vicsek-Family scaling shape where the scaled size distributions are given by a power law times an incomplete beta-function with the required normalization. The power exponent in this distribution corresponds to the fractal shape of two-dimensional islands, confirmed by the experimentally observed morphologies. Quite interestingly, we do not see any significant difference between C60 and C70 fullerenes in terms of either scaling parameters or temperature dependence of the diffusion constants. In particular, we deduce the activation energy for surface diffusion of ED = 140 ± 10 meV for both types of fullerenes.
Cotter, C. J.
2017-01-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow. PMID:28989316
Imaging high-speed friction at the nanometer scale
Thorén, Per-Anders; de Wijn, Astrid S.; Borgani, Riccardo; Forchheimer, Daniel; Haviland, David B.
2016-01-01
Friction is a complicated phenomenon involving nonlinear dynamics at different length and time scales. Understanding its microscopic origin requires methods for measuring force on nanometer-scale asperities sliding at velocities reaching centimetres per second. Despite enormous advances in experimental technique, this combination of small length scale and high velocity remain elusive. We present a technique for rapidly measuring the frictional forces on a single asperity over a velocity range from zero to several centimetres per second. At each image pixel we obtain the velocity dependence of both conservative and dissipative forces, revealing the transition from stick-slip to smooth sliding friction. We explain measurements on graphite using a modified Prandtl–Tomlinson model, including the damped elastic deformation of the asperity. With its improved force sensitivity and small sliding amplitude, our method enables rapid and detailed surface mapping of the velocity dependence of frictional forces with less than 10 nm spatial resolution. PMID:27958267
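A minimal reduced-units sketch of the standard Prandtl-Tomlinson model (without the asperity-deformation modification described in the abstract, and with illustrative parameters rather than values fitted to the graphite measurements) shows how the mean lateral spring force can be evaluated as a function of sliding velocity.

```python
import numpy as np

def pt_friction(v, eta=5.0, k=1.0, m=1.0, gamma=1.0, dt=0.01, periods=20):
    """Mean friction force in the reduced-units Prandtl-Tomlinson model: a tip of
    mass m, pulled through a spring k at support velocity v over a sinusoidal
    potential of corrugation eta = 4*pi^2*U0/(k*a^2), with damping gamma.
    All values are illustrative reduced units, not fitted to any measurement."""
    U0 = eta * k / (4 * np.pi ** 2)               # lattice constant a = 1
    n_steps = int(periods / (v * dt))
    x, xdot, f_sum = 0.0, 0.0, 0.0
    for i in range(n_steps):
        spring = k * (v * i * dt - x)             # lateral spring (friction) force
        corrugation = -2 * np.pi * U0 * np.sin(2 * np.pi * x)
        xdot += dt * (spring + corrugation - gamma * xdot) / m
        x += dt * xdot                            # semi-implicit Euler update
        f_sum += spring
    return f_sum / n_steps

for v in (0.01, 0.1, 1.0):   # low speed: stick-slip; high speed: smoother sliding
    print(f"v = {v:5.2f}  ->  mean friction {pt_friction(v):.3f}")
```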
Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm
NASA Astrophysics Data System (ADS)
Mathai, J.; Mujumdar, P.
2017-12-01
A key focus of this study is to develop a method which is physically consistent with the hydrologic processes and that can capture short-term characteristics of the daily hydrograph as well as the correlation of streamflow in temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of streamflow time series. The method has two steps: In step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession, and in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. However, daily flow generated using the Markov chain approach is capable of generating a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches. We show the utility of the method and improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
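Step 1 of the generator can be sketched as below, with placeholder parameters (transition probabilities, Gamma shape and scale, recession constant) that are illustrative rather than calibrated values; step 2 would then pass such sequences to the KNN bootstrap resampler.

```python
import numpy as np

rng = np.random.default_rng(8)

def generate_daily_flow(n_days=365, q0=50.0, p_rise=0.3, p_stay_rise=0.5,
                        gamma_shape=2.0, gamma_scale=10.0, recession_k=0.95):
    """Step-1 daily streamflow generator sketched from the abstract: a two-state
    (rising/falling) Markov chain with Gamma-distributed rising-limb increments
    and an exponential recession on the falling limb. All parameter values are
    illustrative placeholders, not calibrated to the Godavari basin."""
    q = np.empty(n_days)
    q[0], rising = q0, False
    for t in range(1, n_days):
        # two-state Markov chain for the hydrograph limb
        p = p_stay_rise if rising else p_rise
        rising = rng.random() < p
        if rising:
            q[t] = q[t - 1] + rng.gamma(gamma_shape, gamma_scale)   # rising limb
        else:
            q[t] = q[t - 1] * recession_k                           # recession
    return q

flow = generate_daily_flow()
print("mean / max simulated daily flow:", flow.mean().round(1), flow.max().round(1))
```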
Real-time monitoring of CO2 storage sites: Application to Illinois Basin-Decatur Project
Picard, G.; Berard, T.; Chabora, E.; Marsteller, S.; Greenberg, S.; Finley, R.J.; Rinck, U.; Greenaway, R.; Champagnon, C.; Davard, J.
2011-01-01
Optimization of carbon dioxide (CO2) storage operations for efficiency and safety requires use of monitoring techniques and implementation of control protocols. The monitoring techniques consist of permanent sensors and tools deployed for measurement campaigns. Large amounts of data are thus generated. These data must be managed and integrated for interpretation at different time scales. A fast interpretation loop involves combining continuous measurements from permanent sensors as they are collected to enable a rapid response to detected events; a slower loop requires combining large datasets gathered over longer operational periods from all techniques. The purpose of this paper is twofold. First, it presents an analysis of the monitoring objectives to be performed in the slow and fast interpretation loops. Second, it describes the implementation of the fast interpretation loop with a real-time monitoring system at the Illinois Basin-Decatur Project (IBDP) in Illinois, USA. © 2011 Published by Elsevier Ltd.
Computer simulations and real-time control of ELT AO systems using graphical processing units
NASA Astrophysics Data System (ADS)
Wang, Lianqi; Ellerbroek, Brent
2012-07-01
The adaptive optics (AO) simulations at the Thirty Meter Telescope (TMT) have been carried out using the efficient, C-based multi-threaded adaptive optics simulator (MAOS, http://github.com/lianqiw/maos). By porting time-critical parts of MAOS to graphical processing units (GPU) using NVIDIA CUDA technology, we achieved a 10-fold speed-up for each GTX 580 GPU used compared to a modern quad-core CPU. Each time step of the full-scale end-to-end simulation for the TMT narrow-field infrared AO system (NFIRAOS) takes only 0.11 seconds on a desktop with two GTX 580s. We also demonstrate that the TMT minimum variance reconstructor can be assembled in matrix-vector multiply (MVM) format in 8 seconds with 8 GTX 580 GPUs, meeting the TMT requirement for updating the reconstructor. Analysis shows that it is also possible to apply the MVM using 8 GTX 580s within the required latency.
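The real-time step described above is, at its core, a single matrix-vector product per wavefront-sensor frame. The NumPy stand-in below shows only the structure of that operation; the dimensions are arbitrary placeholders rather than NFIRAOS values, and the CPU timing says nothing about GTX 580 latency.

```python
# Toy MVM reconstructor step: actuator commands as one matrix-vector product
# with a wavefront-sensor gradient vector. Sizes are hypothetical.
import time
import numpy as np

n_grads, n_acts = 10_000, 3_000
rng = np.random.default_rng(0)
R = rng.standard_normal((n_acts, n_grads)).astype(np.float32)  # reconstructor matrix
g = rng.standard_normal(n_grads).astype(np.float32)            # one frame of gradients

t0 = time.perf_counter()
a = R @ g                                # the per-time-step MVM applied in real time
dt_ms = (time.perf_counter() - t0) * 1e3
print(f"actuator vector of length {a.size} computed in {dt_ms:.2f} ms")
```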
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom-designed computer to be used as a processing element in a multiprocessor-based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real-time simulations are needed for closed-loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by prefetching the next instruction while the current one is executing, transporting data on high-speed data buses, and using state-of-the-art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom-designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
Switching between simple cognitive tasks: the interaction of top-down and bottom-up factors
NASA Technical Reports Server (NTRS)
Ruthruff, E.; Remington, R. W.; Johnston, J. C.
2001-01-01
How do top-down factors (e.g., task expectancy) and bottom-up factors (e.g., task recency) interact to produce an overall level of task readiness? This question was addressed by factorially manipulating task expectancy and task repetition in a task-switching paradigm. The effects of expectancy and repetition on response time tended to interact underadditively, but only because the traditional binary task-repetition variable lumps together all switch trials, ignoring variation in task lag. When the task-recency variable was scaled continuously, all 4 experiments instead showed additivity between expectancy and recency. The results indicated that expectancy and recency influence different stages of mental processing. One specific possibility (the configuration-execution model) is that task expectancy affects the time required to configure upcoming central operations, whereas task recency affects the time required to actually execute those central operations.
A transient FETI methodology for large-scale parallel implicit computations in structural mechanics
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Crivelli, Luis; Roux, Francois-Xavier
1992-01-01
Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low-frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because explicit schemes are also easier to parallelize than implicit ones. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of parallel hardware, and perhaps never will be. Therefore, it is essential to develop efficient and robust alternatives to direct methods that are also amenable to massively parallel processing, because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating low-frequency dynamics. Here we present a domain decomposition method for implicit schemes that requires significantly less storage than factorization algorithms, is several times faster than other popular direct and iterative methods, can be easily implemented on both shared- and local-memory parallel processors, and is efficient in both computation and communication. The proposed transient domain decomposition method is an extension of the Finite Element Tearing and Interconnecting (FETI) method developed by Farhat and Roux for the solution of static problems. Serial and parallel performance results on the CRAY Y-MP/8 and the iPSC-860/128 systems are reported and analyzed for realistic structural dynamics problems. These results establish the superiority of the FETI method over both the serial/parallel conjugate gradient algorithm with diagonal scaling and the serial/parallel direct method, and contrast the computational power of the iPSC-860/128 parallel processor with that of the CRAY Y-MP/8 system.
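For readers unfamiliar with substructuring, the toy sketch below illustrates the general domain-decomposition idea on a 1D bar using a primal Schur-complement condensation onto the interface. FETI itself is a dual formulation with Lagrange multipliers, so this is only a conceptual cousin of the method reported here, not the algorithm itself.

```python
# Split a 1D bar into two subdomains sharing one interface node, condense the
# interior unknowns onto the interface, solve the small interface system, then
# recover the interior solution. Verified against the direct global solve.
import numpy as np

def laplacian(n):
    """Stiffness matrix of a 1D bar with n interior nodes (unit spacing)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

n_left, n_right = 4, 5                      # interior nodes per subdomain
n = n_left + 1 + n_right                    # one shared interface node
K, f = laplacian(n), np.ones(n)

I = list(range(n_left)) + list(range(n_left + 1, n))   # interior dof indices
B = [n_left]                                            # interface dof index
K_II, K_IB, K_BB = K[np.ix_(I, I)], K[np.ix_(I, B)], K[np.ix_(B, B)]
f_I, f_B = f[I], f[B]

S = K_BB - K_IB.T @ np.linalg.solve(K_II, K_IB)         # Schur complement
g = f_B - K_IB.T @ np.linalg.solve(K_II, f_I)
u_B = np.linalg.solve(S, g)                              # interface solution
u_I = np.linalg.solve(K_II, f_I - K_IB @ u_B)            # interior back-substitution

u = np.empty(n)
u[I], u[B] = u_I, u_B
assert np.allclose(K @ u, f)                             # matches global solve
```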
Blackwood, Jaime; Duff, Jonathan P; Nettel-Aguirre, Alberto; Djogovic, Dennis; Joynt, Chloe
2014-05-01
The effect of teaching crisis resource management skills on the resuscitation performance of pediatric residents is unknown. The primary objective of this pilot study was to determine if teaching crisis resource management to residents leads to improved clinical and crisis resource management performance in simulated pediatric resuscitation scenarios. A prospective, randomized, controlled pilot study. Simulation facility at a tertiary pediatric hospital. Junior pediatric residents. Junior pediatric residents were randomized to 1 hour of crisis resource management instruction or no additional training. Time to predetermined resuscitation tasks was noted in simulated resuscitation scenarios immediately after intervention and again 3 months post intervention. Crisis resource management skills were evaluated using the Ottawa Global Rating Scale. Fifteen junior residents participated in the study, of whom seven were in the intervention group. The intervention crisis resource management group placed monitor leads 24.6 seconds earlier (p = 0.02), placed an IV 47.1 seconds sooner (p = 0.04), called for help 50.4 seconds faster (p = 0.03), and checked for a pulse after noticing a rhythm change 84.9 seconds quicker (p = 0.01). There was no statistically significant difference in time to initiation of cardiopulmonary resuscitation (p = 0.264). The intervention group had overall crisis resource management performance scores 1.15 points higher (Ottawa Global Rating Scale [out of 7]) (p = 0.02). Three months later, these differences between the groups persisted. A 1-hour crisis resource management teaching session improved time to critical initial steps of pediatric resuscitation and crisis resource management performance as measured by the Ottawa Global Rating Scale. The control group did not develop these crisis resource management skills over 3 months of standard training, indicating that obtaining these skills requires specific education. Larger studies of crisis resource education are required.
Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton
2018-03-13
The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
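The multigrid approach on which this solver is built can be illustrated with a minimal one-dimensional two-grid cycle. The sketch below is a toy illustration of the general method (weighted-Jacobi smoothing, full-weighting restriction, exact coarse solve, interpolation, correction); it is not DL_MG's three-dimensional, high-order, parallel implementation, and the grid size and right-hand side are arbitrary.

```python
# Two-grid correction cycle for -u'' = f on (0, 1) with zero Dirichlet BCs.
import numpy as np

def poisson_matrix(n, h):
    """Second-order finite-difference operator for -u'' on n interior points."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_grid_cycle(u, f, h, nu=3, omega=2.0 / 3.0):
    """Smooth, restrict the residual, solve the coarse problem exactly,
    prolong and correct, then smooth again."""
    n = u.size
    A = poisson_matrix(n, h)
    d = np.diag(A)
    for _ in range(nu):                                       # pre-smoothing (weighted Jacobi)
        u = u + omega * (f - A @ u) / d
    r = f - A @ u                                             # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])       # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(rc.size, 2 * h), rc)  # exact coarse solve
    e = np.zeros(n)                                           # linear-interpolation prolongation
    e[1::2] = ec
    ec_pad = np.r_[0.0, ec, 0.0]
    e[0::2] = 0.5 * (ec_pad[:-1] + ec_pad[1:])
    u = u + e                                                 # coarse-grid correction
    for _ in range(nu):                                       # post-smoothing
        u = u + omega * (f - A @ u) / d
    return u

N = 16                                  # fine grid: 15 interior points, h = 1/16
h = 1.0 / N
x = np.linspace(h, 1.0 - h, N - 1)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution is sin(pi*x)
u = np.zeros_like(x)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print("residual norm:", np.linalg.norm(f - poisson_matrix(N - 1, h) @ u))
```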
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
Fitts’ Law in Early Postural Adjustments
Bertucco, M.; Cesari, P.; Latash, M.L
2012-01-01
We tested a hypothesis that the classical relation between movement time and index of difficulty (ID) in quick pointing action (Fitts' Law) reflects processes at the level of motor planning. Healthy subjects stood on a force platform and performed quick and accurate hand movements into targets of different size located at two distances. The movements were associated with early postural adjustments that are assumed to reflect motor planning processes. The short distance did not require trunk rotation, while the long distance did. As a result, movements over the long distance were associated with substantial Coriolis forces. Movement kinematics and contact forces and moments recorded by the platform were studied. Movement time scaled with ID for both movements. However, the data could not be fitted with a single regression: Movements over the long distance had a larger intercept corresponding to movement times about 140 ms longer than movements over the shorter distance. The magnitude of postural adjustments prior to movement initiation scaled with ID for both short and long distances. Our results provide strong support for the hypothesis that Fitts' Law emerges at the level of motor planning, not at the level of corrections of ongoing movements. They show that, during natural movements, changes in movement distance may lead to changes in the relation between movement time and ID, for example when the contribution of different body segments to the movement varies and when the action of Coriolis force may require an additional correction of the movement trajectory. PMID:23211560
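The separate-regression finding can be illustrated with a short sketch of Fitts' law, MT = a + b * ID with ID = log2(2D / W), fitted separately for the two distances. Every number below is a made-up placeholder, not data from this study.

```python
# Fit Fitts' law separately for a short and a long distance; the long-distance
# data are constructed with the same slope but a ~140 ms larger intercept.
import numpy as np

def index_of_difficulty(distance, width):
    return np.log2(2.0 * distance / width)

widths = np.array([2.0, 4.0, 8.0])            # hypothetical target widths (cm)
short_d, long_d = 20.0, 60.0                  # hypothetical distances (cm)

id_short = index_of_difficulty(short_d, widths)
id_long = index_of_difficulty(long_d, widths)
mt_short = 0.20 + 0.10 * id_short             # hypothetical movement times (s)
mt_long = 0.34 + 0.10 * id_long

b_s, a_s = np.polyfit(id_short, mt_short, 1)  # slope, intercept (short distance)
b_l, a_l = np.polyfit(id_long, mt_long, 1)    # slope, intercept (long distance)
print(f"short: MT = {a_s:.3f} + {b_s:.3f}*ID")
print(f"long : MT = {a_l:.3f} + {b_l:.3f}*ID  (intercept ~{(a_l - a_s) * 1000:.0f} ms larger)")
```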
Proposed Approach to Stable High Beta Plasmas in ET
NASA Astrophysics Data System (ADS)
Taylor, R. J.; Carter, T. A.; Gauvreau, J.-L.; Gourdain, P.-A.; Grossman, A.; Lafonteese, D. J.; Pace, D. C.; Schmitz, L. W.
2003-10-01
Five-second-long plasmas have been produced in ET with ease. We need these long pulses to evolve high-beta equilibria under controlled conditions. However, equilibrium control is lost to internal disruptions due to the development of giant sawteeth on the 1-second time scale. This time scale is approximately the central energy confinement time, while the central particle confinement time is much longer than 1 second. This persistent limitation is present in ohmic and ICRF-heated discharges. MHD-stable current profiles have been found using DCON (A. H. Glasser, private communication), but transport-related phenomena such as giant sawteeth and uncontrolled transport barrier evolution are not yet part of a simple stability study. We advocate avoiding the evolution of giant sawteeth and the conditions responsible for MHD instabilities, as opposed to exploring their stabilization. This is equivalent to the statement that self-organized plasmas are in fact not welcome in long-pulse tokamaks. We intend to prevent self-organization by applying a multi-faceted ICRF strategy. The in-house technology is ready, but the approach needs to be artful and not preconceived. The flexibility built into the ET hardware is likely to help us find a way to achieve global plasma control. It is essential that this work be pursued with a focus on parameter performance and configuration control. Both require a significant commitment to understanding the device physics and to delivering on the engineering required for control and performance.
Türk, Hacer Şebnem; Aydoğmuş, Meltem; Ünsal, Oya; Işıl, Canan Tülay; Citgez, Bülent; Oba, Sibel; Açık, Mehmet Eren
2014-12-01
Different drug combinations are used for sedation in colonoscopy procedures. A ketamine-propofol (ketofol) mixture provides effective sedation and has minimal adverse effects. Alfentanil, given by incremental injection as an adjunct, also provides anesthesia for short surgical procedures. However, no study has investigated the use of ketofol compared with an opioid-propofol combination in colonoscopic procedures. A total of 70 patients, ASA physical status I-II, scheduled to undergo elective colonoscopy, were enrolled in this prospective randomized study and allocated to two groups. After premedication, sedation was induced with 0.5 mg/kg ketamine + 1 mg/kg propofol in Group KP, and 10 µg/kg alfentanil + 1 mg/kg propofol in Group AP. Propofol was added when required. Demographic data, colonoscopy duration, recovery time, discharge time, mean arterial pressure (MAP), heart rate (HR), peripheral oxygen saturation, Ramsay Sedation Scale values, patient satisfaction scores, and complications were recorded. The need for additional propofol doses was significantly higher in Group AP than in Group KP. MAP at minutes 1 and 5, the Ramsay Sedation Scale score at minute 5, and discharge time were significantly higher in Group KP than in Group AP. The number of additional propofol doses and the total propofol dose were significantly lower in Group KP than in Group AP. Ketofol provided better hemodynamic stability and quality of sedation than the alfentanil-propofol combination in elective colonoscopy and required less additional propofol; however, it prolonged discharge time. Both combinations can safely be used for colonoscopy sedation.
Development of a time sensitivity score for frequently occurring motor vehicle crash injuries.
Schoell, Samantha L; Doud, Andrea N; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Martin, R Shayn; Meredith, J Wayne; Stitzel, Joel D
2015-03-01
Injury severity alone is a poor indicator of the time sensitivity of injuries. The purpose of the study was to quantify the urgency with which the most frequent motor vehicle crash injuries require treatment, according to expert physicians. The time sensitivity was quantified for the top 95% most frequently occurring Abbreviated Injury Scale (AIS) 2+ injuries in the National Automotive Sampling System-Crashworthiness Data System (NASS-CDS) 2000-2011. A Time Sensitivity Score was developed using expert physician survey data in which physicians were asked to determine whether a particular injury should go to a Level I/II trauma center and the urgency with which that injury required treatment. When stratifying by AIS severity, the mean Time Sensitivity Score increased with increasing AIS severity. The mean Time Sensitivity Scores by AIS severity were as follows: 0.50 (AIS 2); 0.78 (AIS 3); 0.92 (AIS 4); 0.97 (AIS 5); and 0.97 (AIS 6). When stratifying by anatomical region, the head, thorax, and abdomen were the most time sensitive. Appropriate triage depends on multiple factors, including the severity of an injury, the urgency with which it requires treatment, and the propensity of a significant injury to be missed. The Time Sensitivity Score did not correlate highly with the widely used AIS severity scores, which highlights the inability of AIS scores to capture all aspects of injury severity. The Time Sensitivity Score can be useful in Advanced Automatic Crash Notification systems for identifying highly time sensitive injuries in motor vehicle crashes requiring prompt treatment at a trauma center. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet
ERIC Educational Resources Information Center
Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.
2006-01-01
Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…
ERIC Educational Resources Information Center
Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima
2009-01-01
A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…