Lee, Yi-Hsuan; von Davier, Alina A
2013-07-01
Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs or may be too time-consuming to be applied on a regular basis to an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment for customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
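As a hedged illustration of the quality-control-chart component mentioned above (not the authors' actual procedure), the sketch below runs a two-sided CUSUM chart over a series of administration-level mean scale scores; the target mean, the chart constants k and h, and the synthetic score series are all illustrative assumptions.

```python
import numpy as np

def cusum(scores, target, k=0.5, h=4.0):
    """Two-sided CUSUM chart over a sequence of mean scale scores.

    k is the allowance (slack) and h the decision threshold, both in
    standard-deviation units; these are common textbook defaults,
    not settings taken from the paper.
    """
    z = (np.asarray(scores) - target) / np.std(scores, ddof=1)
    c_plus, c_minus = 0.0, 0.0
    alarms = []
    for t, zt in enumerate(z):
        c_plus = max(0.0, c_plus + zt - k)
        c_minus = max(0.0, c_minus - zt - k)
        if c_plus > h or c_minus > h:
            alarms.append(t)            # flag a possible abrupt shift here
            c_plus, c_minus = 0.0, 0.0  # restart the chart after an alarm
    return alarms

# Hypothetical mean scale scores for 71 administrations with a late upward shift.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(500, 5, 60), rng.normal(508, 5, 11)])
print(cusum(scores, target=500.0))
```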
Prediction of Time Response of Electrowetting
NASA Astrophysics Data System (ADS)
Lee, Seung Jun; Hong, Jiwoo; Kang, Kwan Hyoung
2009-11-01
It is very important to predict the time response of electrowetting-based devices, such as liquid lenses, reflective displays, and optical switches. We investigated the time response of electrowetting, based on an analytical and a numerical method, to find characteristic scales and a scaling law for the switching time. For this, the spreading process of a sessile droplet was analyzed using the domain perturbation method. First, we considered the case of weakly viscous fluids. The analytical result for the spreading process was compared with experimental results, which showed very good agreement in overall time response. It was shown that the overall dynamics is governed by the P2 shape mode. We derived characteristic scales combining the droplet volume, density, and surface tension. The overall dynamic process was scaled quite well by these characteristic scales. A scaling law was derived from the analytical solution and was verified experimentally. We also suggest a scaling law for highly viscous liquids, based on results of numerical analysis for the electrowetting-actuated spreading process.
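The abstract states that the characteristic scales combine droplet volume, density, and surface tension but does not reproduce them; the standard inertial-capillary dimensional argument gives the form below, offered as a generic sketch rather than the paper's exact result.

```latex
% Generic inertial-capillary time scale from dimensional analysis:
% density \rho, surface tension \sigma, droplet radius R (or volume V ~ R^3).
\[
  t_c \;\sim\; \sqrt{\frac{\rho R^{3}}{\sigma}}
      \;\sim\; \sqrt{\frac{\rho V}{\sigma}},
  \qquad
  \tilde{t} \;=\; \frac{t}{t_c}\ \text{(dimensionless switching time)}.
\]
```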
Russian national time scale long-term stability
NASA Astrophysics Data System (ADS)
Alshina, A. P.; Gaigerov, B. A.; Koshelyaevsky, N. B.; Pushkin, S. B.
1994-05-01
The Institute of Metrology for Time and Space, NPO 'VNIIFTRI', generates the National Time Scale (NTS) of Russia, one of the most stable time scales in the world. Its striking feature is that it is based solely on a free ensemble of H-masers. Over the last two years, estimates of NTS long-term stability based only on H-maser intercomparison data give a flicker floor of about (2 to 3) × 10^-15 for averaging times from 1 day to 1 month. Perhaps the most significant feature for a time laboratory is an extremely low possible frequency drift, which is difficult to estimate reliably. Other estimates, free from possible correlations within the ensemble, are available from time comparisons of the NTS against sufficiently stable time scales of outside laboratories. Comparisons of the NTS against the time scales of the secondary time and frequency standards at Golitzino and Irkutsk in Russia, and against NIST, PTB and USNO using GLONASS and GPS time-transfer links, give stability estimates close to those based on H-maser intercomparisons.
Impact of aggregation on scaling behavior of Internet backbone traffic
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Li; Ribeiro, Vinay J.; Moon, Sue B.; Diot, Christophe
2002-07-01
We study the impact of aggregation on the scaling behavior of Internet backbone traffic, based on traces collected from OC3 and OC12 links in a tier-1 ISP. We make two striking observations regarding the sub-second, small-time-scale behavior of Internet backbone traffic: 1) for a majority of these traces, the Hurst parameters at small time scales (1 ms - 100 ms) are fairly close to 0.5, hence the traffic at these time scales is nearly uncorrelated; 2) the scaling behaviors at small time scales are link-dependent and stay fairly invariant over changing utilization and time. To understand the scaling behavior of network traffic, we develop analytical models and employ them to demonstrate how traffic composition -- aggregation of traffic with different characteristics -- affects the small-time scaling of network traffic. The degree of aggregation and the burst correlation structure are two major factors in traffic composition. Our trace-based data analysis confirms this. Furthermore, we discover that traffic composition on a backbone link stays fairly consistent over time and changing utilization, which we believe is the cause of the invariant small-time scaling we observe in the traces.
A Pulsar Time Scale Based on Parkes Observations in 1995-2010
NASA Astrophysics Data System (ADS)
Rodin, A. E.; Fedorova, V. A.
2018-06-01
Timing of highly stable millisecond pulsars provides the possibility of independently verifying terrestrial time scales on intervals longer than a year. An ensemble pulsar time scale is constructed based on pulsar timing data obtained on the 64-m Parkes telescope (Australia) in 1995-2010. Optimal Wiener filters were applied to enhance the accuracy of the ensemble time scale. The run of the time-scale difference PTens-TT(BIPM2011) does not exceed 0.8 ± 0.4 μs over the entire studied time interval. The fractional instability of the difference PTens-TT(BIPM2011) over 15 years is σ_z = (0.6 ± 1.6) × 10^-15, which corresponds to an upper limit on the energy density of the gravitational-wave background of Ω_g h^2 ≲ 10^-10 and on variations in the gravitational potential of ≲ 10^-15 at the frequency 2 × 10^-9 Hz.
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and then three-dimensional perceptual maps of the results are provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences among stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups, which correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
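As a sketch of the workflow described above, the code below embeds a precomputed dissimilarity matrix with metric MDS from scikit-learn; the toy dissimilarity used to fill the matrix is a hypothetical placeholder, not the Kronecker-delta or permutation cross-sample entropy defined in the paper.

```python
import numpy as np
from sklearn.manifold import MDS

def dissimilarity_matrix(series_list, dissimilarity):
    """Fill a symmetric matrix of pairwise dissimilarities between time series."""
    n = len(series_list)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = dissimilarity(series_list[i], series_list[j])
    return d

# Hypothetical placeholder for an irregularity-based dissimilarity
# (the paper uses modified cross-sample entropies instead).
def toy_dissimilarity(x, y):
    return abs(np.std(np.diff(x)) - np.std(np.diff(y)))

rng = np.random.default_rng(1)
series = [rng.standard_normal(500).cumsum() for _ in range(18)]
D = dissimilarity_matrix(series, toy_dissimilarity)

# Three-dimensional perceptual map from the precomputed dissimilarities.
embedding = MDS(n_components=3, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
print(embedding.shape)  # (18, 3)
```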
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-07-01
We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
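For readers unfamiliar with the detrending operation being analyzed, here is a minimal first-order DFA in NumPy; it implements only the standard algorithm, not the frequency-response calculations of the paper, and the window sizes and test signal are illustrative.

```python
import numpy as np

def dfa(x, scales, order=1):
    """First-order detrended fluctuation analysis.

    Returns the RMS fluctuation F(s) for each window size s; the slope of
    log F(s) versus log s estimates the scaling exponent alpha.
    """
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = []
    for s in scales:
        n_win = len(y) // s
        ms = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 12)           # white noise: expect alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))
```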
Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives
NASA Astrophysics Data System (ADS)
Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark
2011-06-01
Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetically rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. A validation suite of model simulations compared to recent high-fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle
2016-09-01
Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we treat the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments that can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore-scale experiments visualizing two-phase flow and solute transport in real time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
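A hedged sketch of the general idea of a wavelet-based surrogate time series, assuming the PyWavelets package: it preserves the per-level wavelet energy of the input while randomizing the detail coefficients. This is a generic construction for illustration, not the specific encoding and surrogate scheme developed in the paper.

```python
import numpy as np
import pywt

def wavelet_surrogate(x, wavelet="db4", level=5, rng=None):
    """Generate a randomized surrogate that preserves the per-level wavelet
    energy of x by shuffling detail coefficients within each level.

    Generic wavelet surrogate for illustration; the wavelet family and
    decomposition level are arbitrary choices.
    """
    rng = np.random.default_rng(rng)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    surrogate_coeffs = [coeffs[0]]                    # keep the approximation
    for d in coeffs[1:]:
        surrogate_coeffs.append(rng.permutation(d))   # randomize the details
    return pywt.waverec(surrogate_coeffs, wavelet)

# Purely synthetic stand-in for a noisy surface-reaction (coverage) signal.
t = np.linspace(0, 10, 1024)
rng = np.random.default_rng(0)
signal = 0.5 + 0.1 * np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
print(wavelet_surrogate(signal).shape)
```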
Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations
NASA Astrophysics Data System (ADS)
Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean
2017-10-01
Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
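The segregate-then-Schur-complement strategy can be illustrated on a toy 2x2 block system with SciPy; the blocks, the diagonal approximation of A used in the approximate Schur complement, and the direct subsolves (standing in for the multilevel methods mentioned above) are all simplifying assumptions for the example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 2x2 block system [[A, B], [C, D]]; in the plasma setting the blocks
# would correspond to different discretization types (nodal, edge, face, ...).
n = 200
A = sp.diags([2.0] * n) + sp.random(n, n, density=0.01, random_state=0) * 0.1
D = sp.diags([3.0] * n) + sp.random(n, n, density=0.01, random_state=1) * 0.1
B = sp.random(n, n, density=0.01, random_state=2) * 0.1
C = sp.random(n, n, density=0.01, random_state=3) * 0.1
K = sp.bmat([[A, B], [C, D]]).tocsc()
b = np.ones(2 * n)

# Approximate Schur complement using diag(A)^-1 (a common, cheap choice;
# the paper's actual approximations are more sophisticated).
A_diag_inv = sp.diags(1.0 / A.diagonal())
S = (D - C @ A_diag_inv @ B).tocsc()
A_lu, S_lu = spla.splu(A.tocsc()), spla.splu(S)

def apply_prec(r):
    """Block upper-triangular preconditioner: Schur-block solve, then A solve."""
    r1, r2 = r[:n], r[n:]
    y2 = S_lu.solve(r2)
    y1 = A_lu.solve(r1 - B @ y2)
    return np.concatenate([y1, y2])

M = spla.LinearOperator((2 * n, 2 * n), matvec=apply_prec)
x, info = spla.gmres(K, b, M=M)
print(info)  # 0 indicates convergence
```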
Chemistry resolved kinetic flow modeling of TATB based explosives
NASA Astrophysics Data System (ADS)
Vitello, Peter; Fried, Laurence E.; Howard, William; Levesque, George; Souers, P. Clark
2012-03-01
Detonation waves in insensitive, TATB-based explosives are believed to have multiple time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release of energy is believed to occur due to diffusion-limited growth of carbon. In the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetically rate controlled. We use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. We term our model chemistry resolved kinetic flow, since CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. We present here two variants of our new rate model and a comparison with hot, ambient, and cold experimental data for PBX 9502.
Physics in space-time with scale-dependent metrics
NASA Astrophysics Data System (ADS)
Balankin, Alexander S.
2013-10-01
We construct a three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. Local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ≪ ℓ_0 to D_H = 4 in the infrared limit ≫ ℓ_0, where ℓ_0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.
Delay induced high order locking effects in semiconductor lasers
NASA Astrophysics Data System (ADS)
Kelleher, B.; Wishon, M. J.; Locquet, A.; Goulding, D.; Tykalewicz, B.; Huyet, G.; Viktorov, E. A.
2017-11-01
Multiple time scales appear in many nonlinear dynamical systems. Semiconductor lasers, in particular, provide a fertile testing ground for multiple time scale dynamics. For solitary semiconductor lasers, the two fundamental time scales are the cavity repetition rate and the relaxation oscillation frequency which is a characteristic of the field-matter interaction in the cavity. Typically, these two time scales are of very different orders, and mutual resonances do not occur. Optical feedback endows the system with a third time scale: the external cavity repetition rate. This is typically much longer than the device cavity repetition rate and suggests the possibility of resonances with the relaxation oscillations. We show that for lasers with highly damped relaxation oscillations, such resonances can be obtained and lead to spontaneous mode-locking. Two different laser types, a quantum dot based device and a quantum well based device, are analysed experimentally yielding qualitatively identical dynamics. A rate equation model is also employed showing an excellent agreement with the experimental results.
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
Advances in time-scale algorithms
NASA Technical Reports Server (NTRS)
Stein, S. R.
1993-01-01
The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for time keeping in mission critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable and simple mean time scales are not used. First, previous time-scale developments are reviewed and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
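The two-clock thought experiment above is easy to reproduce numerically; the frequency offsets and the failure epoch in the sketch below are arbitrary illustrative values.

```python
import numpy as np

# Two noiseless clocks with equal and opposite fractional frequency offsets.
f1, f2 = +1e-13, -1e-13          # illustrative values
t = np.arange(0.0, 10.0, 1.0)    # days
x1, x2 = f1 * t, f2 * t          # time offsets of each clock

# Simple mean time scale: zero frequency while both clocks run...
mean_scale = 0.5 * (x1 + x2)

# ...but if clock 2 fails at day 5, the mean jumps to clock 1's frequency.
after_failure = f1 * (t - 5.0)   # continues at clock 1's rate
failed = np.where(t < 5.0, mean_scale, after_failure)

print(np.diff(mean_scale))  # all zeros: no frequency
print(np.diff(failed))      # steps to f1 after the failure
```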
Li, Shao-Peng; Cadotte, Marc W; Meiners, Scott J; Pu, Zhichao; Fukami, Tadashi; Jiang, Lin
2016-09-01
Whether plant communities in a given region converge towards a particular stable state during succession has long been debated, but rarely tested at a sufficiently long time scale. By analysing a 50-year continuous study of post-agricultural secondary succession in New Jersey, USA, we show that the extent of community convergence varies with the spatial scale and species abundance classes. At the larger field scale, abundance-based dissimilarities among communities decreased over time, indicating convergence of dominant species, whereas incidence-based dissimilarities showed little temporal trend, indicating no sign of convergence. In contrast, plots within each field diverged in both species composition and abundance. Abundance-based successional rates decreased over time, whereas rare species and herbaceous plants showed little change in temporal turnover rates. Initial abandonment conditions only influenced community structure early in succession. Overall, our findings provide strong evidence for scale and abundance dependence of stochastic and deterministic processes over old-field succession. © 2016 John Wiley & Sons Ltd/CNRS.
NASA Astrophysics Data System (ADS)
Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo
2016-07-01
The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitello, P A; Fried, L E; Howard, W M
2011-07-21
Detonation waves in insensitive, TATB based explosives are believed to have multi-time scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow release in energy is believed to occur due to diffusion limited growth of carbon. In the intermediate time scale concentrations of product species likely change from being in equilibrium to being kinetic rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry resolved kinetic flow as CHEETAH tracks the time dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. A HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and comparison with experimental data.
Ground-based demonstration of the European Laser Timing (ELT) experiment.
Schreiber, Karl Ulrich; Prochazka, Ivan; Lauber, Pierre; Hugentobler, Urs; Schäfer, Wolfgang; Cacciapuoti, Luigi; Nasca, Rosario
2010-03-01
The development of techniques for the comparison of distant clocks and for the distribution of stable and accurate time scales has important applications in metrology and fundamental physics research. Additionally, the rapid progress of frequency standards in the optical domain is presently demanding additional efforts for improving the performances of existing time and frequency transfer links. Present clock comparison systems in the microwave domain are based on GPS and two-way satellite time and frequency transfer (TWSTFT). European Laser Timing (ELT) is an optical link presently under study in the frame of the ESA mission Atomic Clock Ensemble in Space (ACES). The on-board hardware for ELT consists of a corner cube retro-reflector (CCR), a single-photon avalanche diode (SPAD), and an event timer board connected to the ACES time scale. Light pulses fired toward ACES by a laser ranging station will be detected by the SPAD diode and time tagged in the ACES time scale. At the same time, the CCR will re-direct the laser pulse toward the ground station providing precise ranging information. We have carried out a ground-based feasibility study at the Geodetic Observatory Wettzell. By using ordinary satellites with laser reflectors and providing a second independent detection port and laser pulse timing unit with an independent time scale, it is possible to evaluate many aspects of the proposed time transfer link before the ACES launch.
Dosage-based parameters for characterization of puff dispersion results.
Berbekar, Eva; Harms, Frank; Leitl, Bernd
2015-01-01
A set of parameters is introduced to characterize the dispersion of puff releases based on the measured dosage. These parameters are the dosage, peak concentration, arrival time, peak time, leaving time, ascent time, descent time and duration. Dimensionless numbers for the scaling of the parameters are derived from dimensional analysis. The dimensionless numbers are tested and confirmed based on a statistically representative wind tunnel dataset. The measurements were carried out in a 1:300 scale model of the Central Business District in Oklahoma City. Additionally, the effect of the release duration on the puff parameters is investigated. Copyright © 2014 Elsevier B.V. All rights reserved.
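As an illustration of how such dosage-based parameters can be extracted from a measured concentration trace, the sketch below computes them for a synthetic puff; the 5%/95% cumulative-dosage thresholds used to define arrival and leaving times are assumptions for the example, not values taken from the paper.

```python
import numpy as np

def puff_parameters(t, c, lo=0.05, hi=0.95):
    """Dosage-based puff parameters from a concentration time series c(t).

    The lo/hi cumulative-dosage thresholds defining arrival and leaving
    times are illustrative assumptions.
    """
    dt = t[1] - t[0]                      # assumes uniform sampling
    cum = np.cumsum(c) * dt               # cumulative dosage
    dosage = cum[-1]
    peak_time = t[np.argmax(c)]
    arrival_time = t[np.searchsorted(cum, lo * dosage)]
    leaving_time = t[np.searchsorted(cum, hi * dosage)]
    return dict(dosage=dosage, peak_concentration=c.max(),
                arrival_time=arrival_time, peak_time=peak_time,
                leaving_time=leaving_time,
                ascent_time=peak_time - arrival_time,
                descent_time=leaving_time - peak_time,
                duration=leaving_time - arrival_time)

# Synthetic puff: log-normal-shaped concentration trace.
t = np.linspace(0, 60, 1201)
c = np.exp(-0.5 * ((np.log(np.clip(t, 1e-6, None)) - np.log(15)) / 0.4) ** 2)
print(puff_parameters(t, c))
```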
Multi-Scale Scattering Transform in Music Similarity Measuring
NASA Astrophysics Data System (ADS)
Wang, Ruobai
Scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used in evaluating music similarity. Compared with dynamic time warping, it has better performance in detecting similar audio signals under local time-frequency deformation. Multi-scale scattering means combining scattering transforms of different window lengths. This paper argues that multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measuring. We tested the performance of multi-scale scattering transform against other popular methods, with data designed to represent different conditions.
EMBAYMENT CHARACTERISTIC TIME AND BIOLOGY VIA TIDAL PRISM MODEL
Transport time scales in water bodies are classically based on their physical and chemical aspects rather than on their ecological and biological character. The direct connection between a physical time scale and ecological effects has to be investigated in order to quantitativel...
Portable real-time fluorescence cytometry of microscale cell culture analog devices
NASA Astrophysics Data System (ADS)
Kim, Donghyun; Tatosian, Daniel A.; Shuler, Michael L.
2006-02-01
A portable fluorescence cytometric system that provides a modular platform for quantitative real-time image measurements has been used to explore its applicability to investigating cellular events on multiple time scales. For a short time scale, we investigated the real-time dynamics of uptake of daunorubicin, a chemotherapeutic agent, in cultured mouse L-cells in a micro cell culture analog compartment using the fluorescence cytometric system. Green fluorescent protein (GFP) expression, used to monitor induction of pre-specified genes and occurring on a much longer time scale, has also been measured. Here, GFP fluorescence from a doxycycline-inducible promoter in a mouse L-cell line was determined. Additionally, a system based on inexpensive LEDs showed performance comparable to a broadband light source based system and reduced photobleaching compared to microscopic examination.
Li-Yorke Chaos in Hybrid Systems on a Time Scale
NASA Astrophysics Data System (ADS)
Akhmet, Marat; Fen, Mehmet Onur
2015-12-01
By using the reduction technique to impulsive differential equations [Akhmet & Turan, 2006], we rigorously prove the presence of chaos in dynamic equations on time scales (DETS). The results of the present study are based on the Li-Yorke definition of chaos. This is the first time in the literature that chaos is obtained for DETS. An illustrative example is presented by means of a Duffing equation on a time scale.
Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal
1991-11-01
Table-of-contents and figure-list fragments from the scanned report: sections cover the problem and current technology, the time-integrating correlator, representations of the DNA bases, and the DNA analysis strategy; figures depict the DNA bases each represented by a 7-bit-long pseudorandom sequence and the flow of data in a DNA analysis system, with plots on logarithmic and linear scales.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic. It may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
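A bare-bones version of the idea, fitting two scaling regimes to log F(s) versus log s and picking the breakpoint that minimizes the residual sum of squares, is sketched below; it omits the statistical inference and confidence intervals that the proposed regression model provides.

```python
import numpy as np

def find_crossover(log_s, log_F):
    """Fit two straight lines to (log_s, log_F) with a single breakpoint,
    chosen to minimize the total residual sum of squares.

    Minimal scaling-regime identification only; no inference or
    confidence intervals as in the paper's regression model.
    """
    best = (np.inf, None)
    for k in range(3, len(log_s) - 3):            # keep >= 3 points per regime
        sse = 0.0
        for idx in (slice(None, k), slice(k, None)):
            coef = np.polyfit(log_s[idx], log_F[idx], 1)
            resid = log_F[idx] - np.polyval(coef, log_s[idx])
            sse += np.sum(resid ** 2)
        if sse < best[0]:
            best = (sse, k)
    return log_s[best[1]]                          # log of the crossover scale

# Synthetic fluctuation function with a crossover at s = 100.
s = np.logspace(1, 3, 30)
F = np.where(s < 100, s ** 0.8, 100 ** 0.8 * (s / 100) ** 0.4)
print(np.exp(find_crossover(np.log(s), np.log(F))))   # close to 100
```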
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires all state-variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at arbitrary points of the TFM with infinitely many sensors. Considering the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time-scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time-scale virtual sensor are optimized, with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time-scale virtual sensor. PMID:27801840
Once upon a (slow) time in the land of recurrent neuronal networks….
Huang, Chengcheng; Doiron, Brent
2017-10-01
The brain must both react quickly to new inputs as well as store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences. Copyright © 2017 Elsevier Ltd. All rights reserved.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
The multi-scale 2-D Gaussian filter has been widely used in feature extraction (e.g. SIFT, edges), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, its computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid the error propagation induced by spark on clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
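The separability trick mentioned above (one 1-D convolution per axis instead of a full 2-D convolution) can be checked in a few lines of NumPy/SciPy; this is a software model for illustration, not the FPGA pipeline, and the kernel size and sigma are arbitrary.

```python
import numpy as np
from scipy.ndimage import convolve1d, convolve

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

rng = np.random.default_rng(0)
img = rng.random((128, 128))
k1 = gaussian_kernel_1d(sigma=2.0, radius=6)

# Separable filtering: one 1-D convolution per axis...
sep = convolve1d(convolve1d(img, k1, axis=0, mode="reflect"),
                 k1, axis=1, mode="reflect")

# ...matches the full 2-D convolution with the outer-product kernel,
# at a fraction of the multiplier cost (2*K versus K*K taps per pixel).
full = convolve(img, np.outer(k1, k1), mode="reflect")
print(np.allclose(sep, full))   # True
```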
Magnan, Morris A; Maklebust, Joann
2008-01-01
To evaluate the effect of Web-based Braden Scale training on the reliability and precision of pressure ulcer risk assessments made by registered nurses (RNs) working in acute care settings. Pretest-posttest, 2-group, quasi-experimental design. Five hundred Braden Scale risk assessments were made on 102 acute care patients deemed to be at various levels of risk for pressure ulceration. Assessments were made by RNs working in acute care hospitals at 3 different medical centers where the Braden Scale was in regular daily use (2 medical centers) or new to the setting (1 medical center). The Braden Scale for Predicting Pressure Sore Risk was used to guide pressure ulcer risk assessments. A Web-based version of the Detroit Medical Center Braden Scale Computerized Training Module was used to teach nurses correct use of the Braden Scale and selection of risk-based pressure ulcer prevention interventions. In the aggregate, RNs generated reliable Braden Scale pressure ulcer risk assessments 65% of the time after training. The effect of Web-based Braden Scale training on the reliability and precision of assessments varied according to familiarity with the scale. With training, new users of the scale made reliable assessments 84% of the time and significantly improved the precision of their assessments. The reliability and precision of Braden Scale risk assessments made by its regular users were unaffected by training. Technology-assisted Braden Scale training improved both the reliability and precision of risk assessments made by new users of the scale, but had virtually no effect on the reliability or precision of risk assessments made by regular users of the instrument. Further research is needed to determine the best approaches for improving the reliability and precision of Braden Scale assessments made by its regular users.
Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence
NASA Technical Reports Server (NTRS)
Zhou, YE; Matthaeus, William H.
1990-01-01
A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.
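For orientation, the standard Kolmogorov- and Kraichnan-style spectra that such dimensional arguments produce are written below in terms of the nonlinear and Alfven time scales; these are the textbook limiting forms and do not capture the arbitrary cross helicity treated in the paper.

```latex
% Characteristic time scales entering the dimensional argument:
%   nonlinear eddy time  \tau_{nl}(k) \sim 1/(k\,u_k),
%   Alfven time          \tau_A(k)    \sim 1/(k\,V_A).
% Kolmogorov-style closure (\tau_A \gg \tau_{nl}):
\[
  E(k) \;\sim\; C_K\,\epsilon^{2/3}\,k^{-5/3},
\]
% Kraichnan-style closure, where the Alfven time reduces the transfer rate:
\[
  E(k) \;\sim\; C_{IK}\,(\epsilon\,V_A)^{1/2}\,k^{-3/2}.
\]
```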
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at the present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: The TMPA provides reasonable performance at monthly scales, although it is shown to have precipitation rate dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
NASA Astrophysics Data System (ADS)
Wohlmuth, Johannes; Andersen, Jørgen Vitting
2006-05-01
We use agent-based models to study the competition among investors who use trading strategies with different amount of information and with different time scales. We find that mixing agents that trade on the same time scale but with different amount of information has a stabilizing impact on the large and extreme fluctuations of the market. Traders with the most information are found to be more likely to arbitrage traders who use less information in the decision making. On the other hand, introducing investors who act on two different time scales has a destabilizing effect on the large and extreme price movements, increasing the volatility of the market. Closeness in time scale used in the decision making is found to facilitate the creation of local trends. The larger the overlap in commonly shared information the more the traders in a mixed system with different time scales are found to profit from the presence of traders acting at another time scale than themselves.
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
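For reference, a compact version of the classic coarse-graining MSE procedure (not the empirical-mode-decomposition variant proposed above) is sketched below; the embedding dimension m, the tolerance r, and the per-scale recomputation of the tolerance are common but simplifying choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy with embedding dimension m and tolerance r*std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)   # recomputed per (coarse-grained) series for simplicity
    def count(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        # count unordered template pairs within tolerance, excluding self-matches
        return (np.sum(d <= tol) - len(templates)) / 2
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=0.15):
    """Classic coarse-grained MSE (not the EMD-based variant in the paper)."""
    mse = []
    for s in range(1, max_scale + 1):
        n = (len(x) // s) * s
        coarse = np.mean(np.reshape(x[:n], (-1, s)), axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return np.asarray(mse)

rng = np.random.default_rng(0)
print(multiscale_entropy(rng.standard_normal(1000), max_scale=5))
```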
How High Frequency Trading Affects a Market Index
Kenett, Dror Y.; Ben-Jacob, Eshel; Stanley, H. Eugene; Gur-Gershgoren, Gitit
2013-01-01
The relationship between a market index and its constituent stocks is complicated. While an index is a weighted average of its constituent stocks, when the investigated time scale is one day or longer the index has been found to have a stronger effect on the stocks than vice versa. We explore how this interaction changes in short time scales using high frequency data. Using a correlation-based analysis approach, we find that in short time scales stocks have a stronger influence on the index. These findings have implications for high frequency trading and suggest that the price of an index should be published on shorter time scales, as close as possible to those of the actual transaction time scale. PMID:23817553
A wavelet based approach to measure and manage contagion at different time scales
NASA Astrophysics Data System (ADS)
Berger, Theo
2015-10-01
We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine dependence structure of decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies as well as upper and lower tail dependence for different time scales. Second, we demonstrate to which extent the information of different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the return series.
Vandergoot, C.S.; Bur, M.T.; Powell, K.A.
2008-01-01
Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions. © Copyright by the American Fisheries Society 2008.
Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons
Buhusi, Catalin V.; Oprisan, Sorinel A.
2013-01-01
In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clockwise fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively-connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297
Multiscale structure of time series revealed by the monotony spectrum.
Vamoş, Călin
2017-03-01
Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
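The basic building block of the monotony spectrum, splitting a series into monotonic segments and measuring their amplitudes and durations, can be written compactly; the sketch below stops at a single step and omits the successive-averaging loop that generates the full spectrum.

```python
import numpy as np

def monotonic_segments(x):
    """Split a series into maximal monotonic segments and return their
    amplitudes (absolute change in value) and durations (in samples)."""
    x = np.asarray(x, dtype=float)
    sign = np.sign(np.diff(x))
    # indices where the direction of variation changes
    turning = np.where(np.diff(sign) != 0)[0] + 1
    bounds = np.concatenate(([0], turning, [len(x) - 1]))
    amplitudes = np.abs(np.diff(x[bounds]))
    durations = np.diff(bounds)
    return amplitudes, durations

rng = np.random.default_rng(0)
x = rng.standard_normal(1000).cumsum()
amp, dur = monotonic_segments(x)
# One point of the monotony spectrum: mean amplitude vs. mean local time scale.
print(amp.mean(), dur.mean())
```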
Evaluating scale-up rules of a high-shear wet granulation process.
Tao, Jing; Pandey, Preetanshu; Bindra, Dilbir S; Gao, Julia Z; Narang, Ajit S
2015-07-01
This work aimed to evaluate the commonly used scale-up rules for high-shear wet granulation process using a microcrystalline cellulose-lactose-based low drug loading formulation. Granule properties such as particle size, porosity, flow, and tabletability, and tablet dissolution were compared across scales using scale-up rules based on different impeller speed calculations or extended wet massing time. Constant tip speed rule was observed to produce slightly less granulated material at the larger scales. Longer wet massing time can be used to compensate for the lower shear experienced by the granules at the larger scales. Constant Froude number and constant empirical stress rules yielded granules that were more comparable across different scales in terms of compaction performance and tablet dissolution. Granule porosity was shown to correlate well with blend tabletability and tablet dissolution, indicating the importance of monitoring granule densification (porosity) during scale-up. It was shown that different routes can be chosen during scale-up to achieve comparable granule growth and densification by altering one of the three parameters: water amount, impeller speed, and wet massing time. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
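For reference, the impeller-speed relations usually associated with the rules named above can be written in terms of the impeller diameter D; these are the generic textbook forms, with the empirical-stress exponent n left unspecified rather than taken from the study.

```latex
% Impeller speed N_2 at the larger scale (diameter D_2) from the smaller
% scale (N_1, D_1), for the scale-up rules named in the abstract:
\[
  \text{constant tip speed:}\quad N_2 = N_1\,\frac{D_1}{D_2},
  \qquad
  \text{constant Froude number}\ \Bigl(\mathrm{Fr}=\tfrac{N^2 D}{g}\Bigr):\quad
  N_2 = N_1\sqrt{\frac{D_1}{D_2}},
\]
\[
  \text{constant empirical stress:}\quad
  N_2 = N_1\Bigl(\frac{D_1}{D_2}\Bigr)^{n},
  \qquad n\ \text{an empirical exponent between } 0.5 \text{ and } 1.
\]
```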
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. Stochastic version of the extended Kirman's agent based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent based model providing matching macroscopic description serves as a microscopic reasoning of the earlier proposed stochastic model exhibiting power law statistics.
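As background, a minimal simulation of Kirman's original two-state herding model (the starting point being extended above) might look like the following; the switching and herding rates are illustrative, and the variable event time scale introduced in the paper is not included.

```python
import numpy as np

def kirman(n_agents=100, eps=0.01, h=0.02, steps=20000, seed=0):
    """Simulate the jump chain of Kirman's two-state herding model.

    x is the number of agents in state A. At each event one agent switches
    state, with propensities combining idiosyncratic switching (eps) and
    recruitment by the other group (h). Rates here are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = n_agents // 2
    path = np.empty(steps, dtype=int)
    for t in range(steps):
        p_up = (n_agents - x) * (eps + h * x / (n_agents - 1))
        p_down = x * (eps + h * (n_agents - x) / (n_agents - 1))
        if rng.random() < p_up / (p_up + p_down):
            x += 1
        else:
            x -= 1
        path[t] = x
    return path

path = kirman()
# With these illustrative rates, herding produces excursions toward 0 and N.
print(path.min(), path.max())
```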
Estimation of Time Scales in Unsteady Flows in a Turbomachinery Rig
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.
2004-01-01
Time scales in turbulent and transitional flow provide a link between experimental data and modeling, both in terms of physical content and for quantitative assessment. The problem of interest here is the definition of time scales in an unsteady flow. Using representative samples of data from GEAE low pressure turbine experiment in low speed research turbine facility with wake-induced transition, we document several methods to extract dominant frequencies, and compare the results. We show that conventional methods of time scale evaluation (based on autocorrelation functions and on Fourier spectra) and wavelet-based methods provide similar information when applied to stationary signals. We also show the greater flexibility of the wavelet-based methods when dealing with intermittent or strongly modulated data, as are encountered in transitioning boundary layers and in flows with unsteady forcing associated with wake passing. We define phase-averaged dominant frequencies that characterize the turbulence associated with freestream conditions and with the passing wakes downstream of a rotor. The relevance of these results for modeling is discussed in the paper.
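The conventional estimates referred to above (an integral time scale from the autocorrelation function and a dominant frequency from the Fourier spectrum) are sketched below on a synthetic signal; this is not the wavelet-based, phase-averaged procedure applied to the turbine data.

```python
import numpy as np

def integral_time_scale(x, dt):
    """Integral time scale: integrate the autocorrelation up to its first zero."""
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    first_zero = np.argmax(acf <= 0)          # index of first non-positive lag
    return np.sum(acf[:first_zero]) * dt

def dominant_frequency(x, dt):
    """Dominant frequency from the peak of the one-sided Fourier spectrum."""
    freqs = np.fft.rfftfreq(len(x), dt)
    spectrum = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    return freqs[np.argmax(spectrum)]

dt = 1e-3
t = np.arange(0, 2, dt)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 25 * t) + 0.3 * rng.standard_normal(t.size)
print(dominant_frequency(x, dt))      # ~25 Hz
print(integral_time_scale(x, dt))
```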
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference with conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, for which a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
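A minimal sketch of the inter-amount-time sampling described above: flow is accumulated over time and the record is resampled at fixed volume increments, so an IAT is the time needed to accumulate each increment. The synthetic hydrograph, units, and increment size are assumptions for illustration.

```python
import numpy as np

def inter_amount_times(t, q, delta_v):
    """Times needed to accumulate successive fixed flow amounts delta_v.

    t : sample times; q : flow rate at those times (delta_v in the same amount units).
    """
    # cumulative amount by trapezoidal integration
    v = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))))
    levels = np.arange(delta_v, v[-1], delta_v)
    crossing_times = np.interp(levels, v, t)        # time at which each amount level is reached
    return np.diff(np.concatenate(([t[0]], crossing_times)))

# illustrative hydrograph: baseflow plus two storm peaks (hours)
t = np.arange(0, 10 * 24, 0.25)
q = 0.5 + 4.0 * np.exp(-((t - 48) / 6.0) ** 2) + 2.0 * np.exp(-((t - 150) / 10.0) ** 2)
iats = inter_amount_times(t, q, delta_v=5.0)
print(len(iats), iats.min(), iats.max())            # short IATs sample the peaks densely
```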
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, S. Y.
Presentation on real-time imaging of plant cell wall structure at nanometer scale. Objectives are to develop tools to measure biomass at the nanometer scale; elucidate the molecular bases of biomass deconstruction; and identify factors that affect the conversion efficiency of biomass-to-biofuels.
Kevane, C J
1961-02-24
A cosmological model based on a gravitational plasma of matter and antimatter is discussed. The antigravitational interaction of matter and antimatter leads to segregation and an expansion of the plasma universe. The expansion time scale is controlled by the aggregation time scale.
Optimization and large scale computation of an entropy-based moment closure
NASA Astrophysics Data System (ADS)
Garrett, C. Kristopher; Hauck, Cory; Hill, Judith
2015-12-01
We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication-bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
Trend Switching Processes in Financial Markets
NASA Astrophysics Data System (ADS)
Preis, Tobias; Stanley, H. Eugene
For an intriguing variety of switching processes in nature, the underlying complex system abruptly changes at a specific point from one state to another in a highly discontinuous fashion. Financial market fluctuations are characterized by many abrupt switchings creating increasing trends ("bubble formation") and decreasing trends ("bubble collapse"), on time scales ranging from macroscopic bubbles persisting for hundreds of days to microscopic bubbles persisting only for very short time scales. Our analysis is based on a German DAX Future database containing 13,991,275 transactions recorded with a time resolution of 10^-2 s. For a parallel analysis, we use a database of all S&P500 stocks providing 2,592,531 daily closing prices. We ask whether these ubiquitous switching processes have quantifiable features independent of the time horizon studied. We find striking scale-free behavior of the volatility after each switching occurs. We interpret our findings as being consistent with time-dependent collective behavior of financial market participants. We test the possible universality of our result by performing a parallel analysis of fluctuations in transaction volume and time intervals between trades. We show that these financial market switching processes have features similar to those present in phase transitions. We find that the well-known catastrophic bubbles that occur on large time scales - such as the most recent financial crisis - are no outliers but in fact single dramatic representatives caused by the formation of upward and downward trends on time scales varying over nine orders of magnitude from the very large down to the very small.
Real-time quantum cascade laser-based infrared microspectroscopy in-vivo
NASA Astrophysics Data System (ADS)
Kröger-Lui, N.; Haase, K.; Pucci, A.; Schönhals, A.; Petrich, W.
2016-03-01
Infrared microscopy can be performed to observe dynamic processes on a microscopic scale. Fourier-transform infrared spectroscopy-based microscopes are bound to limitations regarding time resolution, which hampers their potential for imaging fast-moving systems. In this manuscript we present a quantum cascade laser-based infrared microscope which overcomes these limitations and readily achieves standard video frame rates. The capabilities of our setup are demonstrated by observing dynamical processes at their specific time scales: fermentation, slow-moving Amoeba proteus and fast-moving Caenorhabditis elegans. Mid-infrared sampling intervals between 30 min and 20 ms are demonstrated.
Wadud, Zahid; Hussain, Sajjad; Javaid, Nadeem; Bouk, Safdar Hussain; Alrajeh, Nabil; Alabed, Mohamad Souheil; Guizani, Nadra
2017-09-30
Industrial Underwater Acoustic Sensor Networks (IUASNs) come with intrinsic challenges like long propagation delay, small bandwidth, large energy consumption, three-dimensional deployment, and high deployment and battery replacement cost. Any routing strategy proposed for an IUASN must take these constraints into account. The vector based forwarding schemes in the literature forward data packets to the sink using the holding time and location information of the sender, forwarder, and sink nodes. Holding time suppresses data broadcasts; however, it fails to keep energy and delay fairness in the network. To achieve this, we propose an Energy Scaled and Expanded Vector-Based Forwarding (ESEVBF) scheme. ESEVBF uses the residual energy of the node to scale, and the vector pipeline distance ratio to expand, the holding time. The resulting scaled and expanded holding times of all forwarding nodes differ significantly, which avoids multiple forwarding, reducing energy consumption and improving energy balancing in the network. If a node has the minimum holding time among its neighbors, it shrinks the holding time and quickly forwards the data packets upstream. The performance of ESEVBF is analyzed in network scenarios with and without node mobility to ensure its effectiveness. Simulation results show that ESEVBF has low energy consumption, fewer forwarded data copies, and lower end-to-end delay.
Scale-Up of Lubricant Mixing Process by Using V-Type Blender Based on Discrete Element Method.
Horibe, Masashi; Sonoda, Ryoichi; Watano, Satoru
2018-01-01
A method for scale-up of a lubricant mixing process in a V-type blender was proposed. Magnesium stearate was used as the lubricant, and the lubricant mixing experiment was conducted using three scales of V-type blenders (1.45, 21 and 130 L) under the same fill level and Froude (Fr) number. However, the properties of the lubricated mixtures and tablets did not correspond across scales when matched by mixing time or total revolution number. To find the optimum scale-up factor, discrete element method (DEM) simulations of the three blender scales were conducted, and the total travel distance of particles at the different scales was calculated. The properties of the lubricated mixture and tablets obtained from the scale-up experiment were well correlated with the mixing time determined by the total travel distance. It was found that a scale-up simulation based on the travel distance of particles is valid for lubricant mixing scale-up processes.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
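The stiff/slow splitting idea can be illustrated with a toy scalar problem integrated by a first-order IMEX (implicit-explicit) step: the stiff "acoustic" part is treated implicitly and the slow "advective" part explicitly, so the step size is set by the slow scale. This linear scalar example and its coefficients are assumptions for illustration, not the paper's additive Runge-Kutta scheme.

```python
import numpy as np

def imex_euler(u0, k_slow, k_fast, dt, n_steps):
    """First-order IMEX step for du/dt = -k_slow*u - k_fast*u.

    Slow term advanced explicitly, stiff term implicitly:
    u_{n+1} = (u_n - dt*k_slow*u_n) / (1 + dt*k_fast).
    """
    u = np.empty(n_steps + 1)
    u[0] = u0
    for n in range(n_steps):
        u[n + 1] = (u[n] - dt * k_slow * u[n]) / (1.0 + dt * k_fast)
    return u

# stiff "acoustic" rate vs. slow "advective" rate; dt chosen from the slow scale only
u = imex_euler(u0=1.0, k_slow=1.0, k_fast=1.0e4, dt=0.1, n_steps=50)
print(u[:5])   # remains stable even though dt*k_fast >> 1
```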
Comparison of detrending methods for fluctuation analysis in hydrology
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Chen, Yongqin David
2011-03-01
Trends within a hydrologic time series can significantly influence the scaling results of fluctuation analysis, such as rescaled range (RS) analysis and (multifractal) detrended fluctuation analysis (MF-DFA). Therefore, removal of trends is important in the study of scaling properties of the time series. In this study, three detrending methods, including the adaptive detrending algorithm (ADA), the Fourier-based method, and the average removing technique, were evaluated by analyzing numerically generated series and observed streamflow series with an obvious, relatively regular periodic trend. Results indicated that: (1) the Fourier-based detrending method and ADA were similar in detrending practice, and given proper parameters, these two methods can produce similarly satisfactory results; (2) series detrended by the Fourier-based method and ADA lose the fluctuation information at larger time scales, and the location of crossover points is heavily impacted by the chosen parameters of these two methods; and (3) the average removing method has an advantage over the other two methods, i.e., the fluctuation information at larger time scales is kept well, an indication of relatively reliable performance in detrending. In addition, the average removing method performed reasonably well in detrending a time series with regular periods or trends. In this sense, the average removing method should be preferred in the study of scaling properties of hydrometeorological series with a relatively regular periodic trend using MF-DFA.
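A minimal DFA sketch (the monofractal, first-order-detrending case, applied to a synthetic signal) may help make concrete the fluctuation function whose scaling the compared detrending methods modify; the scale range and signal are illustrative assumptions.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: fluctuation function F(s) for each scale s."""
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial detrending
            rms.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(rms)))
    return np.array(fluctuations)

x = np.random.randn(10_000)                       # white noise: expected exponent ~0.5
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(round(alpha, 2))
```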
A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nomura, K; Seymour, R; Wang, W
2009-02-17
A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high-complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on a hybrid implementation combining message passing and critical-section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e., petaflops·day of computing) is estimated as NT = 2.14 (e.g., N = 2.14 million atoms for T = 1 microsecond).
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important to improve security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), when the offender follows the victim, or when he interacts with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators in finding threatening behavior and enrich the selection of videos that are to be observed.
NASA Astrophysics Data System (ADS)
Nogueira, Miguel
2018-02-01
Spectral analysis of global-mean precipitation, P, evaporation, E, precipitable water, W, and surface temperature, Ts, revealed significant variability from sub-daily to multi-decadal time-scales, superposed on high-amplitude diurnal and yearly peaks. Two distinct regimes emerged from a transition in the spectral exponents, β: a weather regime covering time-scales < 10 days with β ≥ 1, and a macroweather regime extending from a few months to a few decades with 0 < β < 1. Additionally, the spectra showed a generally good statistical agreement amongst several different model- and satellite-based datasets. Detrended cross-correlation analysis (DCCA) revealed three important results which are robust across all datasets: (1) the Clausius-Clapeyron (C-C) relationship is the dominant mechanism of W non-periodic variability at multi-year time-scales; (2) C-C is not the dominant control of W, P or E non-periodic variability at time-scales below about 6 months, where the weather regime is approached and other mechanisms become important; (3) C-C is not a dominant control for P or E over land throughout the entire time-scale range considered. Furthermore, it is suggested that the atmosphere and oceans start to act as a single coupled system at time-scales > 1-2 years, while at time-scales < 6 months they are not the dominant drivers of each other. For global-ocean and full-globe averages, ρDCCA showed a large spread of the C-C importance for P and E variability amongst different datasets at multi-year time-scales, ranging from negligible (< 0.3) to high (0.6-0.8) values. Hence, state-of-the-art climate datasets have significant uncertainties in the representation of macroweather precipitation and evaporation variability and its governing mechanisms.
Divisions of Geologic Time - Major Chronostratigraphic and Geochronologic Units
2007-01-01
Effective communication in the geosciences requires consistent use of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and calibrated in years (Harland and others, 1982). Over the years, the development of new dating methods and refinement of previous ones have stimulated revisions to geologic time scales. Since the mid-1990s, geologists from the U.S. Geological Survey (USGS), State geological surveys, academia, and other organizations have sought a consistent time scale to be used in communicating ages of geologic units in the United States. Many international debates have occurred over names and boundaries of units, and various time scales have been used by the geoscience community.
Liu, Huiyu; Zhang, Mingyang; Lin, Zhenshan
2017-10-05
Climate changes are considered to significantly impact net primary productivity (NPP). However, there are few studies on how climate changes at multiple time scales impact NPP. Using the MODIS NPP product and station-based observations of sunshine duration, annual average temperature and annual precipitation, the impacts of climate changes at different time scales on annual NPP have been studied with the EEMD (ensemble empirical mode decomposition) method in the Karst area of northwest Guangxi, China, during 2000-2013. Moreover, with a partial least squares regression (PLSR) model, the relative importance of climatic variables for annual NPP has been explored. The results show that (1) only at the quasi-3-year time scale do sunshine duration and temperature have significantly positive relations with NPP; (2) annual precipitation has no significant relation to NPP by direct comparison, but a significantly positive relation at the 5-year time scale; the lack of a direct relation arises because the 5-year time scale is not the dominant scale of precipitation; (3) the changes of NPP may be dominated by inter-annual variability; and (4) multiple-time-scale analysis greatly improves the performance of the PLSR model for estimating NPP. The variable importance in projection (VIP) scores of sunshine duration and temperature at the quasi-3-year time scale, and of precipitation at the quasi-5-year time scale, are greater than 0.8, indicating that they were important for NPP during 2000-2013. However, sunshine duration and temperature at the quasi-3-year time scale are much more important. Our results underscore the importance of multiple-time-scale analysis for revealing the relations of NPP to a changing climate.
NASA Astrophysics Data System (ADS)
Wu, Yue; Shang, Pengjian; Li, Yilong
2018-03-01
A modified multiscale sample entropy measure based on symbolic representation and similarity (MSEBSS) is proposed in this paper to research the complexity of stock markets. The modified algorithm reduces the probability of inducing undefined entropies and is confirmed to be robust to strong noise. Considering validity and accuracy, MSEBSS is more reliable than multiscale entropy (MSE) for time series mingled with much noise, such as financial time series. We apply MSEBSS to financial markets and the results show that American stock markets have the lowest complexity compared with European and Asian markets. There are exceptions to the regularity that stock markets show a decreasing complexity over the time scale, indicating a periodicity at certain scales. Based on MSEBSS, we introduce the modified multiscale cross-sample entropy measure based on symbolic representation and similarity (MCSEBSS) to consider the degree of asynchrony between distinct time series. Stock markets from the same area have higher synchrony than those from different areas. For stock markets having relatively high synchrony, the entropy values decrease with increasing scale factor, while for stock markets having high asynchrony, the entropy values do not always decrease with increasing scale factor; sometimes they tend to increase. So both MSEBSS and MCSEBSS are able to distinguish stock markets of different areas, and they are more helpful if used together for studying other features of financial time series.
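For reference, the baseline multiscale entropy procedure that MSEBSS modifies couples a coarse-graining step with sample entropy. The sketch below implements that standard baseline (not the symbolic MSEBSS variant itself); the toy return series and the parameters m = 2 and r = 0.15·SD are illustrative assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (multiscale coarse-graining)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """SampEn: -ln(A/B), with A and B counting template matches of length m+1 and m."""
    x = np.asarray(x, float)
    tol = r * np.std(x)
    n = len(x)
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ - templ[i]), axis=1)
            count += np.sum(d <= tol) - 1            # exclude self-match
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

returns = 0.01 * np.random.randn(6000)               # toy return series
mse = [sample_entropy(coarse_grain(returns, s)) for s in range(1, 11)]
print(np.round(mse, 3))
```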
Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild
Broell, Franziska; Taggart, Christopher T.
2015-01-01
This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently', is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...
2017-02-16
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and space (memory)-consuming due to the extremely large normal matrix arising from the data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 datasets of real data are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data processing.
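The solver structure described above can be sketched with SciPy's sparse conjugate gradient and a simple Jacobi (diagonal) preconditioner standing in for the paper's block-based compression; the small random system is purely illustrative and is not a photogrammetric normal matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# illustrative sparse, symmetric positive-definite "normal matrix"
n = 2000
A = sp.random(n, n, density=1e-3, format="csr")
A = A @ A.T + sp.identity(n)                      # make it SPD
b = np.random.randn(n)

# Jacobi preconditioner: M^{-1} approximated by the inverse diagonal of A
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))            # info == 0 means convergence
```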
NASA Astrophysics Data System (ADS)
Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.
2017-12-01
Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ" normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ"; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore-scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of those samples characterized by high pore-volume-normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high-Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field-scale geophysical datasets. Two observations favor the implementation of the σ"-based model over the τ-based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ" is readily measured using field geophysical instrumentation (at a single frequency), whereas τ requires broadband spectral measurements that are extremely challenging and time-consuming to acquire accurately in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either of the CC permeability models for k estimation.
Yasuhara, Moriaki; Doi, Hideyuki; Wei, Chih-Lin; Danovaro, Roberto; Myhre, Sarah E
2016-05-19
The link between biodiversity and ecosystem functioning (BEF) over long temporal scales is poorly understood. Here, we investigate biological monitoring and palaeoecological records on decadal, centennial and millennial time scales from a BEF framework by using deep sea, soft-sediment environments as a test bed. Results generally show positive BEF relationships, in agreement with BEF studies based on present-day spatial analyses and short-term manipulative experiments. However, the deep-sea BEF relationship is much noisier across longer time scales compared with modern observational studies. We also demonstrate with palaeoecological time-series data that a larger species pool does not enhance ecosystem stability through time, whereas higher abundance as an indicator of higher ecosystem functioning may enhance ecosystem stability. These results suggest that BEF relationships are potentially time scale-dependent. Environmental impacts on biodiversity and ecosystem functioning may be much stronger than biodiversity impacts on ecosystem functioning at long, decadal-millennial, time scales. Longer time scale perspectives, including palaeoecological and ecosystem monitoring data, are critical for predicting future BEF relationships on a rapidly changing planet. © 2016 The Author(s).
NASA Astrophysics Data System (ADS)
Imai, M.; Kouyama, T.; Takahashi, Y.; Watanabe, S.; Yamazaki, A.; Yamada, M.; Nakamura, M.; Satoh, T.; Imamura, T.; Nakaoka, T.; Kawabata, M.; Yamanaka, M.; Kawabata, K. S.
2017-12-01
Venus has a global cloud layer, and its atmosphere rotates at speeds over 100 m/s. Scattering of solar radiation and an absorber in the clouds cause strong dark and bright contrast in the unknown 365 nm absorption band. The Japanese Venus orbiter AKATSUKI and its onboard instrument UVI capture 100 km mesoscale cloud features over the entire visible dayside area. In contrast, planetary-scale features are observed when the orbiter is at a moderate distance from Venus and when the Sun-Venus-orbiter phase angle is smaller than 45 deg. Cloud-top wind velocity has been measured with the mesoscale cloud tracking technique; however, observations of the propagation velocity of planetary-scale features and its variation are not well established because of the limitation of the observable area. The purpose of this study is to measure the effect of wind acceleration by planetary-scale waves. The motions of mesoscale and planetary-scale features represent the wind and the phase velocity of the planetary-scale waves, respectively. We conducted simultaneous observations of the zonal motion of both mesoscale and planetary-scale features using UVI/AKATSUKI and the ground-based Pirka and Kanata telescopes in Japan. Our previous ground-based observation revealed a periodicity change of planetary-scale waves with a time scale of a couple of months. For the initial analysis of UVI images, we used the time-consecutive images taken in orbit #32. During this orbit (from Nov. 13 to 20, 2016), 7 images were obtained with a 2 hr time interval in a day, with spatial resolutions ranging from 10-35 km. To investigate the typical mesoscale cloud motion, Gaussian filters with sigma = 3 deg. were used to smooth geometrically mapped images with 0.25 deg. resolution. Then the amount of zonal shift for each 5 deg. latitudinal band between pairs of time-consecutive images was estimated by searching for the 2D cross-correlation maximum. The final wind velocity (or rotation period) for mesoscale features was determined with a small error of about +/- 0.1 day in period in the equatorial region (Figure 2). The same method will be applied to planetary-scale features captured by UVI, and ground-based observations compensate for the discontinuity in UVI data. At the presentation, the variability in winds and wave propagation velocity with a time scale of a couple of months will be shown.
Real-time gray-scale photolithography for fabrication of continuous microstructure
NASA Astrophysics Data System (ADS)
Peng, Qinjun; Guo, Yongkang; Liu, Shijie; Cui, Zheng
2002-10-01
A novel real-time gray-scale photolithography technique for the fabrication of continuous microstructures that uses an LCD panel as a real-time gray-scale mask is presented. The design principle of the technique is explained, and computer simulation results based on partially coherent imaging theory are given for the patterning of a microlens array and a zigzag grating. An experiment is set up, and a microlens array and a zigzag grating on panchromatic silver halide sensitized gelatin with trypsinase etching are obtained.
Groundwater similarity across a watershed derived from time-warped and flow-corrected time series
NASA Astrophysics Data System (ADS)
Rinderer, M.; McGlynn, B. L.; van Meerveld, H. J.
2017-05-01
Information about catchment-scale groundwater dynamics is necessary to understand how catchments store and release water and why water quantity and quality varies in streams. However, groundwater level monitoring is often restricted to a limited number of sites. Knowledge of the factors that determine similarity between monitoring sites can be used to predict catchment-scale groundwater storage and connectivity of different runoff source areas. We used distance-based and correlation-based similarity measures to quantify the spatial and temporal differences in shallow groundwater similarity for 51 monitoring sites in a Swiss prealpine catchment. The 41-month-long time series were preprocessed using Dynamic Time Warping and a Flow-corrected Time Transformation to account for small timing differences and bias toward low-flow periods. The mean distance-based groundwater similarity was correlated to topographic indices, such as upslope contributing area, topographic wetness index, and local slope. Correlation-based similarity was less related to landscape position but instead revealed differences between seasons. Analysis of variance and partial Mantel tests showed that landscape position, represented by the topographic wetness index, explained 52% of the variability in mean distance-based groundwater similarity, while spatial distance, represented by the Euclidean distance, explained only 5%. The variability in distance-based similarity and correlation-based similarity between groundwater and streamflow time series was significantly larger for midslope locations than for other landscape positions. This suggests that groundwater dynamics at these midslope sites, which are important to understand runoff source areas and hydrological connectivity at the catchment scale, are most difficult to predict.
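A compact dynamic-time-warping distance, as used in the preprocessing step above, can be written in a few lines; the two toy groundwater-level series are assumptions for illustration, and the study's flow-corrected time transformation is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

t = np.linspace(0, 20, 400)
gw_site1 = np.sin(t) + 0.1 * np.random.randn(t.size)
gw_site2 = np.sin(t - 0.4) + 0.1 * np.random.randn(t.size)   # slightly lagged response
print(dtw_distance(gw_site1, gw_site2))
```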
Compression based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
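A rough stand-in for the compression-entropy idea, using zlib's LZ77-based DEFLATE rather than the exact algorithm of the paper, applied to coarse-grained, quantized RR series; the quantization step, bin count, and simulated RR intervals are assumptions for illustration.

```python
import zlib
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def compression_entropy(x, n_bins=64):
    """Compressed size / original size of a quantized series (LZ-type proxy)."""
    q = np.digitize(x, np.linspace(x.min(), x.max(), n_bins)).astype(np.uint8)
    raw = q.tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rr = 0.8 + 0.05 * np.random.randn(5000)               # simulated RR intervals [s]
print([round(compression_entropy(coarse_grain(rr, s)), 3) for s in (1, 2, 4, 8)])
```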
The detection of local irreversibility in time series based on segmentation
NASA Astrophysics Data System (ADS)
Teng, Yue; Shang, Pengjian
2018-06-01
We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection makes it possible to evaluate the displacement of irreversibility toward local skewness. By means of this method, we can effectively discuss the local irreversible fluctuations of time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to the increase of the multiple scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility for different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
NASA Astrophysics Data System (ADS)
Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.
2003-10-01
Recently, many single flux quantum (SFQ) logic circuits containing several thousands of Josephson junctions have been designed successfully by using digital-domain simulation based on a hardware description language (HDL). In the present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are made up of basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate how to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing their behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
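The serial building block that SPKMC parallelizes is the standard rejection-free KMC step: pick an event with probability proportional to its rate and advance time by an exponentially distributed increment. The rates below are placeholders; the paper's heme-to-heme electron transfer model is not reproduced.

```python
import numpy as np

def kmc_step(rates, rng):
    """One rejection-free KMC step: returns (chosen event index, time increment)."""
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    dt = -np.log(rng.random()) / total               # exponential waiting time
    return event, dt

rng = np.random.default_rng(0)
rates = np.array([1.0, 0.5, 0.1, 2.0])               # placeholder transition rates
t, counts = 0.0, np.zeros_like(rates)
for _ in range(10_000):
    e, dt = kmc_step(rates, rng)
    counts[e] += 1
    t += dt
print(t, counts / counts.sum())                      # event fractions approach rates/total
```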
Multi-Spatiotemporal Patterns of Residential Burglary Crimes in Chicago: 2006-2016
NASA Astrophysics Data System (ADS)
Luo, J.
2017-10-01
This research attempts to explore the patterns of burglary crimes at multiple spatiotemporal scales in Chicago between 2006 and 2016. Two spatial scales are investigated: census block and police beat area. At each spatial scale, three temporal scales are integrated to make spatiotemporal slices: an hourly scale with a two-hour time step from 12:00 am to the end of the day; a daily scale with a one-day step from Sunday to Saturday within a week; and a monthly scale with a one-month step from January to December. A total of six types of spatiotemporal slices will be created as the base for the analysis. Burglary crimes are spatiotemporally aggregated to the slices based on where and when they occurred. For each type of spatiotemporal slice with burglary occurrences integrated, a spatiotemporal neighborhood will be defined and managed in a spatiotemporal matrix. Hot-spot analysis will identify spatiotemporal clusters for each type of slice. Spatiotemporal trend analysis is conducted to indicate how the clusters shift in space and time. The analysis results will provide helpful information for better targeted policing and crime prevention policy, such as police patrol scheduling regarding the times and places covered.
Liquidity spillover in international stock markets through distinct time scales.
Righi, Marcelo Brutti; Vieira, Kelmara Mendes
2014-01-01
This paper identifies liquidity spillovers through different time scales based on a wavelet multiscaling method. We decompose daily data from U.S., British, Brazilian and Hong Kong stock market indices in order to calculate the scale correlation between their illiquidities. The sample is divided in order to consider non-crisis, sub-prime crisis and Eurozone crisis periods. We find that there are changes in correlations at distinct scales and in different periods. Association at the finest scales is smaller than at coarse scales, and there is a rise in associations in periods of crisis. In terms of frequencies, significant distinctions predominantly involve the coarsest scale, while for crisis periods distinctions predominate at the finest scale.
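The per-scale correlation idea can be sketched with a discrete wavelet decomposition, using PyWavelets (an assumed dependency) as a stand-in for the paper's wavelet multiscaling method, applied to two synthetic illiquidity series.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

def scale_correlations(x, y, wavelet="db4", level=5):
    """Correlation between the detail coefficients of x and y at each scale."""
    cx = pywt.wavedec(x, wavelet, level=level)
    cy = pywt.wavedec(y, wavelet, level=level)
    # skip the approximation coefficients (index 0); finest details come last
    return [float(np.corrcoef(a, b)[0, 1]) for a, b in zip(cx[1:], cy[1:])]

n = 4096
common = np.random.randn(n)
illiq_a = common + 0.8 * np.random.randn(n)           # toy illiquidity measures
illiq_b = common + 0.8 * np.random.randn(n)
print(np.round(scale_correlations(illiq_a, illiq_b), 2))
```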
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
Transition Manifolds of Complex Metastable Systems
NASA Astrophysics Data System (ADS)
Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof
2018-04-01
We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model
NASA Astrophysics Data System (ADS)
Lvov, Yuri V.; Onorato, Miguel
2018-04-01
We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.
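For reference (the abstract does not restate it), the β-FPUT chain in its standard form evolves N unit masses coupled by quadratic plus quartic springs; the choice of fixed or periodic boundary conditions is left open here.

```latex
\ddot{q}_j \;=\; \bigl(q_{j+1} - 2q_j + q_{j-1}\bigr)
\;+\; \beta\Bigl[\bigl(q_{j+1}-q_j\bigr)^{3} - \bigl(q_j - q_{j-1}\bigr)^{3}\Bigr],
\qquad j = 1,\dots,N .
```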
Biointerface dynamics--Multi scale modeling considerations.
Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko
2015-08-01
The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e., a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in the generation of resistance stress, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect various multi-scale modeling approaches over a range of time and space scales which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing the generation of polymer matrix resistance stress within the interface and, on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.
Anomalous diffusion for bed load transport with a physically-based model
NASA Astrophysics Data System (ADS)
Fan, N.; Singh, A.; Foufoula-Georgiou, E.; Wu, B.
2013-12-01
Diffusion of bed load particles shows both normal and anomalous behavior at different spatial-temporal scales. Understanding and quantifying these different types of diffusion is important not only for the development of theoretical models of particle transport but also for practical purposes, e.g., river management. Here we extend a recently proposed physically-based model of particle transport by Fan et al. [2013] to further develop an Episodic Langevin equation (ELE) for individual particle motion, which reproduces the episodic movement (start and stop) of sediment particles. Using the proposed ELE we simulate particle movements for a large number of uniform-size particles, incorporating different probability distribution functions (PDFs) of particle waiting time. For exponential PDFs of waiting times, particles reveal ballistic motion at short time scales and turn to normal diffusion at long time scales. The PDF of simulated particle travel distances also shows a change in its shape from exponential to Gamma to Gaussian with a change in timescale, implying different diffusion scaling regimes. For power-law PDFs (with exponent -μ) of waiting times, the asymptotic behavior of particles at long time scales reveals both super-diffusion and sub-diffusion; however, only very heavy-tailed waiting times (i.e., 1.0 < μ < 1.5) could result in sub-diffusion. We suggest that the contrast between our results and previous studies (e.g., studies based on fractional advection-diffusion models of thin/heavy-tailed particle hops and waiting times) could be due to the assumption in those studies that hops are achieved instantaneously; in reality, particles achieve their hops within finite times (as we simulate here) rather than instantaneously, even if the hop times are much shorter than the waiting times. In summary, this study stresses the need to rethink alternatives to previous models, such as fractional advection-diffusion equations, for studying the anomalous diffusion of bed load particles. The implications of these results for modeling sediment transport are discussed.
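A toy version of the episodic-motion idea described above: particles alternate exponentially distributed rests and finite-duration hops of random direction at constant speed, and the ensemble mean squared displacement is sampled over time. All parameter values are illustrative and this sketch is not the authors' ELE model.

```python
import numpy as np

def simulate_particle(t_max, rng, mean_wait=1.0, mean_hop=0.1, speed=1.0):
    """Piecewise trajectory: rest (position constant), then a hop of finite duration."""
    times, pos = [0.0], [0.0]
    while times[-1] < t_max:
        wait = rng.exponential(mean_wait)
        hop = rng.exponential(mean_hop)
        step = rng.choice((-1.0, 1.0)) * speed * hop      # moves only while hopping
        times += [times[-1] + wait, times[-1] + wait + hop]
        pos += [pos[-1], pos[-1] + step]
    return np.array(times), np.array(pos)

rng = np.random.default_rng(1)
sample_t = np.logspace(-1, 2.5, 25)
disp2 = np.zeros_like(sample_t)
n_particles = 1000
for _ in range(n_particles):
    t, x = simulate_particle(sample_t[-1], rng)
    disp2 += np.interp(sample_t, t, x) ** 2
msd = disp2 / n_particles
slopes = np.diff(np.log(msd)) / np.diff(np.log(sample_t))
print(np.round(slopes, 2))   # slope of log MSD vs. log t approaches ~1 (normal diffusion) at long times
```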
The role of topography on catchment‐scale water residence time
McGuire, K.J.; McDonnell, Jeffery J.; Weiler, M.; Kendall, C.; McGlynn, B.L.; Welker, J.M.; Seibert, J.
2005-01-01
The age, or residence time, of water is a fundamental descriptor of catchment hydrology, revealing information about the storage, flow pathways, and source of water in a single integrated measure. While there has been tremendous recent interest in residence time estimation to characterize watersheds, there are relatively few studies that have quantified residence time at the watershed scale, and fewer still that have extended those results beyond single catchments to larger landscape scales. We examined topographic controls on residence time for seven catchments (0.085–62.4 km2) that represent diverse geologic and geomorphic conditions in the western Cascade Mountains of Oregon. Our primary objective was to determine the dominant physical controls on catchment‐scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network. We found that base flow mean residence times for exponential distributions ranged from 0.8 to 3.3 years. Mean residence time showed no correlation to basin area (r2 < 0.01) but instead was correlated (r2 = 0.91) to catchment terrain indices representing the flow path distance and flow path gradient to the stream network. These results illustrate that landscape organization (i.e., topography) rather than basin area controls catchment‐scale transport. Results from this study may provide a framework for describing scale‐invariant transport across climatic and geologic conditions, whereby the internal form and structure of the basin defines the first‐order control on base flow residence time.
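The convolution approach referred to above can be illustrated with an exponential transit time distribution, g(τ) = exp(-τ/T)/T: the stream isotope signal is the precipitation signal convolved with g. The sinusoidal input, weekly time step, and mean residence time are assumptions for illustration.

```python
import numpy as np

def stream_signal(precip_delta, dt, mean_residence_time):
    """Convolve a precipitation isotope series with an exponential transit time distribution."""
    tau = np.arange(0, 10 * mean_residence_time, dt)
    g = np.exp(-tau / mean_residence_time) / mean_residence_time   # TTD, integrates to ~1
    return np.convolve(precip_delta, g)[: len(precip_delta)] * dt

dt = 1.0                                             # weeks
t = np.arange(0, 5 * 52, dt)
precip = -10.0 + 3.0 * np.sin(2 * np.pi * t / 52.0)  # seasonal delta-18O cycle (per mil)
stream = stream_signal(precip - precip.mean(), dt, mean_residence_time=1.5 * 52) + precip.mean()
ratio = (stream[-52:].max() - stream[-52:].min()) / (precip[-52:].max() - precip[-52:].min())
print(round(ratio, 2))                               # seasonal damping grows with residence time
```

The damping of the seasonal amplitude is what lets the isotope data constrain the mean residence time in such convolution models.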
Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios
NASA Astrophysics Data System (ADS)
Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui
2018-01-01
The multi-scale method is widely used in analyzing time series of financial markets, and it can provide market information for different economic entities who focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationship between each pair of time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
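The threshold-pruning step can be sketched as follows, using NetworkX (an assumed dependency) on a toy correlation matrix of fluctuation series; the threshold value and series are illustrative, not the paper's coal-company data.

```python
import numpy as np
import networkx as nx   # assumed dependency

def correlation_network(series, threshold=0.6):
    """Build a graph keeping only edges whose |correlation| meets the threshold."""
    corr = np.corrcoef(series)                       # series: (n_stocks, n_observations)
    g = nx.Graph()
    g.add_nodes_from(range(len(series)))
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(i, j, weight=corr[i, j])
    return g

n_stocks, n_obs = 12, 500
common = np.random.randn(n_obs)
series = np.array([0.7 * common + 0.7 * np.random.randn(n_obs) for _ in range(n_stocks)])
g = correlation_network(series, threshold=0.4)
print(g.number_of_nodes(), g.number_of_edges(), round(nx.density(g), 2))
```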
The Development of Time-Based Prospective Memory in Childhood: The Role of Working Memory Updating
ERIC Educational Resources Information Center
Voigt, Babett; Mahy, Caitlin E. V.; Ellis, Judi; Schnitzspahn, Katharina; Krause, Ivonne; Altgassen, Mareike; Kliegel, Matthias
2014-01-01
This large-scale study examined the development of time-based prospective memory (PM) across childhood and the roles that working memory updating and time monitoring play in driving age effects in PM performance. One hundred and ninety-seven children aged 5 to 14 years completed a time-based PM task where working memory updating load was…
A picture for the coupling of unemployment and inflation
NASA Astrophysics Data System (ADS)
Safdari, H.; Hosseiny, A.; Vasheghani Farahani, S.; Jafari, G. R.
2016-02-01
The aim of this article is to illustrate the scaling features of two widely discussed economic indicators: unemployment and inflation. We carry out a scaling analysis of the coupling between unemployment and inflation. This work is based on wavelet analysis as well as detrended fluctuation analysis (DFA). Through our analysis we find that while unemployment is time-scale invariant, inflation is bi-scale. We show that inflation possesses a characteristic time scale of five years and behaves differently below and above this scale. This behaviour of inflation provides a basis for the coupling to inherit the stated time interval. Although inflation is bi-scale, it is unemployment that shows a strong multifractality feature. Using cross-wavelet analysis, we provide a picture that illustrates the dynamics of the coupling between unemployment and inflation with regard to intensity, direction, and scale. Notably, the coupling between inflation and unemployment is not symmetric in the two directions. Regarding scaling, the coupling exhibits different features at different scales: its correlation can be positive at one scale and negative at another.
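A minimal sketch of the DFA side of such an analysis (illustrative only, not the authors' wavelet/DFA pipeline). The test series here is white noise, for which the expected DFA exponent is about 0.5; the scale list is an arbitrary choice.

    # Minimal detrended fluctuation analysis (DFA) sketch for a 1-D series.
    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
        F = []
        for s in scales:
            n_seg = len(y) // s
            rms = []
            for i in range(n_seg):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                coef = np.polyfit(t, seg, 1)          # local linear trend
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(2)
    x = rng.normal(size=4096)                         # white noise test series
    scales = np.array([16, 32, 64, 128, 256])
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("DFA exponent:", alpha)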
Econophysics — complex correlations and trend switchings in financial time series
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
This article focuses on the analysis of financial time series and their correlations. A method is used for quantifying pattern-based correlations of a time series. With this methodology, evidence is found that typical behavioral patterns of financial market participants manifest over short time scales, i.e., that reactions to given price patterns are not entirely random, but that similar price patterns also cause similar reactions. Based on the investigation of the complex correlations in financial time series, the question arises of which properties change when a positive trend switches to a negative trend. An empirical quantification by rescaling provides the result that new price extrema coincide with a significant increase in transaction volume and a significant decrease in the length of corresponding time intervals between transactions. These findings are independent of the time scale over 9 orders of magnitude, and they exhibit characteristics which one can also find in other complex systems in nature (and in physical systems in particular). These properties are independent of the markets analyzed. Trends that exist only for a few seconds show the same characteristics as trends on time scales of several months. Thus, it is possible to study financial bubbles and their collapses in more detail, because trend switching processes occur with higher frequency on small time scales. In addition, a Monte Carlo based simulation of financial markets is analyzed and extended in order to reproduce empirical features and to gain insight into their causes. These causes include both financial market microstructure and the risk aversion of market participants.
Development of the Free Time Motivation Scale for Adolescents.
ERIC Educational Resources Information Center
Baldwin, Cheryl K.; Caldwell, Linda L.
2003-01-01
Developed a self-report measure of adolescent free time motivation based in self-determination theory, using data from 634 seventh graders. The scale measured five forms of motivation (amotivation, external, introjected, identified, and intrinsic motivation). Examination of each of the subscales indicated minimally acceptable levels of fit. The…
Fast Grid Frequency Support from Distributed Inverter-Based Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson F
This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.
A new time scale based k-epsilon model for near wall turbulence
NASA Technical Reports Server (NTRS)
Yang, Z.; Shih, T. H.
1992-01-01
A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y^+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
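One way to read the bounded time scale is sketched below. The abstract does not give the exact blend or constants, so the max-based bound, the constant C_T, the placeholder damping value f_mu, and the eddy-viscosity form nu_t = C_mu * f_mu * k * T are assumptions for illustration, not the published model coefficients.

    # Sketch of a turbulent time scale bounded from below by a Kolmogorov-like
    # time scale, and an eddy viscosity built from it. Constants are placeholders.
    import numpy as np

    def turbulent_time_scale(k, eps, nu, C_T=6.0):
        # dynamic time scale k/eps, bounded below by C_T * sqrt(nu/eps)
        return np.maximum(k / eps, C_T * np.sqrt(nu / eps))

    def eddy_viscosity(k, eps, nu, C_mu=0.09, f_mu=1.0):
        # nu_t = C_mu * f_mu * k * T; f_mu would be a damping function of
        # R_y = sqrt(k) * y / nu near the wall (placeholder value here).
        return C_mu * f_mu * k * turbulent_time_scale(k, eps, nu)

    print(eddy_viscosity(k=1e-4, eps=1e-6, nu=1.5e-5))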
New time scale based k-epsilon model for near-wall turbulence
NASA Technical Reports Server (NTRS)
Yang, Z.; Shih, T. H.
1993-01-01
A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y^+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high-Reynolds-number standard k-epsilon model. Thus, the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
Mazzoleni, S; Battini, E; Rustici, A; Stampacchia, G
2017-07-01
The aim of this study is to investigate the effects of an integrated gait rehabilitation training based on Functional Electrical Stimulation (FES)-cycling and overground robotic exoskeleton in a group of seven complete spinal cord injury patients on spasticity and patient-robot interaction. They underwent a robot-assisted rehabilitation training based on two phases: n=20 sessions of FES-cycling followed by n= 20 sessions of robot-assisted gait training based on an overground robotic exoskeleton. The following clinical outcome measures were used: Modified Ashworth Scale (MAS), Numerical Rating Scale (NRS) on spasticity, Penn Spasm Frequency Scale (PSFS), Spinal Cord Independence Measure Scale (SCIM), NRS on pain and International Spinal Cord Injury Pain Data Set (ISCI). Clinical outcome measures were assessed before (T0) after (T1) the FES-cycling training and after (T2) the powered overground gait training. The ability to walk when using exoskeleton was assessed by means of 10 Meter Walk Test (10MWT), 6 Minute Walk Test (6MWT), Timed Up and Go test (TUG), standing time, walking time and number of steps. Statistically significant changes were found on the MAS score, NRS-spasticity, 6MWT, TUG, standing time and number of steps. The preliminary results of this study show that an integrated gait rehabilitation training based on FES-cycling and overground robotic exoskeleton in complete SCI patients can provide a significant reduction of spasticity and improvements in terms of patient-robot interaction.
Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network
NASA Astrophysics Data System (ADS)
Yang, Bin
2017-07-01
Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
Aymone, A C B; Valente, V L S; de Araújo, A M
2013-09-01
The literature on Heliconius usually describes three types of scales, classified based on the correlation between color and ultrastructure: type I (white and yellow), type II (black), and type III (orange and red). The ultrastructure of the scales located on the silvery/brownish surfaces of males/females is described for the first time in this paper. In addition, we describe the ontogeny of pigmentation, the scale morphogenesis and the maturation timing of scales fated to different colors in Heliconius erato phyllis. The silvery/brownish surfaces showed scales ultrastructurally similar to types I, II and III. The ontogeny of pigmentation follows the sequence red, black, silvery/brownish and yellow. The maturation of yellow-fated scales, however, occurred simultaneously with the red-fated scales, before the pigmentation became visible. Although the scales on the silvery/brownish surfaces are ultrastructurally similar to the yellow, red and black scales, they mature after them; this suggests that the maturation timing is related neither to the scale ultrastructure nor to the deposition timing of the yellow pigment. The analysis of H. erato phyllis scale morphogenesis, as well as the scale ultrastructure and maturation timing, provided new insights into the developmental architecture of color pattern in Heliconius. Copyright © 2013 Elsevier Ltd. All rights reserved.
Time-calibrated Milankovitch cycles for the late Permian.
Wu, Huaichun; Zhang, Shihong; Hinnov, Linda A; Jiang, Ganqing; Feng, Qinglai; Li, Haiyan; Yang, Tianshui
2013-01-01
An important innovation in the geosciences is the astronomical time scale. The astronomical time scale is based on Milankovitch-forced stratigraphy that has been calibrated to astronomical models of paleoclimate forcing; it is defined for much of the Cenozoic and Mesozoic. For the Palaeozoic era, however, astronomical forcing has not been widely explored because of a lack of high-precision geochronology or astronomical modelling. Here we report Milankovitch cycles from late Permian (Lopingian) strata at Meishan and Shangsi, South China, time calibrated by recent high-precision U-Pb dating. The evidence extends empirical knowledge of Earth's astronomical parameters to before 250 million years ago. Observed obliquity and precession terms support a 22-h length-of-day. The reconstructed astronomical time scale indicates a 7.793-million-year duration for the Lopingian epoch, during which strong 405-kyr cycles constrain astronomical modelling. This is the first significant advance in defining the Palaeozoic astronomical time scale, anchored to absolute time, bridging the Palaeozoic-Mesozoic transition.
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels is presented in this paper. These systems can be regarded as combinations of a fast model and a slow model whose response speeds lie on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix by plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on two time scales. Then a decentralized model predictive controller was designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
Multiscale recurrence quantification analysis of order recurrence plots
NASA Astrophysics Data System (ADS)
Xu, Mengjia; Shang, Pengjian; Lin, Aijing
2017-03-01
In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA can reveal richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and for revealing some inherent differences.
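The sketch below follows the general idea of an order (ordinal-pattern) recurrence rate computed over several coarse-graining scales; it is illustrative and does not reproduce the authors' exact MSRQA definition. The embedding dimension of 3, the coarse-graining by block averages, and the random-walk test series are assumptions.

    # Order-pattern recurrence rate across coarse-graining scales (illustrative).
    import numpy as np

    def ordinal_patterns(x, dim=3):
        # map each window of length dim to its ordinal (rank) pattern
        windows = np.lib.stride_tricks.sliding_window_view(x, dim)
        return np.array([tuple(np.argsort(w)) for w in windows])

    def recurrence_rate(x, scale, dim=3):
        n = len(x) // scale
        coarse = x[: n * scale].reshape(n, scale).mean(axis=1)   # coarse-graining
        pats = ordinal_patterns(coarse, dim)
        R = (pats[:, None, :] == pats[None, :, :]).all(axis=2)   # order recurrence plot
        return R.mean()

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.normal(size=4000))                         # toy random-walk series
    for s in (1, 5, 10, 20):
        print(s, recurrence_rate(x, s))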
Scale Dependence of Spatiotemporal Intermittence of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Siddani, Ravi K.
2011-01-01
It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate are described by a mixed distribution with a nonzero probability of having a sharp value of zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.
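As an illustration of fitting a stretched-exponential dependence of the zero-rain probability on the averaging time scale, the snippet below uses a generic form p0(T) ≈ exp(-(T/tau)^beta). The paper's exact parameterisation (including any area dependence) is not reproduced, and the data points are invented.

    # Fit a generic stretched exponential to an invented zero-rain probability curve.
    import numpy as np
    from scipy.optimize import curve_fit

    T = np.array([1, 3, 6, 12, 24, 48, 96], dtype=float)      # averaging times, h
    p0_obs = np.array([0.85, 0.72, 0.60, 0.45, 0.30, 0.18, 0.08])

    def stretched_exp(T, tau, beta):
        return np.exp(-(T / tau) ** beta)

    params, _ = curve_fit(stretched_exp, T, p0_obs, p0=[20.0, 0.7])
    print("tau = %.1f h, beta = %.2f" % tuple(params))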
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ~ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
Model-based strategy for cell culture seed train layout verified at lab scale.
Kern, Simon; Platas-Barradas, Oscar; Pörtner, Ralf; Frahm, Björn
2016-08-01
Cell culture seed trains, the generation of a sufficient viable cell number for the inoculation of the production-scale bioreactor starting from incubator scale, are time- and cost-intensive. Accordingly, a seed train offers potential for optimization regarding its layout and the corresponding procedures. A tool has been developed to determine the optimal points in time for cell passaging from one scale into the next, and it has been applied to two different cell lines at lab scale, AGE1.HN AAT and CHO-K1. For evaluation, the experimental seed train realization was compared to its layout. In the case of the AGE1.HN AAT cell line, the results have also been compared to the formerly manually designed seed train. The tool provides the same seed train layout based on the data of only two batches.
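A back-of-the-envelope sketch of the timing idea behind such a tool: with exponential growth at specific growth rate mu, the time to expand from the post-inoculation density X0 to the target density X1 for the next scale is t = ln(X1/X0)/mu. All numbers below are illustrative assumptions, not the tool's parameters or the cell lines' data.

    # Simple exponential-growth estimate of the optimal passaging time.
    import math

    mu = 0.029          # assumed specific growth rate, 1/h
    X0 = 0.3e6          # assumed cell density after inoculating the next scale, cells/mL
    X1 = 2.0e6          # assumed target density for passaging, cells/mL

    t_passage = math.log(X1 / X0) / mu
    print(f"optimal passaging after ~{t_passage:.0f} h")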
Ghadiri, Elham; Zakeeruddin, Shaik M.; Hagfeldt, Anders; Grätzel, Michael; Moser, Jacques-E.
2016-01-01
Efficient dye-sensitized solar cells are based on highly diffusive mesoscopic layers that render these devices opaque and unsuitable for ultrafast transient absorption spectroscopy measurements in transmission mode. We developed a novel sub-200-femtosecond time-resolved diffuse reflectance spectroscopy scheme combined with potentiostatic control to study various solar cells in fully operational condition. We studied performance-optimized devices based on liquid redox electrolytes and opaque TiO2 films, as well as other morphologies, such as TiO2 fibers and nanotubes. Charge injection from the Z907 dye in all TiO2 morphologies was observed to take place on the sub-200 fs time scale. The kinetics of electron-hole back recombination has features on the picosecond to nanosecond time scale. This observation is significantly different from what was reported in the literature, where the electron-hole back recombination for transparent films of small particles is generally accepted to occur on a longer time scale of microseconds. The kinetics of the ultrafast electron injection remained unchanged for voltages between +500 mV and –690 mV, where the injection yield eventually drops steeply. The primary charge separation in Y123 organic dye based devices was clearly slower, occurring in two picoseconds, and no kinetic component on the shorter femtosecond time scale was recorded. PMID:27095505
Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B
2011-01-01
In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
Spatio-temporal hierarchy in the dynamics of a minimalist protein model
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen
2013-12-01
A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
Coevolution of strategy-selection time scale and cooperation in spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Rong, Zhihai; Wu, Zhi-Xi; Chen, Guanrong
2013-06-01
In this paper, we investigate a networked prisoner's dilemma game where individuals' strategy-selection time scale evolves based on their historical learning information. We show that the more times the current strategy of an individual is learnt by his neighbors, the longer he will stick to the successful behavior by adaptively adjusting the lifetime of the adopted strategy. Through characterizing the extent of success of the individuals with normalized payoffs, we show that properly using the learned information can form a positive feedback mechanism between cooperative behavior and its lifetime, which can boost cooperation on square lattices and scale-free networks.
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.
2018-05-01
Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first-order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH
NASA Astrophysics Data System (ADS)
Lee, D.; Gopal, S.; Mohapatra, P.
2012-07-01
We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide range of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We present a preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
NASA Astrophysics Data System (ADS)
Hernández Forero, Liz Catherine; Bahamón Cortés, Nelson
2017-06-01
Around the world, there are different providers of time signals (mobile, radio or television operators, satellites of the GPS network, astronomical measurements, etc.); however, the source of legal time for a country is either the national metrology institute or another designated laboratory. This activity requires a time standard based on an atomic time scale. The International Bureau of Weights and Measures (BIPM) calculates a weighted average of the time kept in more than 60 nations and produces a single international time scale, called Coordinated Universal Time (UTC). This article presents the current time scale that generates legal time for the Republic of Colombia, produced by the Instituto Nacional de Metrología (INM) using the national time and frequency standard, a cesium atomic oscillator. It also illustrates how important it is for the academic, scientific and industrial communities, as well as the general public, to be synchronized with this time scale, which is traceable to the International System (SI) of units through international comparisons that are made in real time.
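A toy illustration of the weighted-average idea behind an ensemble time scale: clock offsets are combined with stability-based weights. The offsets and weights below are invented numbers, and this is not BIPM's actual ALGOS algorithm, only the arithmetic of a weighted mean.

    # Weighted average of clock offsets as a toy ensemble time scale.
    import numpy as np

    offsets_ns = np.array([12.0, -3.5, 7.2, 0.4])      # clock minus reference, ns (invented)
    weights = np.array([0.2, 0.35, 0.15, 0.3])         # e.g. stability-based weights (invented)

    ensemble_offset = np.sum(weights * offsets_ns) / np.sum(weights)
    print(f"ensemble time scale offset: {ensemble_offset:.2f} ns")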
Asynchronous adaptive time step in quantitative cellular automata modeling
Zhu, Hao; Pang, Peter YH; Sun, Yan; Dhar, Pawan
2004-01-01
Background: The behaviors of cells in metazoans are context dependent, thus large-scale multi-cellular modeling is often necessary, for which cellular automata are natural candidates. Two related issues are involved in cellular automata based multi-cellular modeling: how to introduce differential equation based quantitative computing to precisely describe cellular activity, and, building on that, how to solve the heavy time consumption issue in simulation. Results: Based on a modified, language-based cellular automata system that we extended to allow ordinary differential equations in models, we introduce a method implementing asynchronous adaptive time steps in simulation that can considerably improve efficiency without a significant sacrifice of accuracy. An average speedup rate of 4-5 is achieved in the given example. Conclusions: Strategies for reducing time consumption in simulation are indispensable for large-scale, quantitative multi-cellular models, because even a small 100 × 100 × 100 tissue slab contains one million cells. Distributed and adaptive time steps are a practical solution in a cellular automata environment. PMID:15222901
Peng, Sijia; Wang, Wenjuan; Chen, Chunlai
2018-05-10
Fluorescence correlation spectroscopy is a powerful single-molecule tool that is able to capture kinetic processes occurring on the nanosecond time scale. However, the upper limit of its time window is restricted by the dwell time of the molecule of interest in the confocal detection volume, which is usually sub-millisecond for a freely diffusing biomolecule. Here, we present a simple and easy-to-implement method, named surface transient binding-based fluorescence correlation spectroscopy (STB-FCS), which extends the upper limit of the time window to seconds. We further demonstrate that STB-FCS enables capture of both intramolecular and intermolecular kinetic processes whose time scales cross several orders of magnitude.
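For reference, the basic fluorescence correlation estimator that such methods build on is G(tau) = <dF(t) dF(t+tau)> / <F>^2. The sketch below evaluates it on a toy photon-count trace; it is the generic FCS computation, not the STB-FCS acquisition scheme itself, and the trace is synthetic.

    # Generic FCS autocorrelation estimate on a synthetic intensity trace.
    import numpy as np

    rng = np.random.default_rng(4)
    F = rng.poisson(100, size=100_000).astype(float)     # toy photon-count trace
    dF = F - F.mean()

    def G(tau):
        # normalized autocorrelation of intensity fluctuations at lag tau (samples, tau >= 1)
        return np.mean(dF[:-tau] * dF[tau:]) / F.mean() ** 2

    for tau in (1, 10, 100, 1000):
        print(tau, G(tau))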
Astrophysical implications of hypothetical stable TeV-scale black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giddings, Steven B.; Mangano, Michelangelo L.
2008-08-01
We analyze macroscopic effects of TeV-scale black holes, such as could possibly be produced at the LHC, in what is regarded as an extremely hypothetical scenario in which they are stable and, if trapped inside Earth, begin to accrete matter. We examine a wide variety of TeV-scale gravity scenarios, basing the resulting accretion models on first-principles, basic, and well-tested physical laws. These scenarios fall into two classes, depending on whether accretion could have any macroscopic effect on the Earth at times shorter than the Sun's natural lifetime. We argue that cases with such an effect at times shorter than the solar lifetime are ruled out, since in these scenarios black holes produced by cosmic rays impinging on much denser white dwarfs and neutron stars would then catalyze their decay on time scales incompatible with their known lifetimes. We also comment on relevant lifetimes for astronomical objects that capture primordial black holes. In short, this study finds no basis for concerns that TeV-scale black holes from the LHC could pose a risk to Earth on time scales shorter than the Earth's natural lifetime. Indeed, conservative arguments based on detailed calculations and the best-available scientific knowledge, including solid astronomical data, conclude, from multiple perspectives, that there is no risk of any significance whatsoever from such black holes.
Divisions of geologic time-major chronostratigraphic and geochronologic units
2010-01-01
Effective communication in the geosciences requires consistent uses of stratigraphic nomenclature, especially divisions of geologic time. A geologic time scale is composed of standard stratigraphic divisions based on rock sequences and is calibrated in years. Over the years, the development of new dating methods and the refinement of previous methods have stimulated revisions to geologic time scales. Advances in stratigraphy and geochronology require that any time scale be periodically updated. Therefore, Divisions of Geologic Time, which shows the major chronostratigraphic (position) and geochronologic (time) units, is intended to be a dynamic resource that will be modified to include accepted changes of unit names and boundary age estimates. This fact sheet is a modification of USGS Fact Sheet 2007-3015 by the U.S. Geological Survey Geologic Names Committee.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set initial positions toward the original positions, i.e., the node positions. Therefore, a blind node position can be determined with the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated to the network scale size.
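A minimal sketch of the spring-model idea: a blind node is pulled by virtual springs toward the distances measured to its neighbors and relaxes iteratively. The spring constant, step size, noise-free ranges, and anchor layout are assumptions for illustration, not the paper's exact LASM formulation or its patches.

    # Iterative spring-relaxation estimate of one blind node's position.
    import numpy as np

    rng = np.random.default_rng(5)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([6.0, 4.0])
    dists = np.linalg.norm(anchors - true_pos, axis=1)   # measured ranges (noise-free here)

    pos = rng.uniform(0, 10, size=2)                     # random initial guess
    k_spring, step = 1.0, 0.05
    for _ in range(500):
        vec = pos - anchors
        cur = np.linalg.norm(vec, axis=1)
        # spring force proportional to (measured - current) distance along each link
        force = (k_spring * (dists - cur))[:, None] * (vec / cur[:, None])
        pos += step * force.sum(axis=0)

    print("estimated position:", pos)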
Time Correlations and the Frequency Spectrum of Sound Radiated by Turbulent Flows
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Zhou, Ye
1997-01-01
Theories of turbulent time correlations are applied to compute frequency spectra of sound radiated by isotropic turbulence and by turbulent shear flows. The hypothesis that Eulerian time correlations are dominated by the sweeping action of the most energetic scales implies that the frequency spectrum of the sound radiated by isotropic turbulence scales as ω^4 for low frequencies and as ω^(-3/4) for high frequencies. The sweeping hypothesis is applied to an approximate theory of jet noise. The high frequency noise again scales as ω^(-3/4), but the low frequency spectrum scales as ω^2. In comparison, a classical theory of jet noise based on dimensional analysis gives ω^(-2) and ω^2 scaling for these frequency ranges. It is shown that the ω^(-2) scaling is obtained by simplifying the description of turbulent time correlations. An approximate theory of the effect of shear on turbulent time correlations is developed and applied to the frequency spectrum of sound radiated by shear turbulence. The predicted steepening of the shear-dominated spectrum appears to be consistent with jet noise measurements.
Pankavich, S; Ortoleva, P
2010-06-01
The multiscale approach to N-body systems is generalized to address the broad continuum of long time and length scales associated with collective behaviors. A technique is developed based on the concept of an uncountable set of time variables and of order parameters (OPs) specifying major features of the system. We adopt this perspective as a natural extension of the commonly used discrete set of time scales and OPs which is practical when only a few, widely separated scales exist. The existence of a gap in the spectrum of time scales for such a system (under quasiequilibrium conditions) is used to introduce a continuous scaling and perform a multiscale analysis of the Liouville equation. A functional-differential Smoluchowski equation is derived for the stochastic dynamics of the continuum of Fourier component OPs. A continuum of spatially nonlocal Langevin equations for the OPs is also derived. The theory is demonstrated via the analysis of structural transitions in a composite material, as occurs for viral capsids and molecular circuits.
Measurement Equivalence of Teachers' Sense of Efficacy Scale Using Latent Growth Methods
ERIC Educational Resources Information Center
Basokçu, T. Oguz; Ögretmen, T.
2016-01-01
This study is based on the application of latent growth modeling, one of the structural equation modeling approaches, to real data. The Teachers' Sense of Efficacy Scale (TSES), which was previously adapted into Turkish, was administered three times at different time intervals to 200 preservice teachers, and study data were collected. Measurement equivalence…
Large Scale Traffic Simulations
DOT National Transportation Integrated Search
1997-01-01
Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...
The potential of iRest in measuring the hand function performance of stroke patients.
Abdul Rahman, Hisyam; Khor, Kang Xiang; Yeong, Che Fai; Su, Eileen Lee Ming; Narayanan, Aqilah Leela T
2017-01-01
Clinical scales such as the Fugl-Meyer Assessment (FMA) and Motor Assessment Scale (MAS) are widely used to evaluate stroke patients' motor performance. However, there are several limitations to these assessment scales, such as subjectivity, lack of repeatability, time consumption and high dependence on the ability of the physiotherapist. In contrast, robot-based assessments are objective, repeatable, and could potentially reduce the assessment time. However, robot-based assessments are not as well established as conventional assessment scales, and their correlation to conventional assessment scales is unclear. This study was carried out to identify important parameters in designing tasks that efficiently assess the hand function of stroke patients and to quantify the potential of robotic assessment modules to predict the conventional assessment score with iRest. Twelve predictive variables were explored, relating to movement time, velocity, strategy, accuracy and smoothness, from three robotic assessment modules: Draw I, Draw Diamond and Draw Circle. Regression models using up to four predictors were developed to describe the MAS. Results show that the time given should not be too long, as it affects the trajectory error. Results also show that it is possible to use iRest to predict the MAS score. There is potential for using iRest, a non-motorized device, to predict MAS scores.
Doubly stochastic Poisson process models for precipitation at fine time-scales
NASA Astrophysics Data System (ADS)
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
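A minimal sketch of a doubly stochastic Poisson (Cox) process: the event intensity lambda(t) is itself random, here driven by a simple two-state wet/dry switching process, and rain pulses are then drawn from the resulting inhomogeneous Poisson process. The two-state intensity model and all parameter values are illustrative assumptions, not the paper's model.

    # Toy Cox process: random on/off intensity plus Poisson event generation.
    import numpy as np

    rng = np.random.default_rng(6)
    T, dt = 1000.0, 0.1                 # total duration and time step, hours
    lam_wet, lam_dry = 2.0, 0.0         # event rates in wet/dry states, 1/h
    p_switch = 0.01                     # per-step probability of switching state

    t_grid = np.arange(0, T, dt)
    state = np.zeros(t_grid.size, dtype=bool)
    for i in range(1, t_grid.size):
        state[i] = state[i - 1] ^ (rng.random() < p_switch)
    lam = np.where(state, lam_wet, lam_dry)

    # Bernoulli approximation of the inhomogeneous Poisson process (lam*dt << 1)
    events = t_grid[rng.random(t_grid.size) < lam * dt]
    print("number of simulated rain pulses:", events.size)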
Engineering web maps with gradual content zoom based on streaming vector data
NASA Astrophysics Data System (ADS)
Huang, Lina; Meijers, Martijn; Šuba, Radan; van Oosterom, Peter
2016-04-01
Vario-scale data structures have been designed to support gradual content zoom and the progressive transfer of vector data, for use with arbitrary map scales. The focus to date has been on the server side, especially on how to convert geographic data into the proposed vario-scale structures by means of automated generalisation. This paper contributes to the ongoing vario-scale research by focusing on the client side and communication, particularly on how this works in a web-services setting. It is claimed that these functionalities are urgently needed, as many web-based applications, both desktop and mobile, require gradual content zoom, progressive transfer and a high performance level. The web-client prototypes developed in this paper make it possible to assess the behaviour of vario-scale data and to determine how users will actually see the interactions. Several different options of web-services communication architectures are possible in a vario-scale setting. These options are analysed and tested with various web-client prototypes, with respect to functionality, ease of implementation and performance (amount of transmitted data and response times). We show that the vario-scale data structure can fit in with current web-based architectures and efforts to standardise map distribution on the internet. However, to maximise the benefits of vario-scale data, a client needs to be aware of this structure. When a client needs a map to be refined (by means of a gradual content zoom operation), only the 'missing' data will be requested. This data will be sent incrementally to the client from a server. In this way, the amount of data transferred at one time is reduced, shortening the transmission time. In addition to these conceptual architecture aspects, there are many implementation and tooling design decisions at play. These will also be elaborated on in this paper. Based on the experiments conducted, we conclude that the vario-scale approach indeed supports gradual content zoom and the progressive web transfer of vector data. This is a big step forward in making vector data at arbitrary map scales available to larger user groups.
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
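A minimal sketch of computing inter-amount times: instead of sampling flow at fixed time intervals, record the time needed to accumulate each fixed increment of flow. The synthetic flow series, the 1-minute resolution, and the increment size are assumptions for illustration.

    # Inter-amount times of a synthetic streamflow series.
    import numpy as np

    rng = np.random.default_rng(7)
    dt_min = 1.0                                    # 1-minute resolution
    flow = rng.gamma(0.3, 2.0, size=10_000)         # toy streamflow series

    cum = np.cumsum(flow) * dt_min                  # cumulative "amount"
    increment = 50.0                                # fixed amount per sample (assumed)
    targets = np.arange(increment, cum[-1], increment)
    crossing_idx = np.searchsorted(cum, targets)    # first time each target is reached
    inter_amount_times = np.diff(np.concatenate(([0], crossing_idx))) * dt_min

    print("mean / CV of inter-amount times:",
          inter_amount_times.mean(),
          inter_amount_times.std() / inter_amount_times.mean())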
NASA Astrophysics Data System (ADS)
Kiliyanpilakkil, Velayudhan Praju
Atmospheric motions take place on spatial scales of sub-millimeters to a few thousand kilometers, with temporal changes in the atmospheric variables occurring over fractions of seconds to several years. Consequently, the variations in atmospheric kinetic energy associated with these atmospheric motions span a broad spectrum of space and time. The mesoscale region acts as an energy-transferring regime between the energy-generating synoptic scale and the energy-dissipating microscale. Therefore, the scaling characterizations of mesoscale wind fields are significant for the accurate estimation of the atmospheric energy budget. Moreover, precise knowledge of the scaling characteristics of atmospheric mesoscale wind fields is important for the validation of numerical models that focus on wind forecasting, dispersion, diffusion, horizontal transport, and optical turbulence. For these reasons, extensive studies have been conducted in the past to characterize mesoscale wind fields. Nevertheless, the majority of these studies focused on near-surface and upper-atmosphere mesoscale regimes. The present study attempts to identify the existence and to quantify the scaling of mesoscale wind fields in the lower atmospheric boundary layer (ABL; in the wind turbine layer) using wind observations from various research-grade instruments (e.g., sodars, anemometers). A structure-function-based analysis of the scaling characteristics of mesoscale wind speeds over diverse homogeneous flat terrains revealed an altitudinal dependence of the scaling exponents. This altitudinal dependence of the wind speed scaling may be attributed to buoyancy forcing. Subsequently, we use the framework of extended self-similarity (ESS) to characterize the observed scaling behavior. In the ESS framework, the relative scaling exponents of the mesoscale atmospheric boundary layer wind speed exhibit quasi-universal behavior, even far beyond the inertial range of turbulence (Delta t in the range of 10 minutes to 6 hours). The ESS-based analysis is extended further to examine its validity over complex terrain. This study, based on multiyear wind observations, demonstrates that ESS holds for lower-ABL wind speed over complex terrain as well. Another important inference from this study is that the ESS relative scaling exponents corresponding to the mesoscale wind speed closely match the scaling characteristics of inertial-range turbulence, albeit not exactly. The current study proposes a benchmark based on the ESS quasi-universal wind speed scaling characteristics in the ABL for the mesoscale modeling community. Using a state-of-the-art atmospheric mesoscale model in conjunction with different planetary boundary layer (PBL) parameterization schemes, multiple wind speed simulations have been conducted. This study reveals that the ESS scaling characteristics of the model-simulated wind speed time series in the lower ABL differ significantly from their observational counterparts. The study demonstrates that the model-simulated wind speed time series for time intervals Delta t < 2 hours do not capture the ESS-based scaling characteristics. The detailed analysis of model simulations using different PBL schemes leads to the conclusion that there is a need for significant improvements in the turbulence closure parameterizations adopted in new-generation atmospheric models.
This study is unique as the ESS framework has never been reported or examined for the validation of PBL parameterizations.
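A minimal sketch of the structure-function and ESS-style estimation described above: compute S_q(dt) = <|u(t+dt) - u(t)|^q> over a range of lags and obtain relative exponents by regressing log S_q against log S_3. The synthetic random-walk series (for which the expected relative exponents are q/3), the lag list, and the orders are assumptions, not the sodar observations.

    # Structure functions and ESS relative scaling exponents for a synthetic series.
    import numpy as np

    rng = np.random.default_rng(8)
    u = np.cumsum(rng.normal(size=20_000))            # toy velocity-like series

    lags = np.array([1, 2, 4, 8, 16, 32, 64, 128])
    orders = [1, 2, 3, 4]
    S = {q: np.array([np.mean(np.abs(u[l:] - u[:-l]) ** q) for l in lags]) for q in orders}

    for q in orders:
        zeta_rel = np.polyfit(np.log(S[3]), np.log(S[q]), 1)[0]   # ESS: slope vs. S_3
        print(f"relative exponent zeta({q})/zeta(3) ~ {zeta_rel:.2f}")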
Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V
2015-12-01
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters developed on a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally either by manometric temperature measurement (MTM) or sublimation tests and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict the commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because of its simple and minimalistic nature, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
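A rough sketch of the quasi-steady primary-drying estimate that such model-based tools build on: sublimation rate dm/dt = A_p (P_ice - P_ch)/R_p and drying time = ice mass / rate. All numbers and the vial geometry below are illustrative assumptions, not outputs of PASSAGE, MTM, or the study's products.

    # Back-of-the-envelope primary drying time from a product-resistance model.
    import math

    A_p = 3.8e-4          # product area per vial, m^2 (assumed)
    R_p = 1.0e5           # assumed product resistance, Pa*s*m^2/kg
    P_ice = 40.0          # vapor pressure of ice at product temperature, Pa (assumed)
    P_ch = 10.0           # chamber pressure, Pa (assumed)
    m_ice = 2.0e-3        # mass of ice per vial, kg (assumed)

    rate = A_p * (P_ice - P_ch) / R_p            # sublimation rate, kg/s per vial
    t_dry_h = m_ice / rate / 3600.0
    print(f"estimated primary drying time: {t_dry_h:.1f} h")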
Ram, Nilam; Conroy, David E; Pincus, Aaron L; Lorek, Amy; Rebar, Amanda; Roche, Michael J; Coccia, Michael; Morack, Jennifer; Feldman, Josh; Gerstorf, Denis
Human development is characterized by the complex interplay of processes that manifest at multiple levels of analysis and time-scales. We introduce the Intraindividual Study of Affect, Health and Interpersonal Behavior (iSAHIB) as a model for how multiple time-scale study designs facilitate more precise articulation of developmental theory. Combining age heterogeneity, longitudinal panel, daily diary, and experience sampling protocols, the study made use of smartphone and web-based technologies to obtain intensive longitudinal data from 150 persons age 18-89 years as they completed three 21-day measurement bursts (426 bursts, 8,557 days) wherein they provided reports on their social interactions (64,112 in total) as they went about their daily lives. We illustrate how multiple time-scales of data can be used to articulate bioecological models of development and the interplay among more 'distal' processes that manifest at 'slower' time-scales (e.g., age-related differences and burst-to-burst changes in mental health) and more 'proximal' processes that manifest at 'faster' time-scales (e.g., changes in context that progress in accordance with the weekly calendar and family influence processes).
ERIC Educational Resources Information Center
Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard
2014-01-01
Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…
Renz, Adina J.; Meyer, Axel; Kuraku, Shigehiro
2013-01-01
Cartilaginous fishes, divided into Holocephali (chimaeras) and Elasmobranchii (sharks, rays and skates), occupy a key phylogenetic position among extant vertebrates in reconstructing their evolutionary processes. Their accurate evolutionary time scale is indispensable for better understanding of the relationship between phenotypic and molecular evolution of cartilaginous fishes. However, our current knowledge on the time scale of cartilaginous fish evolution largely relies on estimates using mitochondrial DNA sequences. In this study, making the best use of the still partial, but large-scale sequencing data of cartilaginous fish species, we estimate the divergence times between the major cartilaginous fish lineages employing nuclear genes. By rigorous orthology assessment based on available genomic and transcriptomic sequence resources for cartilaginous fishes, we selected 20 protein-coding genes in the nuclear genome, spanning 2973 amino acid residues. Our analysis based on the Bayesian inference resulted in the mean divergence time of 421 Ma, the late Silurian, for the Holocephali-Elasmobranchii split, and 306 Ma, the late Carboniferous, for the split between sharks and rays/skates. By applying these results and other documented divergence times, we measured the relative evolutionary rate of the Hox A cluster sequences in the cartilaginous fish lineages, which resulted in a lower substitution rate with a factor of at least 2.4 in comparison to tetrapod lineages. The obtained time scale enables mapping phenotypic and molecular changes in a quantitative framework. It is of great interest to corroborate the less derived nature of cartilaginous fish at the molecular level as a genome-wide phenomenon. PMID:23825540
NASA Astrophysics Data System (ADS)
Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping
2016-11-01
Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using the gridpoint-based measures of critical success index, bias, index of agreement and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time and falls below that of ARPS after 4-5 h. Regarding the object-based verification method, most forecasts produced by BJ-ANC focus on the center of the diagram at the 1-h lead time and indicate high-quality forecasts. As the lead time increases, BJ-ANC overestimates the precipitation amount and produces widespread precipitation, especially at a 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
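The gridpoint-based skill measures mentioned above follow from a standard 2x2 contingency table of forecast versus observed exceedances. A minimal sketch of that bookkeeping (not the BJ-ANC/ARPS evaluation code; the 1 mm/h event threshold and the synthetic fields are illustrative assumptions):

```python
import numpy as np

def contingency_scores(forecast, observed, threshold=1.0):
    """Critical success index (CSI) and frequency bias for exceedances of
    `threshold` (e.g. 1 mm/h, an arbitrary illustrative value)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    csi = hits / float(hits + misses + false_alarms)
    bias = (hits + false_alarms) / float(hits + misses)
    return csi, bias

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2)))

# Synthetic 1-h accumulations on a 100 x 100 grid, just to exercise the functions
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=(100, 100))
fcst = obs + rng.normal(0.0, 1.0, size=obs.shape)
print(contingency_scores(fcst, obs), rmse(fcst, obs))
```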
Influence of the time scale on the construction of financial networks.
Emmert-Streib, Frank; Dehmer, Matthias
2010-09-30
In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, an edge is included in the network only if a correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Numerical analysis of four different measures as a function of the time scale used for network construction allows us to gain insights into the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
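A minimal sketch of this construction, assuming daily closing prices arranged as a (days x stocks) array; the choice of Pearson correlation on log-returns with a per-edge p-value test is an assumption standing in for whatever significance criterion the authors used:

```python
import numpy as np
from scipy import stats

def correlation_network(prices, alpha=0.01):
    """Unweighted, undirected network for one non-overlapping interval.

    prices: array of shape (n_days, n_stocks).  An edge i--j is included only
    if the Pearson correlation of the two log-return series is statistically
    significantly different from zero (p < alpha)."""
    returns = np.diff(np.log(prices), axis=0)
    n = returns.shape[1]
    adjacency = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            r, p = stats.pearsonr(returns[:, i], returns[:, j])
            if p < alpha:                       # reject H0: rho = 0
                adjacency[i, j] = adjacency[j, i] = 1
    return adjacency

def networks_per_window(prices, window):
    """One network per non-overlapping window; `window` sets the time scale."""
    return [correlation_network(prices[k:k + window])
            for k in range(0, prices.shape[0] - window + 1, window)]
```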
Full-Scale Numerical Modeling of Turbulent Processes in the Earth's Ionosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliasson, B.; Stenflo, L.; Department of Physics, Linkoeping University, SE-581 83 Linkoeping
2008-10-15
We present a full-scale simulation study of ionospheric turbulence by means of a generalized Zakharov model based on the separation of variables into high-frequency and slow time scales. The model includes realistic length scales of the ionospheric profile and of the electromagnetic and electrostatic fields, and uses ionospheric plasma parameters relevant for high-latitude radio facilities such as Eiscat and HAARP. A nested grid numerical method has been developed to resolve the different length-scales, while avoiding severe restrictions on the time step. The simulation demonstrates the parametric decay of the ordinary mode into Langmuir and ion-acoustic waves, followed by a Langmuir wave collapse and short-scale caviton formation, as observed in ionospheric heating experiments.
The Development of a Sport-Based Life Skills Scale for Youth to Young Adults, 11-23 Years of Age
ERIC Educational Resources Information Center
Cauthen, Hillary Ayn
2013-01-01
The purpose of this study was to develop a sport-based life skills scale that assesses 20 life skills: goal setting, time management, communication, coping, problem solving, leadership, critical thinking, teamwork, self-discipline, decision making, planning, organizing, resiliency, motivation, emotional control, patience, assertiveness, empathy,…
Modeling Complex Phenomena Using Multiscale Time Sequences
2009-08-24
The approach characterizes how complex phenomena manifest at different scales and how these scales relate to each other. This can be done by combining a set of statistical fractal measures based on Hurst and Hölder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods. The applications for this technology…
NASA Astrophysics Data System (ADS)
Fernandes, Brian; Hegde, Manu; Stanish, Paul C.; Mišković, Zoran L.; Radovanovic, Pavle V.
2017-09-01
We developed a comprehensive theoretical model describing the photoluminescence decay dynamics at short and long time scales based on the donor-acceptor defect interactions in γ-Ga2O3 nanocrystals, and quantitatively determined the importance of exclusion distance and spatial distribution of defects. We allowed for donors and acceptors to be adjacent to each other or separated by different exclusion distances. The optimal exclusion distance was found to be comparable to the donor Bohr radius and have a strong effect on the photoluminescence decay curve at short times. The importance of the exclusion distance at short time scales was confirmed by Monte Carlo simulations.
Non-stationary dynamics in the bouncing ball: A wavelet perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behera, Abhinna K., E-mail: abhinna@iiserkol.ac.in; Panigrahi, Prasanta K., E-mail: pprasanta@iiserkol.ac.in; Sekar Iyengar, A. N., E-mail: ansekar.iyengar@saha.ac.in
2014-12-01
The non-stationary dynamics of a bouncing ball, comprising both periodic as well as chaotic behavior, is studied through wavelet transform. The multi-scale characterization of the time series displays clear signatures of self-similarity, complex scaling behavior, and periodicity. Self-similar behavior is quantified by the generalized Hurst exponent, obtained through both wavelet based multi-fractal detrended fluctuation analysis and Fourier methods. The scale dependent variable window size of the wavelets aptly captures both the transients and non-stationary periodic behavior, including the phase synchronization of different modes. The optimal time-frequency localization of the continuous Morlet wavelet is found to delineate the scales corresponding to neutral turbulence, viscous dissipation regions, and different time varying periodic modulations.
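For readers unfamiliar with the Hurst-exponent estimate used here, the following is a generic, monofractal (q = 2) detrended fluctuation analysis sketch, not the authors' wavelet-based multifractal implementation; the scale grid and linear detrending order are illustrative defaults:

```python
import numpy as np

def dfa_exponent(x, scales=None, order=1):
    """Detrended fluctuation analysis; the slope of log F(s) vs log s
    estimates the scaling (Hurst-type) exponent of the series."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4), 20).astype(int))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, order)          # local polynomial trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# White noise gives an exponent near 0.5
print(dfa_exponent(np.random.default_rng(1).normal(size=4096)))
```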
Bouvignies, Guillaume; Hansen, D Flemming; Vallurupalli, Pramodh; Kay, Lewis E
2011-02-16
A method for quantifying millisecond time scale exchange in proteins is presented based on scaling the rate of chemical exchange using a 2D (15)N, (1)H(N) experiment in which (15)N dwell times are separated by short spin-echo pulse trains. Unlike the popular Carr-Purcell-Meiboom-Gill (CPMG) experiment where the effects of a radio frequency field on measured transverse relaxation rates are quantified, the new approach measures peak positions in spectra that shift as the effective exchange time regime is varied. The utility of the method is established through an analysis of data recorded on an exchanging protein-ligand system for which the exchange parameters have been accurately determined using alternative approaches. Computations establish that a combined analysis of CPMG and peak shift profiles extends the time scale that can be studied to include exchanging systems with highly skewed populations and exchange rates as slow as 20 s(-1).
Inhomogeneous scaling behaviors in Malaysian foreign currency exchange rates
NASA Astrophysics Data System (ADS)
Muniandy, S. V.; Lim, S. C.; Murugan, R.
2001-12-01
In this paper, we investigate the fractal scaling behaviors of foreign currency exchange rates with respect to the Malaysian currency, the Ringgit Malaysia. These time series are examined piecewise, before and after the currency control imposed on 1 September 1998, using the monofractal model based on fractional Brownian motion. The global Hurst exponents are determined using R/S analysis, detrended fluctuation analysis and the method of second moments using the correlation coefficients. The limitations of these monofractal analyses are discussed. The usual multifractal analysis reveals that there exists a wide range of Hurst exponents in each of the time series. A new method of modelling the multifractal time series based on multifractional Brownian motion with time-varying Hurst exponents is studied.
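A minimal sketch of the classical R/S estimate of the global Hurst exponent mentioned above; the window-doubling scheme and simple averaging are illustrative choices rather than the authors' exact procedure:

```python
import numpy as np

def hurst_rs(x, min_window=16):
    """Global Hurst exponent from classical rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    windows, rs_values = [], []
    w = min_window
    while w <= n // 2:
        rs = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = seg.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        windows.append(w)
        rs_values.append(np.mean(rs))
        w *= 2
    slope, _ = np.polyfit(np.log(windows), np.log(rs_values), 1)
    return slope   # H near 0.5 for uncorrelated increments
```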
Multi-Scale Analysis of Trends in Northeastern Temperate Forest Springtime Phenology
NASA Astrophysics Data System (ADS)
Moon, M.; Melaas, E. K.; Sulla-menashe, D. J.; Friedl, M. A.
2017-12-01
The timing of spring leaf emergence is highly variable in many ecosystems, exerts first-order control on growing season length, and significantly modulates seasonally-integrated photosynthesis. Numerous studies have reported trends toward earlier spring phenology in temperate forests, with some papers indicating that this trend is also leading to increased carbon uptake. At broad spatial scales, however, most of these studies have used data from coarse spatial resolution instruments such as MODIS, which does not resolve ecologically important landscape-scale patterns in phenology. In this work, we examine how long-term trends in spring phenology differ across three data sources acquired at different scales of measurement at the Harvard Forest in central Massachusetts. Specifically, we compared trends in the timing of phenology based on long-term in-situ measurements of phenology, on estimates based on eddy-covariance measurements of net carbon uptake transition dates, and on two sources of satellite-based remote sensing (MODIS and Landsat) land surface phenology (LSP) data. Our analysis focused on the flux footprint surrounding the Harvard Forest Environmental Measurements (EMS) tower. Our results reveal clearly defined trends toward earlier springtime phenology in Landsat LSP and in the timing of tower-based net carbon uptake. However, we find no statistically significant trend in springtime phenology measured from MODIS LSP data products, possibly because the time series of MODIS observations is relatively short (13 years). The trend in tower-based transition data exhibited a larger negative value than the trend derived from Landsat LSP data (-0.42 and -0.28 days per year for 21 and 28 years, respectively). More importantly, these results have two key implications regarding how changes in spring phenology are impacting carbon uptake at the landscape scale. First, long-term trends in spring phenology can be quite different, depending on what data source is used to estimate the trend. Second, the response of carbon uptake to climate change may be more sensitive than the response of land surface phenology itself.
NASA Technical Reports Server (NTRS)
Crosson, William L.; Smith, Eric A.
1992-01-01
The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areally averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areally integrated fluxes.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1, 2, 3, 6, 12, 24, and 48 hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
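The core of the nonstationary fit is a GEV likelihood whose location parameter is allowed to drift with time. A hedged sketch of that step, using maximum likelihood rather than the authors' Bayesian estimation and a linear-in-time location only (the parameterization, starting values, and optimizer are assumptions):

```python
import numpy as np
from scipy import stats, optimize

def fit_nonstationary_gev(annual_max, years):
    """MLE fit of a GEV with a linear trend in the location parameter,
    mu(t) = mu0 + mu1 * t, with constant scale and shape (scipy's `c` shape
    convention).  Returns the parameter vector and the minimized NLL."""
    t = (np.asarray(years) - np.mean(years)) / 10.0    # decades, centered
    x = np.asarray(annual_max, dtype=float)

    def nll(params):
        mu0, mu1, log_sigma, c = params
        sigma = np.exp(log_sigma)                      # keep the scale positive
        return -np.sum(stats.genextreme.logpdf(x, c, loc=mu0 + mu1 * t, scale=sigma))

    start = np.array([x.mean(), 0.0, np.log(x.std()), 0.1])
    res = optimize.minimize(nll, start, method="Nelder-Mead")
    return res.x, res.fun

# A stationary fit (mu1 fixed to 0) can then be compared with a likelihood-ratio test.
```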
Dodd, Marylin J.; Cho, Maria H.; Miaskowski, Christine; Painter, Patricia L.; Paul, Steven M.; Cooper, Bruce A.; Duda, John; Krasnoff, Joanne; Bank, Kayee A.
2010-01-01
Background: Few studies have evaluated an individualized home-based exercise prescription during and after cancer treatment. Objective: The purpose was to evaluate the effectiveness of a home-based exercise training intervention, the PRO-SELF FATIGUE CONTROL PROGRAM, on the management of cancer-related fatigue. Interventions/Methods: Participants (N=119) were randomized into one of three groups: Group 1 (EE) received the exercise prescription throughout the study; Group 2 (CE) received their exercise prescription after completing cancer treatment; Group 3 (CC) received usual care. Patients completed the Piper Fatigue Scale, General Sleep Disturbance Scale, Center for Epidemiological Studies-Depression scale, and Worst Pain Intensity Scale. Results: All groups reported mild fatigue levels, sleep disturbance, and mild pain, but not depression. Using multilevel regression analysis, significant linear and quadratic trends were found for change in fatigue and pain (i.e., scores increased, then decreased over time). No group differences were found in the changing scores over time. A significant quadratic effect for the trajectory of sleep disturbance was found, but no group differences were detected over time. No significant time or group effects were found for depression. Conclusions: Our home-based exercise intervention had no effect on fatigue or related symptoms associated with cancer treatment. The optimal timing of exercise remains to be determined. Implications for practice: Clinicians need to be aware that some physical activity is better than none, and there is no harm in exercise as tolerated during cancer treatment. Further analysis is needed to examine the adherence to exercise. More frequent assessments of fatigue, sleep disturbance, depression, and pain may capture the effect of exercise. PMID:20467301
NASA Astrophysics Data System (ADS)
Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng
2017-02-01
To detect the incipient failure of rolling bearings in a timely manner and locate the fault accurately, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values are affected by the data length, especially when the data are not long enough. By combining the information of multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, CMFE obtains much more stable and consistent values for short time series as the scale factor increases. In this paper CMFE is employed to measure the complexity of rolling bearing vibration signals and to extract the nonlinear features hidden in them. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. Based on these, to achieve automatic fault diagnosis, an ensemble SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed rolling bearing fault diagnosis method is applied to experimental data, and the results indicate that it can effectively distinguish different fault categories and severities of rolling bearings.
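A compact sketch of the CMFE computation as described above: fuzzy entropy of coarse-grained series, averaged over all coarse-graining offsets at a given scale. The parameter choices (m = 2, r = 0.15 times the standard deviation, exponential membership) are common defaults, not necessarily those of the paper:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.15, n=2):
    """Fuzzy entropy of a 1-D series; the tolerance r is a fraction of the std."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()

    def phi(dim):
        # Embedding vectors with their own mean removed (FuzzyEn convention)
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        vecs = vecs - vecs.mean(axis=1, keepdims=True)
        # Chebyshev distances between all pairs of vectors
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / r)        # one common choice of fuzzy membership
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (len(vecs) * (len(vecs) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

def composite_multiscale_fuzzy_entropy(x, scale, m=2, r=0.15):
    """CMFE: average the fuzzy entropies of all `scale` coarse-grained series
    obtained from different starting offsets, which stabilizes short records."""
    x = np.asarray(x, dtype=float)
    values = []
    for offset in range(scale):
        n_pts = (len(x) - offset) // scale
        coarse = x[offset:offset + n_pts * scale].reshape(n_pts, scale).mean(axis=1)
        values.append(fuzzy_entropy(coarse, m, r))
    return np.mean(values)
```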
NASA Astrophysics Data System (ADS)
Gros, Claudius
2017-11-01
Modern societies face the challenge that the time scale of opinion formation is continuously accelerating, in contrast to the time scale of political decision making. With the latter remaining of the order of the election cycle, we examine here the case in which the political state of a society is determined by the continuously evolving values of the electorate. Given this assumption, we show that the time lags inherent in the election cycle will inevitably lead to political instabilities in advanced democracies characterized both by an accelerating pace of opinion dynamics and by high sensitivities (political correctness) to deviations from mainstream values. Our result is based on the observation that dynamical systems become generically unstable whenever time delays become comparable to the time it takes to adapt to the steady state. In addition, the time needed to recover from external shocks grows dramatically close to the transition. Our estimates for the order of magnitude of the time scales involved indicate that socio-political instabilities may develop once the aggregate time scale for the evolution of the political values of the electorate falls below 7-15 months.
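The generic mechanism invoked here, namely that a fixed point destabilizes once the delay becomes comparable to the adaptation time, can be illustrated with the simplest linear delayed relaxation; this toy integration is an assumption-laden stand-in, not the authors' opinion-dynamics model:

```python
import numpy as np

def delayed_relaxation(tau, T=1.0, dt=0.01, t_end=60.0, x0=1.0):
    """Euler integration of dx/dt = -x(t - tau)/T, the simplest linear
    delayed-adaptation dynamics; the fixed point x = 0 loses stability when
    the delay tau exceeds roughly (pi/2) * T."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.full(n_steps + n_delay, x0)          # constant history for t < 0
    for k in range(n_delay, n_steps + n_delay - 1):
        x[k + 1] = x[k] - dt * x[k - n_delay] / T
    return x[n_delay:]

for tau in (0.5, 1.5, 2.0):   # compare to the critical delay pi/2 ~ 1.57 for T = 1
    traj = delayed_relaxation(tau)
    print(tau, "max |x| in second half:", np.abs(traj[len(traj) // 2:]).max())
```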
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
When Should Zero Be Included on a Scale Showing Magnitude?
ERIC Educational Resources Information Center
Kozak, Marcin
2011-01-01
This article addresses an important problem of graphing quantitative data: should one include zero on the scale showing magnitude? Based on a real time series example, the problem is discussed and some recommendations are proposed.
On the upscaling of process-based models in deltaic applications
NASA Astrophysics Data System (ADS)
Li, L.; Storms, J. E. A.; Walstra, D. J. R.
2018-03-01
Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is restrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing the morphological indicators such as distributary channel networking and delta volumes derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations to capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm added noise. The overall results show that the variability of the accelerated models fall within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The Time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the Time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Sun, Hong-Mei; Peng, Yan-Jun; Liang, Yong-Quan; Lu, Xin-Ming
2017-07-01
Microseismic monitoring is an effective means of providing early warning of rock or coal dynamic disasters, and its first step is microseismic event detection; however, low-SNR microseismic signals often cannot be detected effectively by routine methods. To solve this problem, this paper presents a method for detecting low-SNR microseismic events based on permutation entropy and a support vector machine. First, a signal-feature extraction method based on multi-scale permutation entropy is proposed by studying the influence of the scale factor on the signal permutation entropy. Second, a detection model for low-SNR microseismic events based on the least squares support vector machine is built by computing the multi-scale permutation entropy of the collected vibration signals and constructing a feature vector set of the signals. Finally, a comparative analysis of the microseismic events and noise signals in the experiment shows that their different characteristics can be fully expressed by multi-scale permutation entropy. The detection model, combined with the support vector machine's high classification accuracy and fast real-time performance, can meet the requirements of online, real-time extraction of microseismic events.
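A minimal sketch of the multi-scale permutation entropy feature extraction described above; the resulting feature vectors would then be passed to a support vector machine classifier (e.g., sklearn.svm.SVC as a stand-in, whereas the paper uses a least squares SVM). Embedding order, delay, and scale range are illustrative defaults:

```python
import math
import numpy as np

def permutation_entropy(x, order=4, delay=1, normalize=True):
    """Permutation entropy (Bandt-Pompe ordinal patterns) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    counts = {}
    n_windows = len(x) - (order - 1) * delay
    for i in range(n_windows):
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_windows
    pe = -np.sum(p * np.log(p))
    return pe / math.log(math.factorial(order)) if normalize else pe

def multiscale_permutation_entropy(x, scales=range(1, 6), order=4):
    """Coarse-grain the signal at each scale factor and compute its permutation
    entropy; the resulting vector serves as the classifier feature vector."""
    x = np.asarray(x, dtype=float)
    feats = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, order=order))
    return np.array(feats)
```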
Prien, Justin M; Prater, Bradley D; Qin, Qiang; Cockrill, Steven L
2010-02-15
Fast, sensitive, robust methods for "high-level" glycan screening are necessary during various stages of a biotherapeutic product's lifecycle, including clone selection, process changes, and quality control for lot release testing. Traditional glycan screening involves chromatographic or electrophoretic separation-based methods, and, although reproducible, these methods can be time-consuming. Even ultrahigh-performance chromatographic and microfluidic integrated LC/MS systems, which work on the tens of minute time scale, become lengthy when hundreds of samples are to be analyzed. Comparatively, a direct infusion mass spectrometry (MS)-based glycan screening method acquires data on a millisecond time scale, exhibits exquisite sensitivity and reproducibility, and is amenable to automated peak annotation. In addition, characterization of glycan species via sequential mass spectrometry can be performed simultaneously. Here, we demonstrate a quantitative high-throughput MS-based mapping approach using stable isotope 2-aminobenzoic acid (2-AA) for rapid "high-level" glycan screening.
A Structure of Experienced Time
NASA Astrophysics Data System (ADS)
Havel, Ivan M.
2005-10-01
The subjective experience of time will be taken as a primary motivation for an alternative, essentially discontinuous conception of time. Two types of such experience will be discussed, one based on personal episodic memory, the other on the theoretical fine texture of experienced time below the threshold of phenomenal awareness. The former case implies a discrete structure of temporal episodes on a large scale, while the latter case suggests endowing psychological time with a granular structure on a small scale, i.e. interpreting it as a semi-ordered flow of smeared (not point-like) subliminal time grains. Only on an intermediate temporal scale would the subjectively felt continuity and fluency of time emerge. Consequently, there is no locally smooth mapping of phenomenal time onto the real number continuum. Such a model has certain advantages; for instance, it avoids counterintuitive interpretations of some neuropsychological experiments (e.g. Libet's measurement) in which the temporal order of events is crucial.
NASA Astrophysics Data System (ADS)
Tai, Y.; Watanabe, T.; Nagata, K.
2018-03-01
A mixing volume model (MVM) originally proposed for molecular diffusion in incompressible flows is extended as a model for molecular diffusion and thermal conduction in compressible turbulence. The model, established for implementation in Lagrangian simulations, is based on the interactions among spatially distributed notional particles within a finite volume. The MVM is tested with the direct numerical simulation of compressible planar jets with the jet Mach number ranging from 0.6 to 2.6. The MVM well predicts molecular diffusion and thermal conduction for a wide range of the size of mixing volume and the number of mixing particles. In the transitional region of the jet, where the scalar field exhibits a sharp jump at the edge of the shear layer, a smaller mixing volume is required for an accurate prediction of mean effects of molecular diffusion. The mixing time scale in the model is defined as the time scale of diffusive effects at a length scale of the mixing volume. The mixing time scale is well correlated for passive scalar and temperature. Probability density functions of the mixing time scale are similar for molecular diffusion and thermal conduction when the mixing volume is larger than a dissipative scale because the mixing time scale at small scales is easily affected by different distributions of intermittent small-scale structures between passive scalar and temperature. The MVM with an assumption of equal mixing time scales for molecular diffusion and thermal conduction is useful in the modeling of the thermal conduction when the modeling of the dissipation rate of temperature fluctuations is difficult.
Statistical geometric affinity in human brain electric activity
NASA Astrophysics Data System (ADS)
Chornet-Lurbe, A.; Oteo, J. A.; Ros, J.
2007-05-01
The representation of the human electroencephalogram (EEG) records by neurophysiologists demands standardized time-amplitude scales for their correct conventional interpretation. In a suite of graphical experiments involving scaling affine transformations we have been able to convert electroencephalogram samples corresponding to any particular sleep phase and relaxed wakefulness into each other. We propound a statistical explanation for that finding in terms of data collapse. As a sequel, we determine characteristic time and amplitude scales and outline a possible physical interpretation. An analysis for characteristic times based on lacunarity is also carried out as well as a study of the synchrony between left and right EEG channels.
NASA Astrophysics Data System (ADS)
Liu, Xueyong; An, Haizhong; Huang, Shupei; Wen, Shaobo
2017-01-01
Aiming to investigate the evolution of mean and volatility spillovers between oil and stock markets in the time and frequency dimensions, we employed WTI crude oil prices, the S&P 500 (USA) index and the MICEX index (Russia) for the period Jan. 2003-Dec. 2014 as sample data. We first applied a wavelet-based GARCH-BEKK method to examine the spillover features in the frequency dimension. To consider the evolution of spillover effects in the time dimension at multiple scales, we then divided the full sample period into three sub-periods: a pre-crisis period, a crisis period, and a post-crisis period. The results indicate that spillover effects vary across wavelet scales in terms of strength and direction. By analyzing the time-varying linkage, we found different evolution features of the spillover effects between oil and the US stock market and between oil and the Russian stock market. The spillover relationship between oil and the US stock market is shifting to the short term, while the spillover relationship between oil and the Russian stock market is extending to all time scales. This result implies that the linkage between oil and the US stock market is weakening in the long term, while the linkage between oil and the Russian stock market is tightening at all time scales. This may explain why the US stock index and the Russian stock index showed opposite trends as the oil price fell in the post-crisis period.
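As a simplified, correlation-only stand-in for the wavelet-based GARCH-BEKK analysis (the multivariate GARCH estimation itself is not sketched here), the scale dependence of co-movement between two return series can be inspected by correlating their wavelet detail coefficients, assuming the PyWavelets package is available:

```python
import numpy as np
import pywt

def scale_wise_correlation(returns_a, returns_b, wavelet="db4", level=5):
    """Decompose two return series with the discrete wavelet transform and
    report the correlation of the detail coefficients at each level, a crude
    frequency-by-frequency view of co-movement (not a spillover test)."""
    a = pywt.wavedec(np.asarray(returns_a, dtype=float), wavelet, level=level)
    b = pywt.wavedec(np.asarray(returns_b, dtype=float), wavelet, level=level)
    # a[0]/b[0] are approximations; a[1:]/b[1:] are details from coarse to fine
    return [float(np.corrcoef(da, db)[0, 1]) for da, db in zip(a[1:], b[1:])]
```

Running the same function on sub-periods (pre-crisis, crisis, post-crisis) gives a rough picture of how scale-wise linkages evolve in time.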
Fluctuation scaling of quotation activities in the foreign exchange market
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro; Nishimura, Maiko; Hołyst, Janusz A.
2010-07-01
We study the scaling behavior of quotation activities for various currency pairs in the foreign exchange market. The components’ centrality is estimated from multiple time series and visualized as a currency pair network. A power-law relationship between the mean quotation activity and its standard deviation is found for each currency pair. The scaling exponent α and the ratio between common and specific fluctuations η increase with the length of the observation time window Δt. This means that for Δt=1 (min) the market dynamics are governed by specific processes, whereas at longer time scales Δt>100 (min) the common information flow becomes more important. We point out that quotation activities are not independently Poissonian for Δt=1 (min): temporally or mutually correlated quotation activities can occur even at this time scale. A stochastic model for the foreign exchange market based on a bipartite graph representation is proposed.
Periodicity and Multi-scale Analysis of Runoff and Sediment Load in the Wulanghe River, Jinsha River
NASA Astrophysics Data System (ADS)
Chen, Yiming
2018-01-01
Based on the annual runoff and sediment data (1959-2014) of the Zongguantian hydrological station, the time-frequency characteristics and the periodic rules of alternating high and low flow were analyzed at multiple time scales by the Morlet continuous wavelet transform (CWT). It is concluded that the primary periods of the runoff and sediment load time series at the different time scales were 12, 3 and 26 years and 18, 13 and 5 years, respectively. The analysis further predicts that both time series will gradually decrease and will remain in a high-flow period for around 8 years (from 2014 to 2022) and 10 years (from 2014 to 2020), respectively.
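A hedged sketch of a Morlet continuous wavelet analysis of an annual series, using scipy's morlet2 wavelet; the period grid and the "primary period" criterion (maximum time-averaged power) are illustrative simplifications of the full CWT variance analysis:

```python
import numpy as np
from scipy import signal

def morlet_power_spectrum(annual_series, periods=np.arange(2, 33), w0=6.0):
    """Continuous Morlet wavelet power of an annual series (dt = 1 year);
    rows of `power` correspond to the candidate periods in years."""
    x = np.asarray(annual_series, dtype=float)
    x = x - x.mean()
    # Convert target periods to morlet2 scale parameters (widths)
    widths = w0 * periods / (2.0 * np.pi)
    cwt_matrix = signal.cwt(x, signal.morlet2, widths, w=w0)
    power = np.abs(cwt_matrix) ** 2
    # Crude "primary period": the period with maximum time-averaged power
    primary = periods[np.argmax(power.mean(axis=1))]
    return primary, power
```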
Towards a critical transition theory under different temporal scales and noise strengths
NASA Astrophysics Data System (ADS)
Shi, Jifan; Li, Tiejun; Chen, Luonan
2016-03-01
The mechanism of critical phenomena or critical transitions has been recently studied from various aspects, in particular considering slow parameter change and small noise. In this article, we systematically classify critical transitions into three types based on temporal scales and noise strengths of dynamical systems. Specifically, the classification is made by comparing three important time scales τλ, τtran, and τergo, where τλ is the time scale of parameter change (e.g., the change of environment), τtran is the time scale when a particle or state transits from a metastable state into another, and τergo is the time scale when the system becomes ergodic. According to the time scales, we classify the critical transition behaviors as three types, i.e., state transition, basin transition, and distribution transition. Moreover, for each type of transition, there are two cases, i.e., single-trajectory transition and multitrajectory ensemble transition, which correspond to the transition of individual behavior and population behavior, respectively. We also define the critical point for each type of critical transition, derive several properties, and further propose the indicators for predicting critical transitions with numerical simulations. In addition, we show that the noise-to-signal ratio is effective to make the classification of critical transitions for real systems.
Tan, Zhengguo; Hohage, Thorsten; Kalentev, Oleksandr; Joseph, Arun A; Wang, Xiaoqing; Voit, Dirk; Merboldt, K Dietmar; Frahm, Jens
2017-12-01
The purpose of this work is to develop an automatic method for the scaling of unknowns in model-based nonlinear inverse reconstructions and to evaluate its application to real-time phase-contrast (RT-PC) flow magnetic resonance imaging (MRI). Model-based MRI reconstructions of parametric maps which describe a physical or physiological function require the solution of a nonlinear inverse problem, because the list of unknowns in the extended MRI signal equation comprises multiple functional parameters and all coil sensitivity profiles. Iterative solutions therefore rely on an appropriate scaling of unknowns to numerically balance partial derivatives and regularization terms. The scaling of unknowns emerges as a self-adjoint and positive-definite matrix which is expressible by its maximal eigenvalue and solved by power iterations. The proposed method is applied to RT-PC flow MRI based on highly undersampled acquisitions. Experimental validations include numerical phantoms providing ground truth and a wide range of human studies in the ascending aorta, carotid arteries, deep veins during muscular exercise and cerebrospinal fluid during deep respiration. For RT-PC flow MRI, model-based reconstructions with automatic scaling not only offer velocity maps with high spatiotemporal acuity and much reduced phase noise, but also ensure fast convergence as well as accurate and precise velocities for all conditions tested, i.e. for different velocity ranges, vessel sizes and the simultaneous presence of signals with velocity aliasing. In summary, the proposed automatic scaling of unknowns in model-based MRI reconstructions yields quantitatively reliable velocities for RT-PC flow MRI in various experimental scenarios.
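The scaling step described above reduces to estimating the maximal eigenvalue of a self-adjoint, positive-definite operator by power iterations. A generic sketch of that numerical building block, with the operator supplied only as a matrix-vector product (the convergence tolerance and iteration cap are assumptions):

```python
import numpy as np

def max_eigenvalue_power_iteration(apply_matrix, dim, n_iter=100, tol=1e-8, seed=0):
    """Estimate the maximal eigenvalue of a self-adjoint positive-definite
    operator given only as a matrix-vector product, via power iterations."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = apply_matrix(v)
        lam_new = float(v @ w)             # Rayleigh quotient
        v = w / np.linalg.norm(w)
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            return lam_new
        lam = lam_new
    return lam

# Example with an explicit SPD matrix: the largest eigenvalue is about 4.618
A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(max_eigenvalue_power_iteration(lambda v: A @ v, 2))
```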
Orbital time scale and new C-isotope record for Cenomanian-Turonian boundary stratotype
NASA Astrophysics Data System (ADS)
Sageman, Bradley B.; Meyers, Stephen R.; Arthur, Michael A.
2006-02-01
Previous time scales for the Cenomanian-Turonian boundary (CTB) interval containing Oceanic Anoxic Event II (OAE II) vary by a factor of three. In this paper we present a new orbital time scale for the CTB stratotype established independently of radiometric, biostratigraphic, or geochemical data sets, update revisions of CTB biostratigraphic zonation, and provide a new detailed carbon isotopic record for the CTB study interval. The orbital time scale allows an independent assessment of basal biozone ages relative to the new CTB date of 93.55 Ma (GTS04). The δ13Corg data document the abrupt onset of OAE II, significant variability in δ13Corg values, and values enriched to almost -22‰. These new data underscore the difficulty in defining OAE II termination. Using the new isotope curve and time scale, estimates of OAE II duration can be determined and exported to other sites based on integration of well-established chemostratigraphic and biostratigraphic datums. The new data will allow more accurate calculations of biogeochemical and paleobiologic rates across the CTB.
A Sub-ps Stability Time Transfer Method Based on Optical Modems.
Frank, Florian; Stefani, Fabio; Tuckey, Philip; Pottie, Paul-Eric
2018-06-01
Coherent optical fiber links have recently demonstrated their ability to compare the most advanced optical clocks on a continental scale. The outstanding performance of optical clocks is stimulating the community to build much more stable time scales and to develop the means to compare them. Optical fiber links are one solution that needs to be explored. Here, we investigate a new method to transfer time based on optical demodulation of a phase step imprinted onto the optical carrier. We show the implementation of a proof-of-principle experiment over 86 km of urban fiber, and report time-interval transfer of a 1 pulse-per-second signal with sub-ps stability from 10 s to one day of measurement time. Prospects for future development and implementation in active telecommunication networks, regarding not only performance but also compatibility, conclude this paper.
NASA Astrophysics Data System (ADS)
Lifton, N. A.
2014-12-01
A recently published cosmogenic nuclide production rate scaling model based on analytical fits to Monte Carlo simulations of atmospheric cosmic ray flux spectra (both of which agree well with measured spectra) (Lifton et al., 2014, Earth Planet. Sci. Lett. 386, 149-160: termed the LSD model) provides two main advantages over previous scaling models: identification and quantification of potential sources of bias in the earlier models, and the ability to generate nuclide-specific scaling factors easily for a wide range of input parameters. The new model also provides a flexible framework for exploring the implications of advances in model inputs. In this work, the scaling implications of two recent time-dependent spherical harmonic geomagnetic models spanning the Holocene will be explored. Korte and Constable (2011, Phys. Earth Planet. Int. 188, 247-259) and Korte et al. (2011, Earth Planet. Sci. Lett. 312, 497-505) recently updated earlier spherical harmonic paleomagnetic models used by Lifton et al. (2014) with paleomagnetic measurements from sediment cores in addition to archeomagnetic and volcanic data. These updated models offer improved accuracy over the previous versions, in part due to increased temporal and spatial data coverage. With the new models as input, trajectory-traced estimates of effective vertical cutoff rigidity (RC, the standard method for ordering cosmic ray data) yield significantly different time-integrated scaling predictions when compared to the earlier models. These results will be compared to scaling predictions using another recent time-dependent spherical harmonic model of the Holocene geomagnetic field by Pavón-Carrasco et al. (2014, Earth Planet. Sci. Lett. 388, 98-109), based solely on archeomagnetic and volcanic paleomagnetic data, but extending to 14 ka. In addition, the potential effects of time-dependent atmospheric models on LSD scaling predictions will be presented. Given the typical dominance of altitudinal over latitudinal scaling effects on cosmogenic nuclide production, incorporating transient global simulations of atmospheric structure (e.g., Liu et al., 2009, Science 325, 310-314) into scaling frameworks may contribute to improved understanding of long-term production rate variations.
Deviations from uniform power law scaling in nonstationary time series
NASA Technical Reports Server (NTRS)
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
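For reference, the Fano and Allan factors used here are simple counting statistics of the event (beat) times. A minimal sketch, with the counting-window length as the scanned scale:

```python
import numpy as np

def fano_allan_factors(event_times, window):
    """Fano and Allan factors of a point process (e.g., heartbeat times) for
    counting windows of length `window` (same units as event_times)."""
    event_times = np.asarray(event_times, dtype=float)
    edges = np.arange(0.0, event_times.max() + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    fano = counts.var() / counts.mean()
    diffs = np.diff(counts)
    allan = np.mean(diffs ** 2) / (2.0 * counts.mean())
    return fano, allan

# Scanning `window` over several orders of magnitude and plotting both factors
# on log-log axes reveals deviations from uniform power-law scaling.
```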
Scaling and modeling of turbulent suspension flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
1989-01-01
Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on a two-fluid, continuum formulation. The modes of particle-fluid interaction are discussed based on the length and time scale ratios, which depend on the properties of the particles and the characteristics of the flow turbulence. For particle sizes smaller than or comparable with the Kolmogorov length scale and concentrations low enough to neglect direct particle-particle interaction, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for polydisperse effects and the extension to dense suspension flows.
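The central scaling parameter implied by this time-scale-ratio argument is the Stokes number, the ratio of the particle response time to the Kolmogorov time scale. A back-of-the-envelope sketch using the standard dilute-limit Stokes drag estimate (illustrative numbers, not the paper's multiple-scale model):

```python
import numpy as np

def stokes_number(particle_diameter, particle_density, fluid_viscosity,
                  kinematic_viscosity, dissipation_rate):
    """Ratio of the particle response time to the Kolmogorov time scale,
    the standard scaling parameter for particle-turbulence interaction."""
    tau_p = particle_density * particle_diameter ** 2 / (18.0 * fluid_viscosity)
    tau_eta = np.sqrt(kinematic_viscosity / dissipation_rate)
    return tau_p / tau_eta

# e.g. 50-micron glass beads in air with a dissipation rate of 1 m^2/s^3
print(stokes_number(50e-6, 2500.0, 1.8e-5, 1.5e-5, 1.0))
```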
Controllability of multiplex, multi-time-scale networks
NASA Astrophysics Data System (ADS)
Pósfai, Márton; Gao, Jianxi; Cornelius, Sean P.; Barabási, Albert-László; D'Souza, Raissa M.
2016-09-01
The paradigm of layered networks is used to describe many real-world systems, from biological networks to social organizations and transportation systems. While recently there has been much progress in understanding the general properties of multilayer networks, our understanding of how to control such systems remains limited. One fundamental aspect that makes this endeavor challenging is that each layer can operate at a different time scale; thus, we cannot directly apply standard ideas from structural control theory of individual networks. Here we address the problem of controlling multilayer and multi-time-scale networks focusing on two-layer multiplex networks with one-to-one interlayer coupling. We investigate the practically relevant case when the control signal is applied to the nodes of one layer. We develop a theory based on disjoint path covers to determine the minimum number of inputs (Ni) necessary for full control. We show that if both layers operate on the same time scale, then the network structure of both layers equally affect controllability. In the presence of time-scale separation, controllability is enhanced if the controller interacts with the faster layer: Ni decreases as the time-scale difference increases up to a critical time-scale difference, above which Ni remains constant and is completely determined by the faster layer. We show that the critical time-scale difference is large if layer I is easy and layer II is hard to control in isolation. In contrast, control becomes increasingly difficult if the controller interacts with the layer operating on the slower time scale and increasing time-scale separation leads to increased Ni, again up to a critical value, above which Ni still depends on the structure of both layers. This critical value is largely determined by the longest path in the faster layer that does not involve cycles. By identifying the underlying mechanisms that connect time-scale difference and controllability for a simplified model, we provide crucial insight into disentangling how our ability to control real interacting complex systems is affected by a variety of sources of complexity.
NASA Astrophysics Data System (ADS)
Moore, Keegan J.; Bunyan, Jonathan; Tawfick, Sameh; Gendelman, Oleg V.; Li, Shuangbao; Leamy, Michael; Vakakis, Alexander F.
2018-01-01
In linear time-invariant dynamical and acoustical systems, reciprocity holds by the Onsager-Casimir principle of microscopic reversibility, and this can be broken only by odd external biases, nonlinearities, or time-dependent properties. A concept is proposed in this work for breaking dynamic reciprocity based on irreversible nonlinear energy transfers from large to small scales in a system with nonlinear hierarchical internal structure, asymmetry, and intentional strong stiffness nonlinearity. The resulting nonreciprocal large-to-small scale energy transfers mimic analogous nonlinear energy transfer cascades that occur in nature (e.g., in turbulent flows), and are caused by the strong frequency-energy dependence of the essentially nonlinear small-scale components of the system considered. The theoretical part of this work is mainly based on action-angle transformations, followed by direct numerical simulations of the resulting system of nonlinear coupled oscillators. The experimental part considers a system with two scales—a linear large-scale oscillator coupled to a small scale by a nonlinear spring—and validates the theoretical findings demonstrating nonreciprocal large-to-small scale energy transfer. The proposed study promotes a paradigm for designing nonreciprocal acoustic materials harnessing strong nonlinearity, which in a future application will be implemented in designing lattices incorporating nonlinear hierarchical internal structures, asymmetry, and scale mixing.
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
The two articles in this issue of the European Physical Journal Special Topics cover topics in Econophysics and GPU computing in the last years. In the first article [1], the formation of market prices for financial assets is described which can be understood as superposition of individual actions of market participants, in which they provide cumulative supply and demand. This concept of macroscopic properties emerging from microscopic interactions among the various subcomponents of the overall system is also well-known in statistical physics. The distribution of price changes in financial markets is clearly non-Gaussian leading to distinct features of the price process, such as scaling behavior, non-trivial correlation functions and clustered volatility. This article focuses on the analysis of financial time series and their correlations. A method is used for quantifying pattern based correlations of a time series. With this methodology, evidence is found that typical behavioral patterns of financial market participants manifest over short time scales, i.e., that reactions to given price patterns are not entirely random, but that similar price patterns also cause similar reactions. Based on the investigation of the complex correlations in financial time series, the question arises, which properties change when switching from a positive trend to a negative trend. An empirical quantification by rescaling provides the result that new price extrema coincide with a significant increase in transaction volume and a significant decrease in the length of corresponding time intervals between transactions. These findings are independent of the time scale over 9 orders of magnitude, and they exhibit characteristics which one can also find in other complex systems in nature (and in physical systems in particular). These properties are independent of the markets analyzed. Trends that exist only for a few seconds show the same characteristics as trends on time scales of several months. Thus, it is possible to study financial bubbles and their collapses in more detail, because trend switching processes occur with higher frequency on small time scales. In addition, a Monte Carlo based simulation of financial markets is analyzed and extended in order to reproduce empirical features and to gain insight into their causes. These causes include both financial market microstructure and the risk aversion of market participants.
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
Modeling specific action potentials in the human atria based on a minimal single-cell model.
Richter, Yvonne; Lind, Pedro G; Maass, Philipp
2018-01-01
We present an effective method to model empirical action potentials of specific patients in the human atria, based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic currents are introduced, each governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width and shape. Possible applications for supporting clinical diagnosis are pointed out.
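The fitting step can be illustrated with a deliberately crude, three-time-scale action-potential template and scipy's least-squares optimizer; this is only a sketch of the nonlinear optimization idea, not the Bueno-Orovio-Cherry-Fenton minimal model used in the paper, and the template shape and starting guesses are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def ap_template(t, amp, tau_up, tau_plateau, tau_down, t_on=0.0):
    """Crude action-potential shape: fast upstroke, slow plateau decay and a
    faster late repolarisation.  Purely illustrative."""
    s = np.clip(t - t_on, 0.0, None)
    upstroke = 1.0 - np.exp(-s / tau_up)
    repol = np.exp(-s / tau_plateau) / (1.0 + np.exp((s - 3.0 * tau_plateau) / tau_down))
    return amp * upstroke * repol

def fit_time_scales(t, v_measured):
    """Least-squares estimation of the characteristic time scales that best
    reproduce a measured action potential (amplitude, width and shape)."""
    def residual(p):
        return ap_template(t, *p) - v_measured
    p0 = np.array([np.max(v_measured), 1.0, 100.0, 20.0])   # ms, rough guesses
    fit = least_squares(residual, p0, bounds=(1e-3, np.inf))
    return fit.x
```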
On Which Microphysical Time Scales to Use in Studies of Entrainment-Mixing Mechanisms in Clouds
Lu, Chunsong; Liu, Yangang; Zhu, Bin; ...
2018-03-23
The commonly used time scales in entrainment-mixing studies are examined in this paper to seek the most appropriate one, based on aircraft observations of cumulus clouds from the RACORO campaign and numerical simulations with the Explicit Mixing Parcel Model. The time scales include: τ evap, the time for droplet complete evaporation; τ phase, the time for saturation ratio deficit (S) to reach 1/e of its initial value; τ satu, the time for S to reach -0.5%; τ react, the time for complete droplet evaporation or S to reach -0.5%. It is found that the proper time scale to use depends on the specific objectives of entrainment-mixing studies. First, if the focus is on the variations of liquid water content (LWC) and S, then τ react for saturation, τ satu and τ phase are almost equivalently appropriate, because they all represent the rate of dry air reaching saturation or of LWC decrease. Second, if one focuses on the variations of droplet size and number concentration, τ react for complete evaporation and τ evap are proper because they characterize how fast droplets evaporate and whether number concentration decreases. Moreover, τ react for complete evaporation and τ evap are always positively correlated with homogeneous mixing degree (ψ), thus the two time scales, especially τ evap, are recommended for developing parameterizations. However, ψ and the other time scales can be negatively, positively, or not correlated, depending on the dominant factors of the entrained air (i.e., relative humidity or aerosols). Third and finally, all time scales are proportional to each other under certain microphysical and thermodynamic conditions.
Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi
2018-02-01
The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control objective is to maintain the positions of the follower helicopters in formation in the presence of external disturbances. The helicopter model is divided into a second-order outer-loop subsystem and a second-order inner-loop subsystem based on its multiple-time-scale features. Using the radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by an adaptive law. Next, based on FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and the multiple-time-scale principle ensure that the control goal is achieved in finite time. The effectiveness of the proposed FMNNDO and controllers is then verified by numerical simulations.
Classification of Animal Movement Behavior through Residence in Space and Time.
Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R
2017-01-01
Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad- scale understanding of animal movement ecology.
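A minimal sketch of the RST idea described in the abstract above: classify each track point by the residence time and residence distance accumulated within a constant search radius, then separate time-intensive, time-and-distance-intensive, and transit behavior. The function and threshold values are illustrative assumptions, not the authors' implementation.

```python
# Simplified Residence in Space and Time (RST) classification sketch.
# x, y: positions; t: times; radius: constant search radius (all assumed inputs).
import numpy as np

def rst_classify(x, y, t, radius):
    x, y, t = map(np.asarray, (x, y, t))
    n = len(x)
    res_time = np.empty(n)
    res_dist = np.empty(n)
    steps = np.hypot(np.diff(x), np.diff(y))
    for i in range(n):
        inside = np.hypot(x - x[i], y - y[i]) <= radius
        res_time[i] = t[inside].max() - t[inside].min()      # time spent near point i
        res_dist[i] = steps[inside[:-1] & inside[1:]].sum()  # path length near point i
    # normalize and combine: positive score ~ time-intensive (rest),
    # near zero ~ time & distance intensive (area-restricted search),
    # negative ~ transit (short time and distance)
    zt = (res_time - res_time.mean()) / res_time.std()
    zd = (res_dist - res_dist.mean()) / res_dist.std()
    score = zt - zd
    return np.where(score > 0.5, "rest",
           np.where(score < -0.5, "transit", "area-restricted search"))
```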
Estimating Agricultural Nitrous Oxide Emissions
USDA-ARS?s Scientific Manuscript database
Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
Influence of the Time Scale on the Construction of Financial Networks
Emmert-Streib, Frank; Dehmer, Matthias
2010-01-01
Background In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. Methodology/Principal Findings For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, an edge is included in the network only if the corresponding correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks will be studied in this paper. Conclusions/Significance Numerical analysis of four different measures as a function of the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis. PMID:20949124
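A minimal sketch of the construction procedure summarized above: split the return series into non-overlapping windows (the "time scale") and keep an edge only where the Pearson correlation differs significantly from zero. The price/return array and the significance level are assumed inputs.

```python
# Build one unweighted, undirected correlation network per non-overlapping window.
import numpy as np
from scipy import stats

def correlation_networks(log_returns, window, alpha=0.01):
    """log_returns: (T, N) array of daily log-returns; window: interval length in days."""
    T, N = log_returns.shape
    networks = []
    for start in range(0, T - window + 1, window):
        seg = log_returns[start:start + window]
        adj = np.zeros((N, N), dtype=bool)
        for i in range(N):
            for j in range(i + 1, N):
                r, p = stats.pearsonr(seg[:, i], seg[:, j])
                adj[i, j] = adj[j, i] = p < alpha   # edge only if significantly non-zero
        networks.append(adj)
    return networks
```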
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing errors, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
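A sketch of a wavelet-based multiscale Nash-Sutcliffe criterion in the spirit of the approach above: decompose observed and simulated series with a non-decimated à trous transform and score each scale separately. The B3-spline smoothing kernel and the number of levels are common choices assumed here; this is not the authors' code.

```python
# À trous (non-decimated) wavelet decomposition and per-scale Nash-Sutcliffe scores.
import numpy as np

def a_trous(x, levels):
    """Return detail components d_1..d_levels and the final smooth approximation."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    smooth, details = np.asarray(x, dtype=float), []
    for j in range(levels):
        kernel = np.zeros(4 * 2**j + 1)
        kernel[::2**j] = h                          # insert 2^j - 1 zeros between taps
        new_smooth = np.convolve(smooth, kernel, mode="same")
        details.append(smooth - new_smooth)
        smooth = new_smooth
    return details, smooth

def multiscale_nse(obs, sim, levels=4):
    d_obs, s_obs = a_trous(obs, levels)
    d_sim, s_sim = a_trous(sim, levels)
    nse = []
    for do, ds in zip(d_obs + [s_obs], d_sim + [s_sim]):
        nse.append(1.0 - np.sum((do - ds) ** 2) / np.sum((do - do.mean()) ** 2))
    return nse   # one NSE value per time scale (details first, smooth last)
```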
NASA Astrophysics Data System (ADS)
Moon, Seulgi; Shelef, Eitan; Hilley, George E.
2015-05-01
In this study, we model postglacial surface processes and examine the evolution of the topography and denudation rates within the deglaciated Washington Cascades to understand the controls on and time scales of landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically based slope stability model coupled to a stochastic model of the generation of landslides. The model parameters of river incision and stochastic landslides are calibrated based on the rates and distribution of thousand-year-time scale denudation rates measured from cosmogenic 10Be isotopes. The probability distributions of those model parameters calculated based on a Bayesian inversion scheme show comparable ranges from previous studies in similar rock types and climatic conditions. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), whereas precipitation and slopes affect the spatial variation in landslide denudation rates. Simulation results show that postglacial denudation rates decay over time and take longer than 100 kyr to reach time-invariant rates. Over time, the landslides in the model consume the steep slopes characteristic of deglaciated landscapes. This response time scale is on the order of or longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may produce a significant and prolonged impact on denudation and topography.
Modeling Climate Responses to Spectral Solar Forcing on Centennial and Decadal Time Scales
NASA Technical Reports Server (NTRS)
Wen, G.; Cahalan, R.; Rind, D.; Jonas, J.; Pilewskie, P.; Harder, J.
2012-01-01
We report a series of experiments to explore climate responses to two types of solar spectral forcing on decadal and centennial time scales - one based on prior reconstructions, and another implied by recent observations from the SORCE (Solar Radiation and Climate Experiment) SIM (Spectral Irradiance Monitor). We apply these forcings to the Goddard Institute for Space Studies (GISS) Global/Middle Atmosphere Model (GCMAM), which couples the atmosphere with the ocean and has a model top near the mesopause, allowing us to examine the full response to the two solar forcing scenarios. We show different climate responses to the two solar forcing scenarios on decadal time scales and also trends on centennial time scales. Differences between solar maximum and solar minimum conditions are highlighted, including impacts of the time-lagged response of the lower atmosphere and ocean. This contrasts with studies that assume separate equilibrium conditions at solar maximum and minimum. We discuss model feedback mechanisms involved in the solar-forced climate variations.
NASA Astrophysics Data System (ADS)
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
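A minimal rank-1 CP (PARAFAC) decomposition by alternating least squares, to illustrate how time, longitude, and latitude mode vectors such as U^(1), U^(2), and U^(3) can be extracted from a three-dimensional TEC tensor as described above. The tensor itself is an assumed input; this is a generic sketch, not the authors' processing chain.

```python
# Rank-1 CP decomposition of a (time, lon, lat) tensor via alternating least squares.
import numpy as np

def rank1_cp(T, iters=100):
    """T: 3-D array. Returns unit mode vectors u1, u2, u3 and the overall scale."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    u1, u2, u3 = (rng.standard_normal(n) for n in (I, J, K))
    u1, u2, u3 = (v / np.linalg.norm(v) for v in (u1, u2, u3))
    for _ in range(iters):
        u1 = np.einsum('ijk,j,k->i', T, u2, u3); u1 /= np.linalg.norm(u1)
        u2 = np.einsum('ijk,i,k->j', T, u1, u3); u2 /= np.linalg.norm(u2)
        u3 = np.einsum('ijk,i,j->k', T, u1, u2); u3 /= np.linalg.norm(u3)
    scale = np.einsum('ijk,i,j,k->', T, u1, u2, u3)
    return u1, u2, u3, scale
```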
Extracting information from AGN variability
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.
2017-09-01
Active galactic nuclei (AGNs) exhibit rapid, high-amplitude stochastic flux variations across the entire electromagnetic spectrum on time-scales ranging from hours to years. The cause of this variability is poorly understood. We present a Green's function-based method for using variability to (1) measure the time-scales on which flux perturbations evolve and (2) characterize the driving flux perturbations. We model the observed light curve of an AGN as a linear differential equation driven by stochastic impulses. We analyse the light curve of the Kepler AGN Zw 229-15 and find that the observed variability behaviour can be modelled as a damped harmonic oscillator perturbed by a coloured noise process. The model power spectrum turns over on time-scale 385 d. On shorter time-scales, the log-power-spectrum slope varies between 2 and 4, explaining the behaviour noted by previous studies. We recover and identify both the 5.6 and 67 d time-scales reported by previous work using the Green's function of the Continuous-time AutoRegressive Moving Average equation rather than by directly fitting the power spectrum of the light curve. These are the time-scales on which flux perturbations grow, and on which flux perturbations decay back to the steady-state flux level, respectively. We make the software package kālī used to study light curves using our method available to the community.
A wavelet analysis of scaling laws and long-memory in stock market volatility
NASA Astrophysics Data System (ADS)
Vuorenmaa, Tommi A.
2005-05-01
This paper studies the time-varying behavior of scaling laws and long-memory. This is motivated by the earlier finding that in the FX markets a single scaling factor might not always be sufficient across all relevant timescales: a different region may exist for intradaily time-scales and for larger time-scales. Specifically, this paper investigates (i) if different scaling regions appear in the stock market as well, (ii) if the scaling factor systematically differs from the Brownian, (iii) if the scaling factor is constant in time, and (iv) if the behavior can be explained by the heterogeneity of the players in the market and/or by intraday volatility periodicity. The wavelet method is used because it delivers a multiresolution decomposition and has excellent local adaptiveness properties. As a consequence, a wavelet-based OLS method allows for consistent estimation of long-memory. Thus issues (i)-(iv) shed light on the magnitude and behavior of the long-memory parameter, as well. The data are the 5-minute volatility series of Nokia Oyj at the Helsinki Stock Exchange around the burst of the IT-bubble. One period represents the era of "irrational exuberance" and the other the time after it. The results show that different scaling regions (i.e. multiscaling) may appear in the stock markets and not only in the FX markets, that the scaling factor and the long-memory parameter are systematically different from the Brownian and do not have to be constant in time, and that the behavior can be explained for a significant part by an intraday volatility periodicity called the New York effect. This effect was magnified by the frenzied trading of short-term speculators in the bubble period. The stronger long-memory found is also attributable to irrational exuberance.
OpenMP parallelization of a gridded SWAT (SWATG)
NASA Astrophysics Data System (ADS)
Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin
2017-12-01
Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations faces the same problem. The long computation times limit applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with one CPU and a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large-scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large-scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large-scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach improved significantly the recovery of the small-scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large-scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
Dynamic and Thermal Turbulent Time Scale Modelling for Homogeneous Shear Flows
NASA Technical Reports Server (NTRS)
Schwab, John R.; Lakshminarayana, Budugur
1994-01-01
A new turbulence model, based upon dynamic and thermal turbulent time scale transport equations, is developed and applied to homogeneous shear flows with constant velocity and temperature gradients. The new model comprises transport equations for k, the turbulent kinetic energy; tau, the dynamic time scale; k(sub theta), the fluctuating temperature variance; and tau(sub theta), the thermal time scale. It offers conceptually parallel modeling of the dynamic and thermal turbulence at the two equation level, and eliminates the customary prescription of an empirical turbulent Prandtl number, Pr(sub t), thus permitting a more generalized prediction capability for turbulent heat transfer in complex flows and geometries. The new model also incorporates constitutive relations, based upon invariant theory, that allow the effects of nonequilibrium to modify the primary coefficients for the turbulent shear stress and heat flux. Predictions of the new model, along with those from two other similar models, are compared with experimental data for decaying homogeneous dynamic and thermal turbulence, homogeneous turbulence with constant temperature gradient, and homogeneous turbulence with constant temperature gradient and constant velocity gradient. The new model offers improvement in agreement with the data for most cases considered in this work, although it was no better than the other models for several cases where all the models performed poorly.
Step scaling and the Yang-Mills gradient flow
NASA Astrophysics Data System (ADS)
Lüscher, Martin
2014-06-01
The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0 , T] and all fields satisfy Dirichlet boundary conditions at time 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.
Charge carrier recombination dynamics in perovskite and polymer solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulke, Andreas; Kniepert, Juliane; Kurpiers, Jona
2016-03-14
Time-delayed collection field experiments are applied to planar organometal halide perovskite (CH3NH3PbI3) based solar cells to investigate charge carrier recombination in a fully working solar cell at the nanosecond to microsecond time scale. Recombination of mobile (extractable) charges is shown to follow second-order recombination dynamics for all fluences and time scales tested. Most importantly, the bimolecular recombination coefficient is found to be time-dependent, with an initial value of ca. 10^-9 cm^3/s and a progressive reduction within the first tens of nanoseconds. Comparison to the prototypical organic bulk heterojunction device PTB7:PC71BM yields important differences with regard to the mechanism and time scale of free carrier recombination.
Molecular dynamics at low time resolution.
Faccioli, P
2010-10-28
The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of a picosecond to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equation of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a time-dependent, calculable correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power, hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point-particle in a one-dimensional toy model and the denaturation of a protein.
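A sketch of the setting above: an Euler-Maruyama integrator for the overdamped Langevin equation in which the bare diffusion coefficient can be replaced by an effective, time-step-dependent value. The renormalized correction derived in the paper is not reproduced; D_eff below is a hypothetical placeholder.

```python
# Overdamped Langevin dynamics with a (possibly step-dependent) effective diffusion coefficient.
import numpy as np

def D_eff(D0, dt):
    # Placeholder: the paper derives the correction analytically by averaging out
    # dynamics faster than dt; here we simply return the bare coefficient.
    return D0

def overdamped_langevin(force, x0, D0, dt, n_steps, kT=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = float(x0)
    D = D_eff(D0, dt)
    traj = np.empty(n_steps + 1)
    traj[0] = x
    for n in range(n_steps):
        drift = D / kT * force(x) * dt                   # mobility = D / kT
        noise = np.sqrt(2.0 * D * dt) * rng.standard_normal()
        x += drift + noise
        traj[n + 1] = x
    return traj

# example: diffusion in a double-well potential U(x) = (x^2 - 1)^2
traj = overdamped_langevin(lambda x: -4.0 * x * (x * x - 1.0), x0=-1.0,
                           D0=1.0, dt=1e-3, n_steps=10_000)
```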
NASA Astrophysics Data System (ADS)
Fitzgerald, Michael; Danaia, Lena; McKinnon, David H.
2017-07-01
In recent years, calls for the adoption of inquiry-based pedagogies in the science classroom have formed a part of the recommendations for large-scale high school science reforms. However, these pedagogies have been problematic to implement at scale. This research explores the perceptions of 34 positively inclined early-adopter teachers in relation to their implementation of inquiry-based pedagogies. The teachers were part of a large-scale Australian high school intervention project based around astronomy. In a series of semi-structured interviews, the teachers identified a number of common barriers that prevented them from implementing inquiry-based approaches. The most important barriers identified include the extreme time restrictions on all scales, the poverty of their common professional development experiences, their lack of good models and definitions for what inquiry-based teaching actually is, and the lack of good resources enabling the capacity for change. Implications for expectations of teachers and their professional learning during educational reform and curriculum change are discussed.
On the use of variability time-scales as an early classifier of radio transients and variables
NASA Astrophysics Data System (ADS)
Pietka, M.; Staley, T. D.; Pretorius, M. L.; Fender, R. P.
2017-11-01
We have shown previously that a broad correlation between the peak radio luminosity and the variability time-scales, approximately L ∝ τ5, exists for variable synchrotron emitting sources and that different classes of astrophysical sources occupy different regions of luminosity and time-scale space. Based on those results, we investigate whether the most basic information available for a newly discovered radio variable or transient - their rise and/or decline rate - can be used to set initial constraints on the class of events from which they originate. We have analysed a sample of ≈800 synchrotron flares, selected from light curves of ≈90 sources observed at 5-8 GHz, representing a wide range of astrophysical phenomena, from flare stars to supermassive black holes. Selection of outbursts from the noisy radio light curves has been done automatically in order to ensure reproducibility of results. The distribution of rise/decline rates for the selected flares is modelled as a Gaussian probability distribution for each class of object, and further convolved with estimated areal density of that class in order to correct for the strong bias in our sample. We show in this way that comparing the measured variability time-scale of a radio transient/variable of unknown origin can provide an early, albeit approximate, classification of the object, and could form part of a suite of measurements used to provide early categorization of such events. Finally, we also discuss the effect scintillating sources will have on our ability to classify events based on their variability time-scales.
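A toy version of the classification idea above: model the distribution of rise or decline rates for each source class as a Gaussian, weight by an assumed areal density (prior), and return posterior class probabilities for a newly measured rate. All class parameters below are made-up placeholders, not values from the paper.

```python
# Posterior class probability for a radio transient given its measured variability rate.
import numpy as np
from scipy.stats import norm

classes = {
    #                 mean log10(rate), sigma, areal density (prior weight)
    "flare star":        (-1.0, 0.6, 0.70),
    "X-ray binary":      ( 0.0, 0.5, 0.25),
    "AGN":               (-2.0, 0.8, 0.05),
}

def classify(log_rate):
    weights = {name: norm.pdf(log_rate, mu, sig) * prior
               for name, (mu, sig, prior) in classes.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(classify(-0.8))   # posterior probability of each class for this rate
```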
Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.
Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E
2017-07-01
We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Furthermore, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
Biogeochemistry from Gliders at the Hawaii Ocean Times-Series
NASA Astrophysics Data System (ADS)
Nicholson, D. P.; Barone, B.; Karl, D. M.
2016-02-01
At the Hawaii Ocean Time-series (HOT), autonomous underwater gliders equipped with biogeochemical sensors observe the oceans for months at a time, sampling spatiotemporal scales missed by the ship-based programs. Over the last decade, glider data augmented by a foundation of time-series observations have shed light on biogeochemical dynamics occurring spatially at meso- and submesoscales and temporally on scales from diel to annual. We present insights gained from the synergy between glider observations, time-series measurements and remote sensing in the subtropical North Pacific. We focus on diel variability observed in dissolved oxygen and bio-optics and approaches to autonomously quantify net community production and gross primary production (GPP) as developed during the 2012 Hawaii Ocean Experiment - DYnamics of Light And Nutrients (HOE-DYLAN). Glider-based GPP measurements were extended to explore the relationship between GPP and mesoscale context over multiple years of Seaglider deployments.
Cosmogenic radionuclides as a synchronisation tool - present status
NASA Astrophysics Data System (ADS)
Muscheler, Raimund; Adolphi, Florian; Mekhaldi, Florian; Mellström, Anette; Svensson, Anders; Aldahan, Ala; Possnert, Göran
2014-05-01
Changes in the flux of galactic cosmic rays into Earth's atmosphere produce variations in the production rates of cosmogenic radionuclides. The resulting globally synchronous signal in cosmogenic radionuclide records can be used to compare time scales and synchronise climate records. The most prominent example is the 14C wiggle-match dating approach, where variations in the atmospheric 14C concentration are used to match climate records and the tree-ring based part of the 14C calibration record. This approach can be extended to other cosmogenic radionuclide records such as 10Be time series, provided that the different geochemical behaviour of 10Be and 14C is taken into account. Here we will present some recent results that illustrate the potential of using cosmogenic radionuclide records for comparing and synchronising different time scales. The focus will be on the last 50000 years, where we will show examples of how geomagnetic field, solar activity and unusual short-term cosmic ray changes can be used for comparing ice core, tree ring and sediment time scales. We will discuss some unexpected offsets between the Greenland ice core and 14C time scales and examine how far back in time solar-induced 10Be and 14C variations can presently be used to reliably synchronise ice core and 14C time scales.
NASA Astrophysics Data System (ADS)
Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang
2016-05-01
For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, which is one of the test conditions most similar to a flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of the GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed with the time-varying influence, can be tracked through GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the degree of GW-GMM migration and thus reveal crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.
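A conceptual sketch of the monitoring step described above: fit Gaussian mixture models to guided-wave damage-index features collected under time-varying conditions, then track the migration of the current GMM away from the baseline GMM with a Monte Carlo Kullback-Leibler divergence. The "best match" refinement of the paper is not reproduced; feature arrays are assumed inputs.

```python
# GMM fitting and KL-divergence-based migration tracking for guided-wave features.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(features, n_components=3, seed=0):
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(features)

def kl_divergence_mc(gmm_p, gmm_q, n_samples=20000):
    """Monte Carlo estimate of KL(P || Q) between two fitted mixtures."""
    x, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x)))

# usage sketch: baseline_features and current_features are (n_samples, n_dims)
# arrays of GW damage-index features (assumed inputs)
# baseline = fit_gmm(baseline_features)
# current  = fit_gmm(current_features)
# migration_index = kl_divergence_mc(current, baseline)
```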
Jamroz, Michal; Orozco, Modesto; Kolinski, Andrzej; Kmiecik, Sebastian
2013-01-08
It is widely recognized that atomistic Molecular Dynamics (MD), a classical simulation method, captures the essential physics of protein dynamics. That idea is supported by a theoretical study showing that various MD force-fields provide a consensus picture of protein fluctuations in aqueous solution [Rueda, M. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 796-801]. However, atomistic MD cannot be applied to most biologically relevant processes due to its limitation to relatively short time scales. Much longer time scales can be accessed by properly designed coarse-grained models. We demonstrate that the aforementioned consensus view of protein dynamics from short (nanosecond) time scale MD simulations is fairly consistent with the dynamics of the coarse-grained protein model - the CABS model. The CABS model employs stochastic dynamics (a Monte Carlo method) and a knowledge-based force-field, which is not biased toward the native structure of a simulated protein. Since CABS-based dynamics allows for the simulation of entire folding (or multiple folding events) in a single run, integration of the CABS approach with all-atom MD promises a convenient (and computationally feasible) means for the long-time multiscale molecular modeling of protein systems with atomistic resolution.
Mouse Activity across Time Scales: Fractal Scenarios
Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.
2014-01-01
In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity - a typical 1/f complex pattern - while for large time intervals there is anti-correlation. High correlation of short intervals ( to : waking state and to : SWS) is related to highly coordinated muscle activity. In the waking state we associate high correlation both to muscle activity and to mouse stereotyped movements (grooming, waking, etc.). On the other side, the observed anti-correlation over large time scales ( to : waking state and to : SWS) during SWS appears related to a feedback autonomic response. The transition from correlated regime at short scales to an anti-correlated regime at large scales during SWS is given by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompass a behavioural time scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements each one with a characteristic time correlation. Finally, we compare scaling properties of body acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to previous works regarding human heartbeat. Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
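A minimal Detrended Fluctuation Analysis (DFA) routine, the tool used above to separate correlated from anti-correlated regimes in the activity signal. It returns the fluctuation function F(n); the scaling exponent is the slope of log F(n) versus log n (alpha > 0.5: correlated, alpha < 0.5: anti-correlated). Window sizes are an assumed input.

```python
# Detrended Fluctuation Analysis with local linear detrending.
import numpy as np

def dfa(signal, window_sizes):
    y = np.cumsum(np.asarray(signal, float) - np.mean(signal))   # integrated profile
    F = []
    for n in window_sizes:
        n_windows = len(y) // n
        rms = []
        for k in range(n_windows):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)                        # local linear trend
            rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# alpha = slope of a straight-line fit of log(F) against log(window_sizes)
```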
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement owing to the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, which contains a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
NASA Astrophysics Data System (ADS)
Naritomi, Yusuke; Fuchigami, Sotaro
2011-02-01
Protein dynamics on a long time scale was investigated using all-atom molecular dynamics (MD) simulation and time-structure based independent component analysis (tICA). We selected the lysine-, arginine-, ornithine-binding protein (LAO) as a target protein and focused on its domain motions in the open state. A MD simulation of the LAO in explicit water was performed for 600 ns, in which slow and large-amplitude domain motions of the LAO were observed. After extracting domain motions by rigid-body domain analysis, the tICA was applied to the obtained rigid-body trajectory, yielding slow modes of the LAO's domain motions in order of decreasing time scale. The slowest mode detected by the tICA represented not a closure motion described by a largest-amplitude mode determined by the principal component analysis but a twist motion with a time scale of tens of nanoseconds. The slow dynamics of the LAO were well described by only the slowest mode and were characterized by transitions between two basins. The results show that tICA is promising for describing and analyzing slow dynamics of proteins.
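A bare-bones tICA step of the kind used above: build the symmetrized time-lagged covariance matrix and solve it as a generalized eigenvalue problem against the instantaneous covariance; the slowest modes come first. X is an assumed (frames, coordinates) array of the rigid-body or Cartesian trajectory.

```python
# Time-structure based independent component analysis (tICA) in a few lines.
import numpy as np
from scipy.linalg import eigh

def tica(X, lag):
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    C0 = (X.T @ X) / len(X)                         # instantaneous covariance
    Ctau = (A.T @ B + B.T @ A) / (2.0 * len(A))     # symmetrized lagged covariance
    eigvals, eigvecs = eigh(Ctau, C0)               # generalized eigenproblem
    order = np.argsort(eigvals)[::-1]               # slowest modes first
    timescales = -lag / np.log(np.clip(eigvals[order], 1e-12, 1 - 1e-12))
    return eigvals[order], eigvecs[:, order], timescales
```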
A real-time KLT implementation for radio-SETI applications
NASA Astrophysics Data System (ADS)
Melis, Andrea; Concu, Raimondo; Pari, Pierpaolo; Maccone, Claudio; Montebugnoli, Stelio; Possenti, Andrea; Valente, Giuseppe; Antonietti, Nicoló; Perrodin, Delphine; Migoni, Carlo; Murgia, Matteo; Trois, Alessio; Barbaro, Massimo; Bocchinu, Alessandro; Casu, Silvia; Lunesu, Maria Ilaria; Monari, Jader; Navarrini, Alessandro; Pisanu, Tonino; Schilliró, Francesco; Vacca, Valentina
2016-07-01
SETI, the Search for ExtraTerrestrial Intelligence, is the search for radio signals emitted by alien civilizations living in the Galaxy. Narrow-band FFT-based approaches have been preferred in SETI, since their computation time only grows like N*ln(N), where N is the number of time samples. On the contrary, a wide-band approach based on the Karhunen-Loève Transform (KLT) algorithm would be preferable, but it would scale like N*N. In this paper, we describe a hardware-software infrastructure based on FPGA boards and GPU-based PCs that circumvents this computation-time problem, allowing for a real-time KLT.
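A sketch of the wide-band KLT step contrasted with the FFT above: estimate the autocorrelation matrix of the sampled voltage series and expand the data on its eigenvectors. The dense eigendecomposition is the step that makes the naive KLT scale much worse than N*ln(N), which is why the paper's FPGA/GPU pipeline is needed for real time; this generic sketch is not that pipeline.

```python
# Naive Karhunen-Loeve Transform via the Toeplitz autocorrelation matrix.
import numpy as np
from scipy.linalg import toeplitz, eigh

def klt(samples, order):
    """Return the KLT eigenvalues (descending) and the projected frame coefficients."""
    x = samples - samples.mean()
    # biased autocorrelation estimate for lags 0..order-1
    acf = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order)])
    R = toeplitz(acf)                       # Toeplitz autocorrelation matrix
    eigvals, eigvecs = eigh(R)              # the costly O(order^3) step
    idx = np.argsort(eigvals)[::-1]
    # project successive frames of the signal onto the eigenvectors
    frames = x[:len(x) // order * order].reshape(-1, order)
    coeffs = frames @ eigvecs[:, idx]
    return eigvals[idx], coeffs
```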
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurujjaman, Md.; Narayanan, Ramesh; Iyengar, A. N. Sekar
2009-10-15
Continuous wavelet transform (CWT) based time-scale and multifractal analyses have been carried out on the anode glow related nonlinear floating potential fluctuations in a hollow cathode glow discharge plasma. CWT has been used to obtain the contour and ridge plots. Scale shift (or inversely frequency shift), which is a typical nonlinear behavior, has been detected from the undulating contours. From the ridge plots, we have identified the presence of nonlinearity and degree of chaoticity. Using the wavelet transform modulus maxima technique we have obtained the multifractal spectrum for the fluctuations at different discharge voltages and the spectrum was observed to become a monofractal for periodic signals. These multifractal spectra were also used to estimate different quantities such as the correlation and fractal dimension, degree of multifractality, and complexity parameters. These estimations have been found to be consistent with the nonlinear time series analysis.
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept to characterize time series from biological systems, which, associated with multiscale analysis, can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the area under the curves was computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here showed potential to assess complex physiological time series and deserves further investigation in a wide context.
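A generic multiscale entropy skeleton of the kind extended above: coarse-grain the RR-interval series at increasing scales and compute one entropy value per scale. Sample entropy is used here as a stand-in; the paper's SDiffqmax, qmax and qzero metrics are derived from nonadditive (Tsallis) entropy and surrogate data instead, and are not reproduced.

```python
# Multiscale (sample) entropy skeleton: coarse-grain, then compute entropy per scale.
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    r = r_factor * np.std(x)
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (dist <= r).sum() - len(templates)        # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(rr_intervals, max_scale=20):
    x = np.asarray(rr_intervals, float)
    return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]
```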
Equilibrium and out-of-equilibrium mechanics of living mammalian cytoplasm
NASA Astrophysics Data System (ADS)
Gupta, Satish Kumar; Guo, Ming
2017-10-01
Living cells are intrinsically non-equilibrium systems. They are driven out of equilibrium by the activity of molecular motors and other enzymatic processes. This activity, along with the ever-present thermal agitation, results in intracellular fluctuations inside the cytoplasm. In analogy to Brownian motion, the material properties of the cytoplasm also influence the characteristics of these fluctuations. In this paper, through a combination of experimentation and theoretical analysis, we show that intracellular fluctuations are indeed due to non-thermal forces at relatively long time-scales; at relatively short time-scales, however, they are dominated solely by thermal forces. Thus, the cytoplasm of living mammalian cells behaves as an equilibrium material at short time-scales. The mean square displacement of these intracellular fluctuations scales inversely with the cytoplasmic shear modulus in this short time-scale equilibrium regime, and is inversely proportional to the square of the cytoplasmic shear modulus in the long time-scale out-of-equilibrium regime. Furthermore, we deploy passive microrheology based on these fluctuations to extract the mechanical properties of the cytoplasm in the high-frequency regime. We show that the cytoplasm of living mammalian cells is a weak elastic gel in this regime; this is in excellent agreement with an independent micromechanical measurement using optical tweezers.
The Change in Oceanic O2 Inventory Associated with Recent Global Warming
NASA Technical Reports Server (NTRS)
Keeling, Ralph; Garcia, Hernan
2002-01-01
Ocean general circulation models predict that global warming may cause a decrease in the oceanic O2 inventory and an associated O2 outgassing. An independent argument is presented here in support of this prediction based on observational evidence of the ocean's biogeochemical response to natural warming. On time scales from seasonal to centennial, natural O2 flux/heat flux ratios are shown to occur in a range of 2 to 10 nmol O2 per Joule of warming, with larger ratios typically occurring at higher latitudes and over longer time scales. The ratios are several times larger than would be expected solely from the effect of heating on the O2 solubility, indicating that most of the O2 exchange is biologically mediated through links between heating and stratification. The change in oceanic O2 inventory through the 1990s is estimated to be 0.3-0.4 x 10^14 mol O2 per year based on scaling the observed anomalous long-term ocean warming by natural O2 flux/heating ratios and allowing for uncertainty due to decadal variability. Implications are discussed for carbon budgets based on observed changes in atmospheric O2/N2 ratio and based on observed changes in ocean dissolved inorganic carbon.
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of more than 100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
Decoding the spatial signatures of multi-scale climate variability - a climate network perspective
NASA Astrophysics Data System (ADS)
Donner, R. V.; Jajcay, N.; Wiedermann, M.; Ekhtiari, N.; Palus, M.
2017-12-01
During the last years, the application of complex networks as a versatile tool for analyzing complex spatio-temporal data has gained increasing interest. Establishing this approach as a new paradigm in climatology has already provided valuable insights into key spatio-temporal climate variability patterns across scales, including novel perspectives on the dynamics of the El Nino Southern Oscillation or the emergence of extreme precipitation patterns in monsoonal regions. In this work, we report first attempts to employ network analysis for disentangling multi-scale climate variability. Specifically, we introduce the concept of scale-specific climate networks, which comprises a sequence of networks representing the statistical association structure between variations at distinct time scales. For this purpose, we consider global surface air temperature reanalysis data and subject the corresponding time series at each grid point to a complex-valued continuous wavelet transform. From this time-scale decomposition, we obtain three types of signals per grid point and scale - amplitude, phase and reconstructed signal, the statistical similarity of which is then represented by three complex networks associated with each scale. We provide a detailed analysis of the resulting connectivity patterns reflecting the spatial organization of climate variability at each chosen time-scale. Global network characteristics like transitivity or network entropy are shown to provide a new view on the (global average) relevance of different time scales in climate dynamics. Beyond expected trends originating from the increasing smoothness of fluctuations at longer scales, network-based statistics reveal different degrees of fragmentation of spatial co-variability patterns at different scales and zonal shifts among the key players of climate variability from tropically to extra-tropically dominated patterns when moving from inter-annual to decadal scales and beyond. The obtained results demonstrate the potential usefulness of systematically exploiting scale-specific climate networks, whose general patterns are in line with existing climatological knowledge, but provide vast opportunities for further quantifications at local, regional and global scales that are yet to be explored.
Wavelet analysis and scaling properties of time series
NASA Astrophysics Data System (ADS)
Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.
2005-10-01
We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
Large-amplitude jumps and non-Gaussian dynamics in highly concentrated hard sphere fluids.
Saltzman, Erica J; Schweizer, Kenneth S
2008-05-01
Our microscopic stochastic nonlinear Langevin equation theory of activated dynamics has been employed to study the real-space van Hove function of dense hard sphere fluids and suspensions. At very short times, the van Hove function is a narrow Gaussian. At sufficiently high volume fractions, such that the entropic barrier to relaxation is greater than the thermal energy, its functional form evolves with time to include a rapidly decaying component at small displacements and a long-range exponential tail. The "jump" or decay length scale associated with the tail increases with time (or particle root-mean-square displacement) at fixed volume fraction, and with volume fraction at the mean alpha relaxation time. The jump length at the alpha relaxation time is predicted to be proportional to a measure of the decoupling of self-diffusion and structural relaxation. At long times corresponding to mean displacements of order a particle diameter, the volume fraction dependence of the decay length disappears. A good superposition of the exponential tail feature based on the jump length as a scaling variable is predicted at high volume fractions. Overall, the theoretical results are in good accord with recent simulations and experiments. The basic aspects of the theory are also compared with a classic jump model and a dynamically facilitated continuous time random-walk model. Decoupling of the time scales of different parts of the relaxation process predicted by the theory is qualitatively similar to facilitated dynamics models based on the concept of persistence and exchange times if the elementary event is assumed to be associated with transport on a length scale significantly smaller than the particle size.
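A short sketch of how the self part of the van Hove function, whose Gaussian core and exponential "jump" tail are discussed above, can be computed from particle trajectories. The positions array is an assumed simulation output; this is not the theory of the paper, just the standard definition.

```python
# Self van Hove function G_s(r, t) from a (frames, particles, 3) trajectory array.
import numpy as np

def self_van_hove(positions, t_index, bins=100, r_max=3.0):
    """Distribution of particle displacement magnitudes after t_index frames."""
    disp = positions[t_index] - positions[0]
    r = np.linalg.norm(disp, axis=1)
    hist, edges = np.histogram(r, bins=bins, range=(0.0, r_max), density=True)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    # divide by the spherical shell volume so a purely diffusive (Gaussian) process
    # gives a Gaussian in r; an exponential tail then signals large-amplitude jumps
    G_s = hist / (4.0 * np.pi * r_mid ** 2)
    return r_mid, G_s
```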
Multifractals embedded in short time series: An unbiased estimation of probability moment
NASA Astrophysics Data System (ADS)
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the base for several essential concepts, such as the multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide us an unbiased estimation of the probability moments of continuous order. Calculations on probability redistribution model verify that it can extract exactly multifractal behaviors from several hundred recordings. Its powerfulness in monitoring evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for Shanghai stock market. By using short time series with several hundred lengths, a comparison with the well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can evaluate correctly the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused in the present paper, the proposed factorial moment of continuous order can find its various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
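An integer-order illustration of the factorial-moment estimator motivated above: for counts n_i of events falling in box i, the factorial moment sum over n_i(n_i - 1)...(n_i - q + 1), normalized by the same falling factorial of the total count, is an unbiased estimate of the q-th probability moment. The continuous-order extension of the paper is not reproduced here.

```python
# Unbiased estimate of sum_i p_i^q from box counts via factorial moments (integer q).
import numpy as np

def factorial_moment(series, n_boxes, q):
    counts, _ = np.histogram(series, bins=n_boxes)
    fm = np.ones_like(counts, dtype=float)
    for k in range(q):
        fm *= np.clip(counts - k, 0, None)        # n (n-1) ... (n-q+1), zero if n < q
    N = counts.sum()
    norm = np.prod([N - k for k in range(q)], dtype=float)
    return fm.sum() / norm                         # estimates sum_i p_i^q

# example: scaling of the partition function with box size for a time series x
# moments = [factorial_moment(x, n_boxes=2**j, q=3) for j in range(2, 10)]
```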
Networked high-speed auroral observations combined with radar measurements for multi-scale insights
NASA Astrophysics Data System (ADS)
Hirsch, M.; Semeter, J. L.
2015-12-01
Networks of ground-based instruments to study terrestrial aurora for the purpose of analyzing particle precipitation characteristics driving the aurora have been established. Additional funding is pouring into future ground-based auroral observation networks consisting of combinations of tossable, portable, and fixed installation ground-based legacy equipment. Our approach to this problem using the High Speed Tomography (HiST) system combines tightly-synchronized filtered auroral optical observations capturing temporal features of order 10 ms with supporting measurements from incoherent scatter radar (ISR). ISR provides a broader spatial context up to order 100 km laterally on one minute time scales, while our camera field of view (FOV) is chosen to be order 10 km at auroral altitudes in order to capture 100 m scale lateral auroral features. The dual-scale observations of ISR and HiST fine-scale optical observations may be coupled through a physical model using linear basis functions to estimate important ionospheric quantities such as electron number density in 3-D (time, perpendicular and parallel to the geomagnetic field).Field measurements and analysis using HiST and PFISR are presented from experiments conducted at the Poker Flat Research Range in central Alaska. Other multiscale configuration candidates include supplementing networks of all-sky cameras such as THEMIS with co-locations of HiST-like instruments to fuse wide FOV measurements with the fine-scale HiST precipitation characteristic estimates. Candidate models for this coupling include GLOW and TRANSCAR. Future extensions of this work may include incorporating line of sight total electron count estimates from ground-based networks of GPS receivers in a sensor fusion problem.
Paul, Lorna; Coulter, Elaine H; Miller, Linda; McFadyen, Angus; Dorfman, Joe; Mattison, Paul George G
2014-09-01
To explore the effectiveness and participant experience of web-based physiotherapy for people moderately affected by Multiple Sclerosis (MS) and to provide data to establish the sample size required for a fully powered, definitive randomized controlled study. A randomized controlled pilot study. Rehabilitation centre and participants' homes. Thirty community-dwelling adults moderately affected by MS (Expanded Disability Status Scale 5-6.5). Twelve weeks of individualised web-based physiotherapy completed twice per week or usual care (control). Online exercise diaries were monitored; participants were telephoned weekly by the physiotherapist and exercise programmes were altered remotely by the physiotherapist as required. The following outcomes were completed at baseline and after 12 weeks: the 25 Foot Walk, Berg Balance Scale, Timed Up and Go, Multiple Sclerosis Impact Scale, Leeds MS Quality of Life Scale, MS-Related Symptom Checklist and Hospital Anxiety and Depression Scale. The intervention group also completed a website evaluation questionnaire and interviews. Participants reported that the website was easy to use, convenient, and motivating, and said they would be happy to use it in the future. There was no statistically significant difference in the primary outcome measure, the timed 25 Foot Walk, in the intervention group (P=0.170), or in the other secondary outcome measures, except the Multiple Sclerosis Impact Scale (P=0.048). Effect sizes were generally small to moderate. People with MS were very positive about web-based physiotherapy. The results suggested that 80 participants, 40 in each group, would be sufficient for a fully powered, definitive randomized controlled trial. © The Author(s) 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
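The authors' asynchronous two-level scheme is not reproduced here, but a minimal single-level sketch of adjoint checkpointing illustrates the storage/recomputation balance the abstract refers to: states are stored only every few steps and are recomputed within each segment during the reverse sweep. The forward map, adjoint step, and checkpoint interval below are hypothetical.

```python
import numpy as np

def forward_step(u):
    # hypothetical explicit forward step u_{n+1} = f(u_n)
    return 0.99 * u + 0.01 * np.sin(u)

def adjoint_step(u, lam):
    # adjoint of forward_step: lam_n = (df/du at u_n)^T lam_{n+1}
    return (0.99 + 0.01 * np.cos(u)) * lam

def adjoint_with_checkpoints(u0, nsteps, interval):
    """Store the state every `interval` steps; recompute states in between."""
    checkpoints = {0: u0.copy()}
    u = u0.copy()
    for n in range(nsteps):                 # forward sweep with sparse storage
        u = forward_step(u)
        if (n + 1) % interval == 0:
            checkpoints[n + 1] = u.copy()
    lam = np.ones_like(u)                   # seed dJ/du_N for J = sum(u_N)
    for n in reversed(range(nsteps)):       # reverse sweep
        base = (n // interval) * interval
        v = checkpoints[base].copy()
        for _ in range(n - base):           # recompute up to the state u_n
            v = forward_step(v)
        lam = adjoint_step(v, lam)
    return lam                              # gradient dJ/du_0

grad = adjoint_with_checkpoints(np.array([0.3, 1.2]), nsteps=100, interval=10)
print(grad)
```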
Engineering-Scale Demonstration of DuraLith and Ceramicrete Waste Forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Gary B.; Westsik, Joseph H.; Pires, Richard P.
2011-09-23
To support the selection of a waste form for the liquid secondary wastes from the Hanford Waste Immobilization and Treatment Plant, Washington River Protection Solutions (WRPS) has initiated secondary waste form testing on four candidate waste forms. Two of the candidate waste forms have not been developed to the same scale as the more mature waste forms. This work describes engineering-scale demonstrations conducted on the Ceramicrete and DuraLith candidate waste forms. Both candidate waste forms were successfully demonstrated at an engineering scale. A preliminary conceptual design could be prepared for full-scale production of the candidate waste forms. However, both waste forms are still too immature to support a detailed design. Formulations for each candidate waste form need to be developed so that the material has a longer working time after the liquid and solid constituents are mixed together. Formulations optimized based on previous lab studies did not have sufficient working time to support large-scale testing. The engineering-scale testing was successfully completed using modified formulations. Further lab development and parametric studies are needed to optimize formulations with adequate working time and to assess the effects of changes in raw materials and process parameters on the final product performance. Studies on the effects of mixing intensity on the initial set time of the waste forms are also needed.
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C. D.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2016-12-01
Visualizing and measuring flow in shale matrix using in situ synchrotron X-ray microtomography
NASA Astrophysics Data System (ADS)
Kohli, A. H.; Kiss, A. M.; Kovscek, A. R.; Bargar, J.
2017-12-01
Natural gas production via hydraulic fracturing of shale has proliferated on a global scale, yet recovery factors remain low because production strategies are not based on the physics of flow in shale reservoirs. In particular, the physical mechanisms and time scales of depletion from the matrix into the simulated fracture network are not well understood, limiting the potential to optimize operations and reduce environmental impacts. Studying matrix flow is challenging because shale is heterogeneous and has porosity from the μm- to nm-scale. Characterizing nm-scale flow paths requires electron microscopy but the limited field of view does not capture the connectivity and heterogeneity observed at the mm-scale. Therefore, pore-scale models must link to larger volumes to simulate flow on the reservoir-scale. Upscaled models must honor the physics of flow, but at present there is a gap between cm-scale experiments and μm-scale simulations based on ex situ image data. To address this gap, we developed a synchrotron X-ray microscope with an in situ cell to simultaneously visualize and measure flow. We perform coupled flow and microtomography experiments on mm-scale samples from the Barnett, Eagle Ford and Marcellus reservoirs. We measure permeability at various pressures via the pulse-decay method to quantify effective stress dependence and the relative contributions of advective and diffusive mechanisms. Images at each pressure step document how microfractures, interparticle pores, and organic matter change with effective stress. Linking changes in the pore network to flow measurements motivates a physical model for depletion. To directly visualize flow, we measure imbibition rates using inert, high atomic number gases and image periodically with monochromatic beam. By imaging above/below X-ray adsorption edges, we magnify the signal of gas saturation in μm-scale porosity and nm-scale, sub-voxel features. Comparing vacuumed and saturated states yields image-based measurements of the distribution and time scales of imbibition. We also characterize nm-scale structure via focused ion beam tomography to quantify sub-voxel porosity and connectivity. The multi-scale image and flow data is used to develop a framework to upscale and benchmark pore-scale models.
All-fibre photonic signal generator for attosecond timing and ultralow-noise microwave
Jung, Kwangyun; Kim, Jungwon
2015-01-01
High-impact frequency comb applications that are critically dependent on precise pulse timing (i.e., repetition rate) have recently emerged and include the synchronization of X-ray free-electron lasers, photonic analogue-to-digital conversion and photonic radar systems. These applications have exploited the attosecond-level timing jitter of free-running mode-locked lasers, which is available on a fast time scale within ~100 μs. Maintaining attosecond-level absolute jitter over a significantly longer time scale can dramatically improve many high-precision comb applications. To date, ultrahigh quality-factor (Q) optical resonators have been used to achieve the highest-level repetition-rate stabilization of mode-locked lasers. However, ultrahigh-Q optical-resonator-based methods are often fragile, alignment sensitive and complex, which limits their widespread use. Here we demonstrate a fibre-delay-line-based repetition-rate stabilization method that enables the all-fibre photonic generation of optical pulse trains with 980-as (20-fs) absolute r.m.s. timing jitter accumulated over 0.01 s (1 s). This simple approach is based on standard off-the-shelf fibre components and can therefore be readily used in various comb applications that require ultra-stable microwave frequency and attosecond optical timing. PMID:26531777
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1988-01-01
The paper presents a multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and of separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.
ERIC Educational Resources Information Center
Ling, Guangming; Rijmen, Frank
2011-01-01
The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…
NASA Astrophysics Data System (ADS)
Moradi, A.
2015-12-01
To properly model soil thermal performance in unsaturated porous media, for applications such as SBTES systems, knowledge of both soil hydraulic and thermal properties and how they change in space and time is needed. Knowledge obtained from pore-scale to macroscopic-scale studies can improve our understanding of these systems and can then be translated into engineering applications in the field (i.e., implementation of SBTES systems at the field scale). One important thermal property that varies with soil water content, the effective thermal conductivity, is often included in numerical models through empirical relationships and simplified mathematical formulations developed from experimental data obtained at either small laboratory or field scales. These models assume local thermodynamic equilibrium between the air and water phases within a representative elementary volume. However, this assumption may not always be valid at the pore scale, which calls the validity of current modeling approaches into question. The purpose of this work is to evaluate the validity of the local thermodynamic equilibrium assumption as related to the effective thermal conductivity at the pore scale. A numerical model based on the coupled Cahn-Hilliard and heat transfer equations was developed to solve for liquid flow and heat transfer through variably saturated porous media. In this model, the evolution of phases and the interfaces between phases is related to a functional form of the total free energy of the system. A unique solution for the system is obtained by solving the Navier-Stokes equation through free energy minimization. Preliminary results demonstrate a correlation between soil temperature / degree of saturation and equivalent thermal conductivity / heat flux. Results also confirm the correlation between the pressure differential magnitude and the equilibrium time for multiphase flow to reach steady-state conditions. Based on these results, the equivalent time for steady-state heat transfer is much larger than the equivalent time for steady-state multiphase flow for a given pressure differential. Moreover, the wetting-phase flow, and consequently the heat transfer, appears to be sensitive to the contact angle and porosity of the domain.
Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture
NASA Astrophysics Data System (ADS)
Hassan, Ezeldin A.
Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions, and liquid plug propagation and rupture in an airway. Simulation of a gaseous non-reactive ethylene jet in an air crossflow represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with the grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived for stationary flow. The improvement from the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to vary strongly in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, in which the resolved field adaptively influences the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flow field, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal-injection case and tested with 30-degree injection, showing improved results over the constant turbulent Schmidt number model in both the mean and the variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ
Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo
2014-04-21
Software-defined networking (SDN) has become a focus of the information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not yet been evaluated. In this paper we build a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, are demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof point for future network deployment.
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is to ensure economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
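As a hedged illustration of the online moving horizon (receding-horizon) idea described here, the sketch below repeatedly solves a small dispatch problem over a look-ahead window, applies only the first decision, and slides the window forward. The toy system (one dispatchable generator plus a battery), the forecasts, and all parameters are hypothetical and are not the paper's coordination strategy.

```python
import numpy as np
from scipy.optimize import linprog

H = 6                                    # look-ahead horizon (hours)
cap, rate, soc = 8.0, 2.0, 4.0           # battery capacity, power limit, initial state of charge
load = np.full(24, 5.0)                  # hypothetical flat demand (MW)
wind = 3.0 + 2.0 * np.sin(np.linspace(0, 2 * np.pi, 24))   # hypothetical wind forecast

for t in range(24 - H):
    # decision vector x = [g_0..g_{H-1}, b_0..b_{H-1}] (generation, battery discharge)
    c = np.r_[np.ones(H), np.zeros(H)]                 # minimize fuel generation
    A_eq = np.hstack([np.eye(H), np.eye(H)])           # g_k + b_k = load_k - wind_k
    b_eq = load[t:t + H] - wind[t:t + H]
    # state of charge soc - cumsum(b) must stay within [0, cap]
    tri = np.tril(np.ones((H, H)))
    A_ub = np.vstack([np.hstack([np.zeros((H, H)), tri]),
                      np.hstack([np.zeros((H, H)), -tri])])
    b_ub = np.r_[np.full(H, soc), np.full(H, cap - soc)]
    bounds = [(0, None)] * H + [(-rate, rate)] * H
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    g0, b0 = res.x[0], res.x[H]
    soc -= b0                                          # apply only the first decision
    print(f"hour {t}: generation {g0:.2f}, battery discharge {b0:+.2f}, SOC {soc:.2f}")
```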
Schlemm, Eckhard; Ebinger, Martin; Nolte, Christian H; Endres, Matthias; Schlemm, Ludwig
2017-08-01
Patients with acute ischemic stroke (AIS) and large vessel occlusion may benefit from direct transportation to an endovascular capable comprehensive stroke center (mothership approach) as opposed to direct transportation to the nearest stroke unit without endovascular therapy (drip and ship approach). The optimal transport strategy for patients with AIS and unknown vessel status is uncertain. The rapid arterial occlusion evaluation scale (RACE, scores ranging from 0 to 9, with higher scores indicating higher stroke severity) correlates with the National Institutes of Health Stroke Scale and was developed to identify patients with large vessel occlusion in a prehospital setting. We evaluate how the RACE scale can help to inform prehospital triage decisions for AIS patients. In a model-based approach, we estimate probabilities of good outcome (modified Rankin Scale score of ≤2 at 3 months) as a function of severity of stroke symptoms and transport times for the mothership approach and the drip and ship approach. We use these probabilities to obtain optimal RACE cutoff scores for different transfer time settings and combinations of treatment options (time-based eligibility for secondary transfer under the drip and ship approach, time-based eligibility for thrombolysis at the comprehensive stroke center under the mothership approach). In our model, patients with AIS are more likely to benefit from direct transportation to the comprehensive stroke center if they have more severe strokes. Values of the optimal RACE cutoff scores range from 0 (mothership for all patients) to >9 (drip and ship for all patients). Shorter transfer times and longer door-to-needle and needle-to-transfer (door out) times are associated with lower optimal RACE cutoff scores. Use of RACE cutoff scores that take into account transport times to triage AIS patients to the nearest appropriate hospital may lead to improved outcomes. Further studies should examine the feasibility of translation into clinical practice. © 2017 American Heart Association, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100
2015-01-15
In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 or less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei
2018-04-13
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variation. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
NASA Astrophysics Data System (ADS)
Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris
2018-01-01
Many hydrological applications, such as flood studies, require long rainfall records at fine time scales varying from daily down to a 1 min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent, rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process reproduces adequately the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with intensity-duration dependence results in a better performance in terms of skewness, rainfall extremes and dry proportions.
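A hedged sketch of the simplest adjusting idea the scheme builds on: synthetic fine-scale depths are rescaled so that they aggregate exactly to the observed coarse-scale total. This is only an illustration of proportional adjusting, not the HyetosMinute implementation, and the synthetic generator below is a placeholder rather than a Bartlett-Lewis model.

```python
import numpy as np

def proportional_adjust(fine, coarse_total):
    """Rescale synthetic fine-scale depths so they sum to the observed coarse total."""
    s = fine.sum()
    if s == 0:
        return fine                        # nothing to adjust in a dry period
    return fine * (coarse_total / s)

rng = np.random.default_rng(1)
daily_total = 12.4                                   # observed daily rainfall depth (mm)
synthetic_hourly = rng.exponential(0.4, size=24)     # placeholder fine-scale generator
adjusted = proportional_adjust(synthetic_hourly, daily_total)
print(adjusted.sum())                                # 12.4, consistent with the daily value
```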
ERIC Educational Resources Information Center
Sánchez-Rosas, Javier; Furlan, Luis Alberto
2017-01-01
Based on the control-value theory of achievement emotions and theory of achievement goals, this research provides evidence of convergent, divergent, and criterion validity of the Spanish Cognitive Test Anxiety Scale (S-CTAS). A sample of Argentinean undergraduates responded to several scales administered at three points. At time 1 and 3, the…
Validity of Scores for a Developmental Writing Scale Based on Automated Scoring
ERIC Educational Resources Information Center
Attali, Yigal; Powers, Donald
2009-01-01
A developmental writing scale for timed essay-writing performance was created on the basis of automatically computed indicators of writing fluency, word choice, and conventions of standard written English. In a large-scale data collection effort that involved a national sample of more than 12,000 students from 4th, 6th, 8th, 10th, and 12th grade,…
ERIC Educational Resources Information Center
Ebesutani, Chad; Reise, Steven P.; Chorpita, Bruce F.; Ale, Chelsea; Regan, Jennifer; Young, John; Higa-McMillan, Charmaine; Weisz, John R.
2012-01-01
Using a school-based (N = 1,060) and clinic-referred (N = 303) youth sample, the authors developed a 25-item shortened version of the Revised Child Anxiety and Depression Scale (RCADS) using Schmid-Leiman exploratory bifactor analysis to reduce client burden and administration time and thus improve the transportability characteristics of this…
The adjusting factor method for weight-scaling truckloads of mixed hardwood sawlogs
Edward L. Adams
1976-01-01
A new method of weight-scaling truckloads of mixed hardwood sawlogs systematically adjusts for changes in the weight/volume ratio of logs coming into a sawmill. It uses a conversion factor based on the running average of weight/volume ratios of randomly selected sample loads. A test of the method indicated that over a period of time the weight-scaled volume should...
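A hedged sketch of the running-average idea: the conversion factor is the running mean of weight/volume ratios from recent sample loads, and is applied to the weights of unscaled loads. The class, figures, and window length below are hypothetical, not the published procedure.

```python
from collections import deque

class WeightScaler:
    """Estimate truckload volume from weight using a conversion factor taken as the
    running average of weight/volume ratios of recent, randomly selected sample loads."""

    def __init__(self, window=10):
        self.ratios = deque(maxlen=window)           # keep only the most recent samples

    def add_sample_load(self, weight_lb, scaled_volume_bf):
        self.ratios.append(weight_lb / scaled_volume_bf)

    def estimate_volume(self, weight_lb):
        factor = sum(self.ratios) / len(self.ratios)  # running-average lb per board foot
        return weight_lb / factor

scaler = WeightScaler(window=5)
for w, v in [(42000, 3150), (38500, 2960), (45100, 3380)]:   # hypothetical sample loads
    scaler.add_sample_load(w, v)
print(round(scaler.estimate_volume(40000)), "board feet (estimated)")
```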
NASA Astrophysics Data System (ADS)
Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis
2017-08-01
The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
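A minimal sketch of the estimator itself: compute the time-averaged mean-square displacement of a single trajectory and fit its log-log slope. For simplicity the trajectory here is ordinary Brownian motion (the FBM special case H = 0.5), so the expected exponent is 1; the trajectory length and lag range are arbitrary choices.

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-square displacement of a 1-D trajectory at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(2)
n = 10_000
x = np.cumsum(rng.normal(size=n))        # Brownian motion: FBM with H = 0.5

lags = np.arange(1, 101)
msd = np.array([tamsd(x, L) for L in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # anomalous exponent estimate
print(f"estimated scaling exponent ~ {alpha:.2f} (expected 1.0 for H = 0.5)")
```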
Thermodynamics constrains allometric scaling of optimal development time in insects.
Dillon, Michael E; Frazier, Melanie R
2013-01-01
Development time is a critical life-history trait that has profound effects on organism fitness and on population growth rates. For ectotherms, development time is strongly influenced by temperature and is predicted to scale with body mass to the quarter power based on 1) the ontogenetic growth model of the metabolic theory of ecology which describes a bioenergetic balance between tissue maintenance and growth given the scaling relationship between metabolism and body size, and 2) numerous studies, primarily of vertebrate endotherms, that largely support this prediction. However, few studies have investigated the allometry of development time among invertebrates, including insects. Abundant data on development of diverse insects provides an ideal opportunity to better understand the scaling of development time in this ecologically and economically important group. Insects develop more quickly at warmer temperatures until reaching a minimum development time at some optimal temperature, after which development slows. We evaluated the allometry of insect development time by compiling estimates of minimum development time and optimal developmental temperature for 361 insect species from 16 orders with body mass varying over nearly 6 orders of magnitude. Allometric scaling exponents varied with the statistical approach: standardized major axis regression supported the predicted quarter-power scaling relationship, but ordinary and phylogenetic generalized least squares did not. Regardless of the statistical approach, body size alone explained less than 28% of the variation in development time. Models that also included optimal temperature explained over 50% of the variation in development time. Warm-adapted insects developed more quickly, regardless of body size, supporting the "hotter is better" hypothesis that posits that ectotherms have a limited ability to evolutionarily compensate for the depressing effects of low temperatures on rates of biological processes. The remaining unexplained variation in development time likely reflects additional ecological and evolutionary differences among insect species.
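As the abstract notes, the fitted allometric exponent depends on the regression method. The sketch below contrasts ordinary least squares with a standardized major axis fit on synthetic log-log data with noise in both variables; it is a generic illustration, not the authors' dataset or analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
log_mass = rng.uniform(-3, 3, n)                       # ~6 orders of magnitude in mass
log_time = 0.25 * log_mass + rng.normal(0, 0.4, n)     # nominal quarter-power scaling
log_mass_obs = log_mass + rng.normal(0, 0.3, n)        # measurement noise in x as well

# Ordinary least squares slope (attenuated by noise in the predictor)
ols_slope = np.polyfit(log_mass_obs, log_time, 1)[0]

# Standardized major axis slope: sign(r) * sd(y) / sd(x)
r = np.corrcoef(log_mass_obs, log_time)[0, 1]
sma_slope = np.sign(r) * np.std(log_time) / np.std(log_mass_obs)

# The two estimators can give noticeably different exponents on the same data.
print(f"OLS slope ~ {ols_slope:.3f}, SMA slope ~ {sma_slope:.3f}")
```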
NASA Astrophysics Data System (ADS)
Li, Chang-Feng; Sureshkumar, Radhakrishna; Khomami, Bamin
2015-10-01
Self-consistent direct numerical simulations of turbulent channel flows of dilute polymer solutions exhibiting friction drag reduction (DR) show that an effective Deborah number defined as the ratio of polymer relaxation time to the time scale of fluctuations in the vorticity in the mean flow direction remains O(1) from the onset of DR to the maximum drag reduction (MDR) asymptote. However, the ratio of the convective time scale associated with streamwise vorticity fluctuations to the vortex rotation time decreases with increasing DR, and the maximum drag reduction asymptote is achieved when these two time scales become nearly equal. Based on these observations, a simple framework is proposed that adequately describes the influence of polymer additives on the extent of DR from the onset of DR to MDR as well as the universality of the MDR in wall-bounded turbulent flows with polymer additives.
Spatial and Temporal scales of time-averaged 700 MB height anomalies
NASA Technical Reports Server (NTRS)
Gutzler, D.
1981-01-01
The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time-averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. To test the efficacy of this technique, time series of observed monthly-mean and 5-day-mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.
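A minimal sketch of the persistence check described: compute lag autocorrelations of an anomaly series and compare them with the red-noise expectation r(k) = r1**k. The series here is a synthetic AR(1) stand-in, not the 700 mb height data.

```python
import numpy as np

def autocorr(x, k):
    """Sample autocorrelation of a series at lag k."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

rng = np.random.default_rng(4)
n, phi = 500, 0.6
anom = np.zeros(n)                       # synthetic monthly-mean anomaly series
for t in range(1, n):
    anom[t] = phi * anom[t - 1] + rng.normal()

r1 = autocorr(anom, 1)
for k in range(1, 6):
    rk = autocorr(anom, k)
    print(f"lag {k}: observed r = {rk:+.2f}, red-noise r1**k = {r1**k:+.2f}")
```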
Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...
2016-10-20
Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
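One widely used information content measure of the kind discussed here is the normalized permutation entropy of Bandt and Pompe. The sketch below is a minimal illustration on synthetic series; the embedding dimension and signals are arbitrary choices, and this is not the authors' exact ITQ pipeline.

```python
import math
import numpy as np
from collections import Counter

def permutation_entropy(x, m=4):
    """Normalized Bandt-Pompe permutation entropy (0 = fully regular, 1 = random)."""
    patterns = Counter(tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1))
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

rng = np.random.default_rng(5)
noise = rng.normal(size=2000)                        # irregular dynamics
regular = np.sin(2 * np.pi * np.arange(2000) / 20)   # highly structured dynamics

print(f"white noise:     H ~ {permutation_entropy(noise):.2f}")    # close to 1
print(f"periodic signal: H ~ {permutation_entropy(regular):.2f}")  # markedly lower
```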
Sybil--efficient constraint-based modelling in R.
Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J
2013-11-13
Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
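sybil itself is an R library, so the following is only a language-neutral illustration (in Python, using scipy rather than sybil's API) of the kind of linear program a flux-balance analysis solves: maximize an objective flux subject to steady-state mass balance and flux bounds. The toy network, bounds, and "knock-out" are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass (all hypothetical reactions).
# Rows are metabolites (A, B); columns are reactions (uptake, convert, biomass).
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units

# FBA: maximize the biomass flux v[2] subject to the steady state S @ v = 0.
c = np.array([0.0, 0.0, -1.0])             # linprog minimizes, so negate the objective
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)            # expected: [10, 10, 10]

# A crude single-reaction "knock-out": force the flux of reaction 1 to zero.
ko_bounds = list(bounds)
ko_bounds[1] = (0, 0)
ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds, method="highs")
print("biomass after knock-out:", ko.x[2]) # drops to 0 in this toy network
```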
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Boler, F. M.; Ertz, D. J.; Mencin, D.; Phillips, D.; Baker, S.
2017-12-01
UNAVCO, in its role as a NSF facility for geodetic infrastructure and data, has succeeded for over two decades using on-premises infrastructure, and while the promise of cloud-based infrastructure is well-established, significant questions about suitability of such infrastructure for facility-scale services remain. Primarily through the GeoSciCloud award from NSF EarthCube, UNAVCO is investigating the costs, advantages, and disadvantages of providing its geodetic data and services in the cloud versus using UNAVCO's on-premises infrastructure. (IRIS is a collaborator on the project and is performing its own suite of investigations). In contrast to the 2-3 year time scale for the research cycle, the time scale of operation and planning for NSF facilities is for a minimum of five years and for some services extends to a decade or more. Planning for on-premises infrastructure is deliberate, and migrations typically take months to years to fully implement. Migrations to a cloud environment can only go forward with similar deliberate planning and understanding of all costs and benefits. The EarthCube GeoSciCloud project is intended to address the uncertainties of facility-level operations in the cloud. Investigations are being performed in a commercial cloud environment (Amazon AWS) during the first year of the project and in a private cloud environment (NSF XSEDE resource at the Texas Advanced Computing Center) during the second year. These investigations are expected to illuminate the potential as well as the limitations of running facility scale production services in the cloud. The work includes running parallel equivalent cloud-based services to on premises services and includes: data serving via ftp from a large data store, operation of a metadata database, production scale processing of multiple months of geodetic data, web services delivery of quality checked data and products, large-scale compute services for event post-processing, and serving real time data from a network of 700-plus GPS stations. The evaluation is based on a suite of metrics that we have developed to elucidate the effectiveness of cloud-based services in price, performance, and management. Services are currently running in AWS and evaluation is underway.
NASA Astrophysics Data System (ADS)
Jeffreson, S. M. R.; Kruijssen, J. M. D.; Krumholz, M. R.; Longmore, S. N.
2018-05-01
We apply an analytic theory for environmentally-dependent molecular cloud lifetimes to the Central Molecular Zone of the Milky Way. Within this theory, the cloud lifetime in the Galactic centre is obtained by combining the time-scales for gravitational instability, galactic shear, epicyclic perturbations and cloud-cloud collisions. We find that at galactocentric radii ˜45-120 pc, corresponding to the location of the `100-pc stream', cloud evolution is primarily dominated by gravitational collapse, with median cloud lifetimes between 1.4 and 3.9 Myr. At all other galactocentric radii, galactic shear dominates the cloud lifecycle, and we predict that molecular clouds are dispersed on time-scales between 3 and 9 Myr, without a significant degree of star formation. Along the outer edge of the 100-pc stream, between radii of 100 and 120 pc, the time-scales for epicyclic perturbations and gravitational free-fall are similar. This similarity of time-scales lends support to the hypothesis that, depending on the orbital geometry and timing of the orbital phase, cloud collapse and star formation in the 100-pc stream may be triggered by a tidal compression at pericentre. Based on the derived time-scales, this should happen in approximately 20 per cent of all accretion events onto the 100-pc stream.
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster; the former improves the performance of index building, and the latter provides highly concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real-time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real-time. The semantics expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance availability and universality, with the medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides some advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
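As a hedged illustration of what a multi-factor ranking model of this kind might combine (not the paper's actual model), the sketch below scores each record by a weighted sum of a precomputed relevance score, recency, and record completeness; the factors, weights, and records are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Record:
    doc_id: str
    relevance: float      # e.g., a precomputed lexical or semantic similarity score
    days_old: int
    completeness: float   # fraction of structured fields filled in

def rank(records, w_rel=0.6, w_rec=0.25, w_comp=0.15):
    """Order records by a weighted combination of ranking factors."""
    def score(r):
        recency = 1.0 / (1.0 + r.days_old / 365.0)   # newer records score higher
        return w_rel * r.relevance + w_rec * recency + w_comp * r.completeness
    return sorted(records, key=score, reverse=True)

docs = [
    Record("tcmr-001", relevance=0.82, days_old=40,  completeness=0.9),
    Record("tcmr-002", relevance=0.90, days_old=900, completeness=0.5),
    Record("tcmr-003", relevance=0.75, days_old=10,  completeness=1.0),
]
for r in rank(docs):
    print(r.doc_id)
```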
NASA Astrophysics Data System (ADS)
Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.
2012-01-01
Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information that leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. Moderate spatial scale effects on streamflow patterns are observed with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when the information theory-based methods are used for performance evaluation and comparison of hydrological models.
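A minimal sketch of the binary-encoding idea: encode a series as 0/1 relative to its median, count words of a fixed length, and compute the metric entropy (block entropy per symbol) and a mean information gain. The word length and the synthetic series are arbitrary, and this is not the authors' exact implementation.

```python
import numpy as np
from collections import Counter

def block_entropy(symbols, L):
    """Shannon entropy (in bits) of words of length L in a symbol sequence."""
    words = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(6)
flow = np.cumsum(rng.normal(size=3000))          # synthetic stand-in for a streamflow record
bits = (flow > np.median(flow)).astype(int)      # binary encoding about the median

L = 5
H_L = block_entropy(bits, L)
H_Lm1 = block_entropy(bits, L - 1)
print(f"metric entropy ~ {H_L / L:.3f} bits/symbol")
print(f"mean information gain H_L - H_(L-1) ~ {H_L - H_Lm1:.3f} bits")
```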
Backpropagation and ordered derivatives in the time scales calculus.
Seiffertt, John; Wunsch, Donald C
2010-08-01
Backpropagation is the most widely used neural network learning technique. It is based on the mathematical notion of an ordered derivative. In this paper, we present a formulation of ordered derivatives and the backpropagation training algorithm using the important emerging area of mathematics known as the time scales calculus. This calculus, with its potential for application to a wide variety of inter-disciplinary problems, is becoming a key area of mathematics. It is capable of unifying continuous and discrete analysis within one coherent theoretical framework. Using this calculus, we present here a generalization of backpropagation which is appropriate for cases beyond the specifically continuous or discrete. We develop a new multivariate chain rule of this calculus, define ordered derivatives on time scales, prove a key theorem about them, and derive the backpropagation weight update equations for a feedforward multilayer neural network architecture. By drawing together the time scales calculus and the area of neural network learning, we present the first connection of two major fields of research.
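The unification of continuous and discrete analysis rests on the delta (Hilger) derivative of the time scales calculus. The following is the standard textbook definition (not a formula taken from this paper), with forward jump operator σ and graininess μ:

```latex
f^{\Delta}(t) =
\begin{cases}
  \dfrac{f(\sigma(t)) - f(t)}{\mu(t)}, & \mu(t) > 0 \quad \text{(right-scattered: } \sigma(t) > t\text{)},\\[1.5ex]
  \displaystyle \lim_{s \to t} \frac{f(t) - f(s)}{t - s}, & \mu(t) = 0 \quad \text{(right-dense)},
\end{cases}
\qquad
\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}, \quad \mu(t) = \sigma(t) - t.
```

On the time scale T = R this reduces to the ordinary derivative (continuous gradient flow), while on T = hZ it becomes the forward difference that underlies discrete weight updates, which is how a single backpropagation formulation can cover both cases.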
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfvén time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
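The Jacobian-free Newton-Krylov step mentioned here builds Krylov solvers on finite-difference approximations of Jacobian-vector products, so the Jacobian is never formed. A minimal sketch of that idea with SciPy's GMRES (without the physics-based FAC preconditioner that the paper depends on for efficiency):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_jfnk(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
    """Minimal Jacobian-free Newton-Krylov loop: J(u)v is approximated by a
    directional finite difference of the residual, so only F evaluations are needed."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        def jv(v, u=u, r=r):
            # J(u) v  ~  (F(u + eps v) - F(u)) / eps
            return (F(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r)      # inexact Krylov solve of J du = -r (default tolerances)
        u = u + du
    return u

# toy nonlinear system: u_i^3 + u_i - 1 = 0 for each component
F = lambda u: u**3 + u - 1.0
print(newton_krylov_jfnk(F, np.zeros(5)))   # converges to ~0.6823 in each component
```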
Multiple time step integrators in ab initio molecular dynamics.
Luehr, Nathan; Markland, Thomas E; Martínez, Todd J
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
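Such force splittings are typically used inside a reversible multiple-time-step (r-RESPA-style) integrator, where the slowly varying force is applied only at the outer step. A minimal sketch of that generic integrator, not of the paper's fragment- or range-separation-based decomposition of the ab initio potential:

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, m, dt_outer, n_inner):
    """One reversible multiple-time-step step: the slow force is applied with the
    outer step dt_outer, the fast force with dt_inner = dt_outer / n_inner via an
    inner velocity-Verlet loop."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m          # half kick, slow force
    for _ in range(n_inner):                        # inner loop, fast force
        v = v + 0.5 * dt_inner * f_fast(x) / m
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m          # half kick, slow force
    return x, v

# toy example: stiff harmonic "fast" force plus weak anharmonic "slow" force
m = 1.0
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1 * x**3
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, m, dt_outer=0.05, n_inner=10)
```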
Multiscale analysis of the intensity fluctuation in a time series of dynamic speckle patterns.
Federico, Alejandro; Kaufmann, Guillermo H
2007-04-10
We propose the application of a method based on the discrete wavelet transform to detect, identify, and measure scaling behavior in dynamic speckle. The multiscale phenomena presented by a sample and displayed by its speckle activity are analyzed by processing the time series of dynamic speckle patterns. The scaling analysis is applied to the temporal fluctuation of the speckle intensity and also to the two derived data sets generated by its magnitude and sign. The application of the method is illustrated by analyzing paint-drying processes and bruising in apples. The results are discussed taking into account the different time organizations obtained for the scaling behavior of the magnitude and the sign of the intensity fluctuation.
NASA Astrophysics Data System (ADS)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Valeev, Edward F.; Neese, Frank
2016-01-01
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate previous implementation.
Scaling properties of foreign exchange volatility
NASA Astrophysics Data System (ADS)
Gençay, Ramazan; Selçuk, Faruk; Whitcher, Brandon
2001-01-01
In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale by scale basis through the application of a discrete wavelet transformation. It is shown that foreign exchange rate volatilities follow different scaling laws at different horizons. Particularly, there is a smaller degree of persistence in intra-day volatility as compared to volatility at one day and higher scales. Therefore, a common practice in the risk management industry to convert risk measures calculated at shorter horizons into longer horizons through a global scaling parameter may not be appropriate. This paper also demonstrates that correlation between the foreign exchange volatilities is the lowest at the intra-day scales but exhibits a gradual increase up to a daily scale. The correlation coefficient stabilizes at scales one day and higher. Therefore, the benefit of currency diversification is the greatest at the intra-day scales and diminishes gradually at higher scales (lower frequencies). The wavelet cross-correlation analysis also indicates that the association between two volatilities is stronger at lower frequencies.
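A scale-by-scale variance decomposition of this kind can be sketched with a discrete wavelet transform: the variance of the detail coefficients at level j estimates the energy contributed by time scales around 2^j observations. A minimal sketch assuming the PyWavelets package (the published analysis may use a maximal-overlap variant rather than the ordinary DWT):

```python
import numpy as np
import pywt  # assumes the PyWavelets package is installed

def wavelet_variance(x, wavelet="db4", max_level=6):
    """Decompose the variance of a series scale by scale: return the variance of
    the DWT detail coefficients, ordered from the finest to the coarsest scale."""
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:][::-1]            # reorder so index 0 = finest scale
    return [np.var(d) for d in details]

# toy example: heavy-tailed "intra-day returns" series
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2**14)
print(wavelet_variance(returns))          # compare how variance spreads across scales
```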
Monitoring forest dynamics with multi-scale and time series imagery.
Huang, Chunbo; Zhou, Zhixiang; Wang, Di; Dian, Yuanyong
2016-05-01
To monitor forest dynamics and evaluate forest ecosystem services effectively, timely acquisition of spatial and quantitative information on forestland is necessary. Here, a new method is proposed for mapping forest cover changes by combining multi-scale satellite remote-sensing imagery with time series data. Using time series Normalized Difference Vegetation Index products derived from Moderate Resolution Imaging Spectroradiometer images (MODIS-NDVI) and Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images as data sources, a hierarchical stepwise analysis from coarse scale to fine scale was developed for detecting forest change areas. At the coarse scale, MODIS-NDVI data with 1-km resolution were used to detect changes in land cover types, and a land cover change map was constructed using NDVI values during vegetation growing seasons. At the fine scale, based on the coarse-scale results, Landsat TM/ETM+ data with 30-m resolution were used to precisely detect the location and trend of forest change by analyzing time series forest vegetation indices (IFZ). The method was tested using data for Hubei Province, China. The MODIS-NDVI data from 2001 to 2012 were used to detect land cover changes, and the overall accuracy was 94.02% at the coarse scale. At the fine scale, the available TM/ETM+ images from vegetation growing seasons between 2001 and 2012 were used to locate and verify forest changes in the Three Gorges Reservoir Area, and the overall accuracy was 94.53%. The accuracy of the two-layer hierarchical monitoring results indicates that the multi-scale monitoring method is feasible and reliable.
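Both detection layers start from NDVI time series. A minimal sketch of the index and of a coarse-scale screening step, with a purely illustrative change threshold rather than the thresholds used in the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def change_flag(ndvi_t1, ndvi_t2, threshold=0.2):
    """Coarse-scale screening: flag pixels whose growing-season NDVI dropped by more
    than a threshold between two dates (threshold is hypothetical, for illustration)."""
    return (ndvi_t1 - ndvi_t2) > threshold
```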
ERIC Educational Resources Information Center
Stoet, Gijsbert
2017-01-01
This article reviews PsyToolkit, a free web-based service designed for setting up, running, and analyzing online questionnaires and reaction-time (RT) experiments. It comes with extensive documentation, videos, lessons, and libraries of free-to-use psychological scales and RT experiments. It provides an elaborate interactive environment to use (or…
USDA-ARS?s Scientific Manuscript database
Real-time rainfall accumulation estimates at the global scale is useful for many applications. However, the real-time versions of satellite-based rainfall products are known to contain errors relative to real rainfall observed in situ. Recent studies have demonstrated how information about rainfall ...
Multiscale functions, scale dynamics, and applications to partial differential equations
NASA Astrophysics Data System (ADS)
Cresson, Jacky; Pierret, Frédéric
2016-05-01
Modeling phenomena from experimental data always begins with a choice of hypotheses about the observed dynamics, such as determinism, randomness, and differentiability. Depending on these choices, different behaviors can be observed. The natural question associated with the modeling problem is the following: "With a finite set of data concerning a phenomenon, can we recover its underlying nature?" From this problem, we introduce in this paper the definition of multi-scale functions, scale calculus, and scale dynamics based on the time scale calculus [see Bohner, M. and Peterson, A., Dynamic Equations on Time Scales: An Introduction with Applications (Springer Science & Business Media, 2001)] which is used to introduce the notion of scale equations. These definitions will be illustrated on the multi-scale Okamoto's functions. Scale equations are analysed using scale regimes and the notion of asymptotic model for a scale equation under a particular scale regime. The introduced formalism explains why a single scale equation can produce distinct continuous models even if the equation is scale invariant. Typical examples of such equations are given by the scale Euler-Lagrange equation. We illustrate our results using the scale Newton's equation which gives rise to a non-linear diffusion equation or a non-linear Schrödinger equation as asymptotic continuous models depending on the particular fractional scale regime which is considered.
Palliative sedation: reliability and validity of sedation scales.
Arevalo, Jimmy J; Brinkkemper, Tijn; van der Heide, Agnes; Rietjens, Judith A; Ribbe, Miel; Deliens, Luc; Loer, Stephan A; Zuurmond, Wouter W A; Perez, Roberto S G M
2012-11-01
Observer-based sedation scales have been used to provide a measurable estimate of the comfort of nonalert patients in palliative sedation. However, their usefulness and appropriateness in this setting have not been demonstrated. The aim of this study was to examine the reliability and validity of observer-based sedation scales in palliative sedation. A prospective evaluation of 54 patients under intermittent or continuous sedation with four sedation scales was performed by 52 nurses. Included scales were the Minnesota Sedation Assessment Tool (MSAT), Richmond Agitation-Sedation Scale (RASS), Vancouver Interaction and Calmness Scale (VICS), and a sedation score proposed in the Guideline for Palliative Sedation of the Royal Dutch Medical Association (KNMG). Inter-rater reliability was tested with the intraclass correlation coefficient (ICC) and Cohen's kappa coefficient. Concurrent validity was tested with correlations between the scales using Spearman's rho. We also examined construct, discriminative, and evaluative validity. In addition, nurses completed a user-friendliness survey. Overall moderate to high inter-rater reliability was found for the VICS interaction subscale (ICC = 0.85), RASS (ICC = 0.73), and KNMG (ICC = 0.71). The largest correlation between scales was found for the RASS and KNMG (rho = 0.836). All scales showed discriminative and evaluative validity, except for the MSAT motor subscale and VICS calmness subscale. Finally, the RASS was less time-consuming, clearer, and easier to use than the MSAT and VICS. The RASS and KNMG scales stand as the most reliable and valid among the evaluated scales. In addition, the RASS was less time-consuming, clearer, and easier to use than the MSAT and VICS. Further research is needed to evaluate the impact of the scales on better symptom control and patient comfort. Copyright © 2012 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Decadal-Scale Forecasting of Climate Drivers for Marine Applications.
Salinger, J; Hobday, A J; Matear, R J; O'Kane, T J; Risbey, J S; Dunstan, P; Eveson, J P; Fulton, E A; Feng, M; Plagányi, É E; Poloczanska, E S; Marshall, A G; Thompson, P A
Climate influences marine ecosystems on a range of time scales, from weather-scale (days) through to climate-scale (hundreds of years). Understanding of interannual to decadal climate variability and its impacts on marine industries has received less attention. Predictability up to 10 years ahead may come from large-scale climate modes in the ocean that can persist over these time scales. In Australia the key drivers of climate variability affecting the marine environment are the Southern Annular Mode, the Indian Ocean Dipole, the El Niño/Southern Oscillation, and the Interdecadal Pacific Oscillation, each of which has phases that are associated with different ocean circulation patterns and regional environmental variables. The roles of these drivers are illustrated with three case studies of extreme events: a marine heatwave in Western Australia, a coral bleaching of the Great Barrier Reef, and flooding in Queensland. Statistical and dynamical approaches are described to generate forecasts of climate drivers that can subsequently be translated into useful information for marine end users making decisions at these time scales. Considerable investment is still needed to support decadal forecasting, including improvement of ocean-atmosphere models, enhancement of observing systems on all scales to support initialization of forecasting models, collection of important biological data, and integration of forecasts into decision support tools. Collaboration between forecast developers and marine resource sectors (fisheries, aquaculture, tourism, biodiversity management, infrastructure) is needed to support forecast-based tactical and strategic decisions that reduce environmental risk over annual to decadal time scales. © 2016 Elsevier Ltd. All rights reserved.
Scale-down/scale-up studies leading to improved commercial beer fermentation.
Nienow, Alvin W; Nordkvist, Mikkel; Boulton, Christopher A
2011-08-01
Scale-up/scale-down techniques are vital for successful and safe commercial-scale bioprocess design and operation. An example is given in this review of recent studies related to beer production. Work at the bench scale shows that brewing yeast is not compromised by mechanical agitation up to 4.5 W/kg; and that compared with fermentations mixed by CO(2) evolution, agitation ≥ 0.04 W/kg is able to reduce fermentation time by about 20%. Work at the commercial scale in cylindroconical fermenters shows that, without mechanical agitation, most of the yeast sediments into the cone for about 50% of the fermentation time, leading to poor temperature control. Stirrer mixing overcomes these problems and leads to a similar reduction in batch time as the bench-scale tests and greatly reduces its variability, but is difficult to install in extant fermenters. The mixing characteristics of a new jet mixer, a rotary jet mixer, which overcomes these difficulties, are reported, based on pilot-scale studies. This change enables the advantages of stirring to be achieved at the commercial scale without the problems. In addition, more of the fermentable sugars are converted into ethanol. This review shows the effectiveness of scale-up/scale-down studies for improving commercial operations. Suggestions for further studies are made: one concerning the impact of homogenization on the removal of vicinal diketones and the other on the location of bubble formation at the commercial scale. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Tatner, Mary; Tierney, Anne
2016-01-01
The development and evaluation of a two-week laboratory class, based on the diagnosis of human infectious diseases, is described. It can be easily scaled up or down, to suit class sizes from 50 to 600 and completed in a shorter time scale, and to different audiences as desired. Students employ a range of techniques to solve a real-life and…
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics-based space weather modeling and even forecasting.
Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482
A Multi-Scale Energy Food Systems Modeling Framework For Climate Adaptation
NASA Astrophysics Data System (ADS)
Siddiqui, S.; Bakker, C.; Zaitchik, B. F.; Hobbs, B. F.; Broaddus, E.; Neff, R.; Haskett, J.; Parker, C.
2016-12-01
Our goal is to understand coupled system dynamics across scales in a manner that allows us to quantify the sensitivity of critical human outcomes (nutritional satisfaction, household economic well-being) to development strategies and to climate or market induced shocks in sub-Saharan Africa. We adopt both bottom-up and top-down multi-scale modeling approaches focusing our efforts on food, energy, water (FEW) dynamics to define, parameterize, and evaluate modeled processes nationally as well as across climate zones and communities. Our framework comprises three complementary modeling techniques spanning local, sub-national and national scales to capture interdependencies between sectors, across time scales, and on multiple levels of geographic aggregation. At the center is a multi-player micro-economic (MME) partial equilibrium model for the production, consumption, storage, and transportation of food, energy, and fuels, which is the focus of this presentation. We show why such models can be very useful for linking and integrating across time and spatial scales, as well as a wide variety of models including an agent-based model applied to rural villages and larger population centers, an optimization-based electricity infrastructure model at a regional scale, and a computable general equilibrium model, which is applied to understand FEW resources and economic patterns at national scale. The MME is based on aggregating individual optimization problems for relevant players in an energy, electricity, or food market and captures important food supply chain components of trade and food distribution accounting for infrastructure and geography. Second, our model considers food access and utilization by modeling food waste and disaggregating consumption by income and age. Third, the model is set up to evaluate the effects of seasonality and system shocks on supply, demand, infrastructure, and transportation in both energy and food.
Quantum Metrology beyond the Classical Limit under the Effect of Dephasing
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.
2018-04-01
Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimation of the target fields scales inversely with the square root of the measurement time T. On the other hand, by using quantum resources, we can reduce this scaling of the uncertainty with time to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether we can achieve sensitivities with a scaling of 1/T for a measurement time longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with an uncertainty scaling of 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent correlations between the quantum state and its local environment from building up, and we show that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
USDA-ARS?s Scientific Manuscript database
The performance of wood-based denitrifying bioreactors to treat high-nitrate wastewaters from aquaculture systems has not previously been demonstrated. Four pilot-scale woodchip bioreactors (approximately 1:10 scale) were constructed and operated for 268 d to determine the optimal range of design hy...
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
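One common way to realize a double-seasonal model of hourly demand is to keep the daily cycle in a seasonal ARIMA and represent the weekly cycle with Fourier terms used as exogenous regressors. The sketch below takes that route with statsmodels; it illustrates the modeling idea rather than the authors' exact specification, and assumes a strictly positive hourly series `y` with a pandas DatetimeIndex.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def weekly_fourier(index, K=3):
    """Fourier terms for the weekly (168 h) cycle, used as exogenous regressors so a
    single-seasonal SARIMAX can approximate a double-seasonal model."""
    t = np.arange(len(index))
    cols = {}
    for k in range(1, K + 1):
        cols[f"sin{k}"] = np.sin(2 * np.pi * k * t / 168.0)
        cols[f"cos{k}"] = np.cos(2 * np.pi * k * t / 168.0)
    return pd.DataFrame(cols, index=index)

def fit_and_forecast(y, steps=24):
    """Log-transformed demand with daily seasonality (s=24) plus weekly Fourier exog."""
    exog = weekly_fourier(y.index)
    model = SARIMAX(np.log(y), exog=exog,
                    order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
    res = model.fit(disp=False)
    future_index = pd.date_range(y.index[-1] + pd.Timedelta(hours=1),
                                 periods=steps, freq="h")
    exog_f = weekly_fourier(y.index.append(future_index)).iloc[-steps:]
    fc = res.get_forecast(steps=steps, exog=exog_f)
    # back-transform point forecast and prediction interval to the original scale
    return np.exp(fc.predicted_mean), np.exp(fc.conf_int())
```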
Individual-based approach to epidemic processes on arbitrary dynamic contact networks
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki
2016-08-01
The dynamics of contact networks and epidemics of infectious diseases often occur on comparable time scales. Ignoring one of these time scales may provide an incomplete understanding of the population dynamics of the infection process. We develop an individual-based approximation for the susceptible-infected-recovered epidemic model applicable to arbitrary dynamic networks. Our framework provides, at the individual-level, the probability flow over time associated with the infection dynamics. This computationally efficient framework discards the correlation between the states of different nodes, yet provides accurate results in approximating direct numerical simulations. It naturally captures the temporal heterogeneities and correlations of contact sequences, fundamental ingredients regulating the timing and size of an epidemic outbreak, and the number of secondary infections. The high accuracy of our approximation further allows us to detect the index individual of an epidemic outbreak in real-life network data.
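The correlation-free individual-based approximation can be written as a probability update per node and per network snapshot. A minimal sketch on a toy temporal network (all parameters illustrative):

```python
import numpy as np

def individual_based_sir(snapshots, beta, mu, seed_node, n_nodes):
    """Individual-based (correlation-free) approximation of SIR dynamics on a temporal
    network given as a list of adjacency-matrix snapshots A(t). Each node carries
    probabilities of being susceptible (s), infected (x), and recovered (r)."""
    s = np.ones(n_nodes); x = np.zeros(n_nodes); r = np.zeros(n_nodes)
    x[seed_node], s[seed_node] = 1.0, 0.0
    history = []
    for A in snapshots:
        # probability that node i escapes infection from all of its current contacts
        escape = np.prod(1.0 - beta * A * x[np.newaxis, :], axis=1)
        new_inf = s * (1.0 - escape)
        s, x, r = s - new_inf, (1.0 - mu) * x + new_inf, r + mu * x
        history.append((s.mean(), x.mean(), r.mean()))
    return np.array(history)

# toy temporal network: 50 nodes with random symmetric contacts redrawn every step
rng = np.random.default_rng(2)
snaps = []
for _ in range(200):
    A = (rng.random((50, 50)) < 0.05).astype(float)
    np.fill_diagonal(A, 0.0)            # no self-contacts
    snaps.append(np.maximum(A, A.T))    # symmetric contacts
traj = individual_based_sir(snaps, beta=0.3, mu=0.05, seed_node=0, n_nodes=50)
```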
2D IR spectra of cyanide in water investigated by molecular dynamics simulations
Lee, Myung Won; Carr, Joshua K.; Göllner, Michael; Hamm, Peter; Meuwly, Markus
2013-01-01
Using classical molecular dynamics simulations, the 2D infrared (IR) spectroscopy of CN− solvated in D2O is investigated. Depending on the force field parametrizations, most of which are based on multipolar interactions for the CN− molecule, the frequency-frequency correlation function and observables computed from it differ. Most notably, models based on multipoles for CN− and TIP3P for water yield quantitatively correct results when compared with experiments. Furthermore, the recent finding that T1 times are sensitive to the van der Waals ranges on the CN− is confirmed in the present study. For the linear IR spectrum, the best model reproduces the full widths at half maximum almost quantitatively (13.0 cm−1 vs. 14.9 cm−1) if the rotational contribution to the linewidth is included. Without the rotational contribution, the lines are too narrow by about a factor of two, which agrees with Raman and IR experiments. The computed tilt angles (or nodal slopes) α as a function of the 2D IR waiting time compare favorably with the measured ones, and the frequency fluctuation correlation function is invariably found to contain three time scales: a sub-ps scale, a 1 ps scale, and one on the 10-ps time scale. These time scales are discussed in terms of the structural dynamics of the surrounding solvent, and it is found that the longest time scale (≈10 ps) most likely corresponds to solvent exchange between the first and second solvation shell, in agreement with interpretations from nuclear magnetic resonance measurements.
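The frequency fluctuation correlation function underlying these observables is estimated from the trajectory of instantaneous transition frequencies. A minimal sketch of that estimate on a toy trajectory (the force-field machinery that produces the real frequency trajectory is not reproduced here):

```python
import numpy as np

def ffcf(omega, max_lag):
    """Frequency fluctuation correlation function C(t) = <dw(0) dw(t)> estimated
    from a trajectory of instantaneous transition frequencies."""
    dw = np.asarray(omega, float) - np.mean(omega)
    n = len(dw)
    return np.array([np.mean(dw[:n - k] * dw[k:]) for k in range(max_lag)])

# toy AR(1) frequency trajectory standing in for the simulated CN- frequencies
rng = np.random.default_rng(3)
dw = np.zeros(20000)
for i in range(1, dw.size):
    dw[i] = 0.99 * dw[i - 1] + rng.normal(0.0, 1.0)
C = ffcf(dw + 2080.0, max_lag=500)   # fitting C to a sum of exponentials yields the time scales
```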
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images, or different parts of a large scene, is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine: it performs thousands of comparisons among images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image usually differs in scale from the filtering template images, and in that case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method exploits the relationship between the scale variation and the correlation value of two images: a few artificially scaled versions of the input image are compared with the template images. The correlation value increases with the scale factor over the interval 0.8-1 and decreases over the interval 1-1.2. The original scale of the input image can therefore be measured by finding the largest correlation value when correlating the artificially scaled input images with the template images. The measurement range for the scale factor is 0.8-4.8: scale factors beyond 1.2 are measured by first rescaling the input image by 1/2, 1/3, or 1/4, correlating the rescaled image with the template images, and estimating the corresponding scale factor within 0.8-1.2.
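In software terms, the time-sequential scaled method amounts to correlating a few artificially rescaled copies of the input image with the template and keeping the factor that maximizes the correlation; in the paper the correlations themselves are performed optically in the holographic correlator. A minimal digital sketch:

```python
import numpy as np
from scipy.ndimage import zoom

def normalized_correlation(a, b):
    """Zero-mean normalized correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def estimate_scale(input_img, template, candidates=np.arange(0.8, 1.21, 0.05)):
    """Correlate a few artificially rescaled copies of the input image with the template
    and return the candidate factor giving the largest correlation. Assumes the object
    stays roughly aligned with the template (a toy simplification)."""
    h, w = template.shape
    best_s, best_c = None, -np.inf
    for s in candidates:
        scaled = zoom(input_img, s, order=1)
        canvas = np.zeros_like(template)          # crop or zero-pad to the template size
        hh, ww = min(h, scaled.shape[0]), min(w, scaled.shape[1])
        canvas[:hh, :ww] = scaled[:hh, :ww]
        c = normalized_correlation(canvas, template)
        if c > best_c:
            best_s, best_c = s, c
    return best_s, best_c
```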
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
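As a software counterpart to the hardware pipeline, SIFT detection and ratio-test matching can be sketched with OpenCV (assuming version 4.4 or later, where SIFT ships in the main module):

```python
import cv2  # assumes OpenCV >= 4.4, where SIFT is available as cv2.SIFT_create

def sift_match(frame, template, ratio=0.75):
    """Detect SIFT keypoints in a grayscale frame, match them to a grayscale template
    with the ratio test, and return the matched keypoint coordinates in the frame."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template, None)
    kp_f, des_f = sift.detectAndCompute(frame, None)
    if des_t is None or des_f is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_t, des_f, k=2)
            if m.distance < ratio * n.distance]
    return [kp_f[m.trainIdx].pt for m in good]
```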
NASA Astrophysics Data System (ADS)
Verma, Aman; Mahesh, Krishnan
2012-08-01
The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
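For context, the standard Lagrangian-averaged dynamic model (Meneveau et al.) transports two pathline-averaged quantities with a relaxation time scale that contains the adjustable parameter θ; the contribution described above is to compute that time scale dynamically from a surrogate correlation of the Germano-identity error instead. The standard form, shown here as background rather than as the paper's formulation, is:

```latex
C_s^2 = \frac{\mathcal{I}_{LM}}{\mathcal{I}_{MM}}, \qquad
\frac{D\mathcal{I}_{LM}}{Dt} = \frac{1}{T}\left(L_{ij}M_{ij} - \mathcal{I}_{LM}\right), \qquad
\frac{D\mathcal{I}_{MM}}{Dt} = \frac{1}{T}\left(M_{ij}M_{ij} - \mathcal{I}_{MM}\right), \qquad
T = \theta\,\bar{\Delta}\,\left(\mathcal{I}_{LM}\,\mathcal{I}_{MM}\right)^{-1/8}.
```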
Multiscale skeletal representation of images via Voronoi diagrams
NASA Astrophysics Data System (ADS)
Marston, R. E.; Shih, Jian C.
1995-08-01
Polygonal approximations to skeletal or stroke-based representations of 2D objects may consume less storage and be sufficient to describe their shape for many applications. Multi-scale descriptions of object outlines are well established, but corresponding methods for skeletal descriptions have been slower to develop. In this paper we offer a method of generating scale-based skeletal representations via the Voronoi diagram. The method has the advantages of lower time complexity, a closer relationship between the skeletons at each scale, and better control over simplification of the skeleton at lower scales. This is because the algorithm starts by generating the skeleton at the coarsest scale first and then produces each finer scale, in an iterative manner, directly from the level below. The skeletal approximations produced by the algorithm also benefit from a strong relationship with the object outline, due to the structure of the Voronoi diagram.
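The core construction is the interior part of the Voronoi diagram of boundary samples, which approximates the medial axis; coarser skeletons follow from sparser boundary sampling. A minimal sketch using SciPy and Matplotlib utilities (the paper's coarse-to-fine iteration is not reproduced):

```python
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

def voronoi_skeleton(boundary_points):
    """Approximate the skeleton (medial axis) of a closed outline: build the Voronoi
    diagram of boundary samples and keep the vertices that fall inside the outline."""
    vor = Voronoi(boundary_points)
    inside = Path(boundary_points).contains_points(vor.vertices)
    return vor.vertices[inside]

# toy outline: an ellipse sampled at a "fine" and a "coarse" scale
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
outline = np.c_[2.0 * np.cos(t), 1.0 * np.sin(t)]
skeleton_fine = voronoi_skeleton(outline)
skeleton_coarse = voronoi_skeleton(outline[::8])   # sparser sampling -> simpler skeleton
```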
Vernier effect-based multiplication of the Sagnac beating frequency in ring laser gyroscope sensors
NASA Astrophysics Data System (ADS)
Adib, George A.; Sabry, Yasser M.; Khalil, Diaa
2018-02-01
A multiplication method of the Sagnac effect scale factor in ring laser gyroscopes is presented based on the Vernier effect of a dual-coupler passive ring resonator coupled to the active ring. The multiplication occurs when the two rings have comparable lengths, or lengths that are integer multiples of one another, and their scale factors have opposite signs. In this case, and when the rings have similar areas, the scale factor is multiplied by the ratio of their length to their length difference. The scale factor of the presented configuration is derived analytically and the lock-in effect is analyzed. The principle is demonstrated using optical fiber rings and a semiconductor optical amplifier as the gain medium. A scale factor multiplication of about 175 is measured experimentally, demonstrating, to the authors' knowledge, the first enhancement of the Sagnac effect scale factor by more than two orders of magnitude reported in the literature.
Adaptive learning compressive tracking based on Markov location prediction
NASA Astrophysics Data System (ADS)
Zhou, Xingyu; Fu, Dongmei; Yang, Tao; Shi, Yanan
2017-03-01
Object tracking is an interdisciplinary research topic in image processing, pattern recognition, and computer vision, with theoretical and practical value in video surveillance, virtual reality, and automatic navigation. Compressive tracking (CT) has many advantages, such as efficiency and accuracy. However, under object occlusion, abrupt motion and blur, similar objects, or scale changes, CT suffers from tracking drift. We propose a Markov model of object location prediction to obtain the initial position of the object; CT is then used to locate the object accurately, and an adaptive updating strategy for the classifier parameters is derived from the confidence map. At the same time, scale features are extracted according to the object location, which allows the tracker to handle object scale variations effectively. Experimental results show that the proposed algorithm achieves better tracking accuracy and robustness than current state-of-the-art algorithms while running in real time.
ERIC Educational Resources Information Center
Kesici, Ahmet; Tunç, Nazenin Fidan
2018-01-01
This study was carried out to develop a scale for determining the level of digital addiction of the youth. In this study carried out with a group of 687 students from Siirt, Dicle and Erzincan Universities, a draft scale of 28 items based on the interviews with two students who spent a long time with digital tools and their friends, and on the…
Rotational relaxation time as unifying time scale for polymer and fiber drag reduction
NASA Astrophysics Data System (ADS)
Boelens, A. M. P.; Muthukumar, M.
2016-05-01
Using hybrid direct numerical simulation plus Langevin dynamics, a comparison is performed between polymer and fiber stress tensors in turbulent flow. The stress tensors are found to be similar, suggesting a common drag reducing mechanism in the onset regime for both flexible polymers and rigid fibers. Since fibers do not have an elastic backbone, this must be a viscous effect. Analysis of the viscosity tensor reveals that all terms are negligible, except the off-diagonal shear viscosity associated with rotation. Based on this analysis, we identify the rotational orientation time as the unifying time scale setting a new time criterion for drag reduction by both flexible polymers and rigid fibers.
Scale-dependent compensational stacking of channelized sedimentary deposits
NASA Astrophysics Data System (ADS)
Wang, Y.; Straub, K. M.; Hajek, E. A.
2010-12-01
Compensational stacking, the tendency for sediment transport systems to preferentially fill topographic lows and thus smooth out topographic relief, is a concept used in the interpretation of the stratigraphic record. Recently, a metric was developed to quantify the strength of compensation in sedimentary basins by comparing observed stacking patterns to what would be expected from simple, uncorrelated stacking. This method uses the rate of decay of spatial variability in sedimentation between picked depositional horizons with increasing vertical stratigraphic averaging distance. We explore how this metric varies as a function of stratigraphic scale using data from physical experiments, stratigraphy exposed in outcrops, and numerical models. In an experiment conducted at Tulane University’s Sediment Dynamics Laboratory, the topography of a channelized delta formed by weakly cohesive sediment was monitored along flow-perpendicular transects at a high temporal resolution relative to channel kinematics. Over the course of this experiment, a uniform relative subsidence pattern, designed to isolate autogenic processes, resulted in the construction of a stratigraphic package 25 times as thick as the depth of the experimental channels. We observe a scale dependence of compensational stacking set by the system’s avulsion time scale: above the avulsion time scale, deposits stack purely compensationally, but below this time scale, deposits stack somewhere between randomly and deterministically. The well-exposed Ferris Formation (Cretaceous/Paleogene, Hanna Basin, Wyoming, USA) also shows scale-dependent stratigraphic organization that appears to be set by an avulsion time scale. Finally, we use simple object-based models to illustrate how channel avulsions influence compensation in alluvial basins.
Fan-out Estimation in Spin-based Quantum Computer Scale-up.
Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R
2017-10-17
Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher-dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.
Lagrangian Statistics and Intermittency in Gulf of Mexico.
Lin, Liru; Zhuang, Wei; Huang, Yongxiang
2017-12-12
Due to the nonlinear interaction between different flow patterns (for instance, ocean currents, meso-scale eddies, and waves), the movement of the ocean is extremely complex, and a multiscale statistical description is therefore relevant. In this work, a high time-resolution velocity record with a 15-minute time step, obtained from a Lagrangian drifter deployed in the Gulf of Mexico (GoM) from July 2012 to October 2012, is considered. The measured Lagrangian velocity correlation function shows a strong daily cycle due to the diurnal tide. The estimated Fourier power spectrum E(f) implies a dual-power-law behavior separated by the daily cycle. The corresponding scaling exponents are close to -1.75 and -2.75 for time scales larger than 1 day (0.1 ≤ f ≤ 0.4 day^-1) and smaller than 1 day (2 ≤ f ≤ 8 day^-1), respectively. A Hilbert-based approach is then applied to this data set to identify possible multifractal properties of the cascade process. The results show intermittent dynamics for time scales larger than 1 day and less intermittent dynamics for time scales smaller than 1 day. It is speculated that energy is partially injected via the diurnal tidal movement and then transferred to larger and smaller scales through a complex cascade process, which requires further study.
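The dual power-law exponents quoted here come from log-log fits of the spectrum in the two frequency bands. A minimal sketch of such a fit, assuming a velocity record u sampled every 15 minutes (96 samples per day):

```python
import numpy as np

def spectral_slope(freq, spectrum, f_lo, f_hi):
    """Least-squares scaling exponent of E(f) ~ f^beta over [f_lo, f_hi],
    estimated from a log-log fit."""
    sel = (freq >= f_lo) & (freq <= f_hi) & (spectrum > 0)
    return np.polyfit(np.log(freq[sel]), np.log(spectrum[sel]), 1)[0]

def dual_slopes(u, samples_per_day=96):
    """Scaling exponents in the two bands discussed in the abstract."""
    f = np.fft.rfftfreq(len(u), d=1.0 / samples_per_day)   # frequency in cycles/day
    E = np.abs(np.fft.rfft(u - np.mean(u)))**2
    return (spectral_slope(f, E, 0.1, 0.4),   # time scales longer than one day
            spectral_slope(f, E, 2.0, 8.0))   # time scales shorter than one day
```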
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano
Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating with business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies to regulate real or reactive powers based on local voltage measurements, so that terminal voltages are within acceptable levels. These control strategies have the potential of operating at the same time scale as distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. In particular, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in this case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics-interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless large-scale RES integration with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. The proposed controllers enable an update of the power outputs at a time scale that is compatible with the underlying dynamics of loads and ambient conditions, and continuously drive the system operation towards OPF-based solutions.
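As a generic illustration of the measurement-based controllers this work points toward (not the specific algorithm developed in it), a projected-gradient update of inverter reactive-power setpoints using a linearized voltage sensitivity can be sketched as follows; the sensitivity matrix, limits, and step size are all made up for the example.

```python
import numpy as np

def feedback_voltage_control(q, v_meas, R, v_ref=1.0, alpha=0.1, q_min=-0.3, q_max=0.3):
    """One real-time update of inverter reactive-power setpoints: gradient step on a
    voltage-deviation penalty using a linearized sensitivity v ~ v0 + R q, followed by
    projection onto the inverter capability limits. All parameters are illustrative."""
    grad = R.T @ (v_meas - v_ref)          # gradient of 0.5*||v - v_ref||^2 w.r.t. q
    return np.clip(q - alpha * grad, q_min, q_max)

# toy 3-inverter feeder with a made-up sensitivity matrix
R = np.array([[0.05, 0.02, 0.01],
              [0.02, 0.06, 0.03],
              [0.01, 0.03, 0.07]])
q = np.zeros(3)
v = np.array([1.04, 1.05, 1.06])
for _ in range(50):                            # controller runs at the measurement rate
    q = feedback_voltage_control(q, v, R)
    v = np.array([1.04, 1.05, 1.06]) + R @ q   # "plant" response to the new setpoints
```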
Scaling of coupled dilatancy-diffusion processes in space and time
NASA Astrophysics Data System (ADS)
Main, I. G.; Bell, A. F.; Meredith, P. G.; Brantut, N.; Heap, M.
2012-04-01
Coupled dilatancy-diffusion processes resulting from microscopically brittle damage due to precursory cracking have been observed in the laboratory and suggested as a mechanism for earthquake precursors. One reason precursors have proven elusive may be the scaling in space: recent geodetic and seismic data placing strong limits on the spatial extent of the nucleation zone for recent earthquakes. Another may be the scaling in time: recent laboratory results on axi-symmetric samples show both a systematic decrease in circumferential extensional strain at failure and a delayed and a sharper acceleration of acoustic emission event rate as strain rate is decreased. Here we examine the scaling of such processes in time from laboratory to field conditions using brittle creep (constant stress loading) to failure tests, in an attempt to bridge part of the strain rate gap to natural conditions, and discuss the implications for forecasting the failure time. Dilatancy rate is strongly correlated to strain rate, and decreases to zero in the steady-rate creep phase at strain rates around 10^-9 s^-1 for a basalt from Mount Etna. The data are well described by a creep model based on the linear superposition of transient (decelerating) and accelerating micro-crack growth due to stress corrosion. The model produces good fits to the failure time in retrospect using the accelerating acoustic emission event rate, but in prospective tests on synthetic data with the same properties we find failure-time forecasting is subject to systematic epistemic and aleatory uncertainties that degrade predictability. The next stage is to use the technology developed to attempt failure forecasting in real time, using live streamed data and a public web-based portal to quantify the prospective forecast quality under such controlled laboratory conditions.
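Prospective failure-time forecasting from accelerating precursory rates is often attempted with the inverse-rate (Voight-type) extrapolation; the sketch below shows that generic method on synthetic data, not the creep-superposition model described above.

```python
import numpy as np

def inverse_rate_forecast(times, rates):
    """Failure forecast method: for accelerating precursory activity with
    rate ~ (t_f - t)^-1, the inverse rate decays roughly linearly to zero, so a
    straight-line fit of 1/rate against time extrapolates to the failure time t_f."""
    inv = 1.0 / np.asarray(rates, float)
    slope, intercept = np.polyfit(times, inv, 1)
    return -intercept / slope        # time at which the fitted inverse rate reaches zero

# toy accelerating acoustic-emission rate with a true failure time of t_f = 100
t = np.linspace(0, 95, 200)
rate = 50.0 / (100.0 - t) * (1 + 0.05 * np.random.default_rng(4).normal(size=t.size))
print(inverse_rate_forecast(t, rate))   # should land close to 100
```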
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
The graph-based clustering algorithms we propose significantly improve time efficiency for large-scale datasets. In the last chapter, we also propose an incremental reseeding ... plume detection in hyper-spectral video data.
NASA Astrophysics Data System (ADS)
Beach, A. L., III; Early, A. B.; Chen, G.; Parker, L.
2014-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, which are characterized by a wide range of trace gases and aerosol properties. The airborne observational data have often been used in assessment and validation of models and satellite instruments. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community needs for manipulating aircraft data for scientific research on climate change and air quality relevant issues. Given the sheer volume of data variables across field campaigns and instruments reporting data on different time scales, this data is often difficult and time-intensive for researchers to analyze. The TAD web application is designed to provide an intuitive user interface (UI) to facilitate quick and efficient discovery from a vast number of airborne variables and data. Users are given the option to search based on high-level parameter groups, individual common names, mission and platform, as well as date ranges. Experienced users can immediately filter by keyword using the global search option. Once the user has chosen their required variables, they are given the option to either request PI data files based on their search criteria or create merged data, i.e. geo-located data from one or more measurement PIs. The purpose of the merged data feature is to allow users to compare data from one flight, as not all data from each flight is taken on the same time scale. Time bases can be continuous or based on the time base from one of the measurement time scales and intervals. After an order is submitted and processed, an ASDC email is sent to the user with a link for data download. The TAD user interface design, application architecture, and proposed future enhancements will be presented.
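The merged-data feature amounts to placing measurements reported on different time scales onto a common time base. A minimal sketch of such a merge with pandas, using hypothetical variable names rather than the TAD schema:

```python
import numpy as np
import pandas as pd

def merge_on_time_base(frames, time_base):
    """Merge measurements reported on different time scales onto one time base by
    nearest-neighbour matching within a tolerance. Column names are hypothetical."""
    merged = pd.DataFrame({"time": time_base})
    for df in frames:
        merged = pd.merge_asof(merged.sort_values("time"), df.sort_values("time"),
                               on="time", direction="nearest",
                               tolerance=pd.Timedelta("5s"))
    return merged

# toy example: 1 Hz ozone and 0.1 Hz aerosol records merged to a 10 s time base
t0 = pd.Timestamp("2014-09-01 12:00:00")
ozone = pd.DataFrame({"time": t0 + pd.to_timedelta(np.arange(600), unit="s"),
                      "O3_ppbv": np.random.default_rng(5).normal(40, 2, 600)})
aerosol = pd.DataFrame({"time": t0 + pd.to_timedelta(np.arange(0, 600, 10), unit="s"),
                        "scattering": np.random.default_rng(6).normal(12, 1, 60)})
base = pd.Series(t0 + pd.to_timedelta(np.arange(0, 600, 10), unit="s"))
table = merge_on_time_base([ozone, aerosol], base)
```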
NASA Astrophysics Data System (ADS)
Martin, Stephanie L.-O.; Carek, Andrew M.; Kim, Chang-Sei; Ashouri, Hazar; Inan, Omer T.; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2016-12-01
Pulse transit time (PTT) is being widely pursued for cuff-less blood pressure (BP) monitoring. Most efforts have employed the time delay between ECG and finger photoplethysmography (PPG) waveforms as a convenient surrogate of PTT. However, these conventional pulse arrival time (PAT) measurements include the pre-ejection period (PEP) and the time delay through small, muscular arteries and may thus be an unreliable marker of BP. We assessed a bathroom weighing scale-like system for convenient measurement of ballistocardiography and foot PPG waveforms - and thus PTT through larger, more elastic arteries - in terms of its ability to improve tracking of BP in individual subjects. We measured “scale PTT”, conventional PAT, and cuff BP in humans during interventions that increased BP but changed PEP and smooth muscle contraction differently. Scale PTT tracked the diastolic BP changes well, with correlation coefficient of -0.80 ± 0.02 (mean ± SE) and root-mean-squared-error of 7.6 ± 0.5 mmHg after a best-case calibration. Conventional PAT was significantly inferior in tracking these changes, with correlation coefficient of -0.60 ± 0.04 and root-mean-squared-error of 14.6 ± 1.5 mmHg (p < 0.05). Scale PTT also tracked the systolic BP changes better than conventional PAT but not to an acceptable level. With further development, scale PTT may permit reliable, convenient measurement of BP.
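The tracking statistics quoted above (correlation coefficient and RMSE after a best-case calibration) can be computed per subject as in the following sketch; the function name and data values are illustrative, not from the study.

```python
import numpy as np

def track_bp(ptt_ms, dbp_mmhg):
    """Score how well a PTT series tracks cuff diastolic BP within one subject.

    A best-case calibration maps PTT to BP by least squares; the correlation
    coefficient and root-mean-squared error are then computed against cuff BP.
    """
    ptt = np.asarray(ptt_ms, float)
    dbp = np.asarray(dbp_mmhg, float)
    r = np.corrcoef(ptt, dbp)[0, 1]                      # PTT vs BP correlation
    a, b = np.polyfit(ptt, dbp, 1)                       # best-case linear calibration
    rmse = np.sqrt(np.mean((a * ptt + b - dbp) ** 2))    # calibrated error, mmHg
    return r, rmse

# Illustrative numbers only (PTT falls as BP rises, hence a negative correlation)
r, rmse = track_bp([180, 172, 165, 160, 170], [68, 74, 81, 85, 75])
print(f"r = {r:.2f}, RMSE = {rmse:.1f} mmHg")
```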
Deciphering hierarchical features in the energy landscape of adenylate kinase folding/unfolding
NASA Astrophysics Data System (ADS)
Taylor, J. Nicholas; Pirchi, Menahem; Haran, Gilad; Komatsuzaki, Tamiki
2018-03-01
Hierarchical features of the energy landscape of the folding/unfolding behavior of adenylate kinase, including its dependence on denaturant concentration, are elucidated in terms of single-molecule fluorescence resonance energy transfer (smFRET) measurements in which the proteins are encapsulated in a lipid vesicle. The core in constructing the energy landscape from single-molecule time-series across different denaturant concentrations is the application of rate-distortion theory (RDT), which naturally considers the effects of measurement noise and sampling error, in combination with change-point detection and the quantification of the FRET efficiency-dependent photobleaching behavior. Energy landscapes are constructed as a function of observation time scale, revealing multiple partially folded conformations at small time scales that are situated in a superbasin. As the time scale increases, these denatured states merge into a single basin, demonstrating the coarse-graining of the energy landscape as observation time increases. Because the photobleaching time scale is dependent on the conformational state of the protein, possible nonequilibrium features are discussed, and a statistical test for violation of the detailed balance condition is developed based on the state sequences arising from the RDT framework.
NASA Astrophysics Data System (ADS)
Patel, Ravi A.; Perko, Janez; Jacques, Diederik
2017-04-01
Often, especially in disciplines related to natural porous media, such as vadose zone or aquifer hydrology or contaminant transport, the relevant spatial and temporal scales on which we need to provide information are larger than the scales at which the processes actually occur. The usual techniques used to deal with these problems assume the existence of a REV. However, in order to understand the behavior on larger scales it is important to downscale the problem onto the relevant scale of the processes. Due to the limitations of resources (time, memory) the downscaling can only be made down to a certain lower scale. At this lower scale several scales may still co-exist: the scale which can be explicitly described, and a scale which needs to be conceptualized by effective properties. Hence, models which are supposed to provide effective properties on relevant scales should therefore be flexible enough to represent complex pore structure by explicit geometry on one side, and differently defined processes (e.g., via effective properties) which emerge on the lower scale on the other. In this work we present a state-of-the-art lattice Boltzmann method based simulation tool applicable to the advection-diffusion equation coupled to geochemical processes. The lattice Boltzmann transport solver can be coupled with an external geochemical solver, which allows a wide range of geochemical reaction networks to be accounted for through thermodynamic databases. Extension to multiphase systems is ongoing. We provide several examples related to the calculation of effective diffusion properties, permeability, and effective reaction rates on the continuum scale based on the pore-scale geometry.
Multi-Center Traffic Management Advisor Operational Field Test Results
NASA Technical Reports Server (NTRS)
Farley, Todd; Landry, Steven J.; Hoang, Ty; Nickelson, Monicarol; Levin, Kerry M.; Rowe, Dennis W.
2005-01-01
The Multi-Center Traffic Management Advisor (McTMA) is a research prototype system which seeks to bring time-based metering into the mainstream of air traffic control (ATC) operations. Time-based metering is an efficient alternative to traditional air traffic management techniques such as distance-based spacing (miles-in-trail spacing) and managed arrival reservoirs (airborne holding). While time-based metering has demonstrated significant benefit in terms of arrival throughput and arrival delay, its use to date has been limited to arrival operations at just nine airports nationally. Wide-scale adoption of time-based metering has been hampered, in part, by the limited scalability of metering automation. In order to realize the full spectrum of efficiency benefits possible with time-based metering, a much more modular, scalable time-based metering capability is required. With its distributed metering architecture, multi-center TMA offers such a capability.
Temporal scaling of hydraulic head and river base flow and its implication for groundwater recharge
Zhang, You‐Kuan; Schilling, Keith
2004-01-01
Spectral analyses were conducted for hourly hydraulic head (h) data observed over a 4‐year period at seven monitoring wells in the Walnut Creek watershed, Iowa. The log power spectral density of the hydraulic head fluctuations versus log frequency (f) at all seven wells is shown to have a distinct slope or fractal dimension (D), indicating temporal scaling in the time series of water level fluctuations. The fractal dimension of the time series varies from well to well, and the spectrum for the average h over all seven wells has a fractal dimension of 1.46 and Hurst coefficient of 0.54. The log power spectral density of estimated base flow in the Walnut Creek and four other watersheds versus log f is shown to have two distinct slopes with a break in scaling at about 30 days. It is shown that the groundwater recharge process in a basin can be estimated from a head spectrum based on existing theoretical results. Hydraulic head in an aquifer may fluctuate as a fractal in time in response to either a white noise or fractal recharge process, depending on physical parameters (i.e., transmissivity and specific yield) of the aquifer. The recharge process at the Walnut Creek watershed is shown to have a white noise spectrum based on the observed head spectrum.
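A sketch of the spectral estimate described above is given below; the conversions from spectral slope β to Hurst coefficient H and fractal dimension D assume fBm-type scaling (H = (β - 1)/2, D = 2 - H) and may differ in detail from the conventions used in the paper. The synthetic hourly series is illustrative only.

```python
import numpy as np
from scipy.signal import welch

def spectral_scaling(h, fs=1.0):
    """Estimate the power-law slope of a head series' power spectrum, then a Hurst
    coefficient and fractal dimension under fBm-type scaling
    (S(f) ~ f**-beta, H = (beta - 1) / 2, D = 2 - H)."""
    f, S = welch(h, fs=fs, nperseg=min(len(h), 4096))
    keep = f > 0                                   # drop the zero-frequency bin
    slope = np.polyfit(np.log(f[keep]), np.log(S[keep]), 1)[0]
    beta = -slope
    H = (beta - 1.0) / 2.0
    return beta, H, 2.0 - H

# Illustrative use with 4 years of synthetic hourly data (a random walk has beta ~ 2)
h = np.cumsum(np.random.randn(4 * 365 * 24))
beta, H, D = spectral_scaling(h, fs=1.0)
print(f"beta = {beta:.2f}, Hurst = {H:.2f}, fractal dimension = {D:.2f}")
```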
Unsteady separation and vortex shedding from a laminar separation bubble over a bluff body
NASA Astrophysics Data System (ADS)
Das, S. P.; Srinivasan, U.; Arakeri, J. H.
2013-07-01
Boundary layers are subject to favorable and adverse pressure gradients because of both the temporal and spatial components of the pressure gradient. The adverse pressure gradient may cause the flow to separate. In a closed loop unsteady tunnel we have studied the initiation of separation in unsteady flow past a constriction (bluff body) in a channel. We have proposed two important scalings for the time when the boundary layer separates. One is based on the local pressure gradient and the other is a convective time scale based on boundary layer parameters. Flow visualization using a dye injection technique shows the flow structure past the body. The nondimensional shedding frequency (Strouhal number) is calculated based on the boundary layer and momentum thicknesses. The Strouhal number based on the momentum thickness shows close agreement with that for a flat plate and a circular cylinder.
Wang, Shi-Fan; Guo, Chao-Lun; Cui, Ke-Ke; Zhu, Yan-Ting; Ding, Jun-Xiong; Zou, Xin-Yue; Li, Yi-Hang
2015-09-01
Lactic acid has been used as a bio-based green solvent to study the ultrasound-assisted scale-up synthesis. We report here, for the first time, on the novel and scalable process for synthesis of pyrrole derivatives in lactic acid solvent under ultrasonic radiation. Eighteen pyrrole derivatives have been synthesized in lactic acid solvent under ultrasonic radiation and characterized by (1)H NMR, IR, ESI MS. The results show that, under ultrasonic radiation, the lactic acid solvent can overcome the scale-up challenges and exhibits many advantages, such as bio-based origin, shorter reaction time, lower volatility, higher yields, and ease of isolating the products. Copyright © 2015 Elsevier B.V. All rights reserved.
Harold A. Rapraeger
1952-01-01
In the Pacific Northwest logs are often scaled in lengths which average about 32 feet to facilitate logging. Although several excellent Western hemlock, Sitka spruce and Douglas-fir volume tables based on a 32-foot scaling length have been available for some time, they provide for a larger top diameter than is now used in actual practice. Other tables specify a...
Kussmann, Jörg; Ochsenfeld, Christian
2007-11-28
A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.
Vergara, Pablo M.; Soto, Gerardo E.; Moreira-Arce, Darío; Rodewald, Amanda D.; Meneses, Luis O.; Pérez-Hernández, Christian G.
2016-01-01
Theoretical models predict that animals should make foraging decisions after assessing the quality of available habitat, but most models fail to consider the spatio-temporal scales at which animals perceive habitat availability. We tested three foraging strategies that explain how Magellanic woodpeckers (Campephilus magellanicus) assess the relative quality of trees: 1) Woodpeckers with local knowledge select trees based on the available trees in the immediate vicinity. 2) Woodpeckers lacking local knowledge select trees based on their availability at previously visited locations. 3) Woodpeckers using information from long-term memory select trees based on knowledge about trees available within the entire landscape. We observed foraging woodpeckers and used a Brownian Bridge Movement Model to identify trees available to woodpeckers along foraging routes. Woodpeckers selected trees with a later decay stage than available trees. Selection models indicated that preferences of Magellanic woodpeckers were based on clusters of trees near the most recently visited trees, thus suggesting that woodpeckers use visual cues from neighboring trees. In a second analysis, Cox’s proportional hazards models showed that woodpeckers used information consolidated across broader spatial scales to adjust tree residence times. Specifically, woodpeckers spent more time at trees with larger diameters and in a more advanced stage of decay than trees available along their routes. These results suggest that Magellanic woodpeckers make foraging decisions based on the relative quality of trees that they perceive and memorize information at different spatio-temporal scales. PMID:27416115
Sui, Yiyong; Sun, Chong; Sun, Jianbo; Pu, Baolin; Ren, Wei; Zhao, Weimin
2017-06-09
The stability of an electrodeposited nanocrystalline Ni-based alloy coating in a H₂S/CO₂ environment was investigated by electrochemical measurements, weight loss method, and surface characterization. The results showed that both the cathodic and anodic processes of the Ni-based alloy coating were simultaneously suppressed, displaying a dramatic decrease of the corrosion current density. The corrosion of the Ni-based alloy coating was controlled by H₂S corrosion and showed general corrosion morphology under the test temperatures. The corrosion products, mainly consisting of Ni₃S₂, NiS, or Ni₃S₄, had excellent stability in acid solution. The corrosion rate decreased with the rise of temperature, while the adhesive force of the corrosion scale increased. With the rise of temperature, the deposited morphology and composition of corrosion products changed, the NiS content in the corrosion scale increased, and the stability and adhesive strength of the corrosion scale improved. The corrosion scale of the Ni-based alloy coating was stable, compact, had strong adhesion, and caused low weight loss, so the corrosion rates calculated by the weight loss method cannot reveal the actual oxidation rate of the coating. As the corrosion time was prolonged, the Ni-based coating was thinned while the corrosion scale thickened. The corrosion scale was closely combined with the coating, but cannot fully prevent the corrosive reactants from reaching the substrate.
Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror
NASA Astrophysics Data System (ADS)
Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng
2016-07-01
To balance the conflicting demands of high resolution, a large field of view, and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on MOEMS are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we develop a prototype and conduct relevant experiments. The preliminary results agree well with the simulations.
Analysis of Scattering from Archival Pulsar Data using a CLEAN-based Method
NASA Astrophysics Data System (ADS)
Tsai, Jr-Wei; Simonetti, John H.; Kavic, Michael
2017-02-01
In this work, we adopted a CLEAN-based method to determine the scatter time, τ, from archived pulsar profiles under both the thin screen and uniform medium scattering models and to calculate the scatter time frequency scale index α, where τ ∝ ν^α. The value of α is -4.4, if a Kolmogorov spectrum of the interstellar medium turbulence is assumed. We deconvolved 1342 profiles from 347 pulsars over a broad range of frequencies and dispersion measures. In our survey, in the majority of cases the scattering effect was not significant compared to pulse profile widths. For a subset of 21 pulsars, scattering at the lowest frequencies was large enough to be measured. Because reliable scatter time measurements were determined only for the lowest frequency, we were limited to using upper limits on scatter times at higher frequencies for the purpose of our scatter time frequency slope estimation. We scaled the deconvolved scatter time to 1 GHz assuming α = -4.4 and considered our results in the context of other observations which yielded a broad relation between scatter time and dispersion measure.
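The frequency scaling τ ∝ ν^α makes the 1 GHz extrapolation a one-line computation; the example numbers below are illustrative, not measurements from the survey.

```python
def tau_at_1ghz(tau_ms, nu_ghz, alpha=-4.4):
    """Scatter time (ms) extrapolated to 1 GHz from a measurement at nu_ghz,
    assuming tau proportional to nu**alpha."""
    return tau_ms * (1.0 / nu_ghz) ** alpha   # equivalently tau_ms * nu_ghz**4.4

# e.g. 12 ms of scattering measured at 0.43 GHz (illustrative numbers)
print(f"{tau_at_1ghz(12.0, 0.43):.3f} ms at 1 GHz")
```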
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent deterioration severities of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment is designed based on a Bently-RK4 rotor testbed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.
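A minimal sketch of the two ingredients named above: fuzzy C-means state division of a one-dimensional degradation index, followed by estimation of a Markov transition matrix from the resulting state sequence. The index, the number of states, and the fuzzifier m = 2 are illustrative assumptions, and the weighting of historical versus real-time data is omitted.

```python
import numpy as np

def fuzzy_cmeans_1d(x, n_states=4, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means for a 1-D degradation index; returns state centres
    and the membership matrix (rows: samples, columns: states)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_states))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centres = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

def transition_matrix(states, n_states):
    """Markov transition probabilities estimated from a state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

# Illustrative degradation index rising over time
x = np.sort(np.random.rand(200)) + 0.05 * np.random.randn(200)
centres, u = fuzzy_cmeans_1d(x)
order = np.argsort(centres)                    # label states by increasing severity
states = order.argsort()[u.argmax(axis=1)]     # hard labels for the transition count
print(transition_matrix(states, 4))
```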
Information filtering via a scaling-based function.
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.
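The heat-conduction/mass-diffusion hybrid underlying SCL can be written as a single object-to-object propagation matrix with a tunable parameter λ (λ = 1 pure mass diffusion, λ = 0 pure heat conduction, in the standard hybrid form from the recommendation literature). The sketch below shows that hybrid scoring only; the SCL scaling function that sets λ per object is not reproduced, and the tiny user-object matrix is illustrative.

```python
import numpy as np

def hybrid_scores(A, lam):
    """Hybrid heat-conduction / mass-diffusion recommendation scores.

    A   : binary user-object matrix (users x objects)
    lam : tunable parameter; lam = 1 -> pure mass diffusion, lam = 0 -> pure heat conduction
    Returns a users x objects score matrix (already-collected items should be masked out).
    """
    k_obj = A.sum(axis=0).astype(float)            # object degrees
    k_usr = A.sum(axis=1).astype(float)            # user degrees
    k_obj[k_obj == 0] = 1.0
    k_usr[k_usr == 0] = 1.0
    # Object-object propagation matrix: W[a, b] = sum_i a_ia * a_ib / k_i, rescaled by degrees
    W = (A / k_usr[:, None]).T @ A
    W /= np.outer(k_obj ** (1.0 - lam), k_obj ** lam)
    return A @ W.T

# Tiny illustrative example: 3 users, 4 objects
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=float)
print(hybrid_scores(A, lam=0.5).round(3))
```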
Scaling prospects in mechanical energy harvesting with piezo nanowires
NASA Astrophysics Data System (ADS)
Ardila, Gustavo; Hinchet, Ronan; Mouis, Mireille; Montès, Laurent
2013-07-01
The combination of 3D processing technologies, low power circuits and new materials integration makes it conceivable to build autonomous integrated systems, which would harvest their energy from the environment. In this paper, we focus on mechanical energy harvesting and discuss its scaling prospects toward the use of piezoelectric nanostructures, able to be integrated in a CMOS environment. It is shown that direct scaling of present MEMS-based methodologies would be beneficial for high-frequency applications only. For the range of applications which is presently foreseen, a different approach is needed, based on energy harvesting from direct real-time deformation instead of energy harvesting from vibration modes at or close to resonance. We discuss the prospects of such an approach based on simple scaling rules. (Contribution to the Topical Issue "International Semiconductor Conference Dresden-Grenoble - ISCDG 2012", Edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.)
The SIMS Screen for feigned mental disorders: the development of detection-based scales.
Rogers, Richard; Robinson, Emily V; Gillard, Nathan D
2014-01-01
Time-efficient screens for feigned mental disorders (FMDs) constitute important tools in forensic assessments. The Structured Inventory of Malingered Symptomatology (SIMS) is a 75-item true-false questionnaire that has been extensively studied as an FMD screen. However, the SIMS scales are not based on established detection strategies, and only its total score is utilized as a feigning screen. This investigation develops two new feigning scales based on well-established detection strategies: rare symptoms (RS) and symptom combinations (SC). They are studied in a between-subjects simulation design using inpatients under partial-malingering conditions (i.e., patients with genuine disorders asked to feign greater disabilities). Subject to future cross-validation, the SC scale evidenced the highest effect size (d=2.01) and appeared the most effective at ruling out examinees who have a high likelihood of genuine responding. Copyright © 2014 John Wiley & Sons, Ltd.
Dziarmaga, Jacek; Zurek, Wojciech H.
2014-01-01
Kibble-Zurek mechanism (KZM) uses critical scaling to predict density of topological defects and other excitations created in second order phase transitions. We point out that simply inserting asymptotic critical exponents deduced from the immediate vicinity of the critical point to obtain predictions can lead to results that are inconsistent with a more careful KZM analysis based on causality – on the comparison of the relaxation time of the order parameter with the “time distance” from the critical point. As a result, scaling of quench-generated excitations with quench rates can exhibit behavior that is locally (i.e., in the neighborhood of any given quench rate) well approximated by the power law, but with exponents that depend on that rate, and that are quite different from the naive prediction based on the critical exponents relevant for asymptotically long quench times. Kosterlitz-Thouless scaling (that governs e.g. Mott insulator to superfluid transition in the Bose-Hubbard model in one dimension) is investigated as an example of this phenomenon. PMID:25091996
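The causality argument referred to above can be summarized in a few standard lines for a second-order transition with an algebraically diverging relaxation time; the Kosterlitz-Thouless case studied in the paper replaces this algebraic divergence with an exponential one, which is what produces rate-dependent effective exponents. This is the textbook estimate, not the paper's derivation.

```latex
% Relaxation time near the critical point and a linear quench
\tau(\epsilon) = \tau_0 \,|\epsilon|^{-z\nu}, \qquad \epsilon(t) = t/\tau_Q .
% Freeze-out: the order parameter stops keeping pace when its relaxation time
% equals the remaining "time distance" from the critical point, \tau(\epsilon(\hat t)) = \hat t,
\hat t = \left(\tau_0\,\tau_Q^{z\nu}\right)^{\frac{1}{1+z\nu}}, \qquad
\hat\xi = \xi_0\,|\epsilon(\hat t\,)|^{-\nu} \propto \tau_Q^{\frac{\nu}{1+z\nu}},
% so the density of topological defects in d dimensions scales as
n \sim \hat\xi^{-d} \propto \tau_Q^{-\frac{d\nu}{1+z\nu}} .
```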
Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E
NASA Technical Reports Server (NTRS)
Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie
2001-01-01
In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test. That is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects in this problem are the large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled. That is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. Then, the objective becomes the real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time: one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
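A minimal sketch of the Hamming-distance and maximum-likelihood matching steps, assuming the detection and false-alarm probabilities are already known rather than estimated online; this is illustrative only, not the TEAMS-RT implementation, and the dictionary and test vector are invented for the example.

```python
import numpy as np

def diagnose(result, dictionary, pd_=0.9, pf=0.05):
    """Match an incomplete, inaccurate test-result vector to a fault dictionary.

    result     : observed test outcomes, 1 = fail, 0 = pass, np.nan = missing
    dictionary : rows are fault signatures (expected 0/1 outcome per test)
    pd_, pf    : detection and false-alarm probabilities of each test
    Returns (best row by Hamming distance, best row by likelihood).
    """
    obs = ~np.isnan(result)                       # only 10-20% of tests may be available
    r = result[obs]
    D = dictionary[:, obs]

    hamming = (D != r).sum(axis=1)                # distance on observed tests only

    # Likelihood of the observed results under each candidate fault signature
    p_fail = np.where(D == 1, pd_, pf)            # P(test fails | signature)
    like = np.where(r == 1, p_fail, 1.0 - p_fail).prod(axis=1)

    return int(hamming.argmin()), int(like.argmax())

# Tiny illustrative dictionary: 4 faults x 6 tests
dictionary = np.array([[1, 1, 0, 0, 1, 0],
                       [0, 1, 1, 0, 0, 1],
                       [1, 0, 0, 1, 1, 1],
                       [0, 0, 1, 1, 0, 0]], dtype=float)
result = np.array([1, np.nan, 0, np.nan, 1, 0])   # partial, possibly corrupted results
print(diagnose(result, dictionary))
```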
Solitons of the Kadomtsev-Petviashvili equation based on lattice Boltzmann model
NASA Astrophysics Data System (ADS)
Wang, Huimin
2017-01-01
In this paper, a lattice Boltzmann model for the Kadomtsev-Petviashvili equation is proposed. By using the Chapman-Enskog expansion and the multi-scale time expansion, a series of partial differential equations in different time scales are obtained. Due to the asymmetry of the equation in the x and y directions, the moments of the equilibrium distribution function are selected to be asymmetric. The numerical results demonstrate that the lattice Boltzmann method is an effective method to simulate the solitons of the Kadomtsev-Petviashvili equation.
Global Swath and Gridded Data Tiling
NASA Technical Reports Server (NTRS)
Thompson, Charles K.
2012-01-01
This software generates cylindrically projected tiles ("tiles") of swath-based or gridded satellite data for the purpose of dynamically generating high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
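As an illustration of the tiling geometry only (not the actual software), the tile containing a given observation in an equirectangular (cylindrical) projection can be indexed as follows, with the tile size left configurable as described above.

```python
import math

def tile_index(lat, lon, tile_deg=10.0):
    """Row/column of the cylindrical (equirectangular) tile containing a point.

    Tiles are tile_deg x tile_deg boxes counted from the south-west corner
    (-90, -180); the tile size is a configuration choice, as in the text.
    """
    col = int(math.floor((lon + 180.0) / tile_deg)) % int(360 / tile_deg)
    row = min(int(math.floor((lat + 90.0) / tile_deg)), int(180 / tile_deg) - 1)
    return row, col

print(tile_index(34.2, -118.2))   # e.g. a point over Los Angeles -> (12, 6)
```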
Ultimate limits for quantum magnetometry via time-continuous measurements
NASA Astrophysics Data System (ADS)
Albarelli, Francesco; Rossi, Matteo A. C.; Paris, Matteo G. A.; Genoni, Marco G.
2017-12-01
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as 1/J, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling 1/J² and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
Travel-time tomography in shallow water: experimental demonstration at an ultrasonic scale.
Roux, Philippe; Iturbe, Ion; Nicolas, Barbara; Virieux, Jean; Mars, Jérôme I
2011-09-01
Acoustic tomography in a shallow ultrasonic waveguide is demonstrated at the laboratory scale between two source-receiver arrays. At a 1/1,000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. Two coplanar arrays record the transfer matrix in the time domain of the waveguide between each pair of source-receiver transducers. A time-domain, double-beamforming algorithm is simultaneously performed on the source and receiver arrays that projects the multi-reflected acoustic echoes into an equivalent set of eigenrays, which are characterized by their travel times and their launch and arrival angles. Travel-time differences are measured for each eigenray every 0.1 s when a thermal plume is generated at a given location in the waveguide. Travel-time tomography inversion is then performed using two forward models based either on ray theory or on the diffraction-based sensitivity kernel. The spatially resolved range and depth inversion data confirm the feasibility of acoustic tomography in shallow water. Comparisons are made between inversion results at 1 and 3 MHz with the inversion procedure using ray theory or the finite-frequency approach. The influence of surface fluctuations at the air-water interface is shown and discussed in the framework of shallow-water ocean tomography. © 2011 Acoustical Society of America
Time Burden of Standardized Hip Questionnaires.
Chughtai, Morad; Khlopas, Anton; Mistry, Jaydev B; Gwam, Chukwuweike U; Elmallah, Randa K; Mont, Michael A
2016-04-01
Many standardized scales and questionnaires have been developed to assess outcomes of patients undergoing total hip arthroplasty (THA). However, these surveys can be a burden to both patients and orthopaedists as some are time-inefficient. In addition, there is a paucity of reports assessing the time it takes to complete them. In this study we aimed to: (1) assess how long it takes to complete the most common standardized hip questionnaires; (2) determine the presence of variation in completion time; and (3) evaluate the effects of age, gender, and level of education on completion time. Based on a previous study, we selected the seven most commonly used hip scoring systems: Western Ontario and McMaster Universities Hip Outcome Assessment (WOMAC), Harris Hip Score (HHS), Hip Disability and Osteoarthritis Outcome Score (HOOS), Larson Score, Short-form 36 (SF-36), modified Merle d'Aubigne and Postel Score (MDA), and Lower Extremity Functional Scale (LEFS). The standardized scales and questionnaires were randomly administered to 70 subjects. The subjects were unaware that they were being timed during completion of the questionnaire. We obtained the coefficients of variation of time for each questionnaire. The mean time to complete the questionnaire was then stratified and compared based on age, gender, and level of education. The mean time to complete each of the systems is listed in ascending order: Modified Merle d'Aubigne and Postel Score (MDA), Lower Extremity Functional Scale (LEFS), Western Ontario and McMaster Universities Hip Outcome Assessment (WOMAC), Harris Hip Score (HHS), Larson Score, Hip Disability and Osteoarthritis Outcome Score (HOOS), and Short-form 36 (SF-36). The WOMAC and Larson Score coefficients of variation were the largest, and the HOOS and MDA were the smallest. There was a significantly higher mean time to completion in those who were at or above the age of 55 years as compared to those who were below the age of 55 (227 vs. 166 seconds). There was no significant association between completion time and gender or education level. Standardized scales and questionnaires which assess THA patients can be burdensome and time-inefficient, which may lead to task-induced fatigue. This may result in inaccurate THA patient assessments, which do not reflect the patient's true state. Future studies should aim to create an encompassing questionnaire that is time efficient and can replace all currently used validated systems.
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error-prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate that greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
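A sketch of the mixture-model thresholding described above, assuming two sympatric age cohorts and using a posterior-probability crossover as the age-discriminating length threshold; the fork-length values are synthetic, not data from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative fork lengths (mm) for two sympatric age cohorts (age-0 and age-1)
rng = np.random.default_rng(1)
lengths = np.concatenate([rng.normal(55, 6, 300), rng.normal(95, 9, 150)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths.reshape(-1, 1))

# Age-discriminating threshold: the length where the posterior probability of the
# smaller-mean (younger) component drops below 0.5
grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
post = gmm.predict_proba(grid)
young = int(np.argmin(gmm.means_.ravel()))
threshold = grid[np.argmax(post[:, young] < 0.5), 0]
print(f"length threshold: {threshold:.1f} mm")

# Age fish by comparing fork length to the threshold
ages = (lengths > threshold).astype(int)   # 0 = younger cohort, 1 = older cohort
```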
The comparability of different survey designs needs to be established to facilitate integration of data across scales and interpretation of trends over time. Probability-based survey designs are now being investigated to allow condition to be assessed at the watershed scale, an...
Evaluating the Validity and Reliability of a Student Self-Advocacy Teacher Rating Scale
ERIC Educational Resources Information Center
Walick, Christopher M.
2017-01-01
Self-advocacy skills are critical to high school and post-secondary success. Unfortunately, students with disabilities often times struggle with self-advocacy. While there are effective, evidence-based programs to teach self-advocacy skills, there are few scales that directly measure self-advocacy. The current research study was conducted to…
Defining Tsunami Magnitude as Measure of Potential Impact
NASA Astrophysics Data System (ADS)
Titov, V. V.; Tang, L.
2016-12-01
The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, estimating tsunami energy from available measurements at coastal sea-level stations has carried significant uncertainties and has been virtually impossible in real time, before a tsunami impacts coastlines. The slow process of tsunami magnitude estimation, including collection of vast amounts of coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. Uncertainties of the estimates made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since seismic data are available for real-time processing and ample seismic data are available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake energy and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful in tsunami warning as a quick estimate of tsunami impact and in post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present an application of the magnitude analysis to several historical events for inter-comparison with existing methods.
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
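A minimal Mexican hat transform by direct convolution illustrates the scale-wise energy statistics described above; the wavelet normalization, the range of scales, and the synthetic "spot" signal are illustrative assumptions, not the analysis of the paper.

```python
import numpy as np

def mexican_hat(n, s):
    """Mexican hat (Ricker) wavelet of width s sampled on n points."""
    t = np.arange(n) - (n - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * s) * np.pi ** 0.25)
    return a * (1.0 - (t / s) ** 2) * np.exp(-0.5 * (t / s) ** 2)

def cwt_mexh(x, scales):
    """Continuous wavelet transform of x at the given scales (rows: scales)."""
    out = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        w = mexican_hat(min(10 * int(s), len(x)), s)
        out[i] = np.convolve(x, w, mode="same")
    return out

# Illustrative hot-wire-like trace: an intermittent high-frequency burst ("spot")
# riding on a slow laminar signal
t = np.linspace(0, 1, 4000)
u = 0.05 * np.sin(2 * np.pi * 5 * t)
u[1500:2500] += 0.3 * np.random.randn(1000)
coeffs = cwt_mexh(u, scales=np.arange(1, 64))
energy_per_scale = (coeffs ** 2).mean(axis=1)         # dominant time scales for energy
print("most energetic scale index:", energy_per_scale.argmax() + 1)
```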
Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali
2018-01-01
Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and how these dynamics are affected due to aging and disease. Entropy based complexity measures have widely been used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and underlying stimuli that are responsible for anomalous behavior. The single scale based traditional entropy measures yielded contradictory results about the dynamics of real world time series data of healthy and pathological subjects. Recently the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantified the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals because the length of the coarse-grained time series decreases with increasing scaling factor τ; MSE works well, however, for long signals. To overcome the drawback of original MSE, various variants of this method have been proposed for evaluating complexity efficiently. In this study, we have proposed multiscale normalized corrected Shannon entropy (MNCSE), in which instead of using sample entropy, symbolic entropy measure NCSE has been used as an entropy estimate. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values. The results show that MNCSE based features lead to higher classification accuracies in comparison with the MSE based features. PMID:29771977
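The coarse-graining step shared by MSE and MNCSE can be sketched as below, with plain sample entropy standing in for the symbolic NCSE estimator of the paper (which is not reproduced here); the synthetic interbeat-like series and parameter choices are illustrative.

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau (the MSE coarse-graining step)."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.2):
    """Plain sample entropy; MNCSE replaces this estimator with NCSE."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    N = len(x)
    def matches(mm):
        # Templates of length mm, one per starting index i = 0 .. N-m-1
        templ = np.array([x[i:i + mm] for i in range(N - m)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(templ)      # matching pairs, self-matches excluded
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Multiscale entropy curve for an illustrative interbeat-like series
x = np.cumsum(np.random.randn(1000)) * 0.01 + np.random.randn(1000)
curve = [sample_entropy(coarse_grain(x, tau)) for tau in range(1, 6)]
print(np.round(curve, 3))
```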
Investigations on the hierarchy of reference frames in geodesy and geodynamics
NASA Technical Reports Server (NTRS)
Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.
1979-01-01
Problems related to reference directions were investigated. Space and time variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (gravity, earth-rotation and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition or in terms of changes in translation, rotation, deformation (shear, dilatation or angular and scale distortions).
Application of particle and lattice codes to simulation of hydraulic fracturing
NASA Astrophysics Data System (ADS)
Damjanac, Branko; Detournay, Christine; Cundall, Peter A.
2016-04-01
With the development of unconventional oil and gas reservoirs over the last 15 years, the understanding and capability to model the propagation of hydraulic fractures in inhomogeneous and naturally fractured reservoirs has become very important for the petroleum industry (but also for some other industries like mining and geothermal). Particle-based models provide advantages over other models and solutions for the simulation of fracturing of rock masses that cannot be assumed to be continuous and homogeneous. It has been demonstrated (Potyondy and Cundall Int J Rock Mech Min Sci Geomech Abstr 41:1329-1364, 2004) that particle models based on a simple force criterion for fracture propagation match theoretical solutions and scale effects derived using the principles of linear elastic fracture mechanics (LEFM). The challenge is how to apply these models effectively (i.e., with acceptable models sizes and computer run times) to the coupled hydro-mechanical problems of relevant time and length scales for practical field applications (i.e., reservoir scale and hours of injection time). A formulation of a fully coupled hydro-mechanical particle-based model and its application to the simulation of hydraulic treatment of unconventional reservoirs are presented. Model validation by comparing with available analytical asymptotic solutions (penny-shape crack) and some examples of field application (e.g., interaction with DFN) are also included.
Gradient plasticity for thermo-mechanical processes in metals with length and time scales
NASA Astrophysics Data System (ADS)
Voyiadjis, George Z.; Faghihi, Danial
2013-03-01
A thermodynamically consistent framework is developed in order to characterize the mechanical and thermal behavior of metals in small volumes and over fast transient times. In this regard, an enhanced gradient plasticity theory is coupled with the application of a micromorphic approach to the temperature variable. A physically based yield function based on the concept of thermal activation energy and the dislocation interaction mechanisms including nonlinear hardening is taken into consideration in the derivation. The effect of the material microstructural interface between two materials is also incorporated in the formulation with both temperature and rate effects. In order to accurately address the strengthening and hardening mechanisms, the theory is developed based on the decomposition of the mechanical state variables into energetic and dissipative counterparts, which endows the constitutive equations with both energetic and dissipative gradient length scales for the bulk material and the interface. Moreover, the microstructural interaction effect in the fast transient process is addressed by incorporating two time scales into the microscopic heat equation. The numerical example of a thin film on an elastic substrate, or a single-phase bicrystal, under uniform tension is addressed here. The effects of individual counterparts of the framework on the thermal and mechanical responses are investigated. The model is also compared with experimental results.
NASA Astrophysics Data System (ADS)
Dai, Jun; Zhou, Haigang; Zhao, Shaoquan
2017-01-01
This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. Then these parametric methods are compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum-LPM is selected as a hedge target, the hedging periods, degree of risk aversion, and target returns can each affect the multi-scale hedge ratios and hedge efficiency.
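A sketch of the minimum-LPM hedge ratio at a single scale is given below; in the paper the ratio is computed per wavelet scale and the LPM may be evaluated parametrically or with a kernel estimate rather than empirically as here. The target return, LPM order, and return series are illustrative assumptions.

```python
import numpy as np

def lpm(returns, target=0.0, order=2):
    """Lower partial moment of the given order about a target return."""
    shortfall = np.maximum(target - returns, 0.0)
    return np.mean(shortfall ** order)

def min_lpm_hedge_ratio(spot, futures, target=0.0, order=2):
    """Hedge ratio h minimizing the LPM of the hedged portfolio spot - h * futures,
    found here by a simple grid search over h."""
    grid = np.linspace(0.0, 2.0, 2001)
    lpms = [lpm(spot - h * futures, target, order) for h in grid]
    return grid[int(np.argmin(lpms))]

# Illustrative correlated spot and futures returns
rng = np.random.default_rng(0)
f = rng.normal(0, 0.012, 500)
s = 0.9 * f + rng.normal(0, 0.004, 500)
print(f"minimum-LPM hedge ratio: {min_lpm_hedge_ratio(s, f):.2f}")
```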
Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W
This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search in contrast to vehicle-based decompositions.
Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre
2011-01-01
The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
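The regression-based identification reduces, in its simplest form, to correlating each cell's fluorescence time series with a behavioral regressor and thresholding; the threshold value, the synthetic traces, and the omission of a calcium-kernel convolution are illustrative simplifications, not the paper's pipeline.

```python
import numpy as np

def behavior_encoding_cells(dff, regressor, threshold=0.5):
    """Rank cells by correlation of their fluorescence time series with a
    behavioral regressor (e.g. eye position).

    dff       : cells x time array of dF/F traces
    regressor : behavioral time series with the same number of time points
    Returns indices of putative encoding cells and all correlation values.
    """
    z_dff = (dff - dff.mean(1, keepdims=True)) / dff.std(1, keepdims=True)
    z_reg = (regressor - regressor.mean()) / regressor.std()
    r = z_dff @ z_reg / len(regressor)          # Pearson correlation per cell
    return np.where(np.abs(r) > threshold)[0], r

# Illustrative data: 50 cells, 1000 frames; cells 0-4 encode the behavior
rng = np.random.default_rng(0)
behavior = np.cumsum(rng.normal(0, 0.1, 1000))          # eye-position-like signal
dff = rng.normal(0, 1, (50, 1000))
dff[:5] += 2.0 * (behavior - behavior.mean()) / behavior.std()
cells, r = behavior_encoding_cells(dff, behavior)
print("putative encoding cells:", cells)
```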
NASA Astrophysics Data System (ADS)
Donner, Reik; Balasis, Georgios; Stolbova, Veronika; Wiedermann, Marc; Georgiou, Marina; Kurths, Jürgen
2016-04-01
Magnetic storms are the most prominent global manifestations of out-of-equilibrium magnetospheric dynamics. Investigating the dynamical complexity exhibited by geomagnetic observables can provide valuable insights into relevant physical processes as well as temporal scales associated with this phenomenon. In this work, we introduce several innovative data analysis techniques enabling a quantitative analysis of the non-stationary behavior of the Dst index. Using recurrence quantification analysis (RQA) and recurrence network analysis (RNA), we obtain a variety of complexity measures serving as markers of quiet- and storm-time magnetospheric dynamics. We additionally apply these techniques to the main driver of Dst index variations, the VBsouth coupling function, as well as the interplanetary medium parameters Bz and Pdyn, in order to discriminate internal processes from the magnetosphere's response directly induced by the external forcing by the solar wind. The derived recurrence-based measures allow us to improve the accuracy with which magnetospheric storms can be classified based on ground-based observations. The new methodology presented here could be of significant interest for the space weather research community working on time series analysis for magnetic storm forecasts.
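Two of the recurrence-based measures (recurrence rate and determinism) can be computed from a scalar series as sketched below; the threshold choice, the absence of embedding, and the synthetic storm-like signal are illustrative assumptions, and the network (RNA) measures are omitted.

```python
import numpy as np

def rqa_measures(x, eps_quantile=0.1, lmin=2):
    """Recurrence rate (RR) and determinism (DET) of a scalar time series.

    The recurrence threshold is set to a quantile of the pairwise distances."""
    x = np.asarray(x, float)
    d = np.abs(x[:, None] - x[None, :])
    eps = np.quantile(d, eps_quantile)
    R = (d <= eps).astype(int)
    np.fill_diagonal(R, 0)
    rr = R.sum() / (len(x) * (len(x) - 1))

    # Determinism: fraction of recurrent points on diagonal lines of length >= lmin
    diag_pts = 0
    for k in range(1, len(x)):
        line = 0
        for v in np.diagonal(R, offset=k):
            if v:
                line += 1
            else:
                if line >= lmin:
                    diag_pts += line
                line = 0
        if line >= lmin:
            diag_pts += line
    det = 2 * diag_pts / R.sum() if R.sum() else 0.0
    return rr, det

# Illustrative hourly Dst-like series: quiet noise with a storm-time excursion
x = np.random.randn(500) * 5.0
x[200:260] -= np.linspace(0, 150, 60)          # storm main phase and early recovery
print(rqa_measures(x))
```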
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
NASA Astrophysics Data System (ADS)
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
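A sketch of the peaks-over-threshold GPD fit and a return-level calculation at a single duration is shown below; the threshold quantile, synthetic data, and return period are illustrative, and the Bayesian estimation and scaling across durations used in the study are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

# Illustrative hourly precipitation (mm): mostly dry, occasional wet hours
rng = np.random.default_rng(2)
precip = np.where(rng.random(20000) < 0.1, rng.gamma(0.8, 4.0, 20000), 0.0)

# Threshold from a high quantile of non-zero precipitation, as in the study
u = np.quantile(precip[precip > 0], 0.95)
exceed = precip[precip > u] - u

# Fit the GPD to the excesses (location fixed at zero)
shape, loc, scale = genpareto.fit(exceed, floc=0.0)
rate = exceed.size / precip.size                          # exceedance rate per hour

# Return level for, e.g., a 10-year event at the hourly scale
T_hours = 10 * 365.25 * 24
rl = u + genpareto.ppf(1.0 - 1.0 / (rate * T_hours), shape, loc=0.0, scale=scale)
print(f"threshold {u:.1f} mm, shape {shape:.2f}, 10-yr hourly return level {rl:.1f} mm")
```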
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V (Chan-Vese) model based on exponential image sequence generation is proposed in this paper. The exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in coastline extraction are solved through small-scale shrinkage, low-pass filtering, and area sorting of regions. 2) the initial value of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, which places them finely close to the coastline. 3) the computational complexity of the continuous transition between different scales is successfully reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the time needed for coastline extraction, removes non-coastline regions, and improves the identification precision of the main coastline, automating the process of coastline segmentation.
Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Straka, Peter
2018-08-01
Continuous Time Random Walk models (CTRW) of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs e.g. in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the 1-1 correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length²/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
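The Monte Carlo approximation mentioned in the last sentence can be sketched with Pareto-tailed waiting times whose exponent depends on position, standing in for the β(x)-stable waiting-time noise; the β(x) profile, unit step length, and time horizon are illustrative assumptions, not the construction of the paper.

```python
import numpy as np

def beta_of_x(x):
    """Spatially varying anomalous exponent, e.g. a denser matrix on the right."""
    return np.where(x > 0, 0.5, 0.9)

def ctrw_positions(n_walkers=20000, t_max=100.0, dx=1.0, seed=0):
    """Monte Carlo CTRW with heavy-tailed (Pareto, exponent beta(x)) waiting times,
    an approximation to the beta(x)-stable waiting-time noise of the derivation."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, bool)
    while active.any():
        b = beta_of_x(x[active])
        wait = (1.0 - rng.random(active.sum())) ** (-1.0 / b)   # Pareto(beta) waits
        t[active] += wait
        step = dx * rng.choice([-1.0, 1.0], active.sum())
        x[active] += np.where(t[active] <= t_max, step, 0.0)    # freeze after t_max
        active = t <= t_max
    return x

# Heavier waiting-time tails (smaller beta) slow the spreading on that side
x_final = ctrw_positions()
print("mean |x| on the left :", np.mean(np.abs(x_final[x_final < 0])))
print("mean |x| on the right:", np.mean(np.abs(x_final[x_final > 0])))
```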
Detectability of Granger causality for subsampled continuous-time neurophysiological processes.
Barnett, Lionel; Seth, Anil K
2017-01-01
Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity from neurophysiological recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
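A hedged illustration of the subsampling effect, not the paper's analytical treatment: a bivariate autoregressive process with a known causal lag is tested with statsmodels' Granger-causality F-tests at several subsampling factors; the process parameters and lags are arbitrary.

```python
# Sketch: how subsampling can weaken detection of a genuine causal connection.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 4_000
x = np.zeros(n)
y = np.zeros(n)
for t in range(3, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 3] + rng.standard_normal()   # x drives y at lag 3

for step in (1, 2, 4, 8):                               # subsampling factors
    data = np.column_stack([y[::step], x[::step]])      # tests whether x Granger-causes y
    res = grangercausalitytests(data, maxlag=4, verbose=False)
    p = min(r[0]["ssr_ftest"][1] for r in res.values())
    print(f"subsample x{step}: min p-value over lags = {p:.3g}")
```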
Efficient coarse simulation of a growing avascular tumor
Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.
2013-01-01
The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
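A toy sketch of coarse projective integration on a problem much simpler than the tumor model: an ensemble of noisy fine-scale realisations of logistic growth is run in short bursts, the coarse mean's time derivative is estimated from the burst, and the coarse variable is leapt forward. Burst length and leap size are illustrative.

```python
# Toy coarse projective integration: short microscopic bursts + projective leaps.
import numpy as np

rng = np.random.default_rng(4)

def micro_burst(u0, dt, n_steps, n_replicas=200):
    """Run an ensemble of noisy fine-scale realisations; return the coarse mean per step."""
    u = np.full(n_replicas, u0, dtype=float)
    means = [u.mean()]
    for _ in range(n_steps):
        u += dt * u * (1.0 - u) + 0.02 * np.sqrt(dt) * rng.standard_normal(n_replicas)
        means.append(u.mean())
    return np.array(means)

u, t = 0.05, 0.0
dt, burst_steps, leap = 0.01, 20, 0.8        # leap >> burst duration is where the speed-up comes from
while t < 8.0:
    m = micro_burst(u, dt, burst_steps)      # short burst of the microscopic simulator
    slope = (m[-1] - m[-5]) / (4 * dt)       # estimate d<u>/dt from the burst tail
    u = m[-1] + leap * slope                 # projective (forward Euler) leap of the coarse variable
    t += burst_steps * dt + leap
print(f"coarse u at t = {t:.1f}: {u:.3f} (logistic fixed point is 1)")
```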
Hydrological landscape analysis based on digital elevation data
NASA Astrophysics Data System (ADS)
Seibert, J.; McGlynn, B.; Grabs, T.; Jensco, K.
2008-12-01
Topography is a major factor controlling both hydrological and soil processes at the landscape scale. While this is well accepted qualitatively, quantifying relationships between topography and spatial variations of hydrologically relevant variables at the landscape scale still remains a challenging research topic. In this presentation, we describe hydrological landscape analysis (HLA) as a way to derive relevant topographic indices that describe the spatial variations of hydrological variables at the landscape scale. We demonstrate our HLA approach with four high-resolution digital elevation models (DEMs) from Sweden, Switzerland and Montana (USA). To investigate scale effects on HLA metrics, we compared DEMs of different resolutions. These LiDAR-derived DEMs of 3 m, 10 m, and 30 m resolution represent catchments of ~5 km2 ranging from low to high relief. A central feature of HLA is the flowpath-based analysis of topography and the separation of hillslopes, riparian areas, and the stream network. We included the following metrics: riparian area delineation, riparian buffer potential, separation of stream inflows into right- and left-bank components, travel time proxies based on flowpath distances and gradients to the channel, and, as a hydrologic analogue to the hypsometric curve, the distribution of elevations above the stream network (computed based on the location where a given flow pathway enters the stream). Several of these indices depended clearly on DEM resolution, whereas this effect was minor for others. While the hypsometric curves were all S-shaped, the 'hillslope-hypsometric curves' had the shape of a power function with exponents less than 1. In a similar way, we separated flow pathway lengths and gradients between hillslopes and streams and compared a topographic travel time proxy based on the integration of gradients along the flow pathways. Besides the comparison of HLA metrics for different catchments and DEM resolutions, we present examples from experimental catchments to illustrate how these metrics can be used to describe catchment-scale hydrological processes and provide context for plot-scale observations.
Extreme Precipitation and High-Impact Landslides
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Huffman, George; Peters-Lidard, Christa
2012-01-01
It is well known that extreme or prolonged rainfall is the dominant trigger of landslides; however, there remain large uncertainties in characterizing the distribution of these hazards and meteorological triggers at the global scale. Researchers have evaluated the spatiotemporal distribution of extreme rainfall and landslides at local and regional scales primarily using in situ data, yet few studies have mapped rainfall-triggered landslide distribution globally due to the dearth of landslide data and consistent precipitation information. This research uses a newly developed Global Landslide Catalog (GLC) and a 13-year satellite-based precipitation record from Tropical Rainfall Measuring Mission (TRMM) data. For the first time, these two unique products provide the foundation to quantitatively evaluate the co-occurrence of precipitation and rainfall-triggered landslides globally. The GLC, available from 2007 to the present, contains information on reported rainfall-triggered landslide events around the world using online media reports, disaster databases, etc. When evaluating this database, we observed that 2010 had a large number of high-impact landslide events relative to previous years. This study considers how variations in extreme and prolonged satellite-based rainfall are related to the distribution of landslides over the same time scales for three active landslide areas: Central America, the Himalayan Arc, and central-eastern China. Several test statistics confirm that TRMM rainfall generally scales with the observed increase in landslide reports and fatal events for 2010 and previous years over each region. These findings suggest that the co-occurrence of satellite precipitation and landslide reports may serve as a valuable indicator for characterizing the spatiotemporal distribution of landslide-prone areas in order to establish a global rainfall-triggered landslide climatology. This research also considers the sources for this extreme rainfall, citing teleconnections from ENSO as likely contributors to regional precipitation variability. This work demonstrates the potential for using satellite-based precipitation estimates to identify potentially active landslide areas at the global scale in order to improve landslide cataloging and quantify landslide triggering at daily, monthly and yearly time scales.
Dynamic structural disorder in supported nanoscale catalysts
NASA Astrophysics Data System (ADS)
Rehr, J. J.; Vila, F. D.
2014-04-01
We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.
Remote Electronic Examinations: Student Experiences.
ERIC Educational Resources Information Center
Thomas, Peter; Price, Blaine; Paine, Carina; Richards, Mike
2002-01-01
Presents findings from a small-scale experiment investigating the presentation of a synchronous, Web-based remote electronic exam in a distance education course. Discusses student experiences based on a questionnaire; time pressures; technical issues; differences between the structure of an electronic exam and a paper-based exam; and future work,…
GPU Multi-Scale Particle Tracking and Multi-Fluid Simulations of the Radiation Belts
NASA Astrophysics Data System (ADS)
Ziemba, T.; Carscadden, J.; O'Donnell, D.; Winglee, R.; Harnett, E.; Cash, M.
2007-12-01
The properties of the radiation belts can vary dramatically under the influence of magnetic storms and storm-time substorms. The task of understanding and predicting radiation belt properties is made difficult because these properties are determined by global processes as well as small-scale wave-particle interactions. A full solution to the problem will require major innovations in technique and computer hardware. The proposed work demonstrates linked particle tracking codes with new multi-scale/multi-fluid global simulations that provide the first means to include small-scale processes within the global magnetospheric context. A large hurdle to the problem is having sufficient computer hardware able to handle the disparate temporal and spatial scales. A major innovation of the work is that the codes are designed to run on graphics processing units (GPUs). GPUs are intrinsically highly parallelized systems that provide more than an order of magnitude in computing speed over CPU-based systems, for little more cost than a high-end workstation. Recent advancements in GPU technologies allow for full IEEE float specifications with performance up to several hundred GFLOPs per GPU, and new software architectures have recently become available to ease the transition from graphics-based to scientific applications. This allows for a cheap alternative to standard supercomputing methods and should shorten the time to discovery. A demonstration of the code pushing more than 500,000 particles faster than real time is presented and used to provide new insight into radiation belt dynamics.
Scale factor measure method without turntable for angular rate gyroscope
NASA Astrophysics Data System (ADS)
Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua
2018-03-01
In this paper, a scale factor test method that requires no turntable is designed for the angular rate gyroscope. A test system is designed that consists of a test device, a data acquisition circuit and data processing software based on the Labview platform. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around its edge, which is parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast and keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
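A minimal sketch of the underlying idea, under the assumption that both gyroscopes are rigidly co-mounted and therefore sense the same shaking rate: the unknown scale factor then follows from a least-squares fit of the measured output against the rate reconstructed from the standard gyroscope. The signal model and noise levels are invented for illustration.

```python
# Sketch of the "no turntable" scale-factor estimate from two co-mounted gyros.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 10, 0.001)
omega = 30.0 * np.sin(2 * np.pi * 1.5 * t)          # angular rate seen by both gyros (deg/s)

k_std = 1.00                                        # known scale factor of the standard gyro
k_true = 0.87                                       # ground truth we try to recover
v_std = k_std * omega + 0.05 * rng.standard_normal(t.size)
v_meas = k_true * omega + 0.05 * rng.standard_normal(t.size)

rate_ref = v_std / k_std                            # rate reconstructed from the standard gyro
k_est = np.linalg.lstsq(rate_ref[:, None], v_meas, rcond=None)[0][0]
print(f"estimated scale factor: {k_est:.4f} (true value {k_true})")
```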
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
5nsec Dead time multichannel scaling system for Mössbauer spectrometer
NASA Astrophysics Data System (ADS)
Verrastro, C.; Trombetta, G.; Pita, A.; Saragovi, C.; Duhalde, S.
1991-11-01
A PC-programmable and fast multichannel scaling module has been designed for use with a commercial Mössbauer spectrometer. The module is based on a single-chip 8-bit microcomputer (MC6805) and a fast ALU, which allows a high-performance, low-cost system. The module can operate in a stand-alone mode. Data analysis and real-time display are performed on XT/AT IBM PCs or compatibles. The number of channels ranges between 256 and 4096, the maximum number of counts is 2^32−1 per channel, the dwell time is 3 μsec and the dead time between channels is 5 nsec. A user-friendly software package displays the real-time spectrum and offers menus with different options at each state.
Implementing N-quantum phase gate via circuit QED with qubit-qubit interaction
NASA Astrophysics Data System (ADS)
Said, T.; Chouikh, A.; Essammouni, K.; Bennai, M.
2016-02-01
We propose a method for realizing a quantum phase gate of one qubit simultaneously controlling N target qubits based on qubit-qubit interaction. We show how to implement the proposed gate with one transmon qubit simultaneously controlling N transmon qubits in a circuit QED system driven by a strong microwave field. In our scheme, the operation time of this phase gate is independent of the number N of qubits. Moreover, the gate can be realized on a time of nanosecond scale, much shorter than the decoherence and dephasing times, both of which are on the microsecond scale. Numerical simulation of the occupation probabilities of the second excited level shows that the scheme could be achieved efficiently with current technology.
From microseconds to seconds and minutes—time computation in insect hearing
Hartbauer, Manfred; Römer, Heiner
2014-01-01
The computation of time in the auditory system of insects is of relevance at rather different time scales, covering a large range from microseconds to several minutes. At one end of this range, only a few microseconds of interaural time differences are available for directional hearing, due to the small distance between the ears, usually considered too small to be processed reliably by simple nervous systems. Synapses of interneurons in the afferent auditory pathway are, however, very sensitive to a time difference of only 1–2 ms provided by the latency shift of afferent activity with changing sound direction. At a much larger time scale of several tens of milliseconds to seconds, time processing is important in the context of species recognition, but also for those insects where males produce acoustic signals within choruses, and the temporal relationship between song elements strongly deviates from a random distribution. In these situations, some species exhibit a more or less strict phase relationship of song elements, based on phase response properties of their song oscillator. Here we review evidence on how this may influence mate choice decisions. In the same range of some tens of milliseconds we find species of katydids with a duetting communication scheme, where one sex only performs phonotaxis to the other sex if the acoustic response falls within a very short time window after its own call. Such time windows show some features unique to insects, and although their neuronal implementation is unknown so far, the similarity with time processing for target range detection in bat echolocation will be discussed. Finally, the time scale being processed must be extended into the range of many minutes, since some acoustic insects produce singing bouts lasting quite long, and female preferences may be based on total signaling time. PMID:24782783
Real-time high-resolution heterodyne-based measurements of spectral dynamics in fibre lasers
Sugavanam, Srikanth; Fabbri, Simon; Le, Son Thai; Lobach, Ivan; Kablukov, Sergey; Khorev, Serge; Churkin, Dmitry
2016-01-01
Conventional tools for measurement of laser spectra (e.g. optical spectrum analysers) capture data averaged over a considerable time period. However, the generation spectrum of many laser types may involve spectral dynamics whose relatively fast time scale is determined by their cavity round trip period, calling for instrumentation featuring both high temporal and spectral resolution. Such real-time spectral characterisation becomes particularly challenging if the laser pulses are long, or they have continuous or quasi-continuous wave radiation components. Here we combine optical heterodyning with a technique of spatio-temporal intensity measurements that allows the characterisation of such complex sources. Fast, round-trip-resolved spectral dynamics of cavity-based systems in real-time are obtained, with temporal resolution of one cavity round trip and frequency resolution defined by its inverse (85 ns and 24 MHz respectively are demonstrated). We also show how under certain conditions for quasi-continuous wave sources, the spectral resolution could be further increased by a factor of 100 by direct extraction of phase information from the heterodyned dynamics or by using double time scales within the spectrogram approach. PMID:26984634
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-07-14
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.
Atomistic details of protein dynamics and the role of hydration water
Khodadadi, Sheila; Sokolov, Alexei P.
2016-05-04
The importance of protein dynamics for biological activity is now well recognized. Different experimental and computational techniques have been employed to study protein dynamics, the hierarchy of different processes and the coupling between protein and hydration water dynamics. However, understanding of the atomistic details of protein dynamics and the role of hydration water remains rather limited. Based on an overview of neutron scattering, molecular dynamics simulation, NMR and dielectric spectroscopy results, we present a general picture of protein dynamics covering time scales from faster than a picosecond to microseconds and the influence of hydration water on different relaxation processes. Internal protein dynamics spread over a wide time range, from faster than a picosecond to longer than microseconds. We suggest that the structural relaxation in hydrated proteins appears on the microsecond time scale, while faster processes mostly reflect motion of side groups and some domains. Hydration water plays a crucial role in protein dynamics on all time scales. It controls the coupled protein-hydration water relaxation on the 10-100 ps time scale. This process defines the friction for slower protein dynamics. The analysis suggests that changes in the amount of hydration water affect not only the general friction, but also significantly influence the protein's energy landscape.
Biswas, Sohag; Mallik, Bhabani S
2017-04-12
The fluctuation dynamics of amine stretching frequencies, hydrogen bonds, dangling N-D bonds, and the orientation profile of the amine group of methylamine (MA) were investigated under ambient conditions by means of dispersion-corrected density functional theory-based first principles molecular dynamics (FPMD) simulations. Along with the dynamical properties, various equilibrium properties such as radial distribution function, spatial distribution function, combined radial and angular distribution functions and hydrogen bonding were also calculated. The instantaneous stretching frequencies of amine groups were obtained by wavelet transform of the trajectory obtained from FPMD simulations. The frequency-structure correlation reveals that the amine stretching frequency is weakly correlated with the nearest nitrogen-deuterium distance. The frequency-frequency correlation function has a short time scale of around 110 fs and a longer time scale of about 1.15 ps. It was found that the short time scale originates from the underdamped motion of intact hydrogen bonds of MA pairs. However, the long time scale of the vibrational spectral diffusion of N-D modes is determined by the overall dynamics of hydrogen bonds as well as the dangling ND groups and the inertial rotation of the amine group of the molecule.
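A hedged sketch of how the two FFCF time scales could be extracted: compute the normalized frequency autocorrelation of a (here synthetic, Ornstein-Uhlenbeck-like) frequency trajectory and fit a biexponential. The surrogate trajectory and fit settings are assumptions; in the study the frequencies come from the wavelet transform of the FPMD trajectory.

```python
# Sketch: biexponential fit of a frequency-frequency correlation function (FFCF).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)
dt = 0.004                                    # ps per frame
n = 50_000
# Surrogate frequency fluctuations: sum of two OU-like processes (0.11 ps and 1.15 ps)
w1 = np.zeros(n)
w2 = np.zeros(n)
for i in range(1, n):
    w1[i] = w1[i - 1] * np.exp(-dt / 0.11) + 0.7 * rng.standard_normal()
    w2[i] = w2[i - 1] * np.exp(-dt / 1.15) + 0.3 * rng.standard_normal()
dw = w1 + w2 - (w1 + w2).mean()

def ffcf(x, max_lag):
    """Normalized frequency-frequency correlation function up to max_lag frames."""
    return np.array([np.mean(x[:x.size - k] * x[k:]) for k in range(max_lag)]) / np.mean(x * x)

lags_ps = np.arange(1000) * dt
c = ffcf(dw, 1000)

biexp = lambda t, a, t1, t2: a * np.exp(-t / t1) + (1 - a) * np.exp(-t / t2)
(a, t1, t2), _ = curve_fit(biexp, lags_ps, c, p0=(0.5, 0.1, 1.0))
print(f"short time scale ~ {min(t1, t2) * 1000:.0f} fs, long time scale ~ {max(t1, t2):.2f} ps")
```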
Satoh, Katsuhiko
2013-03-07
Thermodynamic parameter Γ and thermodynamic scaling parameter γ for low-frequency relaxation time, which characterize flip-flop motion in a nematic phase, were verified by molecular dynamics simulation with a simple potential based on the Maier-Saupe theory. The parameter Γ, which is the slope of the logarithm for temperature and volume, was evaluated under various conditions at a wide range of temperatures, pressures, and volumes. To simulate thermodynamic scaling so that experimental data at isobaric, isothermal, and isochoric conditions can be rescaled onto a master curve with the parameters for some liquid crystal (LC) compounds, the relaxation time was evaluated from the first-rank orientational correlation function in the simulations, and thermodynamic scaling was verified with the simple potential representing small clusters. A possibility of an equivalence relationship between Γ and γ determined from the relaxation time in the simulation was assessed with available data from the experiments and simulations. In addition, an argument was proposed for the discrepancy between Γ and γ for some LCs in experiments: the discrepancy arises from disagreement of the value of the order parameter P2 rather than the constancy of relaxation time τ1(*) on pressure.
NASA Astrophysics Data System (ADS)
Yoon, S.; Lee, B.; Nakakita, E.; Lee, G.
2016-12-01
Recent climate changes and abnormal weather phenomena have resulted in increased occurrences of localized torrential rainfall. Urban areas in Korea have suffered from localized heavy rainfall, including the notable Seoul flood disasters in 2010 and 2011. The urban hydrological environment has changed with respect to precipitation, with reduced concentration time, a decreased storage rate, and increased peak discharge. These changes have altered and accelerated the severity of damage to urban areas. In order to prevent such urban flash-flood damage, we have to secure lead time for evacuation through improved radar-based quantitative precipitation forecasting (QPF). The purpose of this research is to improve QPF products using a spatial-scale decomposition method that accounts for the lifetime of storms, and to assess the accuracy of the traditional QPF method and the proposed method in terms of urban flood management. The layout of this research is as follows. First, image filtering is applied to separate the spatial scales of the rainfall field. Second, the separated small-scale and large-scale rainfall fields are extrapolated with different forecasting methods. Third, the forecasted rainfall fields are combined at each lead time. Finally, the results of this method are evaluated and compared with the results of a uniform advection model for urban flood modeling. It is expected that urban flood information using the improved QPF will help reduce casualties and property damage caused by urban flooding.
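A minimal sketch of the scale-separation step, assuming a Gaussian low-pass as the image filter: the smoothed field is the long-lived large-scale component and the residual is the short-lived small-scale component, each of which would then be extrapolated by its own forecasting model. The cutoff length and the synthetic rain field are placeholders.

```python
# Sketch: separate a rainfall field into large-scale and small-scale components.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
field = gaussian_filter(rng.gamma(0.3, 4.0, size=(200, 200)), 3)   # synthetic radar rain field

sigma_km, pixel_km = 16.0, 1.0
large_scale = gaussian_filter(field, sigma_km / pixel_km)          # long-lived, stratiform-like part
small_scale = field - large_scale                                  # short-lived, convective-like part

print("variance split: large-scale", round(float(large_scale.var()), 4),
      "small-scale", round(float(small_scale.var()), 4))
```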
Scaling forecast models for wind turbulence and wind turbine power intermittency
NASA Astrophysics Data System (ADS)
Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy
2017-04-01
The intermittency of wind turbine power remains an important issue for the massive development of this renewable energy. The energy peaks injected into the electric grid create difficulties for energy distribution management. Hence, a correct forecast of wind power in the short and middle term is needed, given the high unpredictability of the intermittency phenomenon. We consider a statistical approach through the analysis and characterization of stochastic fluctuations. The theoretical framework is the multifractal modelling of wind velocity fluctuations. Here, we consider data from three wind turbines, two of which use direct-drive technology. These turbines produce energy in real exploitation conditions and allow us to test our models forecasting power production at different time horizons. Two forecast models were developed based on two physical features observed in the wind and power time series: the scaling properties on the one hand and the intermittency of the wind power increments on the other. The first tool is related to the intermittency through a multifractal lognormal fit of the power fluctuations. The second tool is based on an analogy between the power scaling properties and a fractional Brownian motion. Indeed, an intrinsic long-term memory is found in both time series. Both models show encouraging results, since the tendency of the signal is correctly reproduced over different time scales. These tools are first steps in the search for efficient forecasting approaches for grid adaptation in the face of wind energy fluctuations.
Miller, William C; Deathe, A Barry; Speechley, Mark
2003-05-01
To evaluate the internal consistency, test-retest reliability, and construct validity of the Activities-specific Balance Confidence (ABC) Scale among people who have a lower-limb amputation. Retest design. A university-affiliated outpatient amputee clinic in Ontario. Two samples of individuals who have unilateral transtibial and transfemoral amputation. Sample 1 (n=54) was a consecutive and sample 2 (n=329) a convenience sample of all members of the clinic population. Not applicable. Repeated application of the ABC Scale, a 16-item questionnaire that assesses confidence in performing various mobility-related tasks. Correlation to test hypothesized relationships between the ABC Scale and the 2-minute walk (2MWT) and the timed up-and-go (TUG) tests; and assessment of the ability of the ABC Scale to discriminate among groups based on amputation cause, amputation level, mobility device use, automatic stepping ability, wearing time, stair climbing ability, and walking distance. Test-retest reliability (intraclass correlation coefficient) of the ABC Scale was .91 (95% confidence interval [CI], .84-.95) with individual item test-retest coefficients ranging from .53 to .87. Internal consistency, measured by Cronbach alpha, was .95. Hypothesized associations with the 2MWT and TUG test were observed with correlations of .72 (95% CI, .56-.84) and -.70 (95% CI, -.82 to -.53), respectively. The ABC Scale discriminated between all groups except those based on amputation level. Balance confidence, as measured by the ABC Scale, is a construct that provides unique information potentially useful to clinicians who provide amputee rehabilitation. The ABC Scale is reliable, with strong support for validity. Study of the scale's responsiveness is recommended.
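A minimal sketch of the two reliability statistics reported above, computed on synthetic data rather than the clinic samples: Cronbach's alpha over 16 items and a one-way random-effects ICC(1,1) across two test occasions.

```python
# Sketch: Cronbach's alpha and a one-way ICC for a 16-item scale, synthetic data.
import numpy as np

rng = np.random.default_rng(8)
n_subj, n_items = 54, 16
ability = rng.normal(70, 15, n_subj)
items_t1 = np.clip(ability[:, None] + rng.normal(0, 10, (n_subj, n_items)), 0, 100)
items_t2 = np.clip(ability[:, None] + rng.normal(0, 10, (n_subj, n_items)), 0, 100)

def cronbach_alpha(items):
    """Internal consistency: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def icc_oneway(scores):
    """One-way random-effects ICC(1,1); scores is subjects x occasions."""
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

totals = np.column_stack([items_t1.mean(axis=1), items_t2.mean(axis=1)])
print(f"alpha = {cronbach_alpha(items_t1):.2f}, ICC(1,1) = {icc_oneway(totals):.2f}")
```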
NASA Technical Reports Server (NTRS)
Barrett, C. E.; Presler, A. F.
1976-01-01
A FORTRAN computer program (COREST) was developed to analyze the high-temperature paralinear oxidation behavior of metals. It is based on a mass-balance approach and uses typical gravimetric input data. COREST was applied to predominantly Cr2O3-forming alloys tested isothermally for long times. These alloys behaved paralinearly above 1100 C as a result of simultaneous scale formation and scale vaporization. Output includes the pertinent formation and vaporization constants and kinetic values of interest. COREST also estimates specific sample weight and specific scale weight as a function of time. Most importantly, from a corrosion standpoint, it estimates specific metal loss.
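A sketch of the paralinear kinetics that COREST analyzes, assuming the usual form in which the scale grows parabolically while vaporizing linearly, dw/dt = k_p/(2w) − k_v; the constants and integration settings are illustrative, not COREST output.

```python
# Sketch: numerically integrate paralinear oxidation of the specific scale weight w.
import numpy as np
from scipy.integrate import solve_ivp

k_p = 0.05   # parabolic scale-formation constant (mg^2 cm^-4 h^-1), illustrative
k_v = 0.01   # linear vaporization constant (mg cm^-2 h^-1), illustrative

def rhs(t, w):
    # net rate of change of specific scale weight: parabolic growth minus vaporization
    return k_p / (2.0 * np.maximum(w, 1e-6)) - k_v

sol = solve_ivp(rhs, (0.0, 500.0), [1e-3], max_step=1.0)
w_limit = k_p / (2.0 * k_v)                    # limiting scale weight where growth balances loss
print(f"specific scale weight at 500 h: {sol.y[0, -1]:.3f} mg/cm^2 (limiting value {w_limit:.3f})")
```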
Karain, Wael I
2017-11-28
Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques such as principal component analysis are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that has the ability to detect transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis-based bootstrap technique is able to detect transitions between different dynamical states of a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need for large enough time windows to ensure good statistical quality of the recurrence complexity measures needed to detect the transitions.
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and on the Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.
Martin, Stephanie L.-O.; Carek, Andrew M.; Kim, Chang-Sei; Ashouri, Hazar; Inan, Omer T.; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2016-01-01
Pulse transit time (PTT) is being widely pursued for cuff-less blood pressure (BP) monitoring. Most efforts have employed the time delay between ECG and finger photoplethysmography (PPG) waveforms as a convenient surrogate of PTT. However, these conventional pulse arrival time (PAT) measurements include the pre-ejection period (PEP) and the time delay through small, muscular arteries and may thus be an unreliable marker of BP. We assessed a bathroom weighing scale-like system for convenient measurement of ballistocardiography and foot PPG waveforms – and thus PTT through larger, more elastic arteries – in terms of its ability to improve tracking of BP in individual subjects. We measured “scale PTT”, conventional PAT, and cuff BP in humans during interventions that increased BP but changed PEP and smooth muscle contraction differently. Scale PTT tracked the diastolic BP changes well, with correlation coefficient of −0.80 ± 0.02 (mean ± SE) and root-mean-squared-error of 7.6 ± 0.5 mmHg after a best-case calibration. Conventional PAT was significantly inferior in tracking these changes, with correlation coefficient of −0.60 ± 0.04 and root-mean-squared-error of 14.6 ± 1.5 mmHg (p < 0.05). Scale PTT also tracked the systolic BP changes better than conventional PAT but not to an acceptable level. With further development, scale PTT may permit reliable, convenient measurement of BP. PMID:27976741
The Stability of Perceived Pubertal Timing across Adolescence
Cance, Jessica Duncan; Ennett, Susan T.; Morgan-Lopez, Antonio A.; Foshee, Vangie A.
2011-01-01
It is unknown whether perceived pubertal timing changes as puberty progresses or whether it is an important component of adolescent identity formation that is fixed early in pubertal development. The purpose of this study is to examine the stability of perceived pubertal timing among a school-based sample of rural adolescents aged 11 to 17 (N=6,425; 50% female; 53% White). Two measures of pubertal timing were used, stage-normative, based on the Pubertal Development Scale, a self-report scale of secondary sexual characteristics, and peer-normative, a one-item measure of perceived pubertal timing. Two longitudinal methods were used: one-way random effects ANOVA models and latent class analysis. When calculating intraclass correlation coefficients using the one-way random effects ANOVA models, which is based on the average reliability from one time point to the next, both measures had similar, but poor, stability. In contrast, latent class analysis, which looks at the longitudinal response pattern of each individual and treats deviation from that pattern as measurement error, showed three stable and distinct response patterns for both measures: always early, always on-time, and always late. Study results suggest instability in perceived pubertal timing from one age to the next, but this instability is likely due to measurement error. Thus, it may be necessary to take into account the longitudinal pattern of perceived pubertal timing across adolescence rather than measuring perceived pubertal timing at one point in time. PMID:21983873
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations due to multi-streaming at small scales and low redshifts.
NASA Astrophysics Data System (ADS)
Jiang, Xiaolong; Zhang, Lijuan; Bai, Yang; Liu, Ying; Liu, Zhengkun; Qiu, Keqiang; Liao, Wei; Zhang, Chuanchao; Yang, Ke; Chen, Jing; Jiang, Yilan; Yuan, Xiaodong
2017-07-01
In this work, we experimentally investigate the surface nano-roughness during the inductively coupled plasma etching of fused silica, and discover a novel bi-stage time evolution of surface nano-morphology. At the beginning, the rms roughness, correlation length and nano-mound dimensions increase linearly and rapidly with etching time. At the second stage, the roughening process slows down dramatically. The switch of evolution stage synchronizes with the morphological change from dual-scale roughness comprising long wavelength underlying surface and superimposed nano-mounds to one scale of nano-mounds. A theoretical model based on surface morphological change is proposed. The key idea is that at the beginning, etched surface is dual-scale, and both larger deposition rate of etch inhibitors and better plasma etching resistance at the surface peaks than surface valleys contribute to the roughness development. After surface morphology transforming into one-scale, the difference of plasma resistance between surface peaks and valleys vanishes, thus the roughening process slows down.
Near real-time digital holographic microscope based on GPU parallel computing
NASA Astrophysics Data System (ADS)
Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan
2018-01-01
A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields at the micrometer and nanometer scales. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at the micrometer scale, with an average velocity error lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, whereas it is 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed has thus been raised by a factor of 40; in other words, the system can handle holograms at 8.3 frames per second, and near real-time measurement and display of the particle velocity field are realized. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.
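A hedged sketch of the per-plane reconstruction such a system performs, here written as the angular-spectrum method in NumPy (the GPU version would run the same FFTs under CUDA for many planes in parallel); wavelength, pixel pitch and the hologram itself are placeholders, and the paper's exact reconstruction algorithm may differ.

```python
# Sketch: angular-spectrum reconstruction of a hologram at many focal planes.
import numpy as np

wavelength = 632.8e-9            # He-Ne wavelength, assumed
pixel = 3.45e-6                  # sensor pixel pitch, assumed
n = 512

rng = np.random.default_rng(9)
hologram = rng.random((n, n))    # placeholder for a recorded hologram frame

fx = np.fft.fftfreq(n, d=pixel)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
mask = arg > 0                                        # drop evanescent components
kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))

def reconstruct(z):
    """Propagate the hologram field a distance z and return the intensity image."""
    spectrum = np.fft.fft2(hologram)
    field = np.fft.ifft2(spectrum * mask * np.exp(1j * kz * z))
    return np.abs(field) ** 2

planes = [reconstruct(z) for z in np.linspace(1e-3, 5e-3, 100)]   # 100 focal planes
print(len(planes), "planes reconstructed, each of shape", planes[0].shape)
```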
Identifying the time scale of synchronous movement: a study on tropical snakes.
Lindström, Tom; Phillips, Benjamin L; Brown, Gregory P; Shine, Richard
2015-01-01
Individual movement is critical to organismal fitness and also influences broader population processes such as demographic stochasticity and gene flow. Climatic change and habitat fragmentation render the drivers of individual movement especially critical to understand. Rates of movement of free-ranging animals through the landscape are influenced both by intrinsic attributes of an organism (e.g., size, body condition, age), and by external forces (e.g., weather, predation risk). Statistical modelling can clarify the relative importance of those processes, because externally-imposed pressures should generate synchronous displacements among individuals within a population, whereas intrinsic factors should generate consistency through time within each individual. External and intrinsic factors may vary in importance at different time scales. In this study we focused on daily displacement of an ambush-foraging snake from tropical Australia (the Northern Death Adder Acanthophis praelongus), based on a radiotelemetric study. We used a mixture of spectral representation and Bayesian inference to study synchrony in snake displacement by phase shift analysis. We further studied autocorrelation in fluctuations of displacement distances as "one over f noise". Displacement distances were positively autocorrelated with all considered noise colour parameters estimated as >0. We show how the methodology can reveal time scales of particular interest for synchrony and found that for the analysed data, synchrony was only present at time scales above approximately three weeks. We conclude that the spectral representation combined with Bayesian inference is a promising approach for analysis of movement data. Applying the framework to telemetry data of A. praelongus, we were able to identify a cut-off time scale above which we found support for synchrony, thus revealing a time scale where global external drivers have a larger impact on the movement behaviour. Our results suggest that for the considered study period, movement at shorter time scales was primarily driven by factors at the individual level; daily fluctuations in weather conditions had little effect on snake movement.
BME Estimation of Residential Exposure to Ambient PM10 and Ozone at Multiple Time Scales
Yu, Hwa-Lung; Chen, Jiu-Chiuan; Christakos, George; Jerrett, Michael
2009-01-01
Background Long-term human exposure to ambient pollutants can be an important contributing or etiologic factor of many chronic diseases. Spatiotemporal estimation (mapping) of long-term exposure at residential areas based on field observations recorded in the U.S. Environmental Protection Agency’s Air Quality System often suffer from missing data issues due to the scarce monitoring network across space and the inconsistent recording periods at different monitors. Objective We developed and compared two upscaling methods: UM1 (data aggregation followed by exposure estimation) and UM2 (exposure estimation followed by data aggregation) for the long-term PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) and ozone exposure estimations and applied them in multiple time scales to estimate PM and ozone exposures for the residential areas of the Health Effects of Air Pollution on Lupus (HEAPL) study. Method We used Bayesian maximum entropy (BME) analysis for the two upscaling methods. We performed spatiotemporal cross-validations at multiple time scales by UM1 and UM2 to assess the estimation accuracy across space and time. Results Compared with the kriging method, the integration of soft information by the BME method can effectively increase the estimation accuracy for both pollutants. The spatiotemporal distributions of estimation errors from UM1 and UM2 were similar. The cross-validation results indicated that UM2 is generally better than UM1 in exposure estimations at multiple time scales in terms of predictive accuracy and lack of bias. For yearly PM10 estimations, both approaches have comparable performance, but the implementation of UM1 is associated with much lower computation burden. Conclusion BME-based upscaling methods UM1 and UM2 can assimilate core and site-specific knowledge bases of different formats for long-term exposure estimation. This study shows that UM1 can perform reasonably well when the aggregation process does not alter the spatiotemporal structure of the original data set; otherwise, UM2 is preferable. PMID:19440491
Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones
NASA Astrophysics Data System (ADS)
Painter, S. L.; Coon, E. T.; Brooks, S. C.
2017-12-01
Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.
Investigation of Calcium Sulfate’s Contribution to Chemical Off Flavor in Baked Items
2013-09-30
…studies if any calcium additive is needed. If shelf life and texture are not adversely affected, it may prove to be a cost savings to eliminate… a 9-point Quality scale to assess the overall aroma and flavor quality. The 9-point Quality scale is based on the Hedonic scale developed by David Peryam and…
Jossen, Valentin; Schirmer, Cedric; Mostafa Sindi, Dolman; Eibl, Regine; Kraume, Matthias; Pörtner, Ralf; Eibl, Dieter
2016-01-01
The potential of human mesenchymal stem cells (hMSCs) for allogeneic cell therapies has created a large amount of interest. However, this presupposes the availability of efficient scale-up procedures. Promising results have been reported for stirred bioreactors that operate with microcarriers. Recent publications focusing on microcarrier-based stirred bioreactors have demonstrated the successful use of Computational Fluid Dynamics (CFD) and suspension criteria (N_s1u, N_s1) for rapidly scaling up hMSC expansions from mL- to pilot scale. Nevertheless, one obstacle may be the formation of large microcarrier-cell-aggregates, which may result in mass transfer limitations and inhomogeneous distributions of stem cells in the culture broth. The dependence of microcarrier-cell-aggregate formation on impeller speed and shear stress levels was investigated for human adipose derived stromal/stem cells (hASCs) at the spinner scale by recording the Sauter mean diameter (d32) versus time. Cultivation at the suspension criteria provided d32 values between 0.2 and 0.7 mm, the highest cell densities (1.25 × 10^6 cells mL^−1 hASCs), and the highest expansion factors (117.0 ± 4.7 on day 7), while maintaining the expression of specific surface markers. Furthermore, suitability of the suspension criterion N_s1u was investigated for scaling up microcarrier-based processes in wave-mixed bioreactors for the first time. PMID:26981131
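A minimal sketch of the aggregate-size metric tracked above, the Sauter mean diameter d32 = Σd_i^3 / Σd_i^2, computed from synthetic log-normal diameter samples standing in for image-analysis measurements; the assumed growth trend is illustrative.

```python
# Sketch: Sauter mean diameter d32 of microcarrier-cell aggregates over culture time.
import numpy as np

rng = np.random.default_rng(10)
for day in range(8):
    # assumed growth of aggregate diameters over the culture time (mm)
    d = rng.lognormal(mean=np.log(0.2 + 0.07 * day), sigma=0.35, size=500)
    d32 = np.sum(d ** 3) / np.sum(d ** 2)             # Sauter mean diameter
    print(f"day {day}: d32 = {d32:.2f} mm")
```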
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Ting, Yuan-Sen
2018-04-01
The distributions of a galaxy's gas and stars in chemical space encode a tremendous amount of information about that galaxy's physical properties and assembly history. However, present methods for extracting information from chemical distributions are based either on coarse averages measured over galactic scales (e.g. metallicity gradients) or on searching for clusters in chemical space that can be identified with individual star clusters or gas clouds on ˜1 pc scales. These approaches discard most of the information, because in galaxies gas and young stars are observed to be distributed fractally, with correlations on all scales, and the same is likely to be true of metals. In this paper we introduce a first theoretical model, based on stochastically forced diffusion, capable of predicting the multiscale statistics of metal fields. We derive the variance, correlation function, and power spectrum of the metal distribution from first principles, and determine how these quantities depend on elements' astrophysical origin sites and on the large-scale properties of galaxies. Among other results, we explain for the first time why the typical abundance scatter observed in the interstellar media of nearby galaxies is ≈0.1 dex, and we predict that this scatter will be correlated on spatial scales of ˜0.5-1 kpc, and over time-scales of ˜100-300 Myr. We discuss the implications of our results for future chemical tagging studies.
Psychometric properties of stress and anxiety measures among nulliparous women.
Bann, Carla M; Parker, Corette B; Grobman, William A; Willinger, Marian; Simhan, Hyagriv N; Wing, Deborah A; Haas, David M; Silver, Robert M; Parry, Samuel; Saade, George R; Wapner, Ronald J; Elovitz, Michal A; Miller, Emily S; Reddy, Uma M
2017-03-01
To examine the psychometric properties of three measures, the Perceived Stress Scale (PSS), Pregnancy Experience Scale (PES), and State-Trait Anxiety Inventory (STAI), for assessing stress and anxiety during pregnancy among a large sample of nulliparous women. The sample included 10,002 pregnant women participating in the Nulliparous Pregnancy Outcomes Study: Monitoring Mothers-to-Be (nuMoM2b). Internal consistency reliability was assessed with Cronbach's alpha and factorial validity with confirmatory factor analyses. Intraclass correlations (ICCs) were calculated to determine the stability of PSS scales over time. Psychometric properties were examined for the overall sample, as well as for subgroups based on maternal age, race/ethnicity, and language. All three scales demonstrated good internal consistency reliability. Confirmatory factor analyses supported the factor structures of the PSS and the PES. However, a one-factor solution of the trait-anxiety subscale from the STAI did not fit well; a two-factor solution, splitting the items into factors based on the direction of item wording (positive versus negative), provided a better fit. Scores on the PSS were generally stable over time (ICC = 0.60). Subgroup analyses revealed a few items that did not perform well on the Spanish versions of the scales. Overall, the scales performed well, suggesting they could be useful tools for identifying women experiencing high levels of stress and anxiety during pregnancy and allowing for the implementation of interventions to help reduce maternal stress and anxiety.
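As an aside for readers unfamiliar with the two reliability statistics named above, the following minimal sketch computes Cronbach's alpha and a one-way intraclass correlation on synthetic data. It is not the study's code, and ICC(1,1) is only one of several ICC variants the authors may have used.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_var_sum / total_var)

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, n_timepoints) matrix."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Synthetic example: 100 respondents, a 10-item scale scored at two visits.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(100, 1))
visit1_items = true_score + rng.normal(scale=0.8, size=(100, 10))
visit2_items = true_score + rng.normal(scale=0.8, size=(100, 10))
print("Cronbach's alpha:", round(cronbach_alpha(visit1_items), 2))
print("test-retest ICC :", round(icc_oneway(np.column_stack(
    [visit1_items.sum(axis=1), visit2_items.sum(axis=1)])), 2))
```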
objective of this report is to characterize well-being at multiple scales in order to evaluate the relationship of service flows in terms of sustainable well-being. The HWBI results presented represent snapshot assessments for the 2000-2010 time period. Based on the spatial and t...
Nonlinear Image Denoising Methodologies
2002-05-01
In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in... stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
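The general workflow the abstract describes (sample uncertain parameters, run the flow and particle-tracking chain, rank each parameter's influence on travel time) can be sketched as below. The travel-time function here is a toy stand-in for the MODFLOW/MODPATH/SENSAN chain, and all parameter names and ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n_units = 2000, 5

# Log-uniform samples of hydraulic conductivity (m/d) for 5 hypothetical units.
K = 10.0 ** rng.uniform(np.log10(0.01), np.log10(100.0), size=(n_runs, n_units))

def travel_time(K, path_lengths=(200.0, 500.0, 300.0, 150.0, 400.0),
                gradient=1e-3, porosity=0.2):
    """Toy advective travel time: segment length / seepage velocity, summed over units."""
    v = K * gradient / porosity                        # seepage velocity per unit
    return (np.asarray(path_lengths) / v).sum(axis=1)  # days

t = travel_time(K)

def rank_corr(a, b):
    """Spearman-style rank correlation without external dependencies."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Rank each unit's conductivity by its influence on travel time.
for j in range(n_units):
    print(f"unit {j + 1}: rank correlation with travel time = {rank_corr(K[:, j], t):+.2f}")
```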
Volcanoes: observations and impact
Thurber, Clifford; Prejean, Stephanie G.
2012-01-01
Volcanoes are critical geologic hazards that challenge our ability to make long-term forecasts of their eruptive behaviors. They also have direct and indirect impacts on human lives and society. As is the case with many geologic phenomena, the time scales over which volcanoes evolve greatly exceed that of a human lifetime. On the other hand, the time scale over which a volcano can move from inactivity to eruption can be rather short: months, weeks, days, and even hours. Thus, scientific study and monitoring of volcanoes are essential to mitigate risk. There are thousands of volcanoes on Earth, and it is impractical to study and implement ground-based monitoring at them all. Fortunately, there are other effective means for volcano monitoring, including increasingly capable satellite-based technologies.
Speech transformations based on a sinusoidal representation
NASA Astrophysics Data System (ADS)
Quatieri, T. E.; McAulay, R. J.
1986-05-01
A new speech analysis/synthesis technique is presented which provides the basis for a general class of speech transformations, including time-scale modification, frequency scaling, and pitch modification. These modifications can be performed with a time-varying change, permitting continuous adjustment of a speaker's fundamental frequency and rate of articulation. The method is based on a sinusoidal representation of the speech production mechanism that has been shown to produce synthetic speech that preserves the waveform shape and is essentially perceptually indistinguishable from the original. Although the analysis/synthesis system was originally designed for single-speaker signals, it is equally capable of recovering and modifying nonspeech signals such as music, multiple speakers, marine biological sounds, and speech in the presence of interference such as noise and musical backgrounds.
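The core idea (estimate sinusoidal amplitudes, frequencies, and phases for a frame, then resynthesize on a stretched time axis so articulation rate changes while pitch is preserved) can be illustrated with the toy sketch below; the published system's frame-by-frame peak tracking and phase interpolation are omitted.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
# A toy "voiced" frame: three harmonics of 200 Hz.
frame = sum(a * np.cos(2 * np.pi * f * t)
            for a, f in zip([1.0, 0.5, 0.25], [200.0, 400.0, 600.0]))

# Analysis (simplified): pick spectral peaks as sinusoidal parameters.
win = np.hanning(len(frame))
spec = np.fft.rfft(frame * win)
fbins = np.fft.rfftfreq(len(frame), 1 / fs)
mag = np.abs(spec)
peaks, _ = find_peaks(mag, height=0.05 * mag.max())
est_freqs = fbins[peaks]
est_amps = 2 * mag[peaks] / win.sum()
est_phases = np.angle(spec[peaks])

# Synthesis with time-scale modification: stretch the time axis by rho
# while keeping the estimated frequencies, so pitch is preserved.
rho = 1.5
t_out = np.arange(0, 0.05 * rho, 1 / fs)
stretched = sum(a * np.cos(2 * np.pi * f * t_out + p)
                for a, f, p in zip(est_amps, est_freqs, est_phases))
print(len(frame), "samples ->", len(stretched), "samples at the same pitch")
```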
Temporal variation and scaling of parameters for a monthly hydrologic model
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang
2018-03-01
The temporal variation of model parameters is affected by catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models within a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant across months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between the parameters (k and m) and catchment properties (NDVI and the frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect the monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual parameter ε and the means and coefficients of variation of parameters k and m. Based on this empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The resulting model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, but for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
Development and initial validation of a cognitive-based work-nonwork conflict scale.
Ezzedeen, Souha R; Swiercz, Paul M
2007-06-01
Current research related to work and life outside work specifies three types of work-nonwork conflict: time, strain, and behavior-based. Overlooked in these models is a cognitive-based type of conflict whereby individuals experience work-nonwork conflict from cognitive preoccupation with work. Four studies on six different groups (N=549) were undertaken to develop and validate an initial measure of this construct. Structural equation modeling confirmed a two-factor, nine-item scale. Hypotheses regarding cognitive-based conflict's relationship with life satisfaction, work involvement, work-nonwork conflict, and work hours were supported. The relationship with knowledge work was partially supported in that only the cognitive dimension of cognitive-based conflict was related to extent of knowledge work. Hypotheses regarding cognitive-based conflict's relationship with family demands were rejected in that the cognitive dimension correlated positively rather than negatively with number of dependent children and perceived family demands. The study provides encouraging preliminary evidence of scale validity.
Cross scale interactions, nonlinearities, and forecasting catastrophic events
Peters, Debra P.C.; Pielke, Roger A.; Bestelmeyer, Brandon T.; Allen, Craig D.; Munson-McGee, Stuart; Havstad, Kris M.
2004-01-01
Catastrophic events share characteristic nonlinear behaviors that are often generated by cross-scale interactions and feedbacks among system elements. These events result in surprises that cannot easily be predicted based on information obtained at a single scale. Progress on catastrophic events has focused on one of the following two areas: nonlinear dynamics through time without an explicit consideration of spatial connectivity [Holling, C. S. (1992) Ecol. Monogr. 62, 447–502] or spatial connectivity and the spread of contagious processes without a consideration of cross-scale interactions and feedbacks [Zeng, N., Neelin, J. D., Lau, K. M. & Tucker, C. J. (1999) Science 286, 1537–1540]. These approaches rarely have ventured beyond traditional disciplinary boundaries. We provide an interdisciplinary, conceptual, and general mathematical framework for understanding and forecasting nonlinear dynamics through time and across space. We illustrate the generality and usefulness of our approach by using new data and recasting published data from ecology (wildfires and desertification), epidemiology (infectious diseases), and engineering (structural failures). We show that decisions that minimize the likelihood of catastrophic events must be based on cross-scale interactions, and such decisions will often be counterintuitive. Given the continuing challenges associated with global change, approaches that cross disciplinary boundaries to include interactions and feedbacks at multiple scales are needed to increase our ability to predict catastrophic events and develop strategies for minimizing their occurrence and impacts. Our framework is an important step in developing predictive tools and designing experiments to examine cross-scale interactions.
Cleanthous, Sophie; Kinter, Elizabeth; Marquis, Patrick; Petrillo, Jennifer; You, Xiaojun; Wakeford, Craig; Sabatella, Guido
2017-01-01
Background: Study objectives were to evaluate the Multiple Sclerosis Impact Scale (MSIS-29) and explore an optimized scoring structure based on empirical post-hoc analyses of data from the Phase III ADVANCE clinical trial. Methods: ADVANCE MSIS-29 data from six time-points were analyzed in a sample of patients with relapsing–remitting multiple sclerosis (RRMS). Rasch Measurement Theory (RMT) analysis was undertaken to examine three broad areas: sample-to-scale targeting, measurement scale properties, and sample measurement validity. Interpretation of results led to an alternative MSIS-29 scoring structure, further evaluated alongside responsiveness of the original and revised scales at Week 48. Results: RMT analysis provided mixed evidence for Physical and Psychological Impact scales that were sub-optimally targeted at the lower functioning end of the scales. Their conceptual basis could also stand to improve based on item fit results. The revised MSIS-29 rescored scales improved but did not resolve the measurement scale properties and targeting of the MSIS-29. In two out of three revised scales, responsiveness analysis indicated strengthened ability to detect change. Conclusion: The revised MSIS-29 provides an initial evidence-based improved patient-reported outcome (PRO) instrument for evaluating the impact of MS. Revised scoring improves conceptual clarity and interpretation of scores by refining scale structure to include Symptoms, Psychological Impact, and General Limitations. Clinical trial: ADVANCE (ClinicalTrials.gov identifier NCT00906399). PMID:29104758
NASA Astrophysics Data System (ADS)
McQuinn, Kristen B. W.; Skillman, Evan D.; Heilman, Taryn N.; Mitchell, Noah P.; Kelley, Tyler
2018-07-01
Winds are predicted to be ubiquitous in low-mass, actively star-forming galaxies. Observationally, winds have been detected in relatively few local dwarf galaxies, with even fewer constraints placed on their time-scales. Here, we compare galactic outflows traced by diffuse, soft X-ray emission from Chandra Space Telescope archival observations to the star formation histories derived from Hubble Space Telescope imaging of the resolved stellar populations in six starburst dwarfs. We constrain the longevity of a wind to have an upper limit of 25 Myr based on galaxies whose starburst activity has already declined, although a larger sample is needed to confirm this result. We find an average 16 per cent efficiency for converting the mechanical energy of stellar feedback to thermal, soft X-ray emission on the 25 Myr time-scale, somewhat higher than simulations predict. The outflows have likely been sustained for time-scales comparable to the duration of the starbursts (i.e. 100s Myr), after taking into account the time for the development and cessation of the wind. The wind time-scales imply that material is driven to larger distances in the circumgalactic medium than estimated by assuming short, 5-10 Myr starburst durations, and that less material is recycled back to the host galaxy on short time-scales. In the detected outflows, the expelled hot gas shows various morphologies that are not consistent with a simple biconical outflow structure. The sample and analysis are part of a larger program, the STARBurst IRregular Dwarf Survey (STARBIRDS), aimed at understanding the life cycle and impact of starburst activity in low-mass systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu
2014-01-21
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
Temporal scaling of the growth dependent optical properties of microalgae
NASA Astrophysics Data System (ADS)
Zhao, J. M.; Ma, C. Y.; Liu, L. H.
2018-07-01
The optical properties of microalgae are basic parameters for analyzing the light field distribution in photobioreactors (PBRs). As microalgae cells grow, their optical properties vary with growth time due to pigment and lipid accumulation, cell division, and metabolism. In this work, we report a temporal scaling behavior of the growth-dependent optical properties of microalgae cell suspensions, with both experimental and theoretical evidence presented. A new concept, the temporal scaling function (TSF), defined as the ratio of the absorption or scattering cross-section at a growth phase to that at the stationary phase, is introduced to characterize the temporal scaling behavior. The temporal evolution and temporal scaling characteristics of the absorption and scattering cross-sections of three example microalgae species, Chlorella vulgaris, Chlorella pyrenoidosa, and Chlorella protothecoides, were experimentally studied over the spectral range 380-850 nm. It is shown that the TSFs of the absorption and scattering cross-sections for different microalgae species are approximately constant across wavelengths, which confirms the theoretical predictions very well. With the aid of the temporal scaling relation, the optical properties at any growth time can be calculated from those measured at the stationary phase, which opens a new way to determine the time-dependent optical properties of microalgae. The findings of this work will help the understanding of the time-dependent optical properties of microalgae and facilitate their application in light field analysis for PBR design.
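In symbols (notation chosen here, not taken from the paper), the temporal scaling function defined in the abstract can be written as:

```latex
% TSF: ratio of the absorption (a) or scattering (s) cross-section at growth
% time t to its value at the stationary phase t_s; found to be approximately
% wavelength-independent over 380-850 nm.
\[
  \mathrm{TSF}_{a}(t) = \frac{C_{a}(t,\lambda)}{C_{a}(t_{s},\lambda)}, \qquad
  \mathrm{TSF}_{s}(t) = \frac{C_{s}(t,\lambda)}{C_{s}(t_{s},\lambda)},
\]
so that a cross-section at any growth time can be recovered as
\( C(t,\lambda) \approx \mathrm{TSF}(t)\, C(t_{s},\lambda) \).
```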
Wavelet transforms with discrete-time continuous-dilation wavelets
NASA Astrophysics Data System (ADS)
Zhao, Wei; Rao, Raghuveer M.
1999-03-01
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
Sui, Yiyong; Sun, Chong; Sun, Jianbo; Pu, Baolin; Ren, Wei; Zhao, Weimin
2017-01-01
The stability of an electrodeposited nanocrystalline Ni-based alloy coating in a H2S/CO2 environment was investigated by electrochemical measurements, the weight loss method, and surface characterization. The results showed that both the cathodic and anodic processes of the Ni-based alloy coating were simultaneously suppressed, displaying a dramatic decrease in the corrosion current density. The corrosion of the Ni-based alloy coating was controlled by H2S corrosion and showed a general corrosion morphology under the test temperatures. The corrosion products, mainly consisting of Ni3S2, NiS, or Ni3S4, had excellent stability in acid solution. The corrosion rate decreased with the rise of temperature, while the adhesive force of the corrosion scale increased. With the rise of temperature, the deposited morphology and composition of the corrosion products changed, the NiS content in the corrosion scale increased, and the stability and adhesive strength of the corrosion scale improved. The corrosion scale of the Ni-based alloy coating was stable and compact, had strong adhesion, and caused low weight loss, so the corrosion rates calculated by the weight loss method cannot reveal the actual oxidation rate of the coating. As the corrosion time was prolonged, the Ni-based coating thinned while the corrosion scale thickened. The corrosion scale was closely combined with the coating but could not fully prevent the corrosive reactants from reaching the substrate. PMID:28772995
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named the Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of the index of agreement and the mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times shorter than 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times longer than 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected forecasts. Moreover, optimally merging the two forecasts using the hyperbolic tangent weight scheme further improved the forecast accuracy and made it more stable.
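A minimal sketch of the lead-time-dependent blending step is shown below; the hyperbolic-tangent form follows the abstract, but the weight parameters and the toy rainfall fields are illustrative assumptions, not the values calibrated in the study.

```python
import numpy as np

def nwp_weight(lead_min, t0=35.0, tau=15.0):
    """Weight given to the NWP forecast, rising smoothly from ~0 to ~1 around t0 minutes."""
    return 0.5 * (1.0 + np.tanh((lead_min - t0) / tau))

def merge(radar_extrap, nwp_corrected, lead_min):
    """Blend two rainfall fields on the same grid at a given lead time (minutes)."""
    w = nwp_weight(lead_min)
    return (1.0 - w) * radar_extrap + w * nwp_corrected

# Toy example on a 4x4 grid for a few lead times.
rng = np.random.default_rng(1)
radar, nwp = rng.gamma(2.0, 1.0, (4, 4)), rng.gamma(2.0, 1.5, (4, 4))
for lead in (5, 20, 50, 90):
    print(f"lead {lead:3d} min: NWP weight = {nwp_weight(lead):.2f}, "
          f"mean merged rain = {merge(radar, nwp, lead).mean():.2f} mm/h")
```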
Thick strings, the liquid crystal blue phase, and cosmological large-scale structure
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1992-01-01
A phenomenological model based on the liquid crystal blue phase is proposed as a model for a late-time cosmological phase transition. Topological defects, in particular thick strings and/or domain walls, are presented as seeds for structure formation. It is shown that the observed large-scale structure, including quasi-periodic wall structure, can be well fitted in the model without violating the microwave background isotropy bound or the limits from induced gravitational waves and the millisecond pulsar timing. Furthermore, such late-time transitions can produce objects such as quasars at high redshifts. The model appears to work with either cold or hot dark matter.
Environment spectrum and coherence behaviours in a rare-earth doped crystal for quantum memory.
Gong, Bo; Tu, Tao; Zhou, Zhong-Quan; Zhu, Xing-Yu; Li, Chuan-Feng; Guo, Guang-Can
2017-12-21
We theoretically investigate the dynamics of the environment and the coherence behaviour of the central ion in a quantum memory based on a rare-earth doped crystal. The interactions between the central ion and the bath spins suppress the flip-flop rate of the neighbouring bath spins and yield a specific environment spectral density S(ω). Under dynamical decoupling pulses, this spectrum provides a general scaling for the coherence envelope and the coherence time, which can extend to hour-long time scales. The characterized environment spectrum with ultra-long coherence time can be used to implement various quantum communication and information processing protocols.
Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis
NASA Technical Reports Server (NTRS)
Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.
2015-01-01
This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
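The POD/Galerkin step at the heart of the approach can be sketched on a toy linear thermal system as below; the DEIM and TPWL treatments of nonlinear radiative terms described in the paper are omitted, and the system matrices are synthetic rather than derived from a spacecraft model.

```python
import numpy as np

# Toy full-order linear "thermal" model: dT/dt = A @ T + b, with n states.
n, r = 200, 8
rng = np.random.default_rng(3)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Collect snapshots by explicit-Euler time stepping of the full model.
dt, steps = 1e-3, 400
T = np.zeros(n)
snapshots = []
for _ in range(steps):
    T = T + dt * (A @ T + b)
    snapshots.append(T.copy())
X = np.array(snapshots).T                 # (n, steps) snapshot matrix

# POD: the leading left singular vectors form the reduced basis.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]

# Galerkin projection: reduced operators, then a reduced simulation.
Ar, br = Phi.T @ A @ Phi, Phi.T @ b
q = np.zeros(r)
for _ in range(steps):
    q = q + dt * (Ar @ q + br)

err = np.linalg.norm(Phi @ q - T) / np.linalg.norm(T)
print(f"relative error of the {r}-mode ROM at the final time: {err:.2e}")
```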
NASA Astrophysics Data System (ADS)
Huynh, Tan Vu; Messinger, Robert J.; Sarou-Kanian, Vincent; Fayon, Franck; Bouchet, Renaud; Deschamps, Michaël
2017-10-01
The intrinsic ionic conductivity of polyethylene oxide (PEO)-based block copolymer electrolytes is often assumed to be identical to the conductivity of the PEO homopolymer. Here, we use high-field 7Li nuclear magnetic resonance (NMR) relaxation and pulsed-field-gradient (PFG) NMR diffusion measurements to probe lithium ion dynamics over nanosecond and millisecond time scales in PEO and polystyrene (PS)-b-PEO-b-PS electrolytes containing the lithium salt LiTFSI. Variable-temperature longitudinal (T1) and transverse (T2) 7Li NMR relaxation rates were acquired at three magnetic field strengths and quantitatively analyzed for the first time at such fields, enabling us to distinguish two characteristic time scales that describe fluctuations of the 7Li nuclear electric quadrupolar interaction. Fast lithium motions [up to O (ns)] are essentially identical between the two polymer electrolytes, including sub-nanosecond vibrations and local fluctuations of the coordination polyhedra between lithium and nearby oxygen atoms. However, lithium dynamics over longer time scales [O (10 ns) and greater] are slower in the block copolymer compared to the homopolymer, as manifested experimentally by their different transverse 7Li NMR relaxation rates. Restricted dynamics and altered thermodynamic behavior of PEO chains anchored near PS domains likely explain these results.
NASA Astrophysics Data System (ADS)
Quiroz, M.; Cienfuegos, R.
2017-12-01
At present, the scientific community has acquired good knowledge on characterizing the evolution of tsunami energy at ocean and shelf scales. For instance, the investigations of Rabinovich (2013) and Yamazaki (2011) represent important advances in this subject. In the present paper we focus instead on tsunami energy evolution, and ultimately its decay, in coastal areas, because the characteristic time scales of this process have implications for early warning, evacuation initiation, and cancelling. We address the tsunami energy evolution analysis at three different spatial scales: a global scale at the ocean basin level, in particular the Pacific Ocean basin; a regional scale comprising processes that occur at the continental shelf level; and a local scale comprising coastal areas or bays. These scales were selected to understand how the coastal response is associated with the tsunami and how the energy evolves until it is completely dissipated. Through signal processing methods, such as discrete and wavelet analyses, we analyze time series of recent tsunamigenic events recorded in the main Chilean coastal cities. Based on this analysis, we propose a conceptual model built on the influence of geomorphological variables on the evolution and decay of tsunami energy; this model acts as a filter from the seismic source to the observed response in coastal zones. Finally, we aim to provide practical tools that establish patterns of behavior and scaling of energy evolution, connecting seismic source variables and the geomorphological component, in order to understand the response and predict behavior at a given site.
Fractal Signals & Space-Time Cartoons
NASA Astrophysics Data System (ADS)
Oetama, H. C. Jakob; Maksoed, W. H.
2016-03-01
In ``Theory of Scale Relativity'' (1991), L. Nottale states that ``scale relativity is a geometrical & fractal space-time theory''. This is compared to ``a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals'' (Gregory W. Wornell, ``Signal Processing with Fractals'', 1995). Further, Fig. 1.1 [ibid., p. 3] shows a simple waveform from a statistically scale-invariant random process. The accompanying RLE Technical Report 566, ``Synthesis, Analysis & Processing of Fractal Signals'' (Wornell, Oct 1991), is herewith intended to deduce =a Δt + (1 - β Δt) ... in Petersen et al., ``Scale invariant properties of public debt growth'', 2010, p. 38006p2, to [1/{1 - (2α(λ)/3π) ln(λ/r)}] as depicted in Laurent Nottale, 1991, p. 24. Acknowledgment is devoted to the late HE Mr. Brigadier General TNI (rtd.) Prof. Ir. HANDOJO.
Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.
Bohley, Christian; Heuer, Jana; Stannarius, Ralf
2005-12-01
We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used to describe the electromagnetic waves interacting with EHC director patterns is a numerical approach that directly discretizes the Maxwell equations. It works as a space-grid-time-domain method and computes the electric and magnetic fields in time steps. This so-called finite-difference time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray tracing and analytical approximations. Results of earlier optical studies of EHC structures based on ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
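The FDTD idea referred to above (direct time-stepping of the Maxwell curl equations on a staggered space grid) is illustrated by the one-dimensional vacuum sketch below; it is not the anisotropic nematic solver used by the authors.

```python
import numpy as np

# 1D FDTD (Yee scheme) in vacuum, normalized units with c = 1 and dt = dx (Courant limit).
nx, nt = 400, 600
Ez = np.zeros(nx)      # electric field
Hy = np.zeros(nx)      # magnetic field, staggered half a cell from Ez

for n in range(nt):
    # Update H from the spatial difference of E (Faraday's law).
    Hy[:-1] += Ez[1:] - Ez[:-1]
    # Update E from the spatial difference of H (Ampere's law).
    Ez[1:] += Hy[1:] - Hy[:-1]
    # Soft Gaussian source injected in the middle of the grid.
    Ez[nx // 2] += np.exp(-0.5 * ((n - 40) / 12.0) ** 2)

print("peak |Ez| after propagation:", np.abs(Ez).max())
```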
NASA Astrophysics Data System (ADS)
Huang, X.; Aldering, G.; Biederman, M.; Herger, B.
2017-11-01
For Type Ia supernovae (SNe Ia) observed through a nonuniform interstellar medium (ISM) in its host galaxy, we investigate whether the nonuniformity can cause observable time variations in dust extinction and in gas absorption due to the expansion of the SN photosphere with time. We show that, owing to the steep spectral index of the ISM density power spectrum, sizable density fluctuation amplitudes at the length scale of typical ISM structures (≳ 10 {pc}) will translate to much smaller fluctuations on the scales of an SN photosphere. Therefore, the typical amplitude of time variation due to a nonuniform ISM, of absorption equivalent widths, and of extinction, would be small. As a result, we conclude that nonuniform ISM density should not impact cosmology measurements based on SNe Ia. We apply our predictions based on the ISM density power-law power spectrum to the observations of two highly reddened SNe Ia, SN 2012cu and SN 2014J.
NASA Astrophysics Data System (ADS)
Huang, Xiaosheng; Aldering, Gregory; Biederman, Moriah; Herger, Brendan
2018-01-01
For Type Ia supernovae (SNe Ia) observed through a non-uniform interstellar medium (ISM) in its host galaxy, we investigate whether the non-uniformity can cause observable time variations in dust extinction and in gas absorption due to the expansion of the SN photosphere with time. We show that, owing to the steep spectral index of the ISM density power spectrum, sizable density fluctuation amplitudes at the length scale of typical ISM structures (>~ 10 pc) will translate to much smaller fluctuations on the scales of a SN photosphere. Therefore the typical amplitude of time variation due to non-uniform ISM, of absorption equivalent widths and of extinction, would be small. As a result, we conclude that non-uniform ISM density should not impact cosmology measurements based on SNe Ia. We apply our predictions based on the ISM density power law power spectrum to the observations of two highly reddened SNe Ia, SN 2012cu and SN 2014J.
Horst, Fabian; Eekhoff, Alexander; Newell, Karl M; Schöllhorn, Wolfgang I
2017-01-01
Traditionally, gait analysis has been centered on the idea of average behavior and normality. On one hand, clinical diagnoses and therapeutic interventions typically assume that average gait patterns remain constant over time. On the other hand, it is well known that all our movements are accompanied by a certain amount of variability, which does not allow us to make two identical steps. The purpose of this study was to examine changes in the intra-individual gait patterns across different time-scales (i.e., tens-of-mins, tens-of-hours). Nine healthy subjects performed 15 gait trials at a self-selected speed on 6 sessions within one day (duration between two subsequent sessions from 10 to 90 mins). For each trial, time-continuous ground reaction forces and lower body joint angles were measured. A supervised learning model using a kernel-based discriminant regression was applied for classifying sessions within individual gait patterns. Discernable characteristics of intra-individual gait patterns could be distinguished between repeated sessions by classification rates of 67.8 ± 8.8% and 86.3 ± 7.9% for the six-session-classification of ground reaction forces and lower body joint angles, respectively. Furthermore, the one-on-one-classification showed that increasing classification rates go along with increasing time durations between two sessions and indicate that changes of gait patterns appear at different time-scales. Discernable characteristics between repeated sessions indicate continuous intrinsic changes in intra-individual gait patterns and suggest a predominant role of deterministic processes in human motor control and learning. Natural changes of gait patterns without any externally induced injury or intervention may reflect continuous adaptations of the motor system over several time-scales. Accordingly, the modelling of walking by means of average gait patterns that are assumed to be near constant over time needs to be reconsidered in the context of these findings, especially towards more individualized and situational diagnoses and therapy.
Stochastic simulation and decadal prediction of hydroclimate in the Western Himalayas
NASA Astrophysics Data System (ADS)
Robertson, A. W.; Chekroun, M. D.; Cook, E.; D'Arrigo, R.; Ghil, M.; Greene, A. M.; Holsclaw, T.; Kondrashov, D. A.; Lall, U.; Lu, M.; Smyth, P.
2012-12-01
Improved estimates of climate over the next 10 to 50 years are needed for long-term planning in water resource and flood management. However, the task of effectively incorporating the results of climate change research into decision-making faces a ``double conflict of scales'': the temporal scales of climate model projections are too long, while their usable spatial scales (global to planetary) are much larger than those needed for actual decision making (at the regional to local level). This work is designed to help tackle this ``double conflict'' in the context of water management over monsoonal Asia, based on dendroclimatic multi-century reconstructions of drought indices and river flows. We identify low-frequency modes of variability with time scales from interannual to interdecadal based on these series, and then generate future scenarios based on (a) empirical model decadal predictions, and (b) stochastic simulations generated with autoregressive models that reproduce the power spectrum of the data. Finally, we consider how such scenarios could be used to develop reservoir optimization models. Results will be presented based on multi-century Upper Indus river discharge reconstructions that exhibit a strong periodicity near 27 years, which is shown to yield some retrospective forecasting skill over the 1700-2000 period at a 15-yr lead time. Stochastic simulations of annual PDSI drought index values over the Upper Indus basin are constructed using Empirical Model Reduction; their power spectra are shown to be quite realistic, with spectral peaks near 5-8 years.
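Step (b) of the scenario generation can be sketched as follows: fit an autoregressive model by Yule-Walker and simulate from it so that the synthetic series approximates the power spectrum of the data. The toy quasi-periodic series below stands in for the discharge reconstructions, and the AR order is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "reconstruction": 300 years of a quasi-periodic ~27-yr signal plus noise.
years = 300
x = np.sin(2 * np.pi * np.arange(years) / 27.0) + 0.8 * rng.standard_normal(years)
x = x - x.mean()

# Fit an AR(p) model by solving the Yule-Walker equations.
p = 6
acov = np.array([np.dot(x[:years - k], x[k:]) / years for k in range(p + 1)])
R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])
phi = np.linalg.solve(R, acov[1:])
sigma2 = acov[0] - phi @ acov[1:]

# Stochastic simulation with the fitted coefficients; its power spectrum
# approximates that of the data, including the multi-decadal peak.
sim = np.zeros(years)
for t in range(p, years):
    lagged = sim[t - p:t][::-1]          # sim[t-1], ..., sim[t-p]
    sim[t] = phi @ lagged + np.sqrt(sigma2) * rng.standard_normal()

print("AR coefficients:", np.round(phi, 2))
```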
NASA Astrophysics Data System (ADS)
Nogueira, M.
2017-10-01
Monthly-to-decadal variability of the regional precipitation over Intertropical Convergence Zone and north-Atlantic and north-Pacific storm tracks was investigated using ERA-20C reanalysis. Satellite-based precipitation (
Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal
NASA Astrophysics Data System (ADS)
Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana
2014-05-01
The presence or absence of long-range correlations in the variations of indoor Radon concentrations has recently attracted considerable interest. Since Radon is a radioactive gas naturally emitted from the ground in certain geological settings, understanding the environmental factors controlling its concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we re-analyze two high-resolution records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a multiplicity of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal some complex behavior with qualitatively different properties on different time-scales: white noise in the high-frequency part, indications of some long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, multi-resolution analysis based on the discrete wavelet transform allows us to separately study contributions on different time scales and to characterize their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability. Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time-scales resembling the properties of a long-range correlated process.
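Of the methods listed, first-order detrended fluctuation analysis is perhaps the most compact to sketch; the snippet below estimates the DFA scaling exponent for synthetic white noise (expected alpha near 0.5) and is not the authors' processing chain.

```python
import numpy as np

def dfa(x, scales):
    """First-order DFA: return the fluctuation function F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)          # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(5)
x = rng.standard_normal(4096)                     # white noise -> alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA scaling exponent alpha = {alpha:.2f}")
```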
NASA Astrophysics Data System (ADS)
Thomas, Christoph K.; Kennedy, Adam M.; Selker, John S.; Moretti, Ayla; Schroth, Martin H.; Smoot, Alexander R.; Tufillaro, Nicholas B.; Zeeman, Matthias J.
2012-02-01
We present a novel approach based on fibre-optic distributed temperature sensing (DTS) to measure the two-dimensional thermal structure of the surface layer at high resolution (0.25 m, ≈0.5 Hz). Air temperature observations obtained from a vertically-oriented fibre-optics array of approximate dimensions 8 m × 8 m and sonic anemometer data from two levels were collected over a short grass field located in the flat bottom of a wide valley with moderate surface heterogeneity. The objectives of the study were to evaluate the potential of the DTS technique to study small-scale processes in the surface layer over a wide range of atmospheric stability, and to analyze the space-time dynamics of transient cold-air pools in the calm boundary layer. The time response and precision of the fibre-based temperatures were adequate to resolve individual sub-metre sized turbulent and non-turbulent structures, of time scales of seconds, in the convective, neutral, and stable surface layer. Meaningful sensible heat fluxes were computed using the eddy-covariance technique when combined with vertical wind observations. We present a framework that determines the optimal environmental conditions for applying the fibre-optics technique in the surface layer and identifies areas for potentially significant improvements of the DTS performance. The top of the transient cold-air pool was highly non-stationary indicating a superposition of perturbations of different time and length scales. Vertical eddy scales in the strongly stratified transient cold-air pool derived from the DTS data agreed well with the buoyancy length scale computed using the vertical velocity variance and the Brunt-Vaisala frequency, while scales for weak stratification disagreed. The high-resolution DTS technique opens a new window into spatially sampling geophysical fluid flows including turbulent energy exchange.
Towards Remotely Sensed Composite Global Drought Risk Modelling
NASA Astrophysics Data System (ADS)
Dercas, Nicholas; Dalezios, Nicolas
2015-04-01
Drought is a multi-faceted issue and requires a multi-faceted assessment. Droughts may originate from precipitation deficits, which, sequentially and at different time and space scales, may impact soil moisture, plant wilting, stream flow, wildfire, groundwater levels, famine, and social conditions. There is a need to monitor drought even at a global scale. Key variables for monitoring drought include climate data, soil moisture, stream flow, groundwater, reservoir and lake levels, snow pack, short-, medium- and long-range forecasts, vegetation health and fire danger. However, there is no single definition of drought, and there are different drought indicators and indices even for each drought type. There are already four operational global drought risk monitoring systems, namely the U.S. Drought Monitor, the European Drought Observatory (EDO), and the African and Australian systems. These systems require further research to improve accuracy and the time and space scales considered, to cover all types of drought and, eventually, to achieve operational efficiency. This paper attempts to contribute to the above-mentioned objectives. Based on a similar general methodology, the multi-indicator approach is considered. This has resulted from previous research in the Mediterranean region, an agriculturally vulnerable region, using several drought indices separately, namely RDI and VHI. The proposed scheme attempts to consider different space scaling based on agroclimatic zoning through remotely sensed techniques and several indices. Needless to say, the agroclimatic potential of agricultural areas has to be assessed in order to achieve sustainable and efficient use of natural resources in combination with production maximization. Similarly, the time scale is also considered by addressing drought-related impacts affected by precipitation deficits on time scales ranging from a few days to a few months, such as non-irrigated agriculture, topsoil moisture, wildfire danger, range and pasture conditions, and unregulated stream flows. Keywords: Remote sensing; Composite Drought Indicators; Global Drought Risk Monitoring.
Drought forecasting in Luanhe River basin involving climatic indices
NASA Astrophysics Data System (ADS)
Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.
2017-11-01
Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class, based on the conditional distribution of a multivariate normal distribution so as to incorporate two large-scale climatic indices at the same time, and apply the forecasting model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI are based on the assumption that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be considered normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. The controlling climatic indices for each gauge are then selected by the Pearson correlation test, and the multivariate normality of the SPI and corresponding climatic indices for the current month together with the SPI 1, 2, and 3 months later is checked using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that incorporating large-scale climatic indices can improve forecasting accuracy.
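The central computation the abstract describes, conditioning a joint normal model of future SPI, current SPI, and climatic indices to obtain transition probabilities into SPI classes, can be sketched as below. The correlation matrix, observed values, and class boundaries are illustrative assumptions, not the values estimated for the Luanhe basin gauges.

```python
import numpy as np
from scipy.stats import norm

# Joint correlation matrix (illustrative values) for the standardized variables:
# [SPI(t + lead), SPI(t), climatic index 1, climatic index 2].
Sigma = np.array([
    [1.00, 0.55, 0.30, 0.20],
    [0.55, 1.00, 0.25, 0.15],
    [0.30, 0.25, 1.00, 0.10],
    [0.20, 0.15, 0.10, 1.00],
])

def conditional_normal(Sigma, obs):
    """Mean and variance of variable 0 given observed values of variables 1..n."""
    s11 = Sigma[0, 0]
    s12 = Sigma[0, 1:]
    S22 = Sigma[1:, 1:]
    w = np.linalg.solve(S22, s12)     # S22^{-1} s12
    return w @ obs, s11 - s12 @ w

# Observed: current SPI = -1.2 (moderate drought), index values 0.8 and -0.3.
mu_c, var_c = conditional_normal(Sigma, np.array([-1.2, 0.8, -0.3]))
sd = np.sqrt(var_c)

# Transition probabilities into SPI classes (illustrative class boundaries).
bounds = {"severe/extreme (< -1.5)": (-np.inf, -1.5),
          "moderate (-1.5, -1.0)": (-1.5, -1.0),
          "mild (-1.0, 0)": (-1.0, 0.0),
          "non-drought (>= 0)": (0.0, np.inf)}
for name, (lo, hi) in bounds.items():
    p = norm.cdf(hi, mu_c, sd) - norm.cdf(lo, mu_c, sd)
    print(f"P[{name}] = {p:.2f}")
```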
Volpe, Daniele; Giantin, Maria Giulia; Manuela, Pilleri; Filippetto, Consuelo; Pelosin, Elisa; Abbruzzese, Giovanni; Antonini, Angelo
2017-08-01
To compare the efficacy of two physiotherapy protocols (water-based vs. non-water-based) on postural deformities of patients with Parkinson's disease. A single blind, randomized controlled pilot study. Inpatient (Rehabilitative Department). A total of 30 patients with idiopathic Parkinson's disease. Participants were randomly assigned to one of two eight-week treatment groups: water-based (n = 15) or non-water-based physiotherapy exercises (n = 15). Changes in the degree of cervical and dorsal flexion and in the angle of lateral inclination of the trunk (evaluated by means of a posturographic system) were used as primary outcomes. The Unified Parkinson's Disease Rating Scale section III, Timed Up and Go Test, Berg Balance Scale, Activities-specific Balance Confidence, Falls Efficacy Scale and the Parkinson's disease quality of life questionnaire (39 items) were the secondary outcomes. All outcomes were assessed at baseline, at the end of training and eight weeks after treatment. Patients were always tested at the time of their optimal antiparkinsonian medication ('on' phase). After the treatment, only the Parkinson's disease subjects randomized to water-based treatment showed a significant improvement of trunk posture, with a significant reduction of cervical flexion (water-based group: -65.2°; non-water-based group: +1.7°), dorsal flexion (water-based group: -22.5°; non-water-based group: -6.5°) and lateral inclination of the trunk (water-based group: -2.3°; non-water-based group: +0.3°). Both groups presented significant improvements in the secondary clinical outcomes without between-group differences. Our results show that water-based physiotherapy was effective for improving postural deformities in patients with Parkinson's disease.
Development of a scale of executive functioning for the RBANS.
Spencer, Robert J; Kitchen Andren, Katherine A; Tolle, Kathryn A
2018-01-01
The Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) is a cognitive battery that contains scales of several cognitive abilities, but no scale in the instrument is exclusively dedicated to executive functioning. Although the subtests allow for observation of executive-type errors, each error is of fairly low base rate, and healthy and clinical normative data are lacking on the frequency of these types of errors, making their significance difficult to interpret in isolation. The aim of this project was to create an RBANS executive errors scale (RBANS EE) with items comprised of qualitatively dysexecutive errors committed throughout the test. Participants included Veterans referred for outpatient neuropsychological testing. Items were initially selected based on theoretical literature and were retained based on item-total correlations. The RBANS EE (a percentage calculated by dividing the number of dysexecutive errors by the total number of responses) was moderately related to each of seven established measures of executive functioning and was strongly predictive of dichotomous classification of executive impairment. Thus, the scale had solid concurrent validity, justifying its use as a supplementary scale. The RBANS EE requires no additional administration time and can provide a quantified measure of otherwise unmeasured aspects of executive functioning.
Scope of Nursing Care in Polish Intensive Care Units
Wysokiński, Mariusz; Ksykiewicz-Dorota, Anna; Fidecki, Wiesław
2013-01-01
Introduction. The TISS-28 scale, which may be used for nursing staff scheduling in the ICU, does not reflect the complete scope of nursing resulting from the varied cultural and organizational conditions of individual health care systems. Aim. The objective of the study was to answer the question: what scope of the nursing care provided by Polish nurses in ICUs does the TISS-28 scale reflect? Material and Methods. Working time measurement methods were used in the study. For the needs of the study, 252 hours of continuous observation (day-long observation) and 3,697 time-schedule measurements were carried out. Results. The total nursing time was 4125.79 min (68.76 hours), that is, 60.15% of the total working time of the Polish nurses during the period analyzed. Based on the median test, a difference (χ2 = 16,945.8, P < 0.001) was observed between the nurses' workload resulting from activities included in the TISS-28 scale and the workload resulting from interventions within the scopes of care not considered in this scale in Polish ICUs. Conclusions. The original version of the TISS-28 scale does not fully reflect the workload of Polish nurses employed in ICUs. PMID:24490162
Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach
NASA Technical Reports Server (NTRS)
Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.
2003-01-01
"Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.
Talbot, Karley-Dale S; Kerns, Kimberly A
2014-11-01
The current study examined prospective memory (PM, both time-based and event-based) and time estimation (TR, a time reproduction task) in children with and without attention deficit hyperactivity disorder (ADHD). This study also investigated the influence of task performance and TR on time-based PM in children with ADHD relative to controls. A sample of 69 children, aged 8 to 13 years, completed the CyberCruiser-II time-based PM task, a TR task, and the Super Little Fisherman event-based PM task. PM performance was compared with children's TR abilities, parental reports of daily prospective memory disturbances (Prospective and Retrospective Memory Questionnaire for Children, PRMQC), and ADHD symptomatology (Conners' rating scales). Children with ADHD scored more poorly on event-based PM, time-based PM, and TR; interestingly, TR did not appear related to performance on time-based PM. In addition, it was found that PRMQC scores and ADHD symptom severity were related to performance on the time-based PM task but not to performance on the event-based PM task. These results provide some limited support for theories that propose a distinction between event-based PM and time-based PM.
Kinetic Modeling of Slow Energy Release in Non-Ideal Carbon Rich Explosives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vitello, P; Fried, L; Glaesemann, K
2006-06-20
We present here the first self-consistent kinetic-based model for long time-scale energy release in detonation waves in the non-ideal explosive LX-17. Non-ideal, insensitive, carbon-rich explosives, such as those based on TATB, are believed to have significant late-time slow release of energy. One proposed source of this energy is diffusion-limited growth of carbon clusters. In this paper we consider the late-time energy release problem in detonation waves using the thermochemical code CHEETAH linked to a multidimensional ALE hydrodynamics model. The linked CHEETAH-ALE model treats slowly reacting chemical species using kinetic rate laws, with chemical equilibrium assumed for species coupled via fast time-scale reactions. In the model presented here we include separate rate equations for the transformation of the un-reacted explosive to product gases and for the growth of a small particulate form of condensed graphite to a large particulate form. The small particulate graphite is assumed to be in chemical equilibrium with the gaseous species, allowing for coupling between the instantaneous thermodynamic state and the production of graphite clusters. For the explosive burn rate a pressure-dependent rate law was used. Low-pressure freezing of the gas species mass fractions was also included to account for regions where the kinetic coupling time-scales become longer than the hydrodynamic time-scales. The model rate parameters were calibrated using cylinder and rate-stick experimental data. Excellent long-time agreement and size-effect results were achieved.
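The abstract names a pressure-dependent burn rate law without giving its form. For orientation only, a commonly used generic form is sketched below; the calibrated LX-17 rate law and its exponents are not stated in the abstract, so this particular form is an assumption.

```latex
% A generic pressure-dependent burn rate law (illustrative form, not the
% calibrated LX-17 model): \lambda is the mass fraction of reacted explosive,
% P the local pressure, and k, n, m are calibration constants.
\[
  \frac{d\lambda}{dt} \;=\; k\,P^{\,n}\,(1-\lambda)^{m},
  \qquad 0 \le \lambda \le 1 .
\]
```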
NASA Astrophysics Data System (ADS)
Bengulescu, Marc; Blanc, Philippe; Wald, Lucien
2016-04-01
An analysis of the variability of the surface solar irradiance (SSI) at different local time-scales is presented in this study. Since geophysical signals, such as long-term measurements of the SSI, are often produced by the non-linear interaction of deterministic physical processes that may also be under the influence of non-stationary external forcings, the Hilbert-Huang transform (HHT), an adaptive, noise-assisted, data-driven technique, is employed to extract locally - in time and in space - the embedded intrinsic scales at which a signal oscillates. The transform consists of two distinct steps. First, by means of the Empirical Mode Decomposition (EMD), the time-series is "de-constructed" into a finite number - often small - of zero-mean components that have distinct temporal scales of variability, termed hereinafter the Intrinsic Mode Functions (IMFs). The signal model of the components is an amplitude modulation - frequency modulation (AM - FM) one, and can also be thought of as an extension of a Fourier series having both time-varying amplitude and frequency. Following the decomposition, Hilbert spectral analysis is then employed on the IMFs, yielding a time-frequency-energy representation that portrays changes in the spectral contents of the original data with respect to time. As measurements of surface solar irradiance may possibly be contaminated by the manifestation of different types of stochastic processes (i.e. noise), the identification of real, physical processes from this background of random fluctuations is of interest. To this end, an adaptive background noise null hypothesis is assumed, based on the robust statistical properties of the EMD when applied to time-series of different classes of noise (e.g. white, red or fractional Gaussian). Since the algorithm acts as an efficient constant-Q dyadic, "wavelet-like", filter bank, the different noise inputs are decomposed into components having the same spectral shape, but that are translated to the next lower octave in the spectral domain. Thus, when the sampling step is increased, the spectral shape of the IMFs cannot remain at its original position, due to the new lower Nyquist frequency, and is instead pushed toward the lower scaled frequency. Based on these features, the identification of potential signals within the data should become possible without any prior knowledge of the background noises. When applying the above-outlined procedure to decennial time-series of surface solar irradiance, only the component that has an annual time-scale of variability is shown to have statistical properties that diverge from those of noise. Nevertheless, the noise-like components are not completely devoid of information, as it is found that their AM components have a non-null rank correlation coefficient with the annual mode, i.e. the background noise intensity seems to be modulated by the seasonal cycle. The findings have possible implications for the modelling and forecasting of the surface solar irradiance, by discriminating its deterministic from its quasi-stochastic constituents, at distinct local time-scales.
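The two HHT steps described above can be sketched in a few lines, assuming the PyEMD package (distributed as "EMD-signal") is available for the decomposition and using scipy's analytic-signal routine for the Hilbert step; the synthetic series stands in for the decennial SSI records.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed dependency, distributed as the "EMD-signal" package

# Toy daily series: annual cycle plus weather-scale noise over ten years.
rng = np.random.default_rng(0)
t = np.arange(3650, dtype=float)
x = 200.0 + 120.0 * np.sin(2 * np.pi * t / 365.25) + 40.0 * rng.standard_normal(t.size)

# Step 1: Empirical Mode Decomposition into intrinsic mode functions (IMFs).
imfs = EMD().emd(x, t)

# Step 2: Hilbert spectral analysis of each IMF, giving instantaneous
# amplitude and frequency (summarized here by a mean period per mode).
dt = 1.0  # days
for k, imf in enumerate(imfs):
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    inst_freq = np.gradient(np.unwrap(np.angle(analytic)), dt) / (2.0 * np.pi)
    mean_period = 1.0 / max(np.abs(inst_freq).mean(), 1e-9)
    print(f"mode {k + 1}: mean period ~ {mean_period:8.1f} days, "
          f"mean amplitude ~ {amplitude.mean():6.1f}")
```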
ERIC Educational Resources Information Center
Andretta, James R.; Worrell, Frank C.; Mello, Zena R.
2014-01-01
Using cluster analysis of Adolescent Time Attitude Scale (ATAS) scores in a sample of 300 adolescents ("M" age = 16 years; "SD" = 1.25; 60% male; 41% European American; 25.3% Asian American; 11% African American; 10.3% Latino), the authors identified five time attitude profiles based on positive and negative attitudes toward…
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
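The basic scale-up logic can be written in a few lines: each respondent reports how many people they know in the hidden population (m_i) and the size of their personal network (d_i), and the hidden-population size is estimated as N_H ≈ N · Σm_i / Σd_i. The snippet below is a generic sketch of that estimator with made-up survey numbers; it is not the generalized estimator or the Curitiba data used by the authors.

```python
# Generic network scale-up estimator (illustrative data, not the study's).
import numpy as np

def scale_up_estimate(known_in_hidden, network_sizes, total_population):
    """N_hidden ~ N * sum(m_i) / sum(d_i), the basic scale-up estimator."""
    return total_population * np.sum(known_in_hidden) / np.sum(network_sizes)

# Hypothetical survey: 500 respondents in a city of 1.8 million
rng = np.random.default_rng(1)
d = rng.poisson(290, size=500)          # reported personal network sizes
m = rng.poisson(0.8, size=500)          # reported contacts in the hidden population

print(f"Estimated hidden population size: {scale_up_estimate(m, d, 1_800_000):,.0f}")
```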
Direct Characterization of Ultrafast Energy-Time Entangled Photon Pairs.
MacLean, Jean-Philippe W; Donohue, John M; Resch, Kevin J
2018-02-02
Energy-time entangled photons are critical in many quantum optical phenomena and have emerged as important elements in quantum information protocols. Entanglement in this degree of freedom often manifests itself on ultrafast time scales, making it very difficult to detect, whether one employs direct or interferometric techniques, as photon-counting detectors have insufficient time resolution. Here, we implement ultrafast photon counters based on nonlinear interactions and strong femtosecond laser pulses to probe energy-time entanglement in this important regime. Using this technique and single-photon spectrometers, we characterize all the spectral and temporal correlations of two entangled photons with femtosecond resolution. This enables the witnessing of energy-time entanglement using uncertainty relations and the direct observation of nonlocal dispersion cancellation on ultrafast time scales. These techniques are essential to understand and control the energy-time degree of freedom of light for ultrafast quantum optics.
Enabling Large-Scale IoT-Based Services through Elastic Publish/Subscribe.
Vavassori, Sergio; Soriano, Javier; Fernández, Rafael
2017-09-19
In this paper, we report an algorithm that is designed to leverage the cloud as infrastructure to support Internet of Things (IoT) by elastically scaling in/out so that IoT-based service users never stop receiving sensors' data. This algorithm is able to provide an uninterrupted service to end users even during the scaling operation since its internal state repartitioning is transparent for publishers or subscribers; its scaling operation is time-bounded and depends only on the dimension of the state partitions to be transmitted to the different nodes. We describe its implementation in E-SilboPS, an elastic content-based publish/subscribe (CBPS) system specifically designed to support context-aware sensing and communication in IoT-based services. E-SilboPS is a key internal asset of the FIWARE IoT services enablement platform, which offers an architecture of components specifically designed to capture data from, or act upon, IoT devices as easily as reading/changing the value of attributes linked to context entities. In addition, we discuss the quantitative measurements used to evaluate the scale-out process, as well as the results of this evaluation. This new feature rounds out the context-aware content-based features of E-SilboPS by providing, for example, the necessary middleware for constructing dashboards and monitoring panels that are capable of dynamically changing queries and continuously handling data in IoT-based services.
Memory Maintenance in Synapses with Calcium-Based Plasticity in the Presence of Background Activity
Higgins, David; Graupner, Michael; Brunel, Nicolas
2014-01-01
Most models of learning and memory assume that memories are maintained in neuronal circuits by persistent synaptic modifications induced by specific patterns of pre- and postsynaptic activity. For this scenario to be viable, synaptic modifications must survive the ubiquitous ongoing activity present in neural circuits in vivo. In this paper, we investigate the time scales of memory maintenance in a calcium-based synaptic plasticity model that has been shown recently to be able to fit different experimental data sets from hippocampal and neocortical preparations. We find that in the presence of background activity on the order of 1 Hz, parameters that fit pyramidal layer 5 neocortical data lead to a very fast decay of synaptic efficacy, with time scales of minutes. We then identify two ways in which this memory time scale can be extended: (i) the extracellular calcium concentrations in the experiments used to fit the model are larger than estimated concentrations in vivo. Lowering extracellular calcium concentration to in vivo levels leads to an increase in memory time scales of several orders of magnitude; (ii) adding a bistability mechanism so that each synapse has two stable states at sufficiently low background activity leads to a further boost in memory time scale, since memory decay is no longer described by an exponential decay from an initial state, but by an escape from a potential well. We argue that both features are expected to be present in synapses in vivo. These results are obtained first in a single synapse connecting two independent Poisson neurons, and then in simulations of a large network of excitatory and inhibitory integrate-and-fire neurons. Our results emphasise the need for studying plasticity at physiological extracellular calcium concentration, and highlight the role of synaptic bi- or multistability in the stability of learned synaptic structures. PMID:25275319
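To illustrate the qualitative point that escape from a potential well yields far longer retention than plain exponential decay, the sketch below simulates a generic bistable variable in a symmetric double-well potential perturbed by background noise (Euler-Maruyama). It is a toy model, not the calcium-based plasticity rule of the paper; the potential shape, noise levels and time units are arbitrary assumptions.

```python
# Toy illustration: noisy bistable variable escaping from a potential well.
# This is a generic double-well sketch, not the calcium-based model of the paper.
import numpy as np

def drift(rho):
    # -dU/drho for the double-well potential U(rho) = rho^2 (1 - rho)^2 (wells at 0 and 1)
    return -(2.0 * rho * (1.0 - rho) * (1.0 - 2.0 * rho))

def escape_time(noise, dt=0.01, t_max=5000.0, seed=0):
    """Time for rho, started in the 'potentiated' well (rho = 1), to cross the barrier at 0.5."""
    rng = np.random.default_rng(seed)
    rho, t = 1.0, 0.0
    while t < t_max:
        rho += drift(rho) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if rho < 0.5:          # crossed the barrier -> memory lost
            return t
    return t_max               # no escape within the simulated horizon

# Lower background noise -> dramatically longer retention (Kramers-like escape).
for sigma in (0.30, 0.22, 0.16):
    print(f"noise {sigma:.2f}: escape time ~ {escape_time(sigma):.0f} time units")
```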
NASA Astrophysics Data System (ADS)
Anquetin, Sandrine; Vannier, Olivier; Ollagnier, Mélody; Braud, Isabelle
2015-04-01
This work contributes to the evaluation of the dynamics of human exposure during flash-flood events in the Mediterranean region. Understanding why and how commuters modify their daily mobility in the Cévennes - Vivarais area (France) is the long-term objective of the study. To reach this objective, the methodology relies on three steps: i) evaluation of daily travel patterns, ii) reconstruction of road flooding events in the region based on hydrological simulation at the regional scale in order to capture the time evolution and the intensity of flooding, and iii) identification of the daily fluctuation of exposure according to road flooding scenarios and the time evolution of mobility patterns. This work deals with the second step. To do so, the physically based and non-calibrated hydrological model CVN (Vannier, 2013) is implemented to retrieve the hydrological signature of past flash-flood events in Southern France. Four past events are analyzed (September 2002; September 2005 (split into 2 different events); October 2008). Since the regional scale is investigated, the scales of the studied catchments range from a few km² to a few hundred km², and many catchments are ungauged. The evaluation is based on a multi-scale approach using complementary observations coming from post-flood experiments (for small and/or ungauged catchments) and the operational hydrological network (for larger catchments). The scales of risk (time and location of the road flooding) are also compared to observed data on road cuts. The discussion aims at improving our understanding of the hydrological processes associated with road flooding vulnerability. We specifically analyze the runoff coefficient and the ratio between surface and groundwater flows at the regional scale. The results show that, overall, the three regional simulations provide good scores for the probability of detection and false alarms concerning road flooding (1600 points are analyzed for the whole region). Our evaluation procedure provides new insights into the active hydrological processes at small scales (catchment areas < 10 km²), since these small scales, distributed over the whole region, are analyzed through road cut data and post-flood field investigations. As shown in Vannier (2013), the signature of the altered geological layer is significant in the simulated discharges. For catchments with schistose geology, the simulated discharge, whatever the catchment size, is usually overestimated. Vannier, O., 2013, Apport de la modélisation hydrologique régionale à la compréhension des processus de crue en zone méditerranéenne, PhD thesis (in French), Grenoble University.
Woodbury, Allan D.; Rubin, Yoram
2000-01-01
A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.
Extreme-volatility dynamics in crude oil markets
NASA Astrophysics Data System (ADS)
Jiang, Xiong-Fei; Zheng, Bo; Qiu, Tian; Ren, Fei
2017-02-01
Based on concepts and methods from statistical physics, we investigate extreme-volatility dynamics in the crude oil markets, using the high-frequency data from 2006 to 2010 and the daily data from 1986 to 2016. The dynamic relaxation of extreme volatilities is described by a power law, whose exponents usually depend on the magnitude of extreme volatilities. In particular, the relaxation before and after extreme volatilities is time-reversal symmetric at the high-frequency time scale, but time-reversal asymmetric at the daily time scale. This time-reversal asymmetry is mainly induced by exogenous events. However, the dynamic relaxation after exogenous events exhibits the same characteristics as that after endogenous events. An interacting herding model both with and without exogenous driving forces could qualitatively describe the extreme-volatility dynamics.
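A minimal way to expose this kind of relaxation is to locate extreme-volatility events in a return series, average the absolute returns at fixed lags after each event, and fit a power law |r|(t) ~ t^(-p) to the decay. The sketch below runs that procedure on a synthetic GARCH-like series; the data, threshold and lag range are illustrative (the true decay of this toy model is closer to exponential), so only the procedure, not the fitted exponent, carries over to the crude-oil analysis.

```python
# Sketch: average volatility relaxation after extreme events, fitted as <|r|>(t) ~ t^(-p).
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
z = rng.standard_normal(n)

# Simple GARCH(1,1)-like synthetic return series with volatility clustering (illustrative).
r = np.empty(n)
sig2 = 1e-4
for i in range(n):
    r[i] = np.sqrt(sig2) * z[i]
    sig2 = 1e-6 + 0.09 * r[i] ** 2 + 0.90 * sig2

vol = np.abs(r)
threshold = np.quantile(vol, 0.9995)                  # define "extreme volatility"
events = np.where(vol > threshold)[0]
events = events[(events > 500) & (events < n - 500)]  # keep events with room on both sides

lags = np.arange(1, 400)
relax_after = np.array([vol[events + k].mean() for k in lags])   # mean |return| k steps after events

# Least-squares fit of log(relaxation) vs log(lag) gives the relaxation exponent p.
slope, intercept = np.polyfit(np.log(lags), np.log(relax_after), 1)
print(f"relaxation exponent p ~ {-slope:.2f}")
```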
Multifractality Signatures in Quasars Time Series. I. 3C 273
NASA Astrophysics Data System (ADS)
Belete, A. Bewketu; Bravo, J. P.; Canto Martins, B. L.; Leão, I. C.; De Araujo, J. M.; De Medeiros, J. R.
2018-05-01
The presence of multifractality in a time series shows different correlations for different time scales as well as intermittent behaviour that cannot be captured by a single scaling exponent. The identification of a multifractal nature allows for a characterization of the dynamics and of the intermittency of the fluctuations in non-linear and complex systems. In this study, we search for a possible multifractal structure (multifractality signature) of the flux variability in the quasar 3C 273 time series for all electromagnetic wavebands at different observation points, and for the origins of the observed multifractality. This study is intended to highlight how the scaling behaves across the different bands of the selected candidate; this can be used as an additional technique to group quasars based on the fractal signature observed in their time series and to determine whether quasars are non-linear physical systems. The Multifractal Detrended Moving Average algorithm (MFDMA) has been used to study the scaling in non-linear, complex and dynamic systems. To achieve this goal, we applied the backward (θ = 0) MFDMA method for one-dimensional signals. We observe weak multifractal (close to monofractal) behaviour in some of the time series of our candidate, except in the mm, UV and X-ray bands. The non-linear temporal correlation is the main source of the observed multifractality in the time series, whereas the heavy-tailedness of the distribution contributes less.
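For readers unfamiliar with the method, a compact sketch of backward (θ = 0) MFDMA is given below: the profile of the series is detrended with a backward moving average, segment-wise fluctuations are raised to the power q, and the generalized Hurst exponent h(q) is read off from the log-log scaling. Window sizes, q values and the white-noise test signal are illustrative choices, not the settings used for the 3C 273 light curves.

```python
# Compact sketch of backward (theta = 0) MFDMA for a 1-D signal.
import numpy as np

def mfdma_backward(x, scales, qs):
    y = np.cumsum(x - np.mean(x))                      # profile of the series
    h = []
    for q in qs:
        logF = []
        for n in scales:
            kernel = np.ones(n) / n
            ma = np.convolve(y, kernel, mode="valid")  # ma[j] = mean(y[j : j + n]) -> backward MA
            eps = y[n - 1:] - ma                       # residual after detrending
            n_seg = eps.size // n
            seg = eps[: n_seg * n].reshape(n_seg, n)
            F2 = np.mean(seg ** 2, axis=1)             # segment-wise mean-square fluctuation
            Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q) # q-th order fluctuation function (q != 0)
            logF.append(np.log(Fq))
        h.append(np.polyfit(np.log(scales), logF, 1)[0])  # h(q) from log F_q(n) vs log n
    return np.array(h)

# Monofractal test signal: Gaussian white noise (expected h(q) close to 0.5 for all q).
rng = np.random.default_rng(3)
x = rng.standard_normal(2 ** 14)
scales = np.unique(np.logspace(1, 3, 20).astype(int))
qs = [-3, -1, 1, 2, 3]
for q, hq in zip(qs, mfdma_backward(x, scales, qs)):
    print(f"h({q:+d}) ~ {hq:.2f}")
```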
White, Mark; Butterworth, Tony; Wells, John S G
2017-10-01
To explore the experiences of participants involved in the implementation of the Productive Ward: Releasing Time to Care™ initiative in Ireland, identifying key implementation lessons. A large-scale quality improvement programme, Productive Ward: Releasing Time to Care™, was introduced nationwide in Ireland in 2011. We captured accounts from ward-based teams in an implementation phase during 2013-14 to explore their experiences. Semi-structured, in-depth interviews with a purposive sample of 24 members of ward-based teams from nine sites involved in the second national phase of the initiative were conducted. Interviews were analysed and coded under themes, using a seven-stage iterative process. The predominant theme identified was associated with the implementation and management of the initiative and included: project management; training; preparation; information and communication; and participants' negative experiences. The most prominent challenge reported related to other competing clinical priorities. Despite the structured approach of Productive Ward: Releasing Time to Care™, it appears that overstretched and busy clinical environments struggle to provide the right climate and context for ward-based teams to engage and interact actively with quality improvement tools, methods and activities. Findings highlight five key aspects of implementation and management that will help facilitate successful adoption of large-scale, ward-based quality improvement programmes such as Productive Ward: Releasing Time to Care™. Utilising pre-existing implementation or quality frameworks to assess each ward/unit for 'readiness' prior to commencing a quality improvement intervention such as Productive Ward: Releasing Time to Care™ should be considered. © 2017 John Wiley & Sons Ltd.
Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo
2012-02-08
Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical for the formation of pure h-BN nanosheets as well as for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased 3-fold compared with that of a device without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1987-01-01
A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.
Development of multiscale complexity and multifractality of fetal heart rate variability.
Gierałtowski, Jan; Hoyer, Dirk; Tetschke, Florian; Nowack, Samuel; Schneider, Uwe; Zebrowski, Jan
2013-11-01
During fetal development a complex system grows and coordination over multiple time scales is formed towards an integrated behavior of the organism. Since essential cardiovascular and associated coordination is mediated by the autonomic nervous system (ANS) and the ANS activity is reflected in recordable heart rate patterns, multiscale heart rate analysis is a tool predestined for the diagnosis of prenatal maturation. Analyses over multiple time scales require sufficiently long data sets, while the recordings of fetal heart rate, as well as the behavioral states studied, are themselves short. Care must be taken that the analysis methods used are appropriate for short data lengths. We investigated multiscale entropy and multifractal scaling exponents from 30 minute recordings of 27 normal fetuses, aged between 23 and 38 weeks of gestational age (WGA), during the quiet state. In multiscale entropy, we found complexity lower than that of non-correlated white noise over all 20 coarse-graining time scales investigated. A significant maturation-related increase in complexity was most strongly expressed at scale 2, using both sample entropy and generalized mutual information as complexity estimates. Multiscale multifractal analysis (MMA), in which the Hurst surface h(q,s) is calculated, where q is the multifractal parameter and s is the scale, was applied to the fetal heart rate data. MMA is a method derived from detrended fluctuation analysis (DFA). We modified the base algorithm of MMA to be applicable to short time series analysis, using overlapping data windows and a reduction of the scale range. We looked for the q and s for which the Hurst exponent h(q,s) is most correlated with gestational age. We used this value of the Hurst exponent to predict the gestational age based only on fetal heart rate variability properties. Comparison with the true age of the fetus gave satisfactory results (error 2.17±3.29 weeks; p<0.001; R²=0.52). In addition, we found that the normally used DFA scale range is non-optimal for fetal age evaluation. We conclude that 30 min recordings are appropriate and sufficient for assessing fetal age by multiscale entropy and multiscale multifractal analysis. The predominant prognostic role of scale 2 heart beats for MSE and scale 39 heart beats (at q=-0.7) for MMA can be explored neither by single-scale complexity measures nor by standard detrended fluctuation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
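For the first of the two tools, the sketch below implements plain coarse-graining plus sample entropy, i.e. a basic multiscale entropy curve, on a short surrogate series. The parameter choices (m = 2, r = 0.15·SD, a handful of scales) follow common practice but are assumptions; the fetal recordings and the modified MMA algorithm of the paper are not reproduced.

```python
# Sketch: multiscale entropy = coarse-graining + sample entropy at each scale.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) with Chebyshev distance; brute-force O(N^2), fine for short series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[: n * scale]).reshape(n, scale).mean(axis=1)

# Illustrative surrogate "RR-interval" series (white noise); real data would replace this.
rng = np.random.default_rng(4)
rr = 0.45 + 0.03 * rng.standard_normal(1800)      # ~30 min of beats at ~1.5 s per beat
r_tol = 0.15 * np.std(rr)                         # fixed tolerance across scales
for scale in (1, 2, 5, 10, 20):
    mse = sample_entropy(coarse_grain(rr, scale), m=2, r=r_tol)
    print(f"scale {scale:2d}: sample entropy ~ {mse:.2f}")
```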
Heinz, R; Wolf, H; Schuchmann, H; End, L; Kolter, K
2000-05-01
In spite of the wealth of experience available in the pharmaceutical industry, tablet formulations are still largely developed on an empirical basis, and the scale-up from laboratory to production is a time-consuming and costly process. Using Ludipress greatly simplifies formulation development and the manufacturing process because the active ingredient only needs to be mixed briefly with Ludipress and a lubricant before being compressed into tablets. The studies described here were designed to investigate the scale-up of Ludipress-based formulations from laboratory to production scale, and to predict changes in tablet properties due to changes in format, compaction pressure, and the use of different tablet presses. It was found that the tensile strength of tablets made of Ludipress increased linearly with compaction pressure up to 300 MPa. It was also independent of the geometry of the tablets (diameter, thickness, shape). It is therefore possible to give an equation with which the compaction pressure required to achieve a given hardness can be calculated for a given tablet form. The equation has to be modified slightly to convert from a single-punch press to a rotary tableting machine. Tablets produced in the rotary machine at the same pressure have a slightly higher tensile strength. The rate of increase in pressure, and therefore the throughput, has no effect on the tensile strength of Ludipress tablets. It is thought that a certain minimum dwell time is responsible for this difference. The production of tablets based on Ludipress can be scaled up from one rotary press to another without problems if the powder mixtures are prepared with the same mixing energy. The tensile strength curve determined for tablets made with Ludipress alone can also be applied to tablets with a small quantity (< 10%) of an active ingredient.
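The practical consequence, that a target tensile strength can be translated into a required compaction pressure through a linear relation, fits in a few lines. The slope and the rotary-press factor below are hypothetical placeholders, not the fitted coefficients reported in the study.

```python
# Tiny sketch: invert a linear tensile-strength / compaction-pressure relation.
# The slope and rotary-press factor below are hypothetical, not the paper's fitted values.
def required_pressure(target_tensile_mpa, slope=0.012, rotary_factor=1.0):
    """Compaction pressure (MPa) needed for a target tensile strength, sigma = slope * P."""
    return target_tensile_mpa / (slope * rotary_factor)

print(f"single-punch press: {required_pressure(1.5):.0f} MPa")
print(f"rotary press:       {required_pressure(1.5, rotary_factor=1.1):.0f} MPa")  # slightly stronger tablets at equal pressure
```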
Relating the large-scale structure of time series and visibility networks.
Rodríguez, Miguel A
2017-06-01
The structure of time series is usually characterized by means of correlations. A new proposal based on visibility networks has been considered recently. Visibility networks are complex networks mapped from surfaces or time series using visibility properties. The structures of time series and visibility networks are closely related, as shown by means of fractional time series in recent works. In these works, a simple relationship between the Hurst exponent H of fractional time series and the exponent of the distribution of edges γ of the corresponding visibility network, which exhibits a power law, is shown. To check and generalize these results, in this paper we delve into this idea of connected structures by defining both structures more properly. In addition to the exponents used before, H and γ, which take into account local properties, we consider two more exponents that, as we will show, characterize global properties. These are the exponent α for time series, which gives the scaling of the variance with the size as var∼T^{2α}, and the exponent κ of their corresponding network, which gives the scaling of the averaged maximum of the number of edges, 〈k_{M}〉∼N^{κ}. With this representation, a more precise connection between the structures of general time series and their associated visibility network is achieved. Similarities and differences are more clearly established, and new scaling forms of complex networks appear in agreement with their respective classes of time series.
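A compact way to see the mapping at work is to build the natural visibility graph directly from its definition (two samples are linked if the straight line between them stays above every intermediate sample) and then inspect the degree sequence, from which γ and the maximum degree k_M would be estimated. The O(N²) construction and the Brownian surrogate below are illustrative, not the authors' code.

```python
# Sketch: natural visibility graph of a time series and its degree statistics.
import numpy as np

def visibility_degrees(y):
    """Degree of every node in the natural visibility graph (direct O(N^2) construction)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    deg = np.zeros(n, dtype=int)
    for a in range(n - 1):
        for b in range(a + 1, n):
            tc, yc = t[a + 1:b], y[a + 1:b]
            # visibility: every intermediate sample lies strictly below the connecting line
            line = y[b] + (y[a] - y[b]) * (t[b] - tc) / (t[b] - t[a])
            if np.all(yc < line):
                deg[a] += 1
                deg[b] += 1
    return deg

rng = np.random.default_rng(5)
y = np.cumsum(rng.standard_normal(500))     # Brownian-like series (H ~ 0.5) as a stand-in
deg = visibility_degrees(y)
print(f"mean degree ~ {deg.mean():.1f}, max degree k_M = {deg.max()}")
```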
Jian, Yun; Silvestri, Sonia; Brown, Jeff; Hickman, Rick; Marani, Marco
2014-01-01
An improved understanding of mosquito population dynamics under natural environmental forcing requires adequate field observations spanning the full range of temporal scales over which mosquito abundance fluctuates in natural conditions. Here we analyze a 9-year daily time series of uninterrupted observations of adult mosquito abundance for multiple mosquito species in North Carolina to identify characteristic scales of temporal variability, the processes generating them, and the representativeness of observations at different sampling resolutions. We focus in particular on Aedes vexans and Culiseta melanura and, using a combination of spectral analysis and modeling, we find significant population fluctuations with characteristic periodicity between 2 days and several years. Population dynamical modeling suggests that the observed fast fluctuation scales (2 days to weeks) are strongly affected by varying mosquito activity in response to rapid changes in meteorological conditions, a process neglected in most representations of mosquito population dynamics. We further suggest that the range of time scales over which adult mosquito population variability takes place can be divided into three main parts. At small time scales (indicatively 2 days to 1 month) observed population fluctuations are mainly driven by behavioral responses to rapid changes in weather conditions. At intermediate scales (1 to several months) environmentally forced fluctuations in generation times, mortality rates, and density dependence determine the population's characteristic response times. At longer scales (annual to multi-annual) mosquito populations follow seasonal and inter-annual environmental changes. We conclude that observations of adult mosquito populations should be based on a sub-weekly sampling frequency and that predictive models of mosquito abundance must include behavioral dynamics to separate the effects of varying mosquito activity from actual changes in the abundance of the underlying population.
Temporal Clustering of Regional-Scale Extreme Precipitation Events in Southern Switzerland
NASA Astrophysics Data System (ADS)
Barton, Yannick; Giannakaki, Paraskevi; Von Waldow, Harald; Chevalier, Clément; Pfhal, Stephan; Martius, Olivia
2017-04-01
Temporal clustering of extreme precipitation events on subseasonal time scales is a form of compound extremes and is of crucial importance for the formation of large-scale flood events. Here, the temporal clustering of regional-scale extreme precipitation events in southern Switzerland is studied. These precipitation events are relevant for the flooding of lakes in southern Switzerland and northern Italy. This research determines whether temporal clustering is present and then identifies the dynamics that are responsible for the clustering. An observation-based gridded precipitation dataset of Swiss daily rainfall sums and ECMWF reanalysis datasets are used. To analyze the clustering in the precipitation time series, a modified version of Ripley's K function is used. It determines the average number of extreme events in a time period to characterize temporal clustering on subseasonal time scales and to determine the statistical significance of the clustering. Significant clustering of regional-scale precipitation extremes is found on subseasonal time scales during the fall season. Four high-impact clustering episodes are then selected and the dynamics responsible for the clustering are examined. During the four clustering episodes, all heavy precipitation events were associated with an upper-level breaking Rossby wave over western Europe, and in most cases strong diabatic processes upstream over the Atlantic played a role in the amplification of these breaking waves. Atmospheric blocking downstream over eastern Europe supported this wave breaking during two of the clustering episodes. During one of the clustering periods, several extratropical transitions of tropical cyclones in the Atlantic contributed to the formation of high-amplitude ridges over the Atlantic basin and downstream wave breaking. During another event, blocking over Alaska assisted the phase locking of the Rossby waves downstream over the Atlantic.
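The core of the clustering test can be sketched as a one-dimensional Ripley's K: count, per event, how many other extreme-precipitation days fall within a time window w, and compare the curve with Monte Carlo realisations of the same number of events placed uniformly at random. Edge corrections, deseasonalisation and the exact modification used by the authors are omitted, and the event dates below are synthetic.

```python
# Sketch: 1-D (temporal) Ripley's K with a simple Monte Carlo significance envelope.
import numpy as np

def ripley_k_1d(event_days, total_days, windows):
    """K(w) = (T / n^2) * number of ordered event pairs separated by at most w days."""
    events = np.sort(np.asarray(event_days, dtype=float))
    n = events.size
    k = np.empty(len(windows))
    for j, w in enumerate(windows):
        pairs = 0
        for i in range(n):
            d = np.abs(events - events[i])
            d[i] = w + 1                     # exclude the self-pair
            pairs += np.sum(d <= w)
        k[j] = total_days * pairs / n ** 2
    return k

rng = np.random.default_rng(6)
T = 10_000                                   # length of the record in days
windows = np.array([5, 10, 20, 40, 80])

# Synthetic clustered events: short bursts of extremes around a few random "episodes".
centres = rng.integers(0, T, size=12)
events = np.clip(np.concatenate([c + rng.integers(-15, 16, size=5) for c in centres]), 0, T - 1)

k_obs = ripley_k_1d(events, T, windows)
k_null = np.array([ripley_k_1d(rng.integers(0, T, size=events.size), T, windows)
                   for _ in range(200)])
upper = np.quantile(k_null, 0.95, axis=0)    # 95% envelope under complete temporal randomness
for w, ko, ku in zip(windows, k_obs, upper):
    print(f"w = {w:3d} days: K_obs = {ko:7.1f}, null 95% = {ku:7.1f}, clustered: {ko > ku}")
```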
NASA Astrophysics Data System (ADS)
Suberlak, Krzysztof; Ivezić, Željko; MacLeod, Chelsea L.; Graham, Matthew; Sesar, Branimir
2017-12-01
We present an improved photometric error analysis for the 7 100 CRTS (Catalina Real-Time Transient Survey) optical light curves for quasars from the SDSS (Sloan Digital Sky Survey) Stripe 82 catalogue. The SDSS imaging survey has provided a time-resolved photometric data set, which greatly improved our understanding of the quasar optical continuum variability: Data for monthly and longer time-scales are consistent with a damped random walk (DRW). Recently, newer data obtained by CRTS provided puzzling evidence for enhanced variability, compared to SDSS results, on monthly time-scales. Quantitatively, SDSS results predict about 0.06 mag root-mean-square (rms) variability for monthly time-scales, while CRTS data show about a factor of 2 larger rms, for spectroscopically confirmed SDSS quasars. Our analysis has successfully resolved this discrepancy as due to slightly underestimated photometric uncertainties from the CRTS image processing pipelines. As a result, the correction for observational noise is too small and the implied quasar variability is too large. The CRTS photometric error correction factors, derived from detailed analysis of non-variable SDSS standard stars that were re-observed by CRTS, are about 20-30 per cent, and result in reconciling quasar variability behaviour implied by the CRTS data with earlier SDSS results. An additional analysis based on independent light curve data for the same objects obtained by the Palomar Transient Factory provides further support for this conclusion. In summary, the quasar variability constraints on weekly and monthly time-scales from SDSS, CRTS and PTF surveys are mutually compatible, as well as consistent with DRW model.
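The correction at the heart of this analysis is simple: intrinsic variability is the observed scatter with the (rescaled) photometric error removed in quadrature, so underestimated errors inflate the inferred variability. A small numerical illustration with made-up magnitudes, not the actual CRTS/SDSS values:

```python
# Illustration: intrinsic rms variability after subtracting photometric noise in quadrature.
import numpy as np

def intrinsic_rms(observed_rms, pipeline_error, error_correction=1.0):
    """sigma_intrinsic^2 = sigma_observed^2 - (c * sigma_pipeline)^2 (clipped at zero)."""
    return np.sqrt(max(observed_rms ** 2 - (error_correction * pipeline_error) ** 2, 0.0))

observed, pipeline = 0.13, 0.09          # mag; hypothetical monthly-scale values
print(f"no correction : {intrinsic_rms(observed, pipeline):.3f} mag")
print(f"errors x 1.25 : {intrinsic_rms(observed, pipeline, 1.25):.3f} mag")   # ~20-30% larger errors shrink the inferred variability
```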
NASA Astrophysics Data System (ADS)
Lien, F. S.; Yee, E.; Ji, H.; Keats, A.; Hsieh, K. J.
2006-06-01
The release of chemical, biological, radiological, or nuclear (CBRN) agents by terrorists or rogue states in a North American city (densely populated urban centre) and the subsequent exposure, deposition and contamination are emerging threats in an uncertain world. The modeling of the transport, dispersion, deposition and fate of a CBRN agent released in an urban environment is an extremely complex problem that encompasses potentially multiple space and time scales. The availability of high-fidelity, time-dependent models for the prediction of a CBRN agent's movement and fate in a complex urban environment can provide the strongest technical and scientific foundation for support of Canada's more broadly based effort at advancing counter-terrorism planning and operational capabilities. The objective of this paper is to report the progress of developing and validating an integrated, state-of-the-art, high-fidelity multi-scale, multi-physics modeling system for the accurate and efficient prediction of urban flow and dispersion of CBRN (and other toxic) materials discharged into these flows. Development of this proposed multi-scale modeling system will provide the real-time modeling and simulation tool required to predict injuries, casualties and contamination and to make relevant decisions (based on the strongest technical and scientific foundations) in order to minimize the consequences of a CBRN incident in a populated centre.
Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Akao, Yoshinori; Higashikawa, Yoshiyasu
2017-10-01
The time-resolved luminescence spectra and the lifetimes of eighteen black writing inks were measured to differentiate pen ink on altered documents. The spectra and lifetimes depended on the samples. About half of the samples only exhibited short-lived luminescence components on the nanosecond time scale. On the other hand, the other samples exhibited short- and long-lived components on the microsecond time scale. The samples could be classified into fifteen groups based on the luminescence spectra and dynamics. Therefore, luminescence lifetime can be used for the differentiation of writing inks, and luminescence lifetime imaging can be applied for the examination of altered documents. Copyright © 2017 Elsevier B.V. All rights reserved.
Scale-up of industrial biodiesel production to 40 m³ using a liquid lipase formulation.
Price, Jason; Nordblad, Mathias; Martel, Hannah H; Chrabas, Brent; Wang, Huali; Nielsen, Per Munk; Woodley, John M
2016-08-01
In this work, we demonstrate the scale-up from an 80 L fed-batch scale to 40 m³, along with the design of a 4 m³ continuous process, for enzymatic biodiesel production catalyzed by NS-40116 (a liquid formulation of a modified Thermomyces lanuginosus lipase). Based on the analysis of actual pilot plant data for the transesterification of used cooking oil and brown grease, we propose a method applying first-order integral analysis to fed-batch data based on either the bound glycerol or free fatty acid content in the oil. This method greatly simplifies the modeling process and gives an indication of the effect of mixing at the various scales (80 L to 40 m³), along with the prediction of the residence time needed to reach a desired conversion in a CSTR. Suitable process metrics reflecting commercial performance, such as the reaction time, enzyme efficiency, and reactor productivity, were evaluated for both the fed-batch and CSTR cases. Given similar operating conditions, the CSTR operation, on average, has a reaction time 1.3 times greater than that of the fed-batch operation. We also showed how the process metrics can be used to quickly estimate the selling price of the enzyme. Assuming a biodiesel selling price of 0.6 USD/kg and a one-time use of the enzyme (0.1% (w/w oil) enzyme dosage), the enzyme can then be sold for 30 USD/kg, which ensures that the enzyme cost is not more than 5% of the biodiesel revenue. Biotechnol. Bioeng. 2016;113: 1719-1728. © 2016 Wiley Periodicals, Inc.
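The qualitative finding that a CSTR needs a longer residence time than a fed-batch reactor for the same conversion follows directly from first-order kinetics: a batch needs t = ln(1/(1-X))/k, while a CSTR needs τ = X/(k(1-X)). The snippet below evaluates that ratio; the rate constant and conversions are arbitrary illustrations, not fitted pilot-plant values, and the actual factor (about 1.3 in the paper) depends on the real kinetics and operating window.

```python
# First-order kinetics: batch (or fed-batch) time vs CSTR residence time for conversion X.
import numpy as np

def batch_time(X, k):
    return np.log(1.0 / (1.0 - X)) / k          # integrated first-order batch law

def cstr_residence_time(X, k):
    return X / (k * (1.0 - X))                  # steady-state CSTR mole balance

k = 0.5          # 1/h, hypothetical apparent first-order rate constant
for X in (0.90, 0.95, 0.97):
    tb, tc = batch_time(X, k), cstr_residence_time(X, k)
    print(f"X = {X:.2f}: batch {tb:5.1f} h, CSTR {tc:5.1f} h, ratio {tc / tb:.2f}")
```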
Information Filtering via a Scaling-Based Function
Qiu, Tian; Zhang, Zi-Ke; Chen, Guang
2013-01-01
Finding a universal description of algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL), independent of the recommendation list length and built on a hybrid of heat conduction and mass diffusion, by identifying the scaling function that relates the tunable parameter to the object average degree. The optimal value of the tunable parameter can be obtained from the scaling function, which is heterogeneous across individual objects. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably improves personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting high novelty, and addressing the key challenge of the cold-start problem. PMID:23696829
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, there are still problems in handling scale variations, object occlusion, fast motion and so on. In this paper, a multi-scale kernel correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to recover the target after it is lost. In comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK, experimental results show that the proposed approach estimates the object state accurately and handles object occlusion effectively.
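For context, the core of a kernel correlation filter tracker is a closed-form ridge regression in the Fourier domain: train alpha_hat = fft(y) / (fft(k_xx) + λ) on a patch with a Gaussian label, then score a new patch z through the response map ifft(fft(k_xz) · alpha_hat). The sketch below shows only that core on single-channel patches; the multi-scale estimation, HOG/Color Names fusion and random fern re-detector of the proposed tracker are not included.

```python
# Core of a kernel correlation filter (KCF) on single-channel patches.
# Multi-scale estimation, feature fusion and re-detection are intentionally omitted.
import numpy as np

def gaussian_kernel_correlation(x, z, sigma=0.5):
    """k^xz via FFTs: exp(-(|x|^2 + |z|^2 - 2 x*z) / (sigma^2 N))."""
    c = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
    d = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * c) / x.size
    return np.exp(-np.maximum(d, 0.0) / sigma ** 2)

def train(x, y, lam=1e-4):
    k = gaussian_kernel_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)        # alpha_hat

def detect(alpha_hat, x, z):
    k = gaussian_kernel_correlation(x, z)
    return np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real  # response map

# Toy example: a bright blob shifted by (3, 5) pixels between "frames".
size = 64
yy, xx = np.mgrid[0:size, 0:size]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)
gauss_label = np.roll(np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / 8.0),
                      (-size // 2, -size // 2), axis=(0, 1))   # label peaked at zero shift

x_patch = blob(32, 32)
alpha_hat = train(x_patch, gauss_label)
response = detect(alpha_hat, x_patch, blob(35, 37))
dy, dx = np.unravel_index(np.argmax(response), response.shape)
print(f"estimated shift: ({dy if dy < size // 2 else dy - size}, {dx if dx < size // 2 else dx - size})")
```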
NASA Astrophysics Data System (ADS)
Hong, Zixuan; Bian, Fuling
2008-10-01
Geographic space, time space and cognition space are three fundamental and interrelated spaces in geographic information systems for transportation. However, the cognition space and its relationships to the time space and geographic space are often neglected. This paper studies the relationships of these three spaces in urban transportation systems from a new perspective and proposes a novel MDS-SOM transformation method that takes advantage of the techniques of multidimensional scaling (MDS) and the self-organizing map (SOM). The MDS-SOM transformation framework includes three kinds of mapping: the geographic-time transformation, the cognition-time transformation and the time-cognition transformation. The transformations in our research provide a better understanding of the interactions of these three spaces, and the knowledge discovered supports transportation analysis and decision making.
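As an illustration of the first kind of mapping (geographic-time), the snippet below embeds a small symmetric travel-time matrix into a two-dimensional "time space" with metric MDS; the subsequent SOM stage of the MDS-SOM framework is not shown. The travel-time matrix is invented for the example, and scikit-learn is assumed to be available.

```python
# Sketch: embed a travel-time matrix into a 2-D "time space" with metric MDS.
# The matrix below is invented; the SOM stage of the MDS-SOM framework is omitted.
import numpy as np
from sklearn.manifold import MDS

places = ["A", "B", "C", "D", "E"]
travel_minutes = np.array([            # symmetric pairwise travel times (minutes)
    [ 0, 12, 25, 18, 40],
    [12,  0, 15, 22, 35],
    [25, 15,  0, 30, 20],
    [18, 22, 30,  0, 28],
    [40, 35, 20, 28,  0],
], dtype=float)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(travel_minutes)
for name, (u, v) in zip(places, coords):
    print(f"{name}: time-space coordinates ({u:6.1f}, {v:6.1f})")
print(f"stress = {mds.stress_:.1f}")
```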
Time-dependent Schrödinger equation for molecular core-hole dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picón, A.
2017-02-01
X-ray spectroscopy is an important tool for the investigation of matter. X rays primarily interact with inner-shell electrons, creating core (inner-shell) holes that will decay on the time scale of attoseconds to a few femtoseconds through electron relaxations involving the emission of a photon or an electron. Furthermore, the advent of femtosecond x-ray pulses expands x-ray spectroscopy to the time domain and will eventually allow the control of core-hole population on time scales comparable to core-vacancy lifetimes. For both cases, a theoretical approach that accounts for the x-ray interaction while the electron relaxations occur is required. We describe a time-dependent framework, based on solving the time-dependent Schrödinger equation, that is suitable for describing the induced electron and nuclear dynamics.
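As a generic illustration of what solving the time-dependent Schrödinger equation involves numerically, the sketch below propagates a one-dimensional Gaussian wave packet in a harmonic potential with the Crank-Nicolson scheme (atomic units). It is a single-particle toy model, not the multi-electron core-hole framework described in the paper; the grid, potential and time step are arbitrary.

```python
# 1-D TDSE toy propagation with Crank-Nicolson (atomic units). Illustrative only;
# not the multi-electron core-hole framework of the paper.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# Grid and harmonic potential
N, L = 600, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x ** 2

# Hamiltonian H = -1/2 d^2/dx^2 + V with a 3-point Laplacian
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx ** 2
H = -0.5 * lap + diags(V)

# Crank-Nicolson: (I + i dt/2 H) psi_{n+1} = (I - i dt/2 H) psi_n
dt = 0.01
A = (identity(N) + 0.5j * dt * H).tocsc()
B = (identity(N) - 0.5j * dt * H).tocsc()
solver = splu(A)

# Initial displaced Gaussian wave packet
psi = np.exp(-(x - 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for step in range(1, 1001):
    psi = solver.solve(B @ psi)
    if step % 250 == 0:
        norm = np.sum(np.abs(psi) ** 2) * dx          # should stay ~1 (unitary scheme)
        mean_x = np.sum(x * np.abs(psi) ** 2) * dx    # oscillates like the classical trajectory
        print(f"t = {step * dt:5.2f}  norm = {norm:.6f}  <x> = {mean_x:+.3f}")
```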
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Preferential flow across scales: how important are plot scale processes for a catchment scale model?
NASA Astrophysics Data System (ADS)
Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian
2017-04-01
Numerous experimental studies showed the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at hillslope scale and even fewer at catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. Thus we hypothesized that the discharge performance could be improved by utilizing a dual permeability approach to represent preferential flow. We used information from bromide irrigation experiments performed on three 1 m² plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1 m² column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at plot scale. The simulated discharge time series of the different parameterizations clustered in six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. Yet in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot-scale processes and heterogeneities are at catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model with different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of internal processes of the model (fitting profiles at plot scale, unchanged response at catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude with certainty that the quantitative discharge performance at the catchment scale could be improved by utilizing a dual permeability approach; this will be tested in a parameter optimization process.
New watershed-based climate forecast products for hydrologists and water managers
NASA Astrophysics Data System (ADS)
Baker, S. A.; Wood, A.; Rajagopalan, B.; Lehner, F.; Peng, P.; Ray, A. J.; Barsugli, J. J.; Werner, K.
2017-12-01
Operational sub-seasonal to seasonal (S2S) climate predictions have advanced in skill in recent years but are yet to be broadly utilized by stakeholders in the water management sector. While some of the challenges that relate to fundamental predictability are difficult or impossible to surmount, other hurdles related to forecast product formulation, translation, relevance, and accessibility can be directly addressed. These include products being misaligned with users' space-time needs, products disseminated in formats users cannot easily process, and products based on raw model outputs that are biased relative to user climatologies. In each of these areas, more can be done to bridge the gap by enhancing the usability, quality, and relevance of water-oriented predictions. In addition, water stakeholders can benefit from short-range extremes predictions (such as 2-3 day storms or 1-week heat waves) at S2S time-scales, for which few products exist. We present interim results of a Research to Operations (R2O) effort sponsored by the NOAA MAPP Climate Testbed to (1) formulate climate prediction products so as to reduce hurdles to adoption by water stakeholders, and to (2) explore opportunities for extremes prediction at S2S time scales. The project is currently using CFSv2 and National Multi-Model Ensemble (NMME) reforecasts and forecasts to develop real-time watershed-based climate forecast products, and to train post-processing approaches to enhance the skill and reliability of raw real-time S2S forecasts. Prototype S2S climate data products (forecasts and associated skill analyses) are now being operationally staged at NCAR on a public website to facilitate further product development through interactions with water managers. Initial demonstration products include CFSv2-based bi-weekly climate forecasts (weeks 1-2, 2-3, and 3-4) for sub-regional scale hydrologic units, and NMME-based monthly and seasonal prediction products. Raw model mean skill at these time-space resolutions for some periods (e.g., weeks 3-4) is unusably low, but for other periods, and for multi-month leads with NMME, precipitation and particularly temperature forecasts exhibit useful skill. Website: http://hydro.rap.ucar.edu/s2s/
Critical scales to explain urban hydrological response: an application in Cranbrook, London
NASA Astrophysics Data System (ADS)
Cristiano, Elena; ten Veldhuis, Marie-Claire; Gaitan, Santiago; Ochoa Rodriguez, Susana; van de Giesen, Nick
2018-04-01
Rainfall variability in space and time, in relation to catchment characteristics and model complexity, plays an important role in explaining the sensitivity of hydrological response in urban areas. In this work we present a new approach to classify rainfall variability in space and time and we use this classification to investigate rainfall aggregation effects on urban hydrological response. Nine rainfall events, measured with a dual polarimetric X-Band radar instrument at the CAESAR site (Cabauw Experimental Site for Atmospheric Research, NL), were aggregated in time and space in order to obtain different resolution combinations. The aim of this work was to investigate the influence that rainfall and catchment scales have on hydrological response in urban areas. Three dimensionless scaling factors were introduced to investigate the interactions between rainfall and catchment scale and rainfall input resolution in relation to the performance of the model. Results showed that (1) rainfall classification based on cluster identification well represents the storm core, (2) aggregation effects are stronger for rainfall than flow, (3) model complexity does not have a strong influence compared to catchment and rainfall scales for this case study, and (4) scaling factors allow the adequate rainfall resolution to be selected to obtain a given level of accuracy in the calculation of hydrological response.
Performance limitations of bilateral force reflection imposed by operator dynamic characteristics
NASA Technical Reports Server (NTRS)
Chapel, Jim D.
1989-01-01
A linearized, single-axis model is presented for bilateral force reflection which facilitates investigation into the effects of manipulator, operator, and task dynamics, as well as time delay and gain scaling. Structural similarities are noted between this model and impedance control. Stability results based upon this model impose requirements upon operator dynamic characteristics as functions of system time delay and environmental stiffness. An experimental characterization reveals the limited capabilities of the human operator to meet these requirements. A procedure is presented for determining the force reflection gain scaling required to provide stability and acceptable operator workload. This procedure is applied to a system with dynamics typical of a space manipulator, and the required gain scaling is presented as a function of environmental stiffness.
NASA Technical Reports Server (NTRS)
Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.
1987-01-01
Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large-area cropping system mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
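As a stand-in for the clustering step, the sketch below groups per-object annual NDVI profiles with k-means; the synthetic profiles mimic single- and double-cropping seasonality and are not MODIS data, and k-means is only a proxy for the unsupervised clustering actually used.

```python
# Generic stand-in: unsupervised clustering of per-object annual NDVI profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
t = np.arange(23)                                  # 23 x 16-day composites ~ one growing season

def season(peak, width, amp):
    return 0.2 + amp * np.exp(-((t - peak) ** 2) / (2 * width ** 2))

# Synthetic object profiles: one greening peak vs two peaks (double cropping), plus noise.
single = np.array([season(11, 3, 0.5) + 0.03 * rng.standard_normal(23) for _ in range(150)])
double = np.array([season(6, 2, 0.45) + season(16, 2, 0.4) + 0.03 * rng.standard_normal(23)
                   for _ in range(150)])
profiles = np.vstack([single, double])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
for k in range(2):
    members = profiles[labels == k]
    print(f"cluster {k}: {len(members)} objects, mean NDVI peak at composite {members.mean(axis=0).argmax()}")
```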
GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales
He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.
2000-01-01
To reconstruct the forest landscape of the pre-European settlement period, we developed a GIS interpolation approach to convert witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better described continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million ha landscape in northern Wisconsin, U.S.A. at different scales. We describe the implications of the processing results at each scale. Compared with traditional GLO mapping that has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species, and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 x 1-section resolution preserved spatial information but derived the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 x 2-section resolution were on the order of three to four times for the least common species, two to three times for the medium to most common species, and one to two times for the most common or highly contagious species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on species importance values derived for all species. The results provide a unique basis to further study land cover changes occurring after European settlement.
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Adler, David; Peters-Lidard, Christa; Huffman, George
2012-01-01
It is well known that extreme or prolonged rainfall is the dominant trigger of landslides worldwide. While research has evaluated the spatiotemporal distribution of extreme rainfall and landslides at local or regional scales using in situ data, few studies have mapped rainfall-triggered landslide distribution globally due to the dearth of landslide data and consistent precipitation information. This study uses a newly developed Global Landslide Catalog (GLC) and a 13-year satellite-based precipitation record from TRMM data. For the first time, these two unique products provide the foundation to quantitatively evaluate the co-occurrence of precipitation and landslides globally. Evaluation of the GLC indicates that 2010 had a large number of high-impact landslide events relative to previous years. This study considers how variations in extreme and prolonged satellite-based rainfall are related to the distribution of landslides over the same time scales for three active landslide areas: Central America, the Himalayan Arc, and central-eastern China. Several test statistics confirm that TRMM rainfall generally scales with the observed increase in landslide reports and fatal events for 2010 and previous years over each region. These findings suggest that the co-occurrence of satellite precipitation and landslide reports may serve as a valuable indicator for characterizing the spatiotemporal distribution of landslide-prone areas in order to establish a global rainfall-triggered landslide climatology. This study characterizes the variability of satellite precipitation data and reported landslide activity at the global scale in order to improve landslide cataloging and forecasting and to quantify potential triggering sources at daily, monthly and yearly time scales.
Two-time scale subordination in physical processes with long-term memory
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2008-03-01
We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation.
NASA Astrophysics Data System (ADS)
Ma, Pengcheng; Li, Daye; Li, Shuo
2016-02-01
Using one-minute high-frequency data of the Shanghai Composite Index (SHCI) and the Shenzhen Composite Index (SZCI) (2007-2008), we employ detrended fluctuation analysis (DFA) and detrended cross-correlation analysis (DCCA) with a rolling window approach to observe the evolution of market efficiency and cross-correlation in the pre-crisis and crisis periods. Considering the fat-tail distribution of return time series, a statistical test based on a shuffling method is conducted to verify the null hypothesis of no long-term dependence. Our empirical research yields three main findings. First, Shanghai equity market efficiency deteriorated while Shenzhen equity market efficiency improved with the advent of the financial crisis. Second, the highly positive dependence between SHCI and SZCI varies with time scale. Third, the financial crisis saw a significant increase of dependence between SHCI and SZCI at shorter time scales but a lack of significant change at longer time scales, providing evidence of contagion and an absence of interdependence during the crisis.
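For readers unfamiliar with the method, a minimal detrended fluctuation analysis (DFA) sketch is given below. It estimates the scaling exponent of a return series; it is not the authors' rolling-window, shuffling-tested implementation, and the window sizes and demo data are arbitrary.

```python
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256), order=1):
    """Estimate the DFA scaling exponent of a 1-D series x."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log F(s) versus log s is the DFA exponent (Hurst-like).
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Demo on uncorrelated returns: the estimate should be close to 0.5.
returns = np.random.randn(5000)
print(dfa_exponent(returns))
```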
NASA Astrophysics Data System (ADS)
Sigler, W. A.; Ewing, S. A.; Payn, R. A.; Jones, C. A.; Brookshire, J.; Klassen, J. K.; Jackson-Smith, D.; Weissmann, G. S.
2016-12-01
Shallow aquifers impaired by nitrate from agriculture are widespread and remediation or prevention of this problem requires understanding of N leaching rates at a variety of spatial scales. Characterization of the drivers of nitrate leaching at an intermediate scale (10³ to 10⁵ ha) is needed to bridge from field scale observations to the landscape-scale context, allowing informed water resource management decisions. Here we explore patterns in nitrate leaching rates across a depositional landform with a predominant land use of non-irrigated small grain production in the Northern Great Plains within the Upper Missouri Basin. The shallow Moccasin terrace (260,000 ha) aquifer is bounded in vertical extent by underlying shale and is isolated from mountain front stream recharge, such that aquifer recharge is dominated by infiltration of precipitation through agricultural soils. We leverage this simplified landform scale water balance to estimate leaching rates using groundwater nitrate concentrations and surface water discharge, and quantify uncertainty using a Monte Carlo approach based on spatial variation in groundwater nitrate concentrations. Landform-scale nitrate-N leaching rates ranged between 10 and 24 kg ha⁻¹ yr⁻¹ during 2012-2014 across two terrace catchments. These rates represent 11 to 27% of fertilizer application rates but are likely derived from a combination of soil organic N mineralization and direct fertilizer loss. While groundwater apparent age is relatively young (0-5 y) based on tritium-helium analysis, whole-aquifer turnover time calculations are an order of magnitude longer (20-23 y), suggesting aquifer heterogeneity and thus a longer potential response time to management changes than suggested by tracer-based aging. We collaborated with local producers to undertake this work, and discussed our results with community members throughout the study. Based on a follow-up survey, producers are now more likely to consider nitrate leaching when making management decisions, suggesting that location-specific producer engagement can facilitate practical solutions to non-point source water quality issues.
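The Monte Carlo treatment described above can be sketched very simply: sample groundwater nitrate concentrations from their observed spatial variation, multiply by recharge (here taken as discharge per unit area under the simplified water balance), and accumulate the distribution of landform-scale leaching rates. The values and distributional choices below are purely illustrative and are not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Illustrative inputs (not from the study):
# groundwater nitrate-N concentration [mg/L] with lognormal spatial variation
conc_mg_per_L = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n_draws)
# annual recharge inferred from surface discharge per unit area [mm/yr]
recharge_mm = rng.normal(loc=180.0, scale=30.0, size=n_draws)

# mg/L (= g/m^3) times mm/yr (= 1e-3 m/yr) gives g/m^2/yr; times 10 gives kg/ha/yr,
# so the combined conversion factor is 0.01.
leach_kg_ha_yr = conc_mg_per_L * recharge_mm * 0.01
lo, med, hi = np.percentile(leach_kg_ha_yr, [5, 50, 95])
print(f"nitrate-N leaching ~ {med:.1f} kg/ha/yr (5-95%: {lo:.1f}-{hi:.1f})")
```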
NASA Astrophysics Data System (ADS)
Sobolev, Stephan; Muldashev, Iskander
2016-04-01
A key achievement of the geodynamic modelling community, to which the work of Evgenii Burov and his students greatly contributed, is the application of "realistic", mineral-physics based non-linear rheological models to simulate deformation processes in crust and mantle. Subduction, a type example of such a process, is an essentially multi-scale phenomenon with time-scales spanning from geological to earthquake scale, with the seismic cycle in-between. In this study we test the possibility of simulating the entire subduction process from rupture (1 min) to geological time (Mln yr) with a single cross-scale thermomechanical model that employs elasticity, mineral-physics constrained non-linear transient viscous rheology and rate-and-state friction plasticity. First we generate a thermo-mechanical model of a subduction zone at geological time-scale including a narrow subduction channel with "wet-quartz" visco-elasto-plastic rheology and low static friction. We next introduce in the same model a classic rate-and-state friction law in the subduction channel, leading to stick-slip instability. This model generates a spontaneous earthquake sequence. In order to follow in detail the deformation process during the entire seismic cycle and multiple seismic cycles, we use an adaptive time-step algorithm, changing the step from 40 s during an earthquake to minutes-5 years during postseismic and interseismic processes. We observe many interesting deformation patterns and demonstrate that, contrary to conventional ideas, this model predicts that postseismic deformation is controlled by visco-elastic relaxation in the mantle wedge already from hours to a day after great (M>9) earthquakes. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-4-year time range.
The Transition Region Explorer: Observing the Multi-Scale Dynamics of Geospace
NASA Astrophysics Data System (ADS)
Donovan, E.
2015-12-01
Meso- and global-scale IT remote sensing is accomplished via satellite imagers and ground-based instruments. On the ground, the approach combines arrays providing as extensive coverage as possible (the "net") and powerful observatories that drill deep to provide detailed information about small-scale processes (the "drill"). There is always a trade-off between cost, spatial resolution, coverage (extent), number of parameters, and more, such that in general the larger the network the sparser the coverage. Where are we now? There are important gaps. With THEMIS-ASI, we see processes that quickly evolve beyond the field of view of one observatory, but involve space/time scales not captured by existing meso- and large-scale arrays. Many forefront questions require observations at heretofore unexplored space and time scales, and more comprehensive inter-hemispheric conjugate observations than are presently available. To address this, a new ground-based observing initiative is being developed in Canada. Called TREx, for Transition Region Explorer, this new facility will incorporate dedicated blueline, redline, and Near-Infrared All-Sky Imagers, together with an unprecedented network of ten imaging riometers, with a combined field of view spanning more than three hours of magnetic local time and from equatorward to poleward of typical auroral latitudes (spanning the ionospheric footprint of the "nightside transition region" that separates the highly stretched tail and the inner magnetosphere). The TREx field-of-view is covered by HF radars, and contains a dense network of magnetometers and VLF receivers, as well as other geospace and upper atmospheric remote sensors. Taken together, TREx and these co-located instruments represent a quantum leap forward for imaging ionospheric dynamics, in multiple parameters (precipitation, ionization, convection, and currents), within the above-mentioned scale gap. This represents an exciting new opportunity for studying geospace at the system level, especially for using the aurora to remote sense magnetospheric plasma physics and dynamics, and comes with a set of exciting Big Data challenges. One such challenge is the development of a fundamentally new type of data product, namely time series of multi-parameter, geospatially referenced 'data cubes'.
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult. Reducing the order of a model is highly desirable when handling such a model. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
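For readers unfamiliar with the technique, the generic slow-fast setting and the quasi-steady-state step can be written compactly. This is the textbook form of the approximation, not the specific Beretta-Kuang reduction constructed in the paper.

```latex
% Slow-fast (singularly perturbed) system with small parameter 0 < \epsilon \ll 1:
% slow variable x, fast variable y.
\begin{aligned}
\frac{dx}{dt} &= f(x, y), \\
\epsilon \frac{dy}{dt} &= g(x, y).
\end{aligned}
% Quasi-steady-state approximation: let \epsilon \to 0, solve g(x, y) = 0
% for y = h(x), and substitute to obtain the reduced slow dynamics
%   \frac{dx}{dt} = f\bigl(x, h(x)\bigr).
```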
An investigation of turbulent transport in the extreme lower atmosphere
NASA Technical Reports Server (NTRS)
Koper, C. A., Jr.; Sadeh, W. Z.
1975-01-01
To investigate the flow in the extreme lower atmosphere, a model is proposed in which the Lagrangian autocorrelation is expressed by a domain integral over a set of usual Eulerian autocorrelations acquired concurrently at all points within a turbulence box, along with a method for ascertaining the statistical stationarity of turbulent velocity by creating an equivalent ensemble. Simultaneous measurements of turbulent velocity on a turbulence line along the wake axis were carried out utilizing a remotely operated longitudinal array of five hot-wire anemometers. The stationarity test revealed that the turbulent velocity can be approximated as a realization of a weakly self-stationary random process. Based on the Lagrangian autocorrelation it is found that: (1) a large diffusion time predominated; (2) ratios of Lagrangian to Eulerian time and spatial scales were smaller than unity; and (3) short and long diffusion time scales and diffusion spatial scales were constrained within their Eulerian counterparts.
The clinical evaluation of platelet-rich plasma on free gingival graft's donor site wound healing.
Samani, Mahmoud Khosravi; Saberi, Bardia Vadiati; Ali Tabatabaei, S M; Moghadam, Mahdjoube Goldani
2017-01-01
It has been proved that platelet-rich plasma (PRP) can promote wound healing. In this way, PRP can be advantageous in periodontal plastic surgeries, free gingival graft (FGG) being one such surgery. In this randomized split-mouth controlled trial, 10 patients who needed bilateral FGG were selected, and the two donor sites were randomly assigned to experience either natural healing or healing assisted with PRP. The outcome was assessed based on the comparison of the extent of wound closure, Manchester scale, Landry healing scale, visual analog scale, and tissue thickness between the study groups at different time intervals. Repeated measures analysis of variance and the paired t-test were used. Statistical significance was set at P ≤ 0.05. Significant differences between the study groups and also across different time intervals were seen in all parameters except for the changes in tissue thickness. PRP accelerates the healing process of wounds and reduces the healing time.
Percolation transport theory and relevance to soil formation, vegetation growth, and productivity
NASA Astrophysics Data System (ADS)
Hunt, A. G.; Ghanbarian, B.
2016-12-01
Scaling laws of percolation theory have been applied to generate the time dependence of vegetation growth rates (both intensively managed and natural) and soil formation rates. The soil depth is thus equal to the solute vertical transport distance; the soil production function, chemical weathering rates, and C and N storage rates are all given by the time derivative of the soil depth. Approximate numerical coefficients based on the maximum flow rates in soils have been proposed, leading to a broad understanding of such processes. What is now required is an accurate understanding of the variability of the coefficients in the scaling relationships. The present abstract focuses on the scaling relationship for solute transport and soil formation. A soil formation rate relates length, x, and time, t, scales, meaning that the missing coefficient must include information about fundamental space and time scales, x0 and t0. x0 is proposed to be a fundamental mineral heterogeneity scale, i.e. a median particle diameter. t0 is then found from the ratio of x0 and a fundamental flow rate, v0, which is identified with the net infiltration rate. The net infiltration rate is equal to precipitation, P, less evapotranspiration, ET, plus run-on less run-off. Using this hypothesis, it is possible to predict soil depths and formation rates as functions of time and P - ET, the formation rate as a function of depth, and soil calcic and gypsic horizon depths as functions of P - ET. It is also possible to determine when soils are in equilibrium, and to predict relationships between erosion rates and soil formation rates.
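The characteristic scales described above can be written out explicitly; the block below simply restates the abstract's definitions in symbols (the functional form of the percolation scaling law itself is not reproduced here, and the identification of x0 with the median particle diameter d50 follows the abstract's wording).

```latex
% Fundamental length, velocity and time scales for solute transport / soil formation
\begin{aligned}
x_0 &\approx d_{50} &&\text{(median particle diameter, mineral heterogeneity scale)}\\
v_0 &= P - ET + \text{run-on} - \text{run-off} &&\text{(net infiltration rate)}\\
t_0 &= \frac{x_0}{v_0}
\end{aligned}
% Soil depth x(t), and hence the soil production function dx/dt, then follow from
% the percolation scaling law expressed in the dimensionless variables x/x_0 and t/t_0.
```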
Phylogenetic community structure: temporal variation in fish assemblage
Santorelli, Sergio; Magnusson, William; Ferreira, Efrem; Caramaschi, Erica; Zuanon, Jansen; Amadio, Sidnéia
2014-01-01
Hypotheses about phylogenetic relationships among species allow inferences about the mechanisms that affect species coexistence. Nevertheless, most studies assume that the phylogenetic patterns identified are stable over time. We used data on monthly samples of fish from a single lake over 10 years to show that the structure of phylogenetic assemblages varies over time and that conclusions depend heavily on the time scale investigated. The data set was organized by guild structure and grouped at three temporal scales. Phylogenetic distance was measured as the mean pairwise distance (MPD) and as the mean nearest-neighbor distance (MNTD). Both distances were based on counts of nodes. We compared the observed values of MPD and MNTD with values generated randomly using the independent-swap null model. A serial runs test was used to assess the temporal independence of the indices over time. The phylogenetic pattern in the whole assemblage and the functional groups varied widely over time. Conclusions about phylogenetic clustering or dispersion depended on the temporal scale. Conclusions about the frequency with which biotic processes and environmental filters affect local assembly do not depend only on taxonomic grouping and spatial scales. While these analyses allow the assertion that all proposed patterns apply to the fish assemblages in the floodplain, the assessment of the relative importance of these processes, and how they vary depending on the temporal scale and functional group studied, cannot be determined with the effort commonly used. It appears that, at least in the system that we studied, the assemblages are forming and breaking up continuously, resulting in various phylogeny-related structures that make summarizing difficult. PMID:25360256
Sekelj, Alen; Đanić, Davorin
2017-09-01
Lyme borreliosis is a vector-borne infectious disease characterized by three disease stages. In the areas endemic for borreliosis, every acute facial palsy indicates serologic testing and implies a specific approach to the disease. The aim of the study was to identify and confirm the value of the acoustic reflex and the House-Brackman (HB) grading scale as prognostic indicators of facial palsy in neuroborreliosis. The study included 176 patients with acute facial palsy divided into three groups based on serologic testing: borreliosis, Bell's palsy, and facial palsy caused by herpes simplex virus type 1 (HSV-1). Study patients underwent baseline audiometry with tympanometry and acoustic reflex, whereas the current state of facial palsy was assessed by the HB scale. Subsequently, the same tests were obtained on three occasions, i.e. in weeks 3, 6 and 12 of presentation. The patients diagnosed with borreliosis, Bell's palsy and HSV-1 differed according to the time to acoustic reflex recovery, which took the longest time in patients with borreliosis. These patients had the highest percentage of suprastapedial lesions at all time points and recovery was achieved later as compared with the other two diagnoses. The mean score on the HB scale declined with time, also at a slower rate in borreliosis patients. The prognosis of acoustic reflex and facial palsy recovery according to the HB scale was not associated with the length of elapsed time. The results obtained in the present study strongly confirmed the role of the acoustic reflex and the HB grading scale as prognostic indicators of facial palsy in neuroborreliosis.
Satellite-based Flood Modeling Using TRMM-based Rainfall Products
Harris, Amanda; Rahman, Sayma; Hossain, Faisal; Yarborough, Lance; Bagtzoglou, Amvrossios C.; Easson, Greg
2007-01-01
An increasingly available and virtually uninterrupted supply of satellite-estimated rainfall data is gradually becoming a cost-effective source of input for flood prediction under a variety of circumstances. However, most real-time and quasi-global satellite rainfall products are currently available at spatial scales ranging from 0.25° to 0.50° and hence are considered somewhat coarse for dynamic hydrologic modeling of basin-scale flood events. This study addresses the question: what are the hydrologic implications of uncertainty in satellite rainfall data at the coarse scale? We investigated this question on the 970 km² Upper Cumberland river basin of Kentucky. The satellite rainfall product assessed was NASA's Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) product called 3B41RT, which is available in pseudo real time with a latency of 6-10 hours. We observed that bias adjustment of satellite rainfall data can improve its application in flood prediction to some extent, with the trade-off of more false alarms in peak flow. However, a more rational and regime-based adjustment procedure needs to be identified before the use of satellite data can be institutionalized among flood modelers. PMID:28903302
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data has greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with the hand-crafted features for more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on the techniques of augmented reality (AR), which can help users explore neuron morphologies in depth in an interactive and immersive manner.
The Reward-Based Eating Drive Scale: A Self-Report Index of Reward-Based Eating
Mason, Ashley E.; Laraia, Barbara A.; Hartman, William; Ready, Karen; Acree, Michael; Adam, Tanja C.; St. Jeor, Sachiko; Kessler, David
2014-01-01
Why are some individuals more vulnerable to persistent weight gain and obesity than are others? Some obese individuals report factors that drive overeating, including lack of control, lack of satiation, and preoccupation with food, which may stem from reward-related neural circuitry. These are normative and common symptoms and not the sole focus of any existing measures. Many eating scales capture these common behaviors, but are confounded with aspects of dysregulated eating such as binge eating or emotional overeating. Across five studies, we developed items that capture this reward-based eating drive (RED). Study 1 developed the items in lean to obese individuals (n = 327) and examined changes in weight over eight years. In Study 2, the scale was further developed and expert raters evaluated the set of items. Study 3 tested psychometric properties of the final 9 items in 400 participants. Study 4 examined psychometric properties and race invariance (n = 80 women). Study 5 examined psychometric properties and age/gender invariance (n = 381). Results showed that RED scores correlated with BMI and predicted earlier onset of obesity, greater weight fluctuations, and greater overall weight gain over eight years. Expert ratings of RED scale items indicated that the items reflected characteristics of reward-based eating. The RED scale evidenced high internal consistency and invariance across demographic factors. The RED scale, designed to tap vulnerability to reward-based eating behavior, appears to be a useful brief tool for identifying those at higher risk of weight gain over time. Given the heterogeneity of obesity, unique brief profiling of the reward-based aspect of obesity using a self-report instrument such as the RED scale may be critical for customizing effective treatments in the general population. PMID:24979216
The scale-dependent market trend: Empirical evidences using the lagged DFA method
NASA Astrophysics Data System (ADS)
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we carry out an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on the lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and keep the margin of error small when measuring the local Hurst exponent. Our empirical result illustrates that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (with the local Hurst exponent > 0.5) at small time scales, whereas they display significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with the economic cycles, it can be concluded that the economic cycles can cause anti-persistence at the large time scale but that there are also other factors at work. The empirical result supports the view that financial markets are multi-fractal and it indicates that deviations from efficiency and the type of model needed to describe the trend of market price are dependent on the forecasting horizon.
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale high-quality datasets is challenging. Typically, these datasets have limited diversity; they do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for the application of automatic traffic measurements and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground truth data through machine learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud computing framework to handle data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% for 1/2 and about 78% for 19/20 of the time when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
Reaching extended length-scales with accelerated dynamics
NASA Astrophysics Data System (ADS)
Hubartt, Bradley; Shim, Yunsic; Amar, Jacques
2012-02-01
While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD^1 is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm^2. However, while such an approach leads to significantly better scaling as a function of system-size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with the use of an adaptive method to determine the optimal high-temperature^3, we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).
NASA Astrophysics Data System (ADS)
Wang, Yonggang; Liu, Chong
2016-10-01
The common solution for a field programmable gate array (FPGA)-based time-to-digital converter (TDC) is to construct a tapped delay line (TDL) for time interpolation to yield a sub-clock time resolution. The granularity and uniformity of the delay elements of the TDL determine the TDC time resolution. In this paper, we propose a dual-sampling TDL architecture and a bin decimation method that make the delay elements as small and uniform as possible, so that the implemented TDCs can achieve a high time resolution beyond the intrinsic cell delay. Two identical full hardware-based TDCs were implemented in a Xilinx UltraScale FPGA for performance evaluation. For fixed time intervals in the range from 0 to 440 ns, the average time-interval RMS resolution measured by the two TDCs is 4.2 ps; the timestamp resolution of a single TDC is thus derived as 2.97 ps. The maximum hit rate of the TDC is as high as half the system clock rate of the FPGA, namely 250 MHz in our demo prototype. Because the conventional online bin-by-bin calibration is not needed, the implementation of the proposed TDC is straightforward and relatively resource-saving.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
Inferring multi-scale neural mechanisms with brain network modelling
Schirner, Michael; McIntosh, Anthony Randal; Jirsa, Viktor; Deco, Gustavo
2018-01-01
The neurophysiological processes underlying non-invasive brain activity measurements are incompletely understood. Here, we developed a connectome-based brain network model that integrates individual structural and functional data with neural population dynamics to support multi-scale neurophysiological inference. Simulated populations were linked by structural connectivity and, as a novelty, driven by electroencephalography (EEG) source activity. Simulations not only predicted subjects' individual resting-state functional magnetic resonance imaging (fMRI) time series and spatial network topologies over 20 minutes of activity, but more importantly, they also revealed precise neurophysiological mechanisms that underlie and link six empirical observations from different scales and modalities: (1) resting-state fMRI oscillations, (2) functional connectivity networks, (3) excitation-inhibition balance, (4, 5) inverse relationships between α-rhythms, spike-firing and fMRI on short and long time scales, and (6) fMRI power-law scaling. These findings underscore the potential of this new modelling framework for general inference and integration of neurophysiological knowledge to complement empirical studies. PMID:29308767
Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors
Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal
2014-01-01
Human detection using visible surveillance sensors is an important and challenging task for intruder detection and safety management. The biggest barrier to real-time human detection is the computational time required for dense image scaling and for scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782
NASA Astrophysics Data System (ADS)
Gray, A. B.
2017-12-01
Watersheds with sufficient monitoring data have been predominantly found to display nonstationary suspended sediment dynamics, whereby the relationship between suspended sediment concentration and discharge changes over time. Despite the importance of suspended sediment as a keystone of geophysical and biochemical processes, and as a primary mediator of water quality, stationary behavior remains largely assumed in the context of these applications. This study presents an investigation into the time-dependent behavior of small mountainous rivers draining the coastal ranges of the western continental US over interannual to interdecadal time scales. Of the 250+ small coastal (drainage area < 2×10⁴ km²) watersheds in this region, only 23 have discharge-associated suspended sediment concentration time series with base periods of 10 years or more. Event to interdecadal scale nonstationary suspended sediment dynamics were identified throughout these systems. Temporal patterns of nonstationary behavior provided some evidence for spatial coherence, which may be related to synoptic hydro-meteorological patterns and regional scale changes in land use patterns. However, the results also highlight the complex, integrative nature of watershed scale fluvial suspended sediment dynamics. This underscores the need for in-depth, forensic approaches for initial process identification, which require long term, high resolution monitoring efforts in order to adequately inform management. The societal implications of nonstationary sediment dynamics and their controls were further explored through the case of California, USA, where over 150 impairment listings have resulted in more than 50 sediment TMDLs, only 3 of which are flux based - none of which account for nonstationary behavior.
Hysteresis, regime shifts, and non-stationarity in aquifer recharge-storage-discharge systems
NASA Astrophysics Data System (ADS)
Klammler, Harald; Jawitz, James; Annable, Michael; Hatfield, Kirk; Rao, Suresh
2016-04-01
Based on physical principles and geological information we develop a parsimonious aquifer model for Silver Springs, one of the largest karst springs in Florida. The model structure is linear and time-invariant with recharge, aquifer head (storage) and spring discharge as dynamic variables at the springshed (landscape) scale. Aquifer recharge is the hydrological driver with trends over a range of time scales from seasonal to multi-decadal. The freshwater-saltwater interaction is considered as a dynamic storage mechanism. Model results and observed time series show that aquifer storage causes significant rate-dependent hysteretic behavior between aquifer recharge and discharge. This leads to variable discharge per unit recharge over time scales up to decades, which may be interpreted as a gradual and cyclic regime shift in the aquifer drainage behavior. Based on field observations, we further amend the aquifer model by assuming vegetation growth in the spring run to be inversely proportional to stream velocity and to hinder stream flow. This simple modification introduces non-linearity into the dynamic system, for which we investigate the occurrence of rate-independent hysteresis and of different possible steady states with respective regime shifts between them. Results may contribute towards explaining observed non-stationary behavior potentially due to hydrological regime shifts (e.g., triggered by gradual, long-term changes in recharge or single extreme events) or long-term hysteresis (e.g., caused by aquifer storage). This improved understanding of the springshed hydrologic response dynamics is fundamental for managing the ecological, economic and social aspects at the landscape scale.
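The rate-dependent hysteresis between recharge and discharge that a linear, time-invariant storage produces can be reproduced with a toy single-reservoir model. The sketch below is a generic illustration, not the calibrated Silver Springs model; the time constant and recharge forcing are invented.

```python
import numpy as np

# Toy linear reservoir: dS/dt = R(t) - Q(t), with Q = S / tau.
tau_yr = 15.0                      # illustrative storage time constant [yr]
dt = 1.0 / 12.0                    # monthly step [yr]
t = np.arange(0, 60, dt)           # 60 years of simulation

# Illustrative recharge: seasonal cycle plus a multi-decadal trend.
recharge = 1.0 + 0.3 * np.sin(2 * np.pi * t) + 0.2 * np.sin(2 * np.pi * t / 30.0)

discharge = np.zeros_like(t)
S = tau_yr * recharge[0]           # start near equilibrium
for i in range(len(t)):
    Q = S / tau_yr
    discharge[i] = Q
    S += (recharge[i] - Q) * dt    # explicit Euler update of storage

# Because storage integrates the forcing, Q lags R: plotting Q against R over a
# slow recharge cycle traces a loop, i.e. rate-dependent hysteresis.
print(f"discharge/recharge ratio range: {np.min(discharge/recharge):.2f}"
      f" to {np.max(discharge/recharge):.2f}")
```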
Scaling properties of Polish rain series
NASA Astrophysics Data System (ADS)
Licznar, P.
2009-04-01
Scaling properties as well as the multifractal nature of precipitation time series had not been studied for local Polish conditions until recently due to the lack of long series of high-resolution data. The first Polish study of precipitation time series scaling phenomena was made on the basis of pluviograph data from the Wroclaw University of Environmental and Life Sciences meteorological station, located in the south-western part of the country. The 38 annual rainfall records from the years 1962-2004 were converted into digital format and transformed into a standard format of 5-minute time series. The scaling properties and multifractal character of this material were studied by means of several different techniques: power spectral density analysis, functional box-counting, probability distribution/multiple scaling and trace moment methods. The results proved the general scaling character of the time series over the range of time scales from 5 minutes up to at least 24 hours. At the same time some characteristic breaks in scaling behavior were recognized. It is believed that the breaks were artificial, arising from the precision limitations of the pluviograph rain gauge. Especially strong limitations in the precision of recording low-intensity precipitation with the pluviograph rain gauge were found to be the main reason for the artificial break in the energy spectra, as has been reported by other authors before. The analysis of the codimension and moments scaling functions showed signs of a first-order multifractal phase transition. Such behavior is typical for dressed multifractal processes that are observed by spatial or temporal averaging on scales larger than the inner scale of those processes. The fractal dimension of the rainfall process support derived from the geometry of the codimension and moments scaling functions was found to be 0.45. The same fractal dimension estimated by means of the functional box-counting method was equal to 0.58. In the final part of the study, implementation of the double trace moment method allowed for estimation of local universal multifractal rainfall parameters (α=0.69; C1=0.34; H=-0.01). The research proved the fractal character of the rainfall process support and the multifractal character of the variability of rainfall intensity among the analyzed time series. It is believed that the scaling of local Wroclaw rainfall at time scales from 24 hours down to 5 minutes opens the door for future research concerning, for example, the implementation of random cascades for disaggregating daily precipitation totals into smaller time intervals. The output of such random cascades, in the form of 5-minute artificial rainfall scenarios, could be of great practical use for the needs of urban hydrology, and for the design and hydrodynamic modeling of storm water and combined sewage conveyance systems.
Clustering biomolecular complexes by residue contacts similarity.
Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2012-07-01
Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
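The contact-based similarity is easy to express once each model is reduced to its set of residue-residue contacts. The sketch below uses one plausible definition of the fraction of common contacts (symmetrized over the two models); the exact normalization implemented by the authors may differ, and the contact sets are invented for illustration.

```python
def residue_contacts_example():
    # Hypothetical contact sets: frozensets of (chain, residue) pairs in contact.
    model_a = {frozenset({("A", 10), ("B", 55)}), frozenset({("A", 12), ("B", 57)}),
               frozenset({("A", 15), ("B", 60)})}
    model_b = {frozenset({("A", 10), ("B", 55)}), frozenset({("A", 15), ("B", 60)}),
               frozenset({("A", 20), ("B", 61)})}
    return model_a, model_b

def fraction_common_contacts(contacts_a, contacts_b):
    """Fraction of contacts shared by two models (symmetrized variant)."""
    common = len(contacts_a & contacts_b)
    return common / max(len(contacts_a), len(contacts_b))

a, b = residue_contacts_example()
print(fraction_common_contacts(a, b))   # 2 shared contacts out of 3 -> ~0.67
# Clustering then groups models whose pairwise similarity exceeds a chosen cutoff.
```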
Predicting oxidation-limited lifetime of thin-walled components of NiCrW alloy 230
Duan, R.; Jalowicka, Aleksandra; Unocic, Kinga A.; ...
2016-10-18
Using alloy 230 as an example, a generalized oxidation lifetime model for chromia-forming Ni-base wrought alloys is proposed, which captures the most important damaging oxidation effects relevant for component design: wall thickness loss, scale spallation, and the occurrence of breakaway oxidation. For deriving input parameters and for verification of the model approach, alloy 230 specimens with different thicknesses were exposed for different times at temperatures in the range 950–1050 °C in static air. The studies focused on thin specimens (0.2–0.5 mm) to obtain data for critical subscale depletion processes resulting in breakaway oxidation within reasonably achievable test times up to 3000 h. The oxidation kinetics and oxidation-induced subscale microstructural changes were determined by combining gravimetric data with results from scanning electron microscopy with energy dispersive X-ray spectroscopy. The modeling of the scale spallation and re-formation was based on the NASA cyclic oxidation spallation program, while a new model was developed to describe accelerated oxidation occurring after longer exposure times in the thinnest specimens. The calculated oxidation data were combined with the reservoir model equation, by means of which the relation between the consumption and the remaining concentration of Cr in the alloy was established as a function of temperature and specimen thickness. Based on this approach, a generalized lifetime diagram is proposed, in which wall thickness loss is plotted as a function of time, initial specimen thickness, and temperature. As a result, the time to reach a critical Cr level of 10 wt% at the scale/alloy interface is also indicated in the diagrams.
Scale size-dependent characteristics of the nightside aurora
NASA Astrophysics Data System (ADS)
Humberset, B. K.; Gjerloev, J. W.; Samara, M.; Michell, R. G.
2017-02-01
We have determined the spatiotemporal characteristics of the magnetosphere-ionosphere (M-I) coupling using auroral imaging. Observations at fixed positions for an extended period of time are provided by a ground-based all-sky imager measuring the 557.7 nm auroral emissions. We report on a single event of nightside aurora (~22 magnetic local time) preceding a substorm onset. To determine the spatiotemporal characteristics, we perform an innovative analysis of an all-sky imager movie (19 min duration, images at 3.31 Hz) that combines a two-dimensional spatial fast Fourier transform with a temporal correlation. We find a scale size-dependent variability where the largest scale sizes are stable on timescales of minutes while the small scale sizes are more variable. When comparing two smaller time intervals of different types of auroral displays, we find a variation in their characteristics. The characteristics averaged over the event are in remarkable agreement with the spatiotemporal characteristics of the nightside field-aligned currents during moderately disturbed times. Thus, two different electrodynamical parameters of the M-I coupling show similar behavior. This gives independent support to the claim of a system behavior that uses repeatable solutions to transfer energy and momentum from the magnetosphere to the ionosphere.
Strategic Planning Tools for Large-Scale Technology-Based Assessments
ERIC Educational Resources Information Center
Koomen, Marten; Zoanetti, Nathan
2018-01-01
Education systems are increasingly being called upon to implement new technology-based assessment systems that generate efficiencies, better meet changing stakeholder expectations, or fulfil new assessment purposes. These assessment systems require coordinated organisational effort to implement and can be expensive in time, skill and other…
How the propagation of heat-flux modulations triggers E × B flow pattern formation.
Kosuga, Y; Diamond, P H; Gürcan, O D
2013-03-08
We propose a novel mechanism to describe E × B flow pattern formation based upon the dynamics of propagation of heat-flux modulations. The E × B flows of interest are staircases, which are quasiregular patterns of strong, localized shear layers and profile corrugations interspersed between regions of avalanching. An analogy of staircase formation to jam formation in traffic flow is used to develop an extended model of heat avalanche dynamics. The extension includes a flux response time, during which the instantaneous heat flux relaxes to the mean heat flux, determined by symmetry constraints. The response time introduced here is the counterpart of the drivers' response time in traffic, during which drivers adjust their speed to match the background traffic flow. The finite response time causes the growth of mesoscale temperature perturbations, which evolve to form profile corrugations. The length scale associated with the maximum growth rate scales as Δ² ~ (v_thi/λ_Ti) ρ_i √(χ_neo τ), where λ_Ti is a typical heat pulse speed, χ_neo is the neoclassical thermal diffusivity, and τ is the response time of the heat flux. The connection between the scale length Δ and the staircase interstep scale is discussed.
A model for AGN variability on multiple time-scales
NASA Astrophysics Data System (ADS)
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
Carbonate scale deactivating the biocathode in a microbial fuel cell
NASA Astrophysics Data System (ADS)
Santini, M.; Marzorati, S.; Fest-Santini, S.; Trasatti, S.; Cristiani, P.
2017-07-01
The development and subsequent inactivation of a carbon-based biocathode in single-chamber, membraneless MFCs was investigated in this work. The electrochemical behavior of the biocathode was analyzed over time during the MFC life. X-ray micro-computed tomographies (microCTs) were carried out at progressive stages, documenting the build-up over time of a scale deposition layer that grows progressively thicker until cathode inactivation. The technique provides cross-sectional (tomographic) grayscale images and 3D reconstruction of volumes. Lighter color indicates lower X-ray attenuation (i.e., lower atomic density), thus allowing biofilm to be distinguished from inorganic fouling on the basis of the average atomic number Z of each voxel (3D pixel). MicroCT was combined with Scanning Electron Microscopy (SEM) and Energy-Dispersive X-Ray Spectroscopy (EDX) in order to qualitatively identify the chemical species in each layer of the cathode's section. Results correlated the presence of biofilm and calcium carbonate deposits, prevalent in the inner part of the cathode, with the electric current produced over time. Dedicated microCT software quantified the time-dependent carbonate scale deposition, identifying a correlation between the decreasing performance of the device and the increasing amount of scale deposition penetrating the cathode cross section over time.
Multi-scaling modelling in financial markets
NASA Astrophysics Data System (ADS)
Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.
2007-12-01
In recent years, a new wave of interest has spurred the application of complexity science to finance, which may provide a guide to understanding the mechanisms of financial markets, and researchers with different backgrounds have made increasing contributions introducing new techniques and methodologies. In this paper, Markov-switching multifractal models (MSM) are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for simulated data based on the MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
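The generalized Hurst exponent H(q) used above is defined through the scaling of the q-th order moments of the increments, K_q(τ) = ⟨|X(t+τ) − X(t)|^q⟩ ∝ τ^(qH(q)). A minimal numerical sketch follows; it is a simplified version of this standard estimator, not the authors' code, and the demo series is synthetic.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling of q-th order moments of increments."""
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])
    # K_q(tau) ~ tau^(q*H(q))  =>  slope of log K_q vs log tau equals q*H(q)
    slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]
    return slope / q

# Demo on a random walk (log-price-like series): expected H(q) close to 0.5.
rng = np.random.default_rng(1)
price = np.cumsum(rng.standard_normal(20000))
for q in (1, 2, 3):
    print(q, round(generalized_hurst(price, q=q), 3))
```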
Scaling laws and vortex profiles in two-dimensional decaying turbulence.
Laval, J P; Chavanis, P H; Dubrulle, B; Sire, C
2001-06-01
We use high resolution numerical simulations over several hundred turnover times to study the influence of small scale dissipation on vortex statistics in 2D decaying turbulence. A scaling regime is detected when the scaling laws are expressed in units of mean vorticity and integral scale, as predicted in Carnevale et al., Phys. Rev. Lett. 66, 2735 (1991), and it is observed that viscous effects spoil this scaling regime. The exponent controlling the decay of the number of vortices shows some trend toward ξ = 1, in agreement with a recent theory based on the Kirchhoff model [C. Sire and P. H. Chavanis, Phys. Rev. E 61, 6644 (2000)]. In terms of scaled variables, the vortices have a similar profile with a functional form related to the Fermi-Dirac distribution.
Propagation of Intra-Seasonal Tropical Oscillations (PISTON)
NASA Astrophysics Data System (ADS)
Moum, J. N.
2017-12-01
During monsoon season over the South China Sea and Philippines, weather varies on the subseasonal time scale. Disturbances of the "boreal summer intraseasonal oscillation" (BSISO) move north and east across the region over periods of weeks. These disturbances are strongly conditioned by the complex geography of the region. The diurnal cycle in convection over islands and adjacent coastal seas is strong. Air-sea interaction is modulated by ocean stratification and local circulation patterns that are themselves complex and diurnally varying. The multiple pathways and space-time scales in the regional ocean-atmosphere-land system make prediction on subseasonal to seasonal time scales challenging. The PISTON field campaign targets the west coast of Luzon in August/September 2018. It includes ship-based, moored and land-based measurements, a significant modeling effort and coordinates with the Philippine SALICA program (Sea Air Land Interactions in the Context of Archipelagos) and the aircraft-based, NASA-funded CAMP2EX campaign (Cloud and Aerosol Monsoonal Processes-Philippines Experiment). The diurnal cycle and its interaction with the BSISO are primary targets for PISTON. Key questions are: how heat is stored and released in the upper ocean on intraseasonal time scales; how that heat storage interacts with atmospheric convection; and what role it plays in BSISO maintenance and propagation. Key processes include land-sea breezes, orographic influence on convection, river discharge to coastal oceans, gravity waves, diurnal warm layers, internal tides, and a buoyancy-driven northward coastal current. As intraseasonal disturbances approach the region, the presence of islands, with their low surface heat capacity, mountains, inhomogeneous distribution of urban/vegetation/soil, and strong diurnal cycle disrupts the air-sea heat exchange that sustains the BSISO over the ocean, confounding prediction models in which these processes are inadequately represented. Along with upscale influences, PISTON seeks to advance our understanding of how large scale atmospheric circulation variability over the South China Sea, related to the monsoon, BSISO, and convectively coupled waves, modifies the local diurnal cycle, synoptic systems, and air sea interaction in coastal regions and nearby open seas.
NASA Astrophysics Data System (ADS)
De Michelis, Paola; Federica Marcucci, Maria; Consolini, Giuseppe
2015-04-01
Recently we have investigated the spatial distribution of the scaling features of short-time scale magnetic field fluctuations using measurements from several ground-based geomagnetic observatories distributed in the northern hemisphere. We have found that the scaling features of fluctuations of the horizontal magnetic field component at time scales below 100 minutes are correlated with the geomagnetic activity level and with changes in the currents flowing in the ionosphere. Here, we present a detailed analysis of the dynamical changes of the magnetic field scaling features as a function of the geomagnetic activity level during the well-known large geomagnetic storm that occurred on July 15, 2000 (the Bastille event). The observed dynamical changes are discussed in relation to the changes in the overall ionospheric polar convection and potential structure as reconstructed using SuperDARN data. This work is supported by the Italian National Program for Antarctic Research (PNRA) - Research Project 2013/AC3.08 and by the European Community's Seventh Framework Programme ([FP7/2007-2013]) under Grant no. 313038/STORM and
NASA Astrophysics Data System (ADS)
Hale, V. Cody; McDonnell, Jeffrey J.
2016-02-01
The effect of bedrock permeability and underlying catchment boundaries on stream base flow mean transit time (MTT) and MTT scaling relationships in headwater catchments is poorly understood. Here we examine the effect of bedrock permeability on MTT and MTT scaling relations by comparing 15 nested research catchments in western Oregon; half within the HJ Andrews Experimental Forest and half at the site of the Alsea Watershed Study. The two sites share remarkably similar vegetation, topography, and climate and differ only in bedrock permeability (one poorly permeable volcanic rock and the other more permeable sandstone). We found longer MTTs in the catchments with more permeable fractured and weathered sandstone bedrock than in the catchments with tight, volcanic bedrock (on average, 6.2 versus 1.8 years, respectively). At the permeable bedrock site, 67% of the variance in MTT across catchment scales was explained by drainage area, with no significant correlation with topographic characteristics. The poorly permeable site showed the opposite scaling relations: MTT showed no correlation with drainage area, but the ratio of median flow path length to median flow path gradient explained 91% of the variance in MTT across seven catchment scales. Despite these differences, hydrometric analyses, including flow duration, recession, and storm response analysis, show that the two sites share relatively indistinguishable hydrodynamic behavior. These results show that similar catchment forms and hydrologic regimes can hide different subsurface routing, storage, and scaling behavior, a major issue if only hydrometric data are used to define hydrological similarity for assessing land use or climate change response.
NASA Astrophysics Data System (ADS)
Gouveia, C. M.; Trigo, R. M.; Beguería, S.; Vicente-Serrano, S. M.
2017-04-01
The present work analyzes drought impacts on vegetation over the entire Mediterranean basin, with the purpose of determining the vegetation communities, regions and seasons for which vegetation is driven by drought. Our approach is based on the use of remote sensing data and a multi-scalar drought index. Correlation maps between fields of the monthly Normalized Difference Vegetation Index (NDVI) and the Standardized Precipitation-Evapotranspiration Index (SPEI) at different time scales (1-24 months) were computed for representative months of winter (Feb), spring (May), summer (Aug) and fall (Nov). Results for the period from 1982 to 2006 show large areas highly controlled by drought, although with strong spatial and seasonal differences, with a maximum influence in August and a minimum in February. The highest correlation values are observed in February for the 3-month time scale and in May for the 6- and 12-month time scales. The stronger control of drought on vegetation in February and May is found mainly over the drier vegetation communities (Mediterranean Dry and Desertic) at shorter time scales (3 to 9 months). Additionally, in February the impact of drought on vegetation is lower for Temperate Oceanic and Continental vegetation types and takes place at longer time scales (18-24 months). The dependence of the drought time-scale response on water balance, as obtained through a simple difference between precipitation and reference evapotranspiration, varies with vegetation community. During February and November, low water balance values correspond to shorter time scales over dry vegetation communities, whereas high water balance values imply longer time scales over Temperate Oceanic and Continental areas. The strong control of drought on vegetation observed for Mediterranean Dry and Desertic vegetation types located over areas with highly negative water balance emphasizes the need for an early warning drought system covering the entire Mediterranean basin. We are confident that these results will provide a useful tool for drought management plans and play a relevant role in mitigating the impact of drought episodes.
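As an illustration of the pixel-level calculation behind such correlation maps, here is a minimal Python sketch: for one pixel and one calendar month, the Pearson correlation between NDVI and SPEI at several time scales identifies the scale at which vegetation responds most strongly. Variable names are illustrative, not the authors' data structures.

    import numpy as np

    def best_spei_scale(ndvi_month, spei_month_by_scale):
        # ndvi_month: (n_years,) NDVI values for one calendar month at one pixel
        # spei_month_by_scale: dict {time scale in months: (n_years,) SPEI series}
        corr = {scale: np.corrcoef(ndvi_month, series)[0, 1]
                for scale, series in spei_month_by_scale.items()}
        best = max(corr, key=lambda scale: corr[scale])
        return best, corr[best]   # scale with the strongest NDVI-SPEI correlation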
Treecode-based generalized Born method
NASA Astrophysics Data System (ADS)
Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao
2011-02-01
We have developed a treecode-based O(Nlog N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
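For orientation, the following Python sketch shows the standard Still-type pairwise GB polarization energy that a treecode accelerates. It is the direct O(N^2) sum, not the authors' GBr6/tGB implementation, and unit-conversion constants are omitted.

    import numpy as np

    def gb_energy_direct(coords, charges, born_radii, eps_in=1.0, eps_out=80.0):
        # E = -0.5 * (1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB(r_ij, R_i, R_j),
        # with f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))).
        coords = np.asarray(coords, float)
        charges = np.asarray(charges, float)
        born_radii = np.asarray(born_radii, float)
        r2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
        rr = born_radii[:, None] * born_radii[None, :]
        f_gb = np.sqrt(r2 + rr * np.exp(-r2 / (4.0 * rr)))   # includes self terms (i = j)
        qq = charges[:, None] * charges[None, :]
        return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * np.sum(qq / f_gb)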
1996-04-01
Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...
Time as a dimension of the sample design in national-scale forest inventories
Francis Roesch; Paul Van Deusen
2013-01-01
Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The associated sample design was generally viewed as one with selection probabilities based on land area observed at a discrete point in time. Time was not...
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
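As a schematic of the workflow just described, here is a small runnable Python toy (not tomoQuake): a linear 1-D problem is inverted window by window, with each window's update computed first on a coarse mesh and then on finer meshes. The dimensions, random "ray" matrices and mesh sizes are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(0)
    n_fine = 16
    m_true = np.sin(np.linspace(0, np.pi, n_fine))          # "true" slowness model
    G_all = rng.random((60, n_fine))                         # 60 synthetic ray paths
    d_all = G_all @ m_true                                   # synthetic travel times

    model = np.zeros(n_fine)                                 # starting model
    for win in np.array_split(np.arange(60), 3):             # time-marching over 3 windows
        G, d = G_all[win], d_all[win]
        for n_coarse in (4, 8, 16):                          # coarse -> fine multi-grid
            P = np.kron(np.eye(n_coarse), np.ones((n_fine // n_coarse, 1)))  # prolongation
            resid = d - G @ model
            dc, *_ = np.linalg.lstsq(G @ P, resid, rcond=None)   # update on coarse mesh
            model = model + P @ dc                           # refine the running model
        # the model at the end of this window seeds the next window's inversion
    print(np.round(model - m_true, 2))                       # residual w.r.t. the true model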
On the Assessment of Global Terrestrial Reference Frame Temporal Variations
NASA Astrophysics Data System (ADS)
Ampatzidis, Dimitrios; Koenig, Rolf; Zhu, Shengyuan
2015-04-01
Global Terrestrial Reference Frames (GTRFs) such as the International Terrestrial Reference Frame (ITRF) provide reliable 4-D position information (3-D coordinates and their evolution through time). The given 3-D velocities play a significant role in precise position acquisition and are estimated from long-term coordinate time series from the space-geodetic techniques DORIS, GNSS, SLR, and VLBI. A GTRF's temporal evolution is directly connected with its internal stability: the more intense and inhomogeneous the velocity field, the less stable the derived TRF. The quality of a GTRF is mainly assessed by comparing it to each individual technique's reference frame. For example, the comparison of GTRFs to an SLR-only TRF gives a sense of ITRF stability with respect to the geocenter and the scale and their associated rates. In addition, the comparison of the ITRF to a VLBI-only TRF can be used for scale validation. However, until now there has been no established methodology for the overall assessment (in terms of origin, orientation and scale) of GTRF temporal evolution and the associated accuracy. We present a new alternative diagnostic tool for the assessment of GTRF temporal evolution based on the well-known time-dependent Helmert-type transformation formula (rates of three shifts, three rotations and one scale). The advantage of the new methodology is that it uses the full velocity field of the TRF, and therefore all points, not just those common to different techniques. It also examines rates of origin, orientation and scale simultaneously. The methodology is presented and applied to the two existing GTRFs (the ITRF and the DTRF computed by DGFI), and the results are discussed. The results also allow a direct comparison of the dynamic behavior of each GTRF. Furthermore, the correlations of the estimated parameters can provide useful information for the proposed GTRF assessment scheme.
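A minimal Python sketch of the kind of rate estimation involved, assuming a simple unweighted least-squares fit of the seven Helmert rate parameters to station velocity differences; this illustrates the diagnostic idea only, not the authors' implementation (weighting and outlier handling are omitted).

    import numpy as np

    def helmert_rates(positions, velocity_diffs):
        # Model per station: dv = T_dot + D_dot * x + R_dot x, with
        # R_dot = [[0, -rz, ry], [rz, 0, -rx], [-ry, rx, 0]].
        # Unknowns: (Tx_dot, Ty_dot, Tz_dot, D_dot, rx_dot, ry_dot, rz_dot).
        rows, rhs = [], []
        for (x, y, z), dv in zip(positions, velocity_diffs):
            rows += [[1, 0, 0, x, 0.0, z, -y],
                     [0, 1, 0, y, -z, 0.0, x],
                     [0, 0, 1, z, y, -x, 0.0]]
            rhs += list(dv)
        params, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
        return params   # translation rates, scale rate, rotation rates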
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Bernhard W.; Mane, Anil U.; Elam, Jeffrey W.
X-ray detectors that combine two-dimensional spatial resolution with a high time resolution are needed in numerous applications of synchrotron radiation. Most detectors with this combination of capabilities are based on semiconductor technology and are therefore limited in size. Furthermore, the time resolution is often realised through rapid time-gating of the acquisition, followed by a slower readout. Here, a detector technology is realised based on relatively inexpensive microchannel plates that uses GHz waveform sampling for a millimeter-scale spatial resolution and better than 100 ps time resolution. The technology is capable of continuous streaming of time- and location-tagged events at rates greater than 10^7 events per cm^2. Time-gating can be used for improved dynamic range.
Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
Knight, James C.; Tully, Philip J.; Kaplan, Bernhard A.; Lansner, Anders; Furber, Steve B.
2016-01-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. PMID:27092061
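The event-driven principle referred to above can be illustrated with a generic sketch; this is not the SpiNNaker/BCPNN code, just the idea of bringing a low-pass synaptic trace up to date with its closed-form exponential decay only when a spike event arrives, rather than updating it every time step.

    import math

    class EventDrivenTrace:
        def __init__(self, tau_ms, increment=1.0):
            self.tau, self.inc = tau_ms, increment
            self.value, self.last_t = 0.0, 0.0

        def on_spike(self, t_ms):
            # analytical decay since the last event, then add the spike contribution
            self.value *= math.exp(-(t_ms - self.last_t) / self.tau)
            self.value += self.inc
            self.last_t = t_ms
            return self.value

    trace = EventDrivenTrace(tau_ms=20.0)
    print([round(trace.on_spike(t), 3) for t in (0.0, 10.0, 50.0)])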
NASA Astrophysics Data System (ADS)
Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.
2012-07-01
Baseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km^2). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ~10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index, likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MATLAB program to automate the calculations, which requires only daily rainfall and daily flow data as inputs.
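Since the published SARR algorithm is not reproduced here, the following Python sketch only illustrates the family of methods it builds on: UKIH-style turning-point selection at a daily step, with a crude, hypothetical rain-record check. The window length, rain threshold and interpolation rule are illustrative assumptions, not the SARR specification.

    import numpy as np

    def sliding_min_baseflow(flow, rain, window=5, rain_free_mm=1.0):
        flow = np.asarray(flow, float)
        rain = np.asarray(rain, float)
        n, half = len(flow), window // 2
        # candidate turning points: local minima of a centred window on days with
        # little same-day/previous-day rain (a stand-in for the rain-record test);
        # assumes at least two turning points are found
        turning = [i for i in range(half, n - half)
                   if flow[i] == flow[i - half:i + half + 1].min()
                   and rain[max(0, i - 1):i + 1].sum() < rain_free_mm]
        base = np.interp(np.arange(n), turning, flow[turning])  # join turning points
        return np.minimum(base, flow)   # baseflow cannot exceed total flow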
Physical controls and predictability of stream hyporheic flow evaluated with a multiscale model
Stonedahl, Susa H.; Harvey, Judson W.; Detty, Joel; Aubeneau, Antoine; Packman, Aaron I.
2012-01-01
Improved predictions of hyporheic exchange based on easily measured physical variables are needed to improve assessment of solute transport and reaction processes in watersheds. Here we compare physically based model predictions for an Indiana stream with stream tracer results interpreted using the Transient Storage Model (TSM). We parameterized the physically based, Multiscale Model (MSM) of stream-groundwater interactions with measured stream planform and discharge, stream velocity, streambed hydraulic conductivity and porosity, and topography of the streambed at distinct spatial scales (i.e., ripple, bar, and reach scales). We predicted hyporheic exchange fluxes and hyporheic residence times using the MSM. A Continuous Time Random Walk (CTRW) model was used to convert the MSM output into predictions of in stream solute transport, which we compared with field observations and TSM parameters obtained by fitting solute transport data. MSM simulations indicated that surface-subsurface exchange through smaller topographic features such as ripples was much faster than exchange through larger topographic features such as bars. However, hyporheic exchange varies nonlinearly with groundwater discharge owing to interactions between flows induced at different topographic scales. MSM simulations showed that groundwater discharge significantly decreased both the volume of water entering the subsurface and the time it spent in the subsurface. The MSM also characterized longer timescales of exchange than were observed by the tracer-injection approach. The tracer data, and corresponding TSM fits, were limited by tracer measurement sensitivity and uncertainty in estimates of background tracer concentrations. Our results indicate that rates and patterns of hyporheic exchange are strongly influenced by a continuum of surface-subsurface hydrologic interactions over a wide range of spatial and temporal scales rather than discrete processes.
Towards Data-Driven Simulations of Wildfire Spread using Ensemble-based Data Assimilation
NASA Astrophysics Data System (ADS)
Rochoux, M. C.; Bart, J.; Ricci, S. M.; Cuenot, B.; Trouvé, A.; Duchaine, F.; Morel, T.
2012-12-01
Real-time predictions of a propagating wildfire remain a challenging task because the problem involves both multi-physics and multi-scale aspects. The propagation speed of wildfires, also called the rate of spread (ROS), is determined by complex interactions between pyrolysis, combustion, flow dynamics and atmospheric dynamics occurring at vegetation, topographical and meteorological scales. Current operational fire spread models are mainly based on a semi-empirical parameterization of the ROS in terms of vegetation, topographical and meteorological properties. For the fire spread simulation to be predictive and compatible with operational applications, the uncertainty on the ROS model should be reduced. As recent progress in remote sensing technology provides new ways to monitor the fire front position, a promising approach to overcome the difficulties found in wildfire spread simulations is to integrate fire modeling and fire sensing technologies using data assimilation (DA). For this purpose we have developed a prototype data-driven wildfire spread simulator in order to provide optimal estimates of poorly known model parameters [*]. The data-driven simulation capability is adapted for more realistic wildfire spread: it considers a regional-scale fire spread model that is informed by observations of the fire front location. An Ensemble Kalman Filter (EnKF) algorithm based on a parallel computing platform (OpenPALM) was implemented to perform a multi-parameter sequential estimation in which wind magnitude and direction are estimated in addition to vegetation properties (see attached figure). The EnKF algorithm shows a good ability to track a small-scale grassland fire experiment and properly accounts for the sensitivity of the simulation outcomes to the control parameters. In conclusion, data assimilation is a promising approach to more accurately forecast time-varying wildfire spread conditions as new airborne-like observations of the fire front location become available. [*] Rochoux, M.C., Delmotte, B., Cuenot, B., Ricci, S., and Trouvé, A. (2012) "Regional-scale simulations of wildland fire spread informed by real-time flame front observations", Proc. Combust. Inst., 34, in press http://dx.doi.org/10.1016/j.proci.2012.06.090 Figure caption: EnKF-based tracking of a small-scale grassland fire experiment, with estimation of wind and fuel parameters.
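To make the assimilation step concrete, here is a minimal stochastic EnKF parameter-update sketch in Python. This is a generic textbook form, not the OpenPALM-based implementation; the matrix shapes and observation-error model are illustrative.

    import numpy as np

    def enkf_update(params, predicted_obs, obs, obs_err_std, rng=None):
        # params:        (n_ens, n_params) ensemble of control vectors
        #                (e.g. wind speed, wind direction, fuel parameters)
        # predicted_obs: (n_ens, n_obs) simulated fire-front observations per member
        # obs:           (n_obs,) observed fire-front positions
        rng = np.random.default_rng() if rng is None else rng
        X = np.asarray(params, float)
        Y = np.asarray(predicted_obs, float)
        Xa, Ya = X - X.mean(0), Y - Y.mean(0)
        n = X.shape[0]
        Pxy = Xa.T @ Ya / (n - 1)                                  # cross-covariance
        Pyy = Ya.T @ Ya / (n - 1) + np.diag(np.full(Y.shape[1], obs_err_std ** 2))
        K = Pxy @ np.linalg.inv(Pyy)                               # Kalman gain
        perturbed = obs + rng.normal(0.0, obs_err_std, Y.shape)    # perturbed observations
        return X + (perturbed - Y) @ K.T                           # analysis ensemble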
Large-scale weather dynamics during the 2015 haze event in Singapore
NASA Astrophysics Data System (ADS)
Djamil, Yudha; Lee, Wen-Chien; Tien Dat, Pham; Kuwata, Mikinori
2017-04-01
The 2015 haze event in South East Asia is widely considered the period of the worst air quality in the region in more than a decade. The source of the haze was forest and peatland fires on the islands of Sumatra and Kalimantan, Indonesia. The fires mostly came from the practice of forest clearance known as slash and burn, in which land is converted to palm oil plantations. Although such clearance occurs seasonally, in 2015 it was made worse by the impact of a strong El Nino. The long period of drier atmosphere over the region due to El Nino made the fires easier to ignite, quicker to spread and more difficult to stop. Biomass emissions from the forest and peatland fires caused a large-scale haze pollution problem on both islands and further spread into neighboring countries such as Singapore and Malaysia. In Singapore, for about two months (September-October 2015) the air quality was at an unhealthy level. This caused socioeconomic losses such as school closures, cancellation of outdoor events and health issues, with total losses estimated at S$700 million. The unhealthy level of Singapore's air quality corresponds to an elevated Pollutant Standards Index (PSI > 120) following the haze arrival; it even reached a hazardous level (PSI = 300) for several days. PSI is a metric of air quality in Singapore that aggregates six pollutants (SO2, PM10, PM2.5, NO2, CO and O3). In this study, we focus on PSI variability at weekly to biweekly time scales (periodicity < 30 days), since it is the least understood compared with the diurnal and seasonal scales. We have identified three dominant time scales of PSI (~5, 10 and 20 days) using the wavelet method and investigated their associated large-scale atmospheric structures. The PSI-associated large-scale horizontal structures of column moisture over the Indo-Pacific basin are dominated by easterly propagating gyres at the synoptic scale for the ~5-day time scale and at the macro scale for the ~10- and 20-day time scales. The propagating gyres manifest as cyclical column moisture flux trajectories around the Singapore region. Some of their phases are identified as responsible for transporting the haze from its source to Singapore. The haze source was identified by compositing the number of hotspots in grid space based on the three PSI time scales. Further discussion of equatorial waves during the haze event will also be presented.
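A sketch of how such dominant sub-monthly time scales can be picked out, assuming the PyWavelets package and a Morlet wavelet; the scale range and the "largest time-averaged power" criterion are illustrative choices, not the authors' exact procedure.

    import numpy as np
    import pywt  # assuming the PyWavelets package; not the authors' toolbox

    def dominant_period(psi_daily, max_period_days=30):
        # continuous Morlet wavelet transform of a daily PSI series; returns the
        # period (in days) with the largest time-averaged wavelet power
        x = np.asarray(psi_daily, float)
        x = x - x.mean()
        scales = np.arange(2, 4 * max_period_days)
        coefs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1.0)  # 1-day sampling
        power = np.mean(np.abs(coefs) ** 2, axis=1)      # global wavelet spectrum
        periods = 1.0 / freqs
        keep = periods <= max_period_days
        return periods[keep][np.argmax(power[keep])]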
Time scale controversy: Accurate orbital calibration of the early Paleogene
NASA Astrophysics Data System (ADS)
Roehl, U.; Westerhold, T.; Laskar, J.
2012-12-01
Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.
Transcriptomics as a tool for assessing the scalability of mammalian cell perfusion systems.
Jayapal, Karthik P; Goudar, Chetan T
2014-01-01
DNA microarray-based transcriptomics have been used to determine the time course of laboratory and manufacturing-scale perfusion bioreactors in an attempt to characterize cell physiological state at these two bioreactor scales. Given the limited availability of genomic data for baby hamster kidney (BHK) cells, a Chinese hamster ovary (CHO)-based microarray was used following a feasibility assessment of cross-species hybridization. A heat shock experiment was performed using both BHK and CHO cells and resulting DNA microarray data were analyzed using a filtering criteria of perfect match (PM)/single base mismatch (MM) > 1.5 and PM-MM > 50 to exclude probes with low specificity or sensitivity for cross-species hybridizations. For BHK cells, 8910 probe sets (39 %) passed the cutoff criteria, whereas 12,961 probe sets (56 %) passed the cutoff criteria for CHO cells. Yet, the data from BHK cells allowed distinct clustering of heat shock and control samples as well as identification of biologically relevant genes as being differentially expressed, indicating the utility of cross-species hybridization. Subsequently, DNA microarray analysis was performed on time course samples from laboratory- and manufacturing-scale perfusion bioreactors that were operated under the same conditions. A majority of the variability (37 %) was associated with the first principal component (PC-1). Although PC-1 changed monotonically with culture duration, the trends were very similar in both the laboratory and manufacturing-scale bioreactors. Therefore, despite time-related changes to the cell physiological state, transcriptomic fingerprints were similar across the two bioreactor scales at any given instance in culture. Multiple genes were identified with time-course expression profiles that were very highly correlated (> 0.9) with bioprocess variables of interest. Although the current incomplete annotation limits the biological interpretation of these observations, their full potential may be realized in due course when richer genomic data become available. By taking a pragmatic approach of transcriptome fingerprinting, we have demonstrated the utility of systems biology to support the comparability of laboratory and manufacturing-scale perfusion systems. Scale-down model qualification is the first step in process characterization and hence is an integral component of robust regulatory filings. Augmenting the current paradigm, which relies primarily on cell culture and product quality information, with gene expression data can help make a substantially stronger case for similarity. With continued advances in systems biology approaches, we expect them to be seamlessly integrated into bioprocess development, which can translate into more robust and high yielding processes that can ultimately reduce cost of care for patients.
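The stated cutoff can be written directly as a small Python filter; the array names are illustrative, not the authors' data structures.

    import numpy as np

    def filter_probes(pm, mm, ratio_cut=1.5, diff_cut=50.0):
        # keep probes with PM/MM > 1.5 and PM - MM > 50, as stated above
        pm, mm = np.asarray(pm, float), np.asarray(mm, float)
        return (pm / mm > ratio_cut) & (pm - mm > diff_cut)   # boolean mask over probes

    pm = np.array([300.0, 80.0, 500.0])
    mm = np.array([100.0, 70.0, 400.0])
    print(filter_probes(pm, mm))   # [ True False False]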
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
NASA Technical Reports Server (NTRS)
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation; the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
A Microcomputer Based Aircraft Flight Control System.
1980-04-01
time control of an aircraft using a microcomputer system. The applicability of two optimal control theories--singular perturbation theory and output...increased controller execution time if implemented in software. This may be unavoidable if the plant is not stabilizable without feedback from such...From the real-time testing of the controller designs, it is seen that when dealing with systems possessing a two-time-scale property, output...
Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huen, T.
1987-07-01
A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross timing. The latter is achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales will be discussed. The design is based on a microcomputer, resulting in a compact and easy to use instrument. The light source is a small red light emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided down 10 MHz system frequency. The light is guided by two small 100 micron diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere onto the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our Laboratory. The microcomputer control section is also being used in providing optical fids to mechanical rotor cameras.
The Fusion Gain Analysis of the Inductively Driven Liner Compression Based Fusion
NASA Astrophysics Data System (ADS)
Shimazu, Akihisa; Slough, John
2016-10-01
An analytical study of the fusion gain expected in inductively driven liner compression (IDLC) based fusion is conducted to identify the fusion gain scaling at various operating conditions. IDLC-based fusion is a magneto-inertial fusion concept in which a Field-Reversed Configuration (FRC) plasmoid is compressed via an inductively driven metal liner to drive the FRC to fusion conditions. In the past, an approximate scaling law for the expected fusion gain of IDLC-based fusion was obtained under the key assumptions of (1) D-T fuel at 5-40 keV, (2) adiabatic scaling laws for the FRC dynamics, (3) FRC energy dominated by the pressure balance with the edge magnetic field at peak compression, and (4) the liner dwell time being the liner final diameter divided by the peak liner velocity. In this study, various assumptions made in the previous derivation are relaxed to study the change in the fusion gain scaling from the previous result of G ∝ ml^(1/2) El^(11/8), where ml is the liner mass and El is the peak liner kinetic energy. The implications of the modified fusion gain scaling for the performance of the IDLC fusion reactor system are also explored.
Development of A Tsunami Magnitude Scale Based on DART Buoy Data
NASA Astrophysics Data System (ADS)
Leiva, J.; Polet, J.
2016-12-01
The quantification of tsunami energy has evolved through time, with a number of magnitude and intensity scales employed in the past century. Most of these scales rely on coastal measurements, which may be affected by complexities due to near-shore bathymetric effects and coastal geometries. Moreover, these datasets are generated by tsunami inundation, and thus cannot serve as a means of assessing potential tsunami impact prior to coastal arrival. With the introduction of a network of ocean buoys provided through the Deep-ocean Assessment and Reporting of Tsunamis (DART) project, a dataset has become available that can be exploited to further our current understanding of tsunamis and the earthquakes that excite them. The DART network consists of 39 stations that have produced estimates of sea-surface height as a function of time since 2003 and are able to detect deep-ocean tsunami waves. Data collected at these buoys over the past decade reveal that at least nine major tsunami events, such as the 2011 Tohoku and 2013 Solomon Islands events, produced substantial wave amplitudes across a large distance range, which can be used to build a DART-based tsunami magnitude scale. We present preliminary results from the development of a tsunami magnitude scale that follows the methods used in the development of the local magnitude scale by Charles Richter. Analogous to the use of seismic ground motion amplitudes in the calculation of local magnitude, maximum ocean height displacements due to the passage of tsunami waves are related to distance from the source in a least-squares exponential regression analysis. The regression produces attenuation curves based on the DART data, a site correction term, attenuation parameters, and an amplification factor. Initially, single-event regressions are used to constrain the attenuation parameters. Additional iterations use the parameters of these event-based fits as a starting point to obtain a stable solution, and include the calculation of station corrections, in order to obtain a final amplification factor for each event, which is used to calculate its tsunami magnitude.
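As a sketch of the Richter-style regression described here, the following Python fragment fits a single event's maximum DART amplitudes against distance with an exponential (log-linear) attenuation model. The functional form and the calibration offset anchoring the scale are illustrative assumptions, not the authors' final parameterization.

    import numpy as np

    def fit_event_attenuation(distances_km, max_amplitudes_m):
        # log10 A = log10 A0 - k * d   (least squares in log-amplitude space)
        d = np.asarray(distances_km, float)
        log_a = np.log10(np.asarray(max_amplitudes_m, float))
        slope, intercept = np.polyfit(d, log_a, 1)
        return intercept, -slope     # event amplification factor log10(A0), decay rate k

    def tsunami_magnitude(log_a0, calibration_offset=0.0):
        # the additive constant anchoring the scale is an assumption, analogous to
        # Richter's choice of a reference event
        return log_a0 + calibration_offset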