Sample records for time scale approach

  1. Singular perturbation and time scale approaches in discrete control systems

    NASA Technical Reports Server (NTRS)

    Naidu, D. S.; Price, D. B.

    1988-01-01

    After considering a singularly perturbed discrete control system, a singular perturbation approach is used to obtain outer and correction subsystems. A time scale approach is then applied via block diagonalization transformations to decouple the system into slow and fast subsystems. To a zeroth-order approximation, the singular perturbation and time-scale approaches are found to yield equivalent results.
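
    The decoupling step can be illustrated numerically. Below is a minimal sketch that separates slow and fast modes of a toy discrete system with an eigenvector similarity transform; this is a spectral stand-in for the block-diagonalization transformation discussed in the record, and the matrix entries are invented for the example.

```python
import numpy as np

# toy two-time-scale discrete system x_{k+1} = A x_k (entries invented):
# eigenvalues near 1 correspond to slow modes, eigenvalues near 0 to fast modes
A = np.array([[0.98, 0.02, 0.10],
              [0.01, 0.95, 0.05],
              [0.20, 0.10, 0.10]])

lam, V = np.linalg.eig(A)
order = np.argsort(-np.abs(lam))          # slow modes (|lambda| ~ 1) first
lam, V = lam[order], V[:, order]

A_dec = np.linalg.inv(V) @ A @ V          # decoupled (diagonal) dynamics
print(np.round(np.abs(lam), 3))           # separated slow and fast spectra
print(np.round(A_dec, 6))
```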

  2. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
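
    A minimal sketch of the quality-control-chart ingredient, applied to synthetic administration means; the baseline window, the 3-sigma limits, and the injected shift are illustrative assumptions, not the paper's calibrated procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = 500 + 5 * rng.standard_normal(71)   # mean scale score per administration
scores[60:] += 25                             # synthetic abrupt scale drift

baseline = scores[:30]                        # administrations assumed in control
center, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # Shewhart 3-sigma limits

flags = np.where((scores > ucl) | (scores < lcl))[0]
print("out-of-control administrations:", flags)     # should flag 60 onwards
```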

  3. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    PubMed

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

    A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e. whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time scales, which can be used to address the aforementioned question. We introduce a framework, by considering the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.

  4. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and those composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).
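
    The Monte Carlo ingredient can be sketched in a few lines for the equivalent continuous medium: sample Beer-Lambert path lengths and count rays absorbed within a slab. This ignores scattering, strut reflection, and the real foam geometry; kappa and L are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa = 50.0          # absorption coefficient of the equivalent medium, 1/m (assumed)
L = 0.02              # slab thickness, m (assumed)
n_rays = 100_000

# distance to absorption follows Beer-Lambert attenuation: p(s) = kappa*exp(-kappa*s)
s = rng.exponential(1.0 / kappa, n_rays)
absorbed = np.mean(s < L)                    # fraction absorbed inside the slab
print("Monte Carlo:", absorbed, " analytic:", 1.0 - np.exp(-kappa * L))
```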

  5. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
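
    For reference, a compact version of the scale-dependent entropy calculation, using the classic coarse-graining of multiscale entropy with the tolerance fixed at scale 1; the EMD-based detrending step proposed in the record is not reproduced here.

```python
import numpy as np

def sampen(x, m=2, tol=None):
    """Sample entropy; tol is an absolute tolerance (Chebyshev distance)."""
    x = np.asarray(x, float)
    if tol is None:
        tol = 0.2 * x.std()
    def pairs(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return ((d < tol).sum() - len(templ)) / 2   # pairs, self-matches excluded
    b, a = pairs(m), pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)
tol = 0.2 * white.std()                # tolerance fixed at scale 1, as in MSE
print([round(sampen(coarse_grain(white, s), tol=tol), 2) for s in range(1, 6)])
# entropy falls with scale for white noise; 1/f-like signals stay nearly flat
```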

  6. The theory of n-scales

    NASA Astrophysics Data System (ADS)

    Dündar, Furkan Semih

    2018-01-01

    We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate to real world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝn. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
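
    A small sketch of the basic time-scale calculus objects mentioned in the abstract, for an n = 1 scale represented as a sorted array of points; the example scale and test function are arbitrary.

```python
import numpy as np

# an n = 1 "scale": a closed subset of R, here isolated points with a gap
T = np.array([0.0, 0.5, 1.0, 1.5, 3.0, 3.5, 4.0])

def sigma(t):
    """Forward jump operator: smallest s in T with s > t (t itself if none)."""
    right = T[T > t]
    return right.min() if right.size else t

def delta_derivative(f, t):
    """Delta-derivative at a right-scattered point: (f(sigma(t)) - f(t)) / (sigma(t) - t)."""
    s = sigma(t)
    if s == t:
        raise ValueError("right-dense or maximal point: a limit is needed instead")
    return (f(s) - f(t)) / (s - t)

print(delta_derivative(lambda x: x**2, 1.5))   # (9 - 2.25) / 1.5 = 4.5
```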

  7. A full-Bayesian approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod experimental site

    USGS Publications Warehouse

    Woodbury, Allan D.; Rubin, Yoram

    2000-01-01

    A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.
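
    The full-Bayesian flavor (noise treated as a hyperparameter and marginalized out) can be miniaturized on a grid. The sketch below infers a single velocity from noisy travel times with flat priors standing in for the paper's minimum-relative-entropy priors; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(12)
distance = np.array([5.0, 10.0, 20.0])            # control-plane distances, m
v_true = 0.42                                      # m/d
obs = distance / v_true + rng.normal(0, 2.0, 3)    # observed mean travel times, d

v_grid = np.linspace(0.1, 1.0, 300)
sig_grid = np.linspace(0.5, 10.0, 200)             # noise hyperparameter grid

# Gaussian likelihood on the (v, sigma) grid, flat priors for the sketch
resid = obs[None, None, :] - distance[None, None, :] / v_grid[:, None, None]
loglik = (-0.5 * (resid / sig_grid[None, :, None]) ** 2
          - np.log(sig_grid)[None, :, None]).sum(axis=2)
post = np.exp(loglik - loglik.max())
post_v = post.sum(axis=1)                          # marginalize out the noise
post_v /= post_v.sum()
print("posterior mean velocity:", round((v_grid * post_v).sum(), 3), "m/d")
```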

  8. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on a HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is performed in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, without losing too much accuracy. The approach promises a comprehensive perspective considering short term dynamic behavior and long term degradation effects.
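
    The dual time scale idea, a slow outer loop for degradation and a fast inner loop for short-term dynamics, can be caricatured as below. The rate constants and the first-order "electrochemistry" are placeholders, not the paper's HT-PEM model.

```python
import numpy as np

t_end = 35 * 24 * 3600.0       # 35 days of physical time, s
dt_slow = 3600.0                # degradation (long-term) update interval, s
dt_fast = 1.0                   # short-term dynamics step, s

ecsa = 1.0                      # relative active surface area (slow state)
k_deg = 1e-7                    # assumed first-order degradation rate, 1/s
voltage = 0.0                   # placeholder fast state

t = 0.0
while t < t_end:
    ecsa *= np.exp(-k_deg * dt_slow)       # slow outer step: degradation
    v_target = 0.95 * ecsa                  # degraded state shifts the fast dynamics
    for _ in range(200):                    # fast inner loop: re-equilibrate
        voltage += dt_fast * (v_target - voltage) / 10.0
    t += dt_slow

print("relative ECSA after 35 days:", round(ecsa, 3),
      " quasi-steady voltage:", round(voltage, 3))
```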

  9. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
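
    A toy version of spectral nudging: relax only the low-wavenumber Fourier coefficients of a model field toward a reference field. The cutoff wavenumber and relaxation coefficient are assumptions; a real QG application would apply this inside the model's time stepping.

```python
import numpy as np

def spectral_nudge(field, reference, k_cut=4, alpha=0.1):
    """Relax the large-scale (low-wavenumber) part of `field` toward `reference`."""
    fk = np.fft.fft2(field)
    rk = np.fft.fft2(reference)
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    large = (np.abs(kx)[:, None] <= k_cut) & (np.abs(ky)[None, :] <= k_cut)
    fk[large] += alpha * (rk[large] - fk[large])     # nudge only large scales
    return np.real(np.fft.ifft2(fk))

rng = np.random.default_rng(3)
truth = rng.standard_normal((64, 64))
model = truth + 0.5 * rng.standard_normal((64, 64))  # model drifted from truth
model = spectral_nudge(model, truth)                  # one nudging step
```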

  10. Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations.

    PubMed

    Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F; Spoerke, Erik David; Pan, Wei; Zuo, Jian Min

    2016-04-13

    Atomic-scale phenomena fundamentally influence materials' form and function, which makes the ability to locally probe and study these processes critical to advancing our understanding and development of materials. Atomic-scale chemical imaging by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) is a powerful approach to investigate solid crystal structures. Inefficient X-ray emission and collection, however, require long acquisition times (typically hundreds of seconds), making the technique incompatible with electron-beam-sensitive materials and the study of dynamic material phenomena. Here we describe an atomic-scale STEM-EDS chemical imaging technique that decreases the acquisition time to as little as one second, a reduction of more than 100 times. We demonstrate this new approach using a LaAlO3 single crystal and study dynamic phase transformation in the beam-sensitive Li[Li0.2Ni0.2Mn0.6]O2 (LNMO) lithium ion battery cathode material. By capturing a series of time-lapsed chemical maps, we show for the first time clear atomic-scale evidence of preferred Ni-mobility in the LNMO transformation, revealing new kinetic mechanisms. These examples highlight the potential of this approach toward temporal, atomic-scale mapping of crystal structure and chemistry for investigating dynamic material phenomena.

  11. Hypothesis on the nature of time

    NASA Astrophysics Data System (ADS)

    Coumbe, D. N.

    2015-06-01

    We present numerical evidence that fictitious diffusing particles in the causal dynamical triangulation (CDT) approach to quantum gravity exceed the speed of light on small distance scales. We argue this superluminal behavior is responsible for the appearance of dimensional reduction in the spectral dimension. By axiomatically enforcing a scale invariant speed of light we show that time must dilate as a function of relative scale, just as it does as a function of relative velocity. By calculating the Hausdorff dimension of CDT diffusion paths we present a seemingly equivalent dual description in terms of a scale dependent Wick rotation of the metric. Such a modification to the nature of time may also have relevance for other approaches to quantum gravity.
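
    The spectral dimension mentioned here is commonly estimated from the return probability of a diffusing particle, P(sigma) ~ sigma^(-d_s/2). A lattice sketch of that estimator (on flat Z^2 rather than a CDT geometry; walker counts and step counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(15)
n_walkers, n_steps = 50_000, 64
steps = rng.integers(0, 4, (n_walkers, n_steps))
dx = np.where(steps == 0, 1, np.where(steps == 1, -1, 0))
dy = np.where(steps == 2, 1, np.where(steps == 3, -1, 0))
x, y = dx.cumsum(axis=1), dy.cumsum(axis=1)

# return probabilities at even diffusion times (index i = after i+1 steps)
times = np.array([8, 16, 32, 64])
P = [((x[:, t - 1] == 0) & (y[:, t - 1] == 0)).mean() for t in times]

# d_s = -2 * d ln P / d ln sigma; ~2 on a flat 2D lattice
d_s = -2 * np.polyfit(np.log(times), np.log(P), 1)[0]
print("estimated spectral dimension:", round(d_s, 2))
```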

  12. Finite Element Method (FEM) Modeling of Freeze-drying: Monitoring Pharmaceutical Product Robustness During Lyophilization.

    PubMed

    Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V

    2015-12-01

    Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters that are developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This process is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we are highlighting a combination of computational and experimental approaches to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally either by manometric temperature measurement (MTM) or sublimation tests and used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients will vary at different lyophilization scales; hence, we present an approach to use appropriate factors while scaling up from lab scale to commercial scale. As a result, one can predict commercial-scale primary drying time based on these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; and because of the simple and minimalistic approach, it will also be a less capital-intensive path with minimal use of expensive drug substance/active material.
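
    The primary-drying estimate that FEM tools refine can be sketched with a quasi-steady heat balance: heat delivered through the vial drives sublimation at a rate q/ΔHs. All coefficients below are assumed round numbers, not PASSAGE inputs.

```python
# quasi-steady primary-drying estimate per vial (all values assumed)
Kv = 20.0          # vial heat transfer coefficient, W/(m^2 K)
Av = 3.8e-4        # vial cross-sectional area, m^2
T_shelf, T_prod = 263.15, 243.15   # shelf and product temperatures, K
dHs = 2.8e6        # heat of sublimation of ice, J/kg
m_ice = 1.0e-3     # initial ice mass per vial, kg

q = Kv * Av * (T_shelf - T_prod)        # heat flow into the vial, W
rate = q / dHs                           # sublimation rate, kg/s
t_dry = m_ice / rate / 3600.0            # primary drying time, h
print(round(t_dry, 1), "h")
```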

  13. Permutation approach, high frequency trading and variety of micro patterns in financial time series

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Ebrahimian, Mehran; Tahmooresi, Hamed

    2014-11-01

    The permutation approach is suggested as a method to investigate financial time series on micro scales. The method is used to see how high frequency trading in recent years has affected the micro patterns which may be seen in financial time series. Tick-to-tick exchange rates are considered as examples. It is seen that a variety of patterns evolve through time, and that the scale over which the target markets have no dominant patterns has decreased steadily over time with the emergence of higher frequency trading.
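
    A minimal sketch of the ordinal-pattern machinery behind the permutation approach: map each window of m ticks to its rank pattern and measure how evenly the patterns are populated. The order m and the test series are arbitrary choices.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of order m."""
    x = np.asarray(x)
    patterns = np.array([np.argsort(x[i:i + m]) for i in range(len(x) - m + 1)])
    codes = patterns @ (m ** np.arange(m))      # encode each pattern as an integer
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(factorial(m))

rng = np.random.default_rng(4)
noise = rng.standard_normal(5000)              # no dominant pattern -> entropy ~ 1
walk = np.cumsum(rng.standard_normal(5000))    # persistent -> entropy < 1
print(round(permutation_entropy(noise), 3), round(permutation_entropy(walk), 3))
```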

  14. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
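
    The block-preconditioning idea can be demonstrated on a toy 2x2 block system with SciPy: precondition GMRES with solves of the diagonal blocks. The system below is a generic stand-in, not the paper's Maxwell discretization, and exact LU subsolves stand in for multigrid.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# generic stiff 2x2 block system [[A, B], [-B^T, C]] (not a Maxwell discretization)
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, -1, 1], format="csc")
B = sp.identity(n, format="csc")
C = 4.0 * sp.identity(n, format="csc")
K = sp.bmat([[A, B], [-B.T, C]], format="csc")
b = np.ones(2 * n)

# block-diagonal preconditioner: LU solves of the diagonal blocks
Ainv, Cinv = spla.splu(A), spla.splu(C)
M = spla.LinearOperator((2 * n, 2 * n),
                        matvec=lambda r: np.concatenate([Ainv.solve(r[:n]),
                                                         Cinv.solve(r[n:])]))

x, info = spla.gmres(K, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - b))
```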

  15. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    Here, we report multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  16. Two time scale output feedback regulation for ill-conditioned systems

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1986-01-01

    Issues pertaining to the well-posedness of a two time scale approach to the output feedback regulator design problem are examined. An approximate quadratic performance index which reflects a two time scale decomposition of the system dynamics is developed. It is shown that, under mild assumptions, minimization of this cost leads to feedback gains providing a second-order approximation of optimal full system performance. A simplified approach to two time scale feedback design is also developed, in which gains are separately calculated to stabilize the slow and fast subsystem models. By exploiting the notion of combined control and observation spillover suppression, conditions are derived assuring that these gains will stabilize the full-order system. A sequential numerical algorithm is described which obtains output feedback gains minimizing a broad class of performance indices, including the standard LQ case. It is shown that the algorithm converges to a local minimum under nonrestrictive assumptions. This procedure is adapted to and demonstrated for the two time scale design formulations.
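
    A simplified sketch of the two time scale design idea: compute separate LQ gains for assumed slow and fast subsystem models and stack them into a composite gain. This uses full state feedback for brevity, whereas the paper treats output feedback with spillover suppression; all matrices are invented.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# assumed slow and fast subsystem models (invented numbers)
A_s = np.array([[0.0, 1.0], [-1.0, -0.2]]); B_s = np.array([[0.0], [1.0]])
A_f = np.array([[-8.0]]);                   B_f = np.array([[4.0]])

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)      # K = R^{-1} B^T P

K_slow = lqr_gain(A_s, B_s, np.eye(2), np.eye(1))   # stabilizes the slow model
K_fast = lqr_gain(A_f, B_f, np.eye(1), np.eye(1))   # stabilizes the fast model
K = np.hstack([K_slow, K_fast])                      # composite feedback gain
print(np.round(K, 3))
```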

  17. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
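
    For concreteness, a basic DFA-1 implementation (integrate, detrend in windows, fit the fluctuation-function slope), which is the operation whose frequency response the paper characterizes analytically.

```python
import numpy as np

def dfa(x, scales):
    """DFA-1 fluctuation function F(n) for the given window sizes."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # linear detrending
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(5)
x = rng.standard_normal(10_000)                   # white noise -> slope ~ 0.5
scales = np.unique(np.logspace(1, 3, 10).astype(int))
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("estimated scaling exponent:", round(alpha, 2))
```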

  18. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  19. Oil price and exchange rate co-movements in Asian countries: Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Hussain, Muntazir; Zebende, Gilney Figueira; Bashir, Usman; Donghong, Ding

    2017-01-01

    Most empirical literature investigates the relation between oil prices and exchange rates through different models. These models measure this relationship on two time scales (long and short term), and often fail to observe the co-movement of these variables at different time scales. We apply a detrended cross-correlation approach (DCCA) to investigate the co-movements of the oil price and exchange rate in 12 Asian countries. This model determines the co-movements of oil price and exchange rate at different time scales. The exchange rate and oil price time series exhibit a unit root problem, so their correlation and cross-correlation are very difficult to measure: the result becomes spurious when a periodic trend or unit root is present in these time series. This approach measures the possible cross-correlation at different time scales while controlling for the unit root problem. Our empirical results support the co-movements of oil prices and exchange rates, indicating a weak negative cross-correlation between oil price and exchange rate for most Asian countries included in our sample. The results have important monetary, fiscal, inflationary, and trade policy implications for these countries.
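
    A compact implementation of the DCCA cross-correlation coefficient (often written ρDCCA) used in the record; the two synthetic series below are built with a weak negative common factor simply to mimic the reported sign.

```python
import numpy as np

def rho_dcca(x, y, n):
    """DCCA cross-correlation coefficient at window size n."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(n)
    f2x = f2y = f2xy = 0.0
    for k in range(len(X) // n):
        sx, sy = X[k * n:(k + 1) * n], Y[k * n:(k + 1) * n]
        rx = sx - np.polyval(np.polyfit(t, sx, 1), t)   # detrended residuals
        ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
        f2x += (rx ** 2).mean(); f2y += (ry ** 2).mean(); f2xy += (rx * ry).mean()
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(6)
common = rng.standard_normal(4000)
oil = common + rng.standard_normal(4000)
fx  = -0.3 * common + rng.standard_normal(4000)   # weak negative co-movement
print([round(rho_dcca(oil, fx, n), 2) for n in (8, 32, 128)])
```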

  20. Characterizing the performance of ecosystem models across time scales: A spectral analysis of the North American Carbon Program site-level synthesis

    Treesearch

    Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng

    2011-01-01

    Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...

  1. Broadband Structural Dynamics: Understanding the Impulse-Response of Structures Across Multiple Length and Time Scales

    DTIC Science & Technology

    2010-08-18

    Spectral domain response is calculated; the time domain response is obtained through an inverse transform. Approach 4: WASABI (Wavelet Analysis of Structural Anomalies...). [Remainder of the record is a transform-flow diagram from the briefing slides, not recoverable as text.]

  2. Optimal Transport Destination for Ischemic Stroke Patients With Unknown Vessel Status: Use of Prehospital Triage Scores.

    PubMed

    Schlemm, Eckhard; Ebinger, Martin; Nolte, Christian H; Endres, Matthias; Schlemm, Ludwig

    2017-08-01

    Patients with acute ischemic stroke (AIS) and large vessel occlusion may benefit from direct transportation to an endovascular capable comprehensive stroke center (mothership approach) as opposed to direct transportation to the nearest stroke unit without endovascular therapy (drip and ship approach). The optimal transport strategy for patients with AIS and unknown vessel status is uncertain. The rapid arterial occlusion evaluation scale (RACE, scores ranging from 0 to 9, with higher scores indicating higher stroke severity) correlates with the National Institutes of Health Stroke Scale and was developed to identify patients with large vessel occlusion in a prehospital setting. We evaluate how the RACE scale can help to inform prehospital triage decisions for AIS patients. In a model-based approach, we estimate probabilities of good outcome (modified Rankin Scale score of ≤2 at 3 months) as a function of severity of stroke symptoms and transport times for the mothership approach and the drip and ship approach. We use these probabilities to obtain optimal RACE cutoff scores for different transfer time settings and combinations of treatment options (time-based eligibility for secondary transfer under the drip and ship approach, time-based eligibility for thrombolysis at the comprehensive stroke center under the mothership approach). In our model, patients with AIS are more likely to benefit from direct transportation to the comprehensive stroke center if they have more severe strokes. Values of the optimal RACE cutoff scores range from 0 (mothership for all patients) to >9 (drip and ship for all patients). Shorter transfer times and longer door-to-needle and needle-to-transfer (door out) times are associated with lower optimal RACE cutoff scores. Use of RACE cutoff scores that take into account transport times to triage AIS patients to the nearest appropriate hospital may lead to improved outcomes. Further studies should examine the feasibility of translation into clinical practice. © 2017 American Heart Association, Inc.

  3. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  4. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
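
    The distribution-function trick can be miniaturized: assume an exponential distribution of within-day intensity, push it through an infiltration-excess threshold analytically, and integrate to a daily runoff total. The wet fraction, infiltration capacity, and exponential shape are illustrative assumptions, not the paper's calibrated cdf.

```python
import numpy as np

def daily_runoff(daily_rain, f_cap, wet_frac=0.1):
    """Daily infiltration-excess runoff (mm) from a daily rainfall total (mm),
    assuming within-day intensities are exponentially distributed."""
    wet_hours = 24.0 * wet_frac          # assumed wet fraction of the day
    mu = daily_rain / wet_hours          # mean intensity while raining, mm/h
    # analytic E[max(i - f_cap, 0)] for i ~ Exp(mean mu), times wet duration
    return mu * np.exp(-f_cap / mu) * wet_hours

print(daily_runoff(daily_rain=30.0, f_cap=10.0))  # ~13.5 mm of runoff
# a model fed the day-average intensity (30/24 = 1.25 mm/h < f_cap) predicts zero
```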

  5. Dynamic correlations at different time-scales with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, T.; Aste, Tomaso

    2018-07-01

    We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
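
    The rolling-window half of the method is easy to sketch; the EMD step (extracting the oscillatory components) is not reproduced here, so the two synthetic series below simply stand in for same-scale components of two indices.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 3000
slow = np.sin(2 * np.pi * np.arange(n) / 500)     # shared slow oscillation
x = slow + 0.5 * rng.standard_normal(n)           # stand-ins for two indices'
y = slow + 0.5 * rng.standard_normal(n)           # components at one time-scale

def rolling_corr(a, b, w=250):
    """Pearson correlation over rolling windows of length w."""
    out = np.full(len(a), np.nan)
    for i in range(w, len(a) + 1):
        out[i - 1] = np.corrcoef(a[i - w:i], b[i - w:i])[0, 1]
    return out

rc = rolling_corr(x, y)
print("median rolling correlation:", round(np.nanmedian(rc), 2))
```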

  6. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly applications of polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, to go beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
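
    As a stand-in for diffusion wavelets, the sketch below builds the graph Laplacian of a short linear polymer and uses its smoothest eigenmodes as system-specific coarse degrees of freedom; bead count, mode count, and positions are arbitrary.

```python
import numpy as np

N = 16                                   # beads in a linear homopolymer
# graph Laplacian of the path graph (bonded neighbors only)
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0

w, V = np.linalg.eigh(L)
n_coarse = 4                             # keep the smoothest modes
coarse_basis = V[:, :n_coarse]           # system-specific coarse DOFs

# project bead positions onto the coarse basis and back (compression)
rng = np.random.default_rng(7)
positions = np.cumsum(rng.standard_normal((N, 3)), axis=0)
coarse = coarse_basis.T @ positions
reconstructed = coarse_basis @ coarse
print("relative error:", np.linalg.norm(positions - reconstructed) /
      np.linalg.norm(positions))
```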

  7. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. In this sense, we analyzed monthly time series for Latin American countries over the period from 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rate and stock price for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina have positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated on shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rates and stock prices, which were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology for measuring cross-correlation when dealing with non-stationarity of time series. Third, most studies have employed one or two time scales using co-integration and vector autoregressive approaches; not much is known about the co-movements at varying time scales between foreign exchange and stock markets. The ρDCCA coefficient facilitates the understanding of its explanatory depth.

  8. A comment on the use of flushing time, residence time, and age as transport time scales

    USGS Publications Warehouse

    Monsen, N.E.; Cloern, J.E.; Lucas, L.V.; Monismith, Stephen G.

    2002-01-01

    Applications of transport time scales are pervasive in biological, hydrologic, and geochemical studies yet these time scales are not consistently defined and applied with rigor in the literature. We compare three transport time scales (flushing time, age, and residence time) commonly used to measure the retention of water or scalar quantities transported with water. We identify the underlying assumptions associated with each time scale, describe procedures for computing these time scales in idealized cases, and identify pitfalls when real-world systems deviate from these idealizations. We then apply the time scale definitions to a shallow 378 ha tidal lake to illustrate how deviations between real water bodies and the idealized examples can result from: (1) non-steady flow; (2) spatial variability in bathymetry, circulation, and transport time scales; and (3) tides that introduce complexities not accounted for in the idealized cases. These examples illustrate that no single transport time scale is valid for all time periods, locations, and constituents, and no one time scale describes all transport processes. We encourage aquatic scientists to rigorously define the transport time scale when it is applied, identify the underlying assumptions in the application of that concept, and ask if those assumptions are valid in the application of that approach for computing transport time scales in real systems.
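
    The flushing-time definition is easy to make concrete for the idealized fully mixed case, where the e-folding residence time coincides with V/Q; the volume and through-flow below are assumed values loosely sized after a 378 ha, roughly 1 m deep lake.

```python
import numpy as np

# idealized fully mixed basin (CSTR); V and Q are assumptions
V, Q = 3.78e6, 4.4             # m^3 and m^3/s
t_flush = V / Q                 # flushing time = V/Q

# for steady flow and instant mixing, tracer mass decays as exp(-t / t_flush),
# so the e-folding residence time equals the flushing time in this idealization;
# in a real tidal lake the two diverge (unsteady flow, spatial variability)
print("flushing time (days):", round(t_flush / 86400.0, 1))
print("fraction left after one flushing time:", round(np.exp(-1.0), 3))
```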

  9. Comparing emerging and mature markets during times of crises: A non-extensive statistical approach

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Koohi Lai, Z.; Jafari, G. R.; Raei, R.; Tehrani, R.

    2013-07-01

    One of the important issues in finance and economics for both scholars and practitioners is to describe the behavior of markets, especially during times of crises. In this paper, we analyze the behavior of some mature and emerging markets with a Tsallis entropy framework that is a non-extensive statistical approach based on non-linear dynamics. During the past decade, this technique has been successfully applied to a considerable number of complex systems such as stock markets in order to describe the non-Gaussian behavior of these systems. In this approach, there is a parameter q, which is a measure of deviation from Gaussianity, that has proved to be a good index for detecting crises. We investigate the behavior of this parameter in different time scales for the market indices. The pattern of q for mature markets is seen to differ from that for emerging markets. The findings show the robustness of the stated approach in order to follow the market conditions over time. It is obvious that, in times of crises, q is much greater than in other times. In addition, the response of emerging markets to global events is delayed compared to that of mature markets, and tends to a Gaussian profile on increasing the scale. This approach could be very useful in application to risk and portfolio management in order to detect crises by following the parameter q in different time scales.
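
    A lightweight proxy for the entropic analysis: compute the Tsallis entropy S_q of return histograms at several aggregation scales. The paper estimates the parameter q itself from the data; here q is fixed and the heavy-tailed "crisis" sample is synthetic.

```python
import numpy as np

def tsallis_entropy(x, q=1.5, bins=50):
    """Tsallis entropy S_q = (1 - sum p^q) / (q - 1) of a sample's histogram."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(8)
calm = rng.standard_normal(5000)             # near-Gaussian returns
crisis = rng.standard_t(df=3, size=5000)     # heavy-tailed, crisis-like returns
for scale in (1, 4, 16):                     # aggregate returns over `scale` steps
    agg = lambda x: x[:len(x)//scale*scale].reshape(-1, scale).sum(axis=1)
    print(scale, round(tsallis_entropy(agg(calm)), 3),
                 round(tsallis_entropy(agg(crisis)), 3))
```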

  10. Time-sliced perturbation theory for large scale structure I: general formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  11. Time Scale Optimization and the Hunt for Astronomical Cycles in Deep Time Strata

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.

    2016-04-01

    A valuable attribute of astrochronology is the direct link between chronometer and climate change, providing a remarkable opportunity to constrain the evolution of the surficial Earth System. Consequently, the hunt for astronomical cycles in strata has spurred the development of a rich conceptual framework for climatic/oceanographic change, and has allowed exploration of the geologic record with unprecedented temporal resolution. Accompanying these successes, however, has been a persistent skepticism about appropriate astrochronologic testing and circular reasoning: how does one reliably test for astronomical cycles in stratigraphic data, especially when time is poorly constrained? From this perspective, it would seem that the merits and promise of astrochronology (e.g., a geologic time scale measured in ≤400 kyr increments) also serve as its Achilles heel, if the confirmation of such short rhythms defies rigorous statistical testing. To address these statistical challenges in astrochronologic testing, a new approach has been developed that (1) explicitly evaluates time scale uncertainty, (2) is resilient to common problems associated with spectrum confidence level assessment and 'multiple testing', and (3) achieves high statistical power under a wide range of conditions (it can identify astronomical cycles when present in data). Designated TimeOpt (for "time scale optimization"; Meyers 2015), the method employs a probabilistic linear regression model framework to investigate amplitude modulation and frequency ratios (bundling) in stratigraphic data, while simultaneously determining the optimal time scale. This presentation will review the TimeOpt method, and demonstrate how the flexible statistical framework can be further extended to evaluate (and optimize upon) complex sedimentation rate models, enhancing the statistical power of the approach, and addressing the challenge of unsteady sedimentation. Meyers, S. R. (2015), The evaluation of eccentricity-related amplitude modulation and bundling in paleoclimate data: An inverse approach for astrochronologic testing and time scale optimization, Paleoceanography, 30, doi:10.1002/2015PA002850.
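
    A toy version of the optimization idea behind such methods: scan candidate sedimentation rates and pick the one that maximizes spectral power at the target astronomical frequency. The real TimeOpt additionally tests amplitude modulation and frequency ratios; all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(14)
# synthetic stratigraphic series: a 405 kyr eccentricity-like cycle recorded at
# an unknown sedimentation rate (here 0.02 m/kyr), plus noise
depth = np.arange(0.0, 20.0, 0.02)                 # m
true_rate = 0.02                                    # m/kyr
series = (np.sin(2 * np.pi * (depth / true_rate) / 405.0)
          + 0.5 * rng.standard_normal(len(depth)))

target_f = 1.0 / 405.0                              # cycles per kyr
rates = np.linspace(0.005, 0.05, 91)                # candidate sedimentation rates
power = []
for r in rates:
    t = depth / r                                   # trial depth-to-time conversion
    c = np.exp(-2j * np.pi * target_f * t)          # power at the target frequency
    power.append(np.abs(np.sum(series * c)) ** 2)

best = rates[int(np.argmax(power))]
print("optimal sedimentation rate:", round(best, 3), "m/kyr")   # ~0.02
```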

  12. A comparative study of two approaches to analyse groundwater recharge, travel times and nitrate storage distribution at a regional scale

    NASA Astrophysics Data System (ADS)

    Turkeltaub, T.; Ascott, M.; Gooddy, D.; Jia, X.; Shao, M.; Binley, A. M.

    2017-12-01

    Understanding deep percolation, travel time processes and nitrate storage in the unsaturated zone at a regional scale is crucial for sustainable management of many groundwater systems. Recently, global hydrological models have been developed to quantify the water balance at such scales and beyond. However, the coarse spatial resolution of the global hydrological models can be a limiting factor when analysing regional processes. This study compares simulations of water flow and nitrate storage based on regional and global scale approaches. The first approach was applied over the Loess Plateau of China (LPC) to investigate the water fluxes and nitrate storage and travel time to the LPC groundwater system. Using raster maps of climate variables, land use data and soil parameters enabled us to determine fluxes by employing Richards' equation and the advection - dispersion equation. These calculations were conducted for each cell on the raster map in a multiple 1-D column approach. In the second approach, vadose zone travel times and nitrate storage were estimated by coupling groundwater recharge (PCR-GLOBWB) and nitrate leaching (IMAGE) models with estimates of water table depth and unsaturated zone porosity. The simulation results of the two methods indicate similar spatial groundwater recharge, nitrate storage and travel time distribution. Intensive recharge rates are located mainly at the south central and south west parts of the aquifer's outcrops. Particularly low recharge rates were simulated in the top central area of the outcrops. However, there are significant discrepancies between the simulated absolute recharge values, which might be related to the coarse scale that is used in the PCR-GLOBWB model, leading to smoothing of the recharge estimations. Both models indicated large nitrate inventories in the south central and south west parts of the aquifer's outcrops and the shortest travel times in the vadose zone are in the south central and east parts of the outcrops. Our results suggest that, for the LPC at least, global scale models might be useful for highlighting the locations with higher recharge rates potential and nitrate contamination risk. Global modelling simulations appear ideal as a primary step in recognizing locations which require investigations at the plot, field and local scales.
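
    The vadose-zone travel time used in such comparisons is often approximated with a piston-flow estimate, travel time = depth x water content / recharge; the depths, water content, and recharge rates below are assumed illustrative values, not LPC results.

```python
import numpy as np

# piston-flow travel time through the unsaturated zone
depth = np.array([20.0, 50.0, 80.0])      # m, assumed water-table depths
theta = 0.15                               # average volumetric water content
recharge = np.array([0.10, 0.05, 0.02])   # m/yr, assumed recharge rates

travel_time = depth * theta / recharge     # years
print(np.round(travel_time, 0))            # deeper and drier -> centuries
```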

  13. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    PubMed

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale, starting from Sn@C nanospheres, for the first time.

  14. A wavelet based approach to measure and manage contagion at different time scales

    NASA Astrophysics Data System (ADS)

    Berger, Theo

    2015-10-01

    We decompose financial return series of US stocks into different time scales with respect to different market regimes. First, we examine the dependence structure of the decomposed financial return series and analyze the impact of the current financial crisis on contagion and changing interdependencies as well as upper and lower tail dependence for different time scales. Second, we demonstrate to what extent the information in different time scales can be used in the context of portfolio management. As a result, minimizing the variance of short-run noise outperforms a portfolio that minimizes the variance of the return series.
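
    The portfolio ingredient can be sketched directly: estimate the covariance of the short-run noise component and form classic minimum-variance weights from it. A moving-average residual stands in for the wavelet detail series here, and the return data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
n_assets, n_obs = 5, 1000
returns = rng.standard_normal((n_obs, n_assets)) @ np.diag([1, 1.2, 0.8, 1.5, 1.0])

# stand-in for the wavelet detail (short-run noise) component: residual after
# subtracting a moving average, i.e. the high-frequency part of each series
window = 20
smooth = np.vstack([np.convolve(returns[:, j], np.ones(window) / window, 'same')
                    for j in range(n_assets)]).T
noise = returns - smooth

def min_var_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

w_noise = min_var_weights(np.cov(noise, rowvar=False))    # minimize short-run noise
w_raw   = min_var_weights(np.cov(returns, rowvar=False))  # classic minimum variance
print(np.round(w_noise, 3), np.round(w_raw, 3))
```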

  15. Relativistic initial conditions for N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fidler, Christian; Tram, Thomas; Crittenden, Robert

    2017-06-01

    Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple 'N-body gauge' for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable 'forwards approach' for such cases.
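
    The back-scaling factor itself is one line in the Einstein-de Sitter limit, where the growth factor D is proportional to a: P(k) is rescaled by (D_ini/D_0)^2. The spectrum below is a crude stand-in; in practice P(k, z=0) comes from a Boltzmann code, as the record notes.

```python
import numpy as np

k = np.logspace(-3, 1, 200)            # wavenumbers, h/Mpc
P0 = k / (1.0 + (k / 0.02) ** 3)       # crude stand-in for P(k, z=0)

z_ini = 49.0                           # starting redshift of the simulation
D_ratio = 1.0 / (1.0 + z_ini)          # EdS growth: D ~ a, so D_ini/D_0 = a_ini
P_ini = P0 * D_ratio ** 2              # back-scaled initial power spectrum

disp_scale = D_ratio                   # per-mode amplitude (displacement) factor
print("power suppressed by:", round(D_ratio ** 2, 6))
```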

  16. Cosmogenic radionuclides as a synchronisation tool - present status

    NASA Astrophysics Data System (ADS)

    Muscheler, Raimund; Adolphi, Florian; Mekhaldi, Florian; Mellström, Anette; Svensson, Anders; Aldahan, Ala; Possnert, Göran

    2014-05-01

    Changes in the flux of galactic cosmic rays into Earth's atmosphere produce variations in the production rates of cosmogenic radionuclides. The resulting globally synchronous signal in cosmogenic radionuclide records can be used to compare time scales and synchronise climate records. The most prominent example is the 14C wiggle match dating approach where variations in the atmospheric 14C concentration are used to match climate records and the tree-ring based part of the 14C calibration record. This approach can be extended to other cosmogenic radionuclide records such as 10Be time series provided that the different geochemical behaviour of 10Be and 14C is taken into account. Here we will present some recent results that illustrate the potential of using cosmogenic radionuclide records for comparing and synchronising different time scales. The focus will be on the last 50000 years, where we will show examples of how the geomagnetic field, solar activity and unusual short-term cosmic ray changes can be used for comparing ice core, tree ring and sediment time scales. We will discuss some unexpected offsets between the Greenland ice core and 14C time scales, and we will examine how far back in time solar-induced 10Be and 14C variations can presently be used to reliably synchronise ice core and 14C time scales.
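
    Wiggle matching reduces, at its simplest, to finding the lag that maximizes the correlation between two production-rate proxies; the synthetic 10Be and 14C series below carry a known 7-step offset so the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(11)
production = rng.standard_normal(500).cumsum()        # common production signal
be10 = production + 0.5 * rng.standard_normal(500)    # ice-core 10Be proxy
c14 = np.roll(production, 7) + 0.5 * rng.standard_normal(500)  # offset 14C proxy

def best_lag(a, b, max_lag=20):
    """Lag of b relative to a maximizing the Pearson correlation."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(a[max_lag:-max_lag],
                         np.roll(b, -k)[max_lag:-max_lag])[0, 1] for k in lags]
    return list(lags)[int(np.argmax(corrs))]

print("estimated time-scale offset:", best_lag(be10, c14), "steps")   # 7
```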

  17. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.
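
    The flavor of the multiple scales method is captured by the weakly damped oscillator, where two-timing gives a fast oscillation modulated by a slow exponential envelope; the sketch checks that prediction numerically. The small parameter is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05                      # small parameter separating fast and slow scales
t = np.linspace(0.0, 100.0, 2000)

# full "fast" dynamics: x'' + eps*x' + x = 0, x(0) = 1, x'(0) = 0
sol = solve_ivp(lambda t, y: [y[1], -eps * y[1] - y[0]],
                (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8)

# two-timing result: the amplitude evolves on the slow scale T1 = eps*t while
# the oscillation lives on the fast scale T0 = t:  x ~ exp(-eps*t/2) * cos(t)
x_mms = np.exp(-eps * t / 2) * np.cos(t)

print("max deviation:", np.abs(sol.y[0] - x_mms).max())   # O(eps)
```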

  18. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copeland, Alex; Brown, C. Titus

    2011-10-13

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  19. DOE JGI Quality Metrics; Approaches to Scaling and Improving Metagenome Assembly (Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Copeland, Alex; Brown, C. Titus

    2018-04-27

    DOE JGI's Alex Copeland on "DOE JGI Quality Metrics" and Michigan State University's C. Titus Brown on "Approaches to Scaling and Improving Metagenome Assembly" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  20. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  1. How High Frequency Trading Affects a Market Index

    PubMed Central

    Kenett, Dror Y.; Ben-Jacob, Eshel; Stanley, H. Eugene; Gur-Gershgoren, Gitit

    2013-01-01

    The relationship between a market index and its constituent stocks is complicated. While an index is a weighted average of its constituent stocks, when the investigated time scale is one day or longer the index has been found to have a stronger effect on the stocks than vice versa. We explore how this interaction changes in short time scales using high frequency data. Using a correlation-based analysis approach, we find that in short time scales stocks have a stronger influence on the index. These findings have implications for high frequency trading and suggest that the price of an index should be published on shorter time scales, as close as possible to those of the actual transaction time scale. PMID:23817553
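
    A hedged sketch of the correlation-based, scale-dependent analysis described above (not the authors' code; the synthetic tick data, equal weighting, and one-step lead-lag convention are all assumptions for illustration):

```python
# Compare lead-lag correlations between an index and one constituent stock
# after aggregating returns to different time scales tau.
import numpy as np

rng = np.random.default_rng(0)
n_ticks, n_stocks = 20_000, 10
common = rng.normal(0, 1, n_ticks)                    # shared market factor
stocks = 0.5 * common[:, None] + rng.normal(0, 1, (n_ticks, n_stocks))
index = stocks.mean(axis=1)                           # equal-weighted index

def scaled_returns(x, tau):
    """Aggregate tick-level returns into non-overlapping windows of length tau."""
    m = len(x) // tau
    return x[: m * tau].reshape(m, tau).sum(axis=1)

for tau in (1, 10, 100):
    r_idx = scaled_returns(index, tau)
    r_stk = scaled_returns(stocks[:, 0], tau)
    stock_leads = np.corrcoef(r_stk[:-1], r_idx[1:])[0, 1]
    index_leads = np.corrcoef(r_idx[:-1], r_stk[1:])[0, 1]
    print(f"tau={tau:4d}  stock leads: {stock_leads:+.3f}  index leads: {index_leads:+.3f}")
```

    On real high-frequency data the stock-leads-index correlation would be expected to dominate at the smallest scales; the i.i.d. toy data here only exercises the machinery.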

  2. Quantum-shutter approach to tunneling time scales with wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Norifumi; Garcia-Calderon, Gaston; Villavicencio, Jorge

    2005-07-15

    The quantum-shutter approach to tunneling time scales [G. Garcia-Calderon and A. Rubio, Phys. Rev. A 55, 3361 (1997)], which uses a cutoff plane wave as the initial condition, is extended to consider a certain type of wave packet initial condition. An analytical expression for the time-evolved wave function is derived. The time-domain resonance, the peaked structure of the probability density (as a function of time) at the exit of the barrier, originally found with the cutoff plane wave initial condition, is studied with the wave packet initial conditions. It is found that the time-domain resonance is not very sensitive to the width of the packet when the transmission process occurs in the tunneling regime.

  3. Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen

    2016-03-31

    In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
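
    A minimal receding-horizon sketch in the spirit of the online moving horizon optimization strategy described above. All problem data (costs, limits, horizon length) are invented assumptions, and the real coordination problem includes frequency dynamics and primary-control coupling that are omitted here:

```python
# Toy receding-horizon dispatch: a fuel generator (cost 1 per unit) plus
# storage must meet net load; optimize over a short window, apply only the
# first decision, then roll the horizon forward.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
T, H = 48, 6                                  # total steps, look-ahead horizon
net_load = np.clip(1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / 24)
                   + 0.2 * rng.normal(size=T), 0.05, None)
soc, soc_max, p_max = 1.0, 2.0, 0.5           # storage state and power limit

schedule = []
for t in range(T):
    h = min(H, T - t)
    c = np.r_[np.ones(h), np.zeros(h)]        # variables: g[0:h], b[0:h]
    A_eq = np.c_[np.eye(h), np.eye(h)]        # g + b meets net load each step
    b_eq = net_load[t:t + h]
    A_ub = np.c_[np.zeros((h, h)), np.tril(np.ones((h, h)))]
    b_ub = np.full(h, soc)                    # cumulative discharge <= stored
    bounds = [(0, None)] * h + [(-p_max, p_max)] * h
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    g0, b0 = res.x[0], res.x[h]
    soc = min(soc_max, soc - b0)              # apply the first move only
    schedule.append((g0, b0))
print("first 5 decisions (gen, storage):", np.round(np.array(schedule[:5]), 2))
```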

  4. An efficient and reliable predictive method for fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-13

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  5. An efficient and reliable predictive method for fluidized bed simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Benyahia, Sofiane; Li, Tingwen

    2017-06-29

    In past decades, the continuum approach was the only practical technique to simulate large-scale fluidized bed reactors because discrete approaches suffer from the cost of tracking huge numbers of particles and their collisions. This study significantly improved the computation speed of discrete particle methods in two steps: First, the time-driven hard-sphere (TDHS) algorithm with a larger time-step is proposed, allowing a speedup of 20-60 times; second, the number of tracked particles is reduced by adopting the coarse-graining technique, gaining an additional 2-3 orders of magnitude speedup of the simulations. A new velocity correction term was introduced and validated in TDHS to solve the over-packing issue in dense granular flow. The TDHS was then coupled with the coarse-graining technique to simulate a pilot-scale riser. The simulation results compared well with experiment data and proved that this new approach can be used for efficient and reliable simulations of large-scale fluidized bed systems.

  6. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

    Contents excerpt: 5.3 A Multiscale Approach to Scale-Space Analysis. … In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in … stochastic treatment or interpretation of the diffusion). In addition, unless a specific stopping time is known to be adequate, the resulting evolution …

  7. Advances in time-scale algorithms

    NASA Technical Reports Server (NTRS)

    Stein, S. R.

    1993-01-01

    The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for time keeping in mission critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable and simple mean time scales are not used. First, previous time-scale developments are reviewed and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
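
    The two-clock thought experiment can be made concrete in a few lines. This is an illustrative sketch under assumed frequency offsets, not any of the AT1, TAI, or KAS algorithms:

```python
# Two ideal clocks with equal and opposite frequency offsets: their simple
# mean keeps perfect time until one clock fails, at which point the naive
# scale inherits the survivor's full offset.
import numpy as np

t = np.arange(0.0, 10.0)                 # nominal epochs (s)
clock_a = +1e-6 * t                      # time error of clock A (+1 us/s)
clock_b = -1e-6 * t                      # time error of clock B (-1 us/s)

mean_scale = (clock_a + clock_b) / 2
print(mean_scale)                        # all zeros: the mean has zero frequency

alive = t < 5.0                          # clock B dies at t = 5 s
naive = np.where(alive, (clock_a + clock_b) / 2, clock_a)
print(np.diff(naive))                    # time step and rate jump at the failure
```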

  8. Development of internalizing problems from adolescence to emerging adulthood: Accounting for heterotypic continuity with vertical scaling.

    PubMed

    Petersen, Isaac T; Lindhiem, Oliver; LeBeau, Brandon; Bates, John E; Pettit, Gregory S; Lansford, Jennifer E; Dodge, Kenneth A

    2018-03-01

    Manifestations of internalizing problems, such as specific symptoms of anxiety and depression, can change across development, even if individuals show strong continuity in rank-order levels of internalizing problems. This illustrates the concept of heterotypic continuity, and raises the question of whether common measures might be construct-valid for one age but not another. This study examines mean-level changes in internalizing problems across a long span of development at the same time as accounting for heterotypic continuity by using age-appropriate, changing measures. Internalizing problems from age 14-24 were studied longitudinally in a community sample (N = 585), using Achenbach's Youth Self-Report (YSR) and Young Adult Self-Report (YASR). Heterotypic continuity was evaluated with an item response theory (IRT) approach to vertical scaling, linking different measures over time to be on the same scale, as well as with a Thurstone scaling approach. With vertical scaling, internalizing problems peaked in mid-to-late adolescence and showed a group-level decrease from adolescence to early adulthood, a change that would not have been seen with the approach of using only age-common items. Individuals' trajectories were sometimes different than would have been seen with the common-items approach. Findings support the importance of considering heterotypic continuity when examining development and vertical scaling to account for heterotypic continuity with changing measures.

  9. Thermodynamics constrains allometric scaling of optimal development time in insects.

    PubMed

    Dillon, Michael E; Frazier, Melanie R

    2013-01-01

    Development time is a critical life-history trait that has profound effects on organism fitness and on population growth rates. For ectotherms, development time is strongly influenced by temperature and is predicted to scale with body mass to the quarter power based on 1) the ontogenetic growth model of the metabolic theory of ecology which describes a bioenergetic balance between tissue maintenance and growth given the scaling relationship between metabolism and body size, and 2) numerous studies, primarily of vertebrate endotherms, that largely support this prediction. However, few studies have investigated the allometry of development time among invertebrates, including insects. Abundant data on development of diverse insects provides an ideal opportunity to better understand the scaling of development time in this ecologically and economically important group. Insects develop more quickly at warmer temperatures until reaching a minimum development time at some optimal temperature, after which development slows. We evaluated the allometry of insect development time by compiling estimates of minimum development time and optimal developmental temperature for 361 insect species from 16 orders with body mass varying over nearly 6 orders of magnitude. Allometric scaling exponents varied with the statistical approach: standardized major axis regression supported the predicted quarter-power scaling relationship, but ordinary and phylogenetic generalized least squares did not. Regardless of the statistical approach, body size alone explained less than 28% of the variation in development time. Models that also included optimal temperature explained over 50% of the variation in development time. Warm-adapted insects developed more quickly, regardless of body size, supporting the "hotter is better" hypothesis that posits that ectotherms have a limited ability to evolutionarily compensate for the depressing effects of low temperatures on rates of biological processes. The remaining unexplained variation in development time likely reflects additional ecological and evolutionary differences among insect species.
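
    A small sketch of why the fitted exponent depends on the statistical approach: with scatter in both log mass and log development time, ordinary least squares (OLS) attenuates the slope, while standardized major axis (SMA) regression takes the ratio of standard deviations. The data, noise levels, and true exponent below are invented assumptions:

```python
# OLS vs SMA slopes on the same noisy log-log allometric data.
import numpy as np

rng = np.random.default_rng(11)
log_mass_true = rng.uniform(-3, 3, 361)             # ~6 orders of magnitude
log_time_true = 0.25 * log_mass_true + 1.0          # assumed quarter-power law
log_mass = log_mass_true + rng.normal(0, 0.5, 361)  # scatter in both variables
log_time = log_time_true + rng.normal(0, 0.5, 361)

r = np.corrcoef(log_mass, log_time)[0, 1]
ols_slope = r * log_time.std() / log_mass.std()           # attenuated by x-noise
sma_slope = np.sign(r) * log_time.std() / log_mass.std()  # SMA: sd ratio
print(f"OLS exponent: {ols_slope:.3f}   SMA exponent: {sma_slope:.3f}")
```

    The two estimators diverge on identical data, which is the abstract's point that the allometric scaling exponent varies with the statistical approach.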

  10. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic. They may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
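
    A hedged sketch of the core idea, fitting a two-segment regression in log-log space and scanning for the breakpoint that minimizes the residual sum of squares; the fluctuation function is synthetic, and the paper's inference machinery (confidence intervals, model selection) is omitted:

```python
# Locate a crossover scale in a synthetic log-log fluctuation function by
# exhaustive breakpoint search over two-piece linear fits.
import numpy as np

log_s = np.log(np.arange(4, 200))
true_break = np.log(40.0)                       # assumed crossover at s = 40
log_F = np.where(log_s < true_break,
                 0.8 * log_s,
                 0.8 * true_break + 0.3 * (log_s - true_break))
log_F += np.random.default_rng(1).normal(0, 0.02, log_s.size)

def sse(x, y):
    """Residual sum of squares of a straight-line fit."""
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

best = min(range(5, len(log_s) - 5),
           key=lambda k: sse(log_s[:k], log_F[:k]) + sse(log_s[k:], log_F[k:]))
print("estimated crossover scale:", np.exp(log_s[best]))
```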

  11. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  12. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  13. Spatio-temporal hierarchy in the dynamics of a minimalist protein model

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen

    2013-12-01

    A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.

  14. Return Intervals Approach to Financial Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    Financial fluctuations play a key role in financial markets studies. A new approach focusing on properties of return intervals can help to achieve a better understanding of the fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of the return intervals is well approximated by a scaling form over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from a scaling law, which are due to the nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which is related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can catch the memory effect but only fBm shows a good scaling in the return interval distribution.
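
    The return-interval construction itself is compact; below is a sketch on heavy-tailed synthetic "volatility" (all parameters are assumptions, not the stock data analyzed above):

```python
# Return intervals: times between successive volatilities above a threshold q,
# with a rough check of the collapse of r / <r> across thresholds.
import numpy as np

rng = np.random.default_rng(2)
vol = np.abs(rng.standard_t(df=3, size=100_000))   # heavy-tailed toy volatility

def return_intervals(v, q):
    exceed = np.flatnonzero(v > q)                 # indices of exceedances
    return np.diff(exceed)                         # gaps between them

for q in np.quantile(vol, [0.90, 0.95, 0.99]):
    r = return_intervals(vol, q)
    tail = np.mean(r > 2 * r.mean())               # P(r / <r> > 2)
    print(f"q={q:5.2f}  mean interval={r.mean():7.1f}  P(r/<r> > 2)={tail:.3f}")
```

    If the scaled distributions collapse, the tail probability at fixed r / <r> should be roughly threshold-independent, which is the scaling the abstract reports.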

  15. Biointerface dynamics--Multi scale modeling considerations.

    PubMed

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e. a long time scale (cell growth time) and a short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, on a range of time and space scales, which have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing polymer matrix resistance stress generation within the interface and, on that basis, optimizing cell growth.

  16. The proximal-to-distal sequence in upper-limb motions on multiple levels and time scales.

    PubMed

    Serrien, Ben; Baeyens, Jean-Pierre

    2017-10-01

    The proximal-to-distal sequence is a phenomenon that can be observed in a large variety of motions of the upper limbs in both humans and other mammals. The mechanisms behind this sequence are not completely understood, and motor control theories able to explain this phenomenon are currently incomplete. The aim of this narrative review is to take a theoretical constraints-led approach to the proximal-to-distal sequence and provide a broad multidisciplinary overview of relevant literature. This sequence exists at multiple levels (brain, spine, muscles, kinetics and kinematics) and on multiple time scales (motion, motor learning and development, growth and possibly even evolution). We hypothesize that the proximodistal spatiotemporal direction on each time scale and level provides part of the organismic constraints that guide the dynamics at the other levels and time scales. The constraints-led approach in this review may serve as a first step towards integration of evidence and a framework for further experimentation to reveal the dynamics of the proximal-to-distal sequence.

  17. Dynamic structural disorder in supported nanoscale catalysts

    NASA Astrophysics Data System (ADS)

    Rehr, J. J.; Vila, F. D.

    2014-04-01

    We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.

  18. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  19. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights on the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach basically consisted in (1) decomposing both signals (the SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting.
In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
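
    A minimal sketch of the multiresolution ESD idea, assuming the PyWavelets package (pywt) and a single toy predictor series standing in for the SLP field: decompose predictor and predictand, fit one regression per scale, and sum the scale-wise predictions.

```python
# Scale-wise regression after wavelet multiresolution decomposition, compared
# with a single global regression. Series, wavelet, and levels are toy choices.
import numpy as np
import pywt  # PyWavelets, assumed available

def mra_components(x, wavelet="db4", level=4):
    """Split a series into additive components, one per wavelet scale."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for k in range(len(coeffs)):
        sel = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        comps.append(pywt.waverec(sel, wavelet)[: len(x)])
    return comps  # approximation + details; they sum back to x

rng = np.random.default_rng(4)
n = 512
t = np.arange(n)
slp = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 8) \
      + rng.normal(0, 0.3, n)
flow = 2.0 * np.sin(2 * np.pi * t / 64) - 0.2 * np.sin(2 * np.pi * t / 8) \
       + rng.normal(0, 0.3, n)

recon = np.zeros(n)
for xs, ys in zip(mra_components(slp), mra_components(flow)):
    a = (xs @ ys) / (xs @ xs)        # one (no-intercept) regression per scale
    recon += a * xs

a_glob = (slp @ flow) / (slp @ slp)  # single global regression for comparison
print("corr, scale-wise model:", np.corrcoef(recon, flow)[0, 1])
print("corr, single model:    ", np.corrcoef(a_glob * slp, flow)[0, 1])
```

    The fast component's sign flip between predictor and predictand is recovered by the scale-wise fits but is invisible to the single global regression, mirroring the improvement reported above.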

  20. Network rewiring dynamics with convergence towards a star network

    PubMed Central

    Dick, G.; Parry, M.

    2016-01-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440–442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach. PMID:27843396

  1. Network rewiring dynamics with convergence towards a star network.

    PubMed

    Whigham, P A; Dick, G; Parry, M

    2016-10-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440-442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed size networks that transitions from regular, through small-world and scale-free to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.

  2. Mesoscale Models of Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Boghosian, Bruce M.; Hadjiconstantinou, Nicolas G.

    During the last half century, enormous progress has been made in the field of computational materials modeling, to the extent that in many cases computational approaches are used in a predictive fashion. Despite this progress, modeling of general hydrodynamic behavior remains a challenging task. One of the main challenges stems from the fact that hydrodynamics manifests itself over a very wide range of length and time scales. On one end of the spectrum, one finds the fluid's "internal" scale characteristic of its molecular structure (in the absence of quantum effects, which we omit in this chapter). On the other end, the "outer" scale is set by the characteristic sizes of the problem's domain. The resulting scale separation or lack thereof, as well as the existence of intermediate scales, are key to determining the optimal approach. Successful treatments require a judicious choice of the level of description, which is a delicate balancing act between the conflicting requirements of fidelity and manageable computational cost: a coarse description typically requires models for underlying processes occurring at smaller length and time scales; on the other hand, a fine-scale model will incur a significantly larger computational cost.

  3. IMPLEMENTATION OF FIRST-PASSAGE TIME APPROACH FOR OBJECT KINETIC MONTE CARLO SIMULATIONS OF IRRADIATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandipati, Giridhar; Setyawan, Wahyu; Heinisch, Howard L.

    2014-06-30

    The objective of the work is to implement a first-passage time (FPT) approach to deal with very fast 1D diffusing SIA clusters in KSOME (kinetic simulations of microstructural evolution) [1] to achieve longer time-scales during irradiation damage simulations. The goal is to develop FPT-KSOME, which has the same flexibility as KSOME.
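
    To illustrate why a first-passage-time treatment pays off for fast 1D diffusers, the sketch below measures the escape time of a brute-force hopping walker and compares it with the diffusive estimate that an FPT scheme would sample directly, in a single event, instead of the inner hop loop (toy parameters; not the KSOME implementation):

```python
# Brute-force escape of a continuous-time 1D hopper vs the analytic mean
# escape time; an FPT-KMC scheme replaces the whole inner loop with one draw.
import numpy as np

rng = np.random.default_rng(9)

def brute_force_escape(L):
    """Hop a walker from the center of 2L+1 sites until |x| exceeds L."""
    x, t = 0, 0.0
    while abs(x) <= L:
        t += rng.exponential(1.0)        # unit hop rate
        x += rng.choice([-1, 1])
    return t

L = 20
times = [brute_force_escape(L) for _ in range(1000)]
# For an unbiased unit-rate hopper, the mean escape time is (L+1)^2 hops.
print("simulated mean escape time :", round(float(np.mean(times)), 1))
print("analytic estimate (L+1)^2  :", (L + 1) ** 2)
```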

  4. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective for measuring the time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate of the irreversibility of time series and is more effective at distinguishing irreversible from reversible stochastic processes. We also use this approach to extract the multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing periods of financial crisis from plateau periods. In addition, the separation of Asian stock indexes from the other indexes becomes clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach when detecting time irreversibility.
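
    A sketch of the directed horizontal-visibility-graph irreversibility measure this line of work builds on: compare the out-degree (toward the future) and in-degree (toward the past) distributions via a KL divergence. The naive construction, test series, and smoothing constant are illustrative assumptions, and the paper's phase-space reconstruction step is omitted:

```python
# Directed HVG degrees and a KL-divergence irreversibility index for a
# reversible series (white noise) and an irreversible one (logistic map).
import numpy as np

def hvg_degrees(x):
    n = len(x)
    k_out = np.zeros(n, dtype=int)       # links toward the future
    k_in = np.zeros(n, dtype=int)        # links toward the past
    for i in range(n - 1):
        m = -np.inf                      # running max of values between i and j
        for j in range(i + 1, n):
            if m < min(x[i], x[j]):      # i and j "see" each other horizontally
                k_out[i] += 1
                k_in[j] += 1
            m = max(m, x[j])
            if m >= x[i]:                # nothing further is visible from i
                break
    return k_out, k_in

def kld(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(5)
noise = rng.normal(size=2000)
logi = np.empty(2000)
logi[0] = 0.4
for t in range(1999):
    logi[t + 1] = 4.0 * logi[t] * (1.0 - logi[t])   # fully chaotic logistic map

for name, series in [("white noise", noise), ("logistic map", logi)]:
    k_out, k_in = hvg_degrees(series)
    kmax = int(max(k_out.max(), k_in.max()))
    p = np.bincount(k_out, minlength=kmax + 1) / len(series)
    q = np.bincount(k_in, minlength=kmax + 1) / len(series)
    print(f"{name:12s}  D_KL(out||in) = {kld(p, q):.4f}")
```

    The reversible series should give a divergence near zero, while the chaotic map gives a clearly positive value.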

  5. Compression based entropy estimation of heart rate variability on multiple time scales.

    PubMed

    Baumert, Mathias; Voss, Andreas; Javorka, Michal

    2013-01-01

    Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
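
    A self-contained sketch of the pipeline described above: coarse-grain the RR series, binarize, estimate a normalized Lempel-Ziv complexity, and compare against a shuffled surrogate. The toy tachogram and the simplified LZ parser are assumptions, not the study's exact implementation:

```python
# Multiscale compression-complexity estimate of a toy RR series.
import numpy as np

def coarse_grain(x, scale):
    m = len(x) // scale
    return x[: m * scale].reshape(m, scale).mean(axis=1)

def lz_complexity(bits):
    """Simplified O(n^2) Lempel-Ziv phrase count of a 0/1 sequence."""
    s = "".join(map(str, bits))
    i, count, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        while j <= n and s[i:j] in s[:i]:   # extend phrase while already seen
            j += 1
        count += 1
        i = j
    return count

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * np.cumsum(rng.normal(0, 0.1, 3000))   # toy RR series (s)
for scale in (1, 2, 4, 8):
    cg = coarse_grain(rr, scale)
    bits = (cg > np.median(cg)).astype(int)
    surrogate = rng.permutation(bits)
    norm = len(bits) / np.log2(len(bits))                # LZ normalization
    print(f"scale={scale}:  C(data)={lz_complexity(bits)/norm:.3f}"
          f"  C(shuffled)={lz_complexity(surrogate)/norm:.3f}")
```

    As in the study, the structured series compresses much better (lower normalized complexity) than its shuffled surrogate at every scale.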

  6. New analytic results for speciation times in neutral models.

    PubMed

    Gernhard, Tanja

    2008-05-01

    In this paper, we investigate the standard Yule model, and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way, as opposed to the common simulation approach, for calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable if no time scale is available for the reconstructed phylogenetic trees. A missing time scale could be due to supertree methods, morphological data, or molecular data which violates the molecular clock. Our analytic approach is, in particular, useful for the model with extinction, since simulations of birth-death processes which are conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.

  7. Spatio-temporal Granger causality: a new framework

    PubMed Central

    Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng

    2015-01-01

    That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
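
    The temporal-resolution point can be checked numerically with a plain bivariate VAR(1): Granger causality estimated on progressively down-sampled copies of the same simulated system shrinks as the resolution coarsens. Everything below is an illustrative assumption, not the authors' spatio-temporal estimator:

```python
# Granger causality x -> y (log variance ratio of restricted vs full one-lag
# regressions) at several temporal resolutions of the same coupled system.
import numpy as np

rng = np.random.default_rng(10)
n = 20_000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def granger_xy(x, y):
    def rss(X, target):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r
    full = np.column_stack([y[:-1], x[:-1]])     # y's own past plus x's past
    restricted = y[:-1, None]                    # y's own past only
    return np.log(rss(restricted, y[1:]) / rss(full, y[1:]))

for step in (1, 2, 5, 10):
    print(f"down-sampling {step:2d}: GC x->y = {granger_xy(x[::step], y[::step]):.4f}")
```

    The monotone decrease with coarser sampling is the coarse-graining information loss that the framework above formalizes.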

  8. Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise

    PubMed Central

    Murakami, Hisashi

    2018-01-01

    Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small subgroups, and high alignment seems to preclude the shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is a multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions.

  9. Electron acceleration by an obliquely propagating electromagnetic wave in the regime of validity of the Fokker-Planck-Kolmogorov approach

    NASA Technical Reports Server (NTRS)

    Hizanidis, Kyriakos; Vlahos, L.; Polymilis, C.

    1989-01-01

    The relativistic motion of an ensemble of electrons in an intense monochromatic electromagnetic wave propagating obliquely in a uniform external magnetic field is studied. The problem is formulated from the viewpoint of Hamiltonian theory and the Fokker-Planck-Kolmogorov approach analyzed by Hizanidis (1989), leading to a one-dimensional diffusive acceleration along paths of constant zeroth-order generalized Hamiltonian. For values of the wave amplitude and the propagation angle inside the analytically predicted stochastic region, the numerical results suggest that the diffusion process proceeds in stages. In the first stage, the electrons are accelerated to relatively high energies by sampling the first few overlapping resonances one by one. During that stage, the ensemble-averaged square deviations of the variables involved scale quadratically with time. During the second stage, they scale linearly with time. For much longer times, deviation from linear scaling slowly sets in.

  10. Transition Manifolds of Complex Metastable Systems: Theory and Data-Driven Computation of Effective Dynamics.

    PubMed

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-01-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  11. Response to Comment on "Does the Earth Have an Adaptive Infrared IRIS?"

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Chou, Ming-Dah; Lindzen, Richard S.; Hou, Arthur Y.

    2001-01-01

    Harrison's (2001) Comment on the methodology in Lindzen et al. (2001) has prompted re-examination of several aspects of the study. Probably the most significant disagreement in our conclusions is due to our different approaches to minimizing the influence of long-time-scale variations in the variables A and T on the results. Given the strength of the annual cycle and the 20-month period covered by the data, we believe that removing monthly means is a better approach to minimizing the long-time-scale behavior of the data than removal of the linear trend, which might actually add spurious long-time-scale variability into the modified data. We have also indicated how our methods of establishing statistical significance differ. More definitive conclusions may only be possible after more data have been analyzed, but we feel that our results are robust enough to encourage further study of this phenomenon.

  12. Transition Manifolds of Complex Metastable Systems

    NASA Astrophysics Data System (ADS)

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-04-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  13. Toward a comprehensive landscape vegetation monitoring framework

    NASA Astrophysics Data System (ADS)

    Kennedy, Robert; Hughes, Joseph; Neeti, Neeti; Larrue, Tara; Gregory, Matthew; Roberts, Heather; Ohmann, Janet; Kane, Van; Kane, Jonathan; Hooper, Sam; Nelson, Peder; Cohen, Warren; Yang, Zhiqiang

    2016-04-01

    Blossoming Earth observation resources provide great opportunity to better understand land vegetation dynamics, but also require new techniques and frameworks to exploit their potential. Here, I describe several parallel projects that leverage time-series Landsat imagery to describe vegetation dynamics at regional and continental scales. At the core of these projects are the LandTrendr algorithms, which distill time-series earth observation data into periods of consistent long or short-duration dynamics. In one approach, we built an integrated, empirical framework to blend these algorithmically-processed time-series data with field data and lidar data to ascribe yearly change in forest biomass across the US states of Washington, Oregon, and California. In a separate project, we expanded from forest-only monitoring to full landscape land cover monitoring over the same regional scale, including both categorical class labels and continuous-field estimates. In these and other projects, we apply machine-learning approaches to ascribe all changes in vegetation to driving processes such as harvest, fire, urbanization, etc., allowing full description of both disturbance and recovery processes and drivers. Finally, we are moving toward extension of these same techniques to continental and eventually global scales using Google Earth Engine. Taken together, these approaches provide one framework for describing and understanding processes of change in vegetation communities at broad scales.

  14. Event-based estimation of water budget components using the network of multi-sensor capacitance probes

    USDA-ARS?s Scientific Manuscript database

    A time-scale-free approach was developed for estimating water fluxes at the boundaries of a monitored soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...

  15. Divided-evolution-based pulse scheme for quantifying exchange processes in proteins: powerful complement to relaxation dispersion experiments.

    PubMed

    Bouvignies, Guillaume; Hansen, D Flemming; Vallurupalli, Pramodh; Kay, Lewis E

    2011-02-16

    A method for quantifying millisecond time scale exchange in proteins is presented based on scaling the rate of chemical exchange using a 2D (15)N, (1)H(N) experiment in which (15)N dwell times are separated by short spin-echo pulse trains. Unlike the popular Carr-Purcell-Meiboom-Gill (CPMG) experiment where the effects of a radio frequency field on measured transverse relaxation rates are quantified, the new approach measures peak positions in spectra that shift as the effective exchange time regime is varied. The utility of the method is established through an analysis of data recorded on an exchanging protein-ligand system for which the exchange parameters have been accurately determined using alternative approaches. Computations establish that a combined analysis of CPMG and peak shift profiles extends the time scale that can be studied to include exchanging systems with highly skewed populations and exchange rates as slow as 20 s(-1).

  16. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed the multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy, and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only along lines where the solution changes rapidly. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines, and local time stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil needed for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite-difference operator, we developed here a collocation operator that includes solution values and the differential operator. In this way, the new, improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite-volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  17. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    NASA Astrophysics Data System (ADS)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: an hyperclustering approach, and a landscape-clustering approach involving a previous stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.

  18. Assessing global vegetation activity using spatio-temporal Bayesian modelling

    NASA Astrophysics Data System (ADS)

    Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.

    2016-04-01

    This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes of vegetation activity such as phenology are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because the local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain due to the absence of these large-scale patterns, compared to a global approach. This overall modelling approach allows the comparison of model behavior for the different regions and may provide insights on the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass the global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings demonstrated the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support our hypothesis: that is, the change of vegetation in space and time may be better understood when modelling vegetation change as both a dynamic and multivariate process. Therefore, future research will focus on a multivariate dynamical spatio-temporal modelling approach. This ongoing research is performed within the context of the project "Global impacts of hydrological and climatic extremes on vegetation" (project acronym: SAT-EX), which is part of the Belgian research programme for Earth Observation Stereo III.

  19. Multiscale recurrence quantification analysis of order recurrence plots

    NASA Astrophysics Data System (ADS)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ markedly from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. These results demonstrate that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.
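
    The following minimal sketch (not the authors' code) computes an order-pattern recurrence rate across several coarse-graining scales, which is the core ingredient of MSRQA; the pattern dimension m, the scales, and the test series are arbitrary choices for illustration.

```python
# Sketch of multiscale order-pattern recurrence analysis: coarse-grain the
# series at scale s, encode ordinal (order) patterns of dimension m, and
# mark a recurrence wherever two pattern codes coincide. Only the
# recurrence rate is computed here; full RQA measures would build on the
# same matrix.
import numpy as np

def coarse_grain(x, s):
    n = len(x) // s
    return x[:n * s].reshape(n, s).mean(axis=1)

def ordinal_patterns(x, m):
    # Rank-order each embedded vector; equal patterns -> equal integer code
    emb = np.lib.stride_tricks.sliding_window_view(x, m)
    ranks = np.argsort(np.argsort(emb, axis=1), axis=1)
    return (ranks * (m ** np.arange(m))).sum(axis=1)

def order_recurrence_rate(x, m=3, s=1):
    codes = ordinal_patterns(coarse_grain(x, s), m)
    R = codes[:, None] == codes[None, :]   # order recurrence plot
    return R.mean()

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=4000))       # random-walk test series
for s in (1, 2, 5, 10, 20):
    print(s, round(order_recurrence_rate(x, m=3, s=s), 4))
```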

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Chun-Yaung; Perez, Danny; Voter, Arthur F., E-mail: afv@lanl.gov

    Nuclear quantum effects are important for systems containing light elements, and the effects are more prominent in the low temperature regime where the dynamics also becomes sluggish. We show that parallel replica (ParRep) dynamics, an accelerated molecular dynamics approach for infrequent-event systems, can be effectively combined with ring-polymer molecular dynamics, a semiclassical trajectory approach that gives a good approximation to zero-point and tunneling effects in activated escape processes. The resulting RP-ParRep method is a powerful tool for reaching long time scales in complex infrequent-event systems where quantum dynamics are important. Two illustrative examples, symmetric Eckart barrier crossing and interstitial helium diffusion in Fe and Fe–Cr alloy, are presented to demonstrate the accuracy and long-time scale capability of this approach.

  1. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-02-29

    Performance/Technical report, 02-01-2016 to 02-29-2016. Nonlinear Maps for Design of Discrete-Time Models of Neuronal... neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, time step can be made... propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal

  2. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends of the time-varying parameters while simultaneously tracking their abrupt changes. A forward orthogonal least squares (FOLS) algorithm aided by a mutual information criterion is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
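
    A minimal sketch of the central idea, under simplifying assumptions: the time-varying AR coefficients are expanded on a crude Haar-type multiscale basis and estimated by ordinary least squares, which stands in for the paper's wavelet family and FOLS/mutual-information term selection.

```python
# Minimal sketch of TVAR estimation with time-varying coefficients expanded
# on a multiscale basis: a_i(t) = sum_k c_{ik} B_k(t). A Haar-type basis
# stands in for the paper's wavelet family, and plain least squares stands
# in for the FOLS selection procedure.
import numpy as np

def haar_basis(n, levels=3):
    t = np.arange(n) / n
    cols = [np.ones(n)]
    for j in range(levels):
        for k in range(2 ** j):
            lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
            psi = np.where((t >= lo) & (t < mid), 1.0,
                  np.where((t >= mid) & (t < hi), -1.0, 0.0))
            cols.append(psi)
    return np.column_stack(cols)                  # n x K

def fit_tvar(y, p=2, levels=3):
    n = len(y)
    B = haar_basis(n, levels)                     # basis evaluated at each t
    lags = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    # Regressors: each lagged sample modulated by every basis function
    design = np.hstack([lags[:, [i]] * B[p:] for i in range(p)])
    c, *_ = np.linalg.lstsq(design, y[p:], rcond=None)
    K = B.shape[1]
    return [B @ c[i * K:(i + 1) * K] for i in range(p)]   # a_i(t) curves

# Demo: AR(2) process whose first coefficient drifts over time
rng = np.random.default_rng(2)
n, y = 2048, np.zeros(2048)
a1 = np.linspace(1.2, 0.4, n)
for k in range(2, n):
    y[k] = a1[k] * y[k - 1] - 0.5 * y[k - 2] + rng.normal(0, 0.1)
a_hat = fit_tvar(y, p=2, levels=4)
print(a_hat[0][::256].round(2))   # should track the drift from 1.2 to 0.4
```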

  3. Multiple-time-scale motion in molecularly linked nanoparticle arrays.

    PubMed

    George, Christopher; Szleifer, Igal; Ratner, Mark

    2013-01-22

    We explore the transport of electrons between electrodes that encase a two-dimensional array of metallic quantum dots linked by molecular bridges (such as α,ω-alkanedithiols). Because the molecules can move at finite temperatures, the entire transport structure comprising the quantum dots and the molecules is in dynamical motion while the charge is being transported. There are then several physical processes (physical excursions of molecules and quantum dots, electronic migration, ordinary vibrations), all of which influence electronic transport. Each can occur on a different time scale. It is therefore not appropriate to use standard approaches to this sort of electron transfer problem. Instead, we present a treatment in which three different theoretical approaches, kinetic Monte Carlo, classical molecular dynamics, and quantum transport, are all employed. In certain limits, some of the dynamical effects are unimportant. But in general, the transport seems to follow a sort of dynamic bond percolation picture, an approach originally introduced as a formal model and later applied to polymer electrolytes. Different rate-determining steps occur in different limits. This approach offers a powerful scheme for dealing with multiple time scale transport problems, as will exist in many situations with several pathways through molecular arrays or even individual molecules that are dynamically disordered.
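
    To make the dynamic bond percolation picture concrete, here is a toy kinetic Monte Carlo sketch (not the paper's three-method machinery): a carrier hops on a ring whose bonds open and close at a renewal rate, and an effective diffusivity is read off the mean-square displacement. All rates and sizes are arbitrary.

```python
# Toy dynamic bond-percolation model: a carrier hops on a ring through
# bonds that randomly open and close. The bond renewal rate sets a second
# time scale; diffusion switches on even below the static percolation
# threshold once the bonds fluctuate.
import numpy as np

def msd_rate(renewal_rate, p_open=0.4, n_sites=200, hop_rate=1.0,
             t_max=500.0, n_walkers=200, seed=3):
    rng = np.random.default_rng(seed)
    disp = []
    for _ in range(n_walkers):
        bonds = rng.random(n_sites) < p_open    # bond i links sites i, i+1
        pos, x, t = 0, 0.0, 0.0
        while t < t_max:
            t += rng.exponential(1.0 / (hop_rate + renewal_rate))
            if rng.random() < renewal_rate / (hop_rate + renewal_rate):
                bonds = rng.random(n_sites) < p_open   # bond renewal event
            else:                                      # attempted hop
                step = 1 if rng.random() < 0.5 else -1
                bond = pos if step == 1 else (pos - 1) % n_sites
                if bonds[bond]:
                    pos = (pos + step) % n_sites
                    x += step                          # unwrapped position
        disp.append(x * x)
    return np.mean(disp) / t_max                       # ~ 2 * D_eff

for nu in (0.0, 0.01, 0.1, 1.0):   # frozen ... rapidly renewing bonds
    print(nu, round(msd_rate(nu), 3))
```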

  4. Reaching extended length-scales with accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Hubartt, Bradley; Shim, Yunsic; Amar, Jacques

    2012-02-01

    While temperature-accelerated dynamics (TAD) has been quite successful in extending the time-scales for non-equilibrium simulations of small systems, the computational time increases rapidly with system size. One possible solution to this problem, which we refer to as parTAD,^1 is to use spatial decomposition combined with our previously developed semi-rigorous synchronous sublattice algorithm.^2 However, while such an approach leads to significantly better scaling as a function of system size, it also artificially limits the size of activated events and is not completely rigorous. Here we discuss progress we have made in developing an alternative approach in which localized saddle-point searches are combined with parallel GPU-based molecular dynamics in order to improve the scaling behavior. By using this method, along with an adaptive method to determine the optimal high temperature,^3 we have been able to significantly increase the range of time- and length-scales over which accelerated dynamics simulations may be carried out. [1] Y. Shim et al, Phys. Rev. B 76, 205439 (2007); ibid, Phys. Rev. Lett. 101, 116101 (2008). [2] Y. Shim and J.G. Amar, Phys. Rev. B 71, 125432 (2005). [3] Y. Shim and J.G. Amar, J. Chem. Phys. 134, 054127 (2011).

  5. Accessible methods for the dynamic time-scale decomposition of biochemical systems.

    PubMed

    Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula

    2009-11-01

    The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such a dissection should ensure an understanding of the system without recourse to heuristics. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. To achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. Availability: the methods are included in COPASI, which is free for academic use and available at www.copasi.org. Contact: irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.
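
    A minimal sketch of the classical time-scale separation that underlies such methods (not COPASI's actual implementation): eigen-decompose a linearized network's Jacobian and split the modes by their characteristic time scales.

```python
# Classical time-scale separation on a linearized reaction network:
# eigenvalues of the Jacobian give reciprocal characteristic time scales,
# and a gap in |Re(lambda)| separates "fast" from "slow" modes.
import numpy as np

# Toy 3-species network Jacobian: one fast mode, two slow modes
J = np.array([[-100.0,   2.0,   0.0],
              [   1.0,  -0.5,   0.3],
              [   0.0,   0.2,  -0.1]])

lam, V = np.linalg.eig(J)
tau = 1.0 / np.abs(lam.real)           # characteristic time scales
print("time scales:", np.sort(tau).round(4))

# Declare modes "fast" when their time scale is far below the largest one
threshold = 0.01 * tau.max()
fast = tau < threshold
print("fast modes:", fast.sum(), "slow modes:", (~fast).sum())
```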

  6. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
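
    The inter-amount-time quantity itself is simple to compute; the sketch below (with a toy flashy hydrograph, not the North Carolina data) records the time needed to accumulate each successive fixed amount of discharge.

```python
# Sketch of the inter-amount-time idea: instead of sampling flow at fixed
# time intervals, record the time needed to accumulate each fixed amount
# of discharge. High-flow periods contribute many short inter-amount
# times, low-flow periods a few long ones.
import numpy as np

def inter_amount_times(flow, dt, amount):
    """Times to accumulate successive fixed `amount`s from a flow series."""
    cum = np.concatenate([[0.0], np.cumsum(flow) * dt])  # volume vs. time
    t = np.arange(len(cum)) * dt
    targets = np.arange(amount, cum[-1], amount)
    crossing = np.interp(targets, cum, t)   # time each target is reached
    return np.diff(np.concatenate([[0.0], crossing]))

rng = np.random.default_rng(4)
# Toy flashy hydrograph: low base flow plus occasional storm pulses
flow = 1.0 + 50.0 * (rng.random(5000) < 0.01) * rng.random(5000)
iat = inter_amount_times(flow, dt=1.0, amount=20.0)
print(len(iat), iat.mean().round(2), np.percentile(iat, [10, 50, 90]).round(2))
```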

  7. Time-variant Lagrangian transport formulation reduces aggregation bias of water and solute mean travel time in heterogeneous catchments

    NASA Astrophysics Data System (ADS)

    Danesh-Yazdi, Mohammad; Botter, Gianluca; Foufoula-Georgiou, Efi

    2017-05-01

    Lack of hydro-bio-chemical data at subcatchment scales necessitates adopting an aggregated system approach for estimating water and solute transport properties, such as residence and travel time distributions, at the catchment scale. In this work, we show that within-catchment spatial heterogeneity, as expressed in spatially variable discharge-storage relationships, can be appropriately encapsulated within a lumped time-varying stochastic Lagrangian formulation of transport. This time (variability) for space (heterogeneity) substitution yields mean travel times (MTTs) that are not significantly biased to the aggregation of spatial heterogeneity. Despite the significant variability of MTT at small spatial scales, there exists a characteristic scale above which the MTT is not impacted by the aggregation of spatial heterogeneity. Extensive simulations of randomly generated river networks reveal that the ratio between the characteristic scale and the mean incremental area is on average independent of river network topology and the spatial arrangement of incremental areas.

  8. Using analogy to learn about phenomena at scales outside human perception.

    PubMed

    Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S; Shipley, Thomas F

    2017-01-01

    Understanding and reasoning about phenomena at scales outside human perception (for example, geologic time) is critical across science, technology, engineering, and mathematics. Thus, devising strong methods to support acquisition of reasoning at such scales is an important goal in science, technology, engineering, and mathematics education. In two experiments, we examine the use of analogical principles in learning about geologic time. Across both experiments we find that using a spatial analogy (for example, a time line) to make multiple alignments, and keeping all unrelated components of the analogy held constant (for example, keep the time line the same length), leads to better understanding of the magnitude of geologic time. Effective approaches also include hierarchically and progressively aligning scale information (Experiment 1) and active prediction in making alignments paired with immediate feedback (Experiments 1 and 2).

  9. Estimating Agricultural Nitrous Oxide Emissions

    USDA-ARS?s Scientific Manuscript database

    Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...

  10. Cross-scale interactions: Quantifying multi-scaled cause–effect relationships in macrosystems

    USGS Publications Warehouse

    Soranno, Patricia A.; Cheruvelil, Kendra S.; Bissell, Edward G.; Bremigan, Mary T.; Downing, John A.; Fergus, Carol E.; Filstrup, Christopher T.; Henry, Emily N.; Lottig, Noah R.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2014-01-01

    Ecologists are increasingly discovering that ecological processes are made up of components that are multi-scaled in space and time. Some of the most complex of these processes are cross-scale interactions (CSIs), which occur when components interact across scales. When undetected, such interactions may cause errors in extrapolation from one region to another. CSIs, particularly those that include a regional scaled component, have not been systematically investigated or even reported because of the challenges of acquiring data at sufficiently broad spatial extents. We present an approach for quantifying CSIs and apply it to a case study investigating one such interaction, between local and regional scaled land-use drivers of lake phosphorus. Ultimately, our approach for investigating CSIs can serve as a basis for efforts to understand a wide variety of multi-scaled problems such as climate change, land-use/land-cover change, and invasive species.

  11. Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG

    NASA Astrophysics Data System (ADS)

    Stoudenmire, Edwin Miles; White, Steven

    We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with those of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.

  12. Cross scale interactions, nonlinearities, and forecasting catastrophic events

    USGS Publications Warehouse

    Peters, Debra P.C.; Pielke, Roger A.; Bestelmeyer, Brandon T.; Allen, Craig D.; Munson-McGee, Stuart; Havstad, Kris M.

    2004-01-01

    Catastrophic events share characteristic nonlinear behaviors that are often generated by cross-scale interactions and feedbacks among system elements. These events result in surprises that cannot easily be predicted based on information obtained at a single scale. Progress on catastrophic events has focused on one of the following two areas: nonlinear dynamics through time without an explicit consideration of spatial connectivity [Holling, C. S. (1992) Ecol. Monogr. 62, 447–502] or spatial connectivity and the spread of contagious processes without a consideration of cross-scale interactions and feedbacks [Zeng, N., Neeling, J. D., Lau, L. M. & Tucker, C. J. (1999) Science 286, 1537–1540]. These approaches rarely have ventured beyond traditional disciplinary boundaries. We provide an interdisciplinary, conceptual, and general mathematical framework for understanding and forecasting nonlinear dynamics through time and across space. We illustrate the generality and usefulness of our approach by using new data and recasting published data from ecology (wildfires and desertification), epidemiology (infectious diseases), and engineering (structural failures). We show that decisions that minimize the likelihood of catastrophic events must be based on cross-scale interactions, and such decisions will often be counterintuitive. Given the continuing challenges associated with global change, approaches that cross disciplinary boundaries to include interactions and feedbacks at multiple scales are needed to increase our ability to predict catastrophic events and develop strategies for minimizing their occurrence and impacts. Our framework is an important step in developing predictive tools and designing experiments to examine cross-scale interactions.

  13. Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor K.

    2011-01-01

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim to incorporate both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales than that inherent to the modes, respectively. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time). PMID:21980278

  14. An ultra scale-down approach to study the interaction of fermentation, homogenization, and centrifugation for antibody fragment recovery from rec E. coli.

    PubMed

    Li, Qiang; Mannall, Gareth J; Ali, Shaukat; Hoare, Mike

    2013-08-01

    Escherichia coli is frequently used as a microbial host to express recombinant proteins but it lacks the ability to secrete proteins into medium. One option for protein release is to use high-pressure homogenization followed by a centrifugation step to remove cell debris. While this does not give selective release of proteins in the periplasmic space, it does provide a robust process. An ultra scale-down (USD) approach based on focused acoustics is described to study rec E. coli cell disruption by high-pressure homogenization for recovery of an antibody fragment (Fab') and the impact of fermentation harvest time. This approach is followed by microwell-based USD centrifugation to study the removal of the resultant cell debris. Successful verification of this USD approach is achieved using pilot scale high-pressure homogenization and pilot scale, continuous flow, disc stack centrifugation comparing performance parameters such as the fraction of Fab' release, cell debris size distribution and the carryover of cell debris fine particles in the supernatant. The integration of fermentation and primary recovery stages is examined using USD monitoring of different phases of cell growth. Increasing susceptibility of the cells to disruption is observed with time following induction. For a given recovery process this results in a higher fraction of product release and a greater proportion of fine cell debris particles that are difficult to remove by centrifugation. Such observations are confirmed at pilot scale. Copyright © 2013 Wiley Periodicals, Inc.

  15. Environmental and social determinants of population vulnerability to Zika virus emergence at the local scale.

    PubMed

    Rees, Erin E; Petukhova, Tatiana; Mascarenhas, Mariola; Pelcat, Yann; Ogden, Nicholas H

    2018-05-08

    Zika virus (ZIKV) spread rapidly in the Americas in 2015. Targeting effective public health interventions for inhabitants of, and travellers to and from, affected countries depends on understanding the risk of ZIKV emergence (and re-emergence) at the local scale. We explore the extent to which environmental, social and neighbourhood disease intensity variables influenced emergence dynamics. Our objective was to characterise population vulnerability given the potential for sustained autochthonous ZIKV transmission and the timing of emergence. Logistic regression models estimated the probability of reporting at least one case of ZIKV in a given municipality over the course of the study period as an indicator for sustained transmission, while accelerated failure time (AFT) survival models estimated the time to a first reported case of ZIKV in week t for a given municipality as an indicator for timing of emergence. Sustained autochthonous ZIKV transmission was best described at the temporal scale of the study period (almost one year), such that high levels of study period precipitation and low mean study period temperature reduced the probability. Timing of ZIKV emergence was best described at the weekly scale for precipitation, in that high precipitation in the current week delayed reporting. Both modelling approaches detected an effect of high poverty on reducing and slowing case detection, especially when inter-municipal road connectivity was low. We also found that proximity to municipalities reporting ZIKV shortened the time to emergence when those municipalities were located, on average, less than 100 km away. The different modelling approaches help distinguish between large temporal scale factors driving vector habitat suitability and short temporal scale factors affecting the speed of spread. We find evidence for inter-municipal movements of infected people as a local-scale driver of spatial spread. The negative association with poverty suggests reduced case reporting in poorer areas. Overall, relatively simplistic models may be able to predict the vulnerability of populations to autochthonous ZIKV transmission at the local scale.
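
    As a hedged illustration of the AFT component, the sketch below fits a Weibull AFT model with the lifelines package to synthetic municipal data; all column names (`weeks_to_case`, `reported`, `precip`, `poverty`) and effect sizes are invented stand-ins for the paper's covariates.

```python
# Accelerated failure time (AFT) sketch with synthetic municipalities:
# positive coefficients stretch the time to a first reported case.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(5)
n = 400
precip = rng.normal(0, 1, n)
poverty = rng.normal(0, 1, n)
# Synthetic emergence times: high precipitation and poverty delay reporting
weeks = np.exp(1.5 + 0.3 * precip + 0.4 * poverty + rng.gumbel(0, 0.5, n))
reported = weeks < 52                      # right-censor at one year
df = pd.DataFrame({"weeks_to_case": np.minimum(weeks, 52.0),
                   "reported": reported.astype(int),
                   "precip": precip, "poverty": poverty})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="weeks_to_case", event_col="reported")
aft.print_summary()   # positive coefficients -> longer time to first case
```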

  16. Real-time detection of antibiotic activity by measuring nanometer-scale bacterial deformation

    NASA Astrophysics Data System (ADS)

    Iriya, Rafael; Syal, Karan; Jing, Wenwen; Mo, Manni; Yu, Hui; Haydel, Shelley E.; Wang, Shaopeng; Tao, Nongjian

    2017-12-01

    Diagnosing antibiotic-resistant bacteria currently requires sensitive detection of phenotypic changes associated with antibiotic action on bacteria. Here, we present an optical imaging-based approach to quantify bacterial membrane deformation as a phenotypic feature in real-time with a nanometer scale (˜9 nm) detection limit. Using this approach, we found two types of antibiotic-induced membrane deformations in different bacterial strains: polymyxin B induced relatively uniform spatial deformation of Escherichia coli O157:H7 cells leading to change in cellular volume and ampicillin-induced localized spatial deformation leading to the formation of bulges or protrusions on uropathogenic E. coli CFT073 cells. We anticipate that the approach will contribute to understanding of antibiotic phenotypic effects on bacteria with a potential for applications in rapid antibiotic susceptibility testing.

  17. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
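
    A minimal sketch of a causal exponential time-domain filter, the basic operation behind such an approach; this illustrates temporal filtering only, not Pruett's full subgrid-scale model.

```python
# Causal exponential time filter, d(ubar)/dt = (u - ubar) / Delta,
# discretized with forward Euler: slow content is retained, fast content
# becomes the "subfilter" fluctuation to be modeled.
import numpy as np

def time_filter(u, dt, delta):
    """Causal exponential filter with filter width `delta`."""
    ubar = np.empty_like(u)
    ubar[0] = u[0]
    for n in range(1, len(u)):
        ubar[n] = ubar[n - 1] + (dt / delta) * (u[n - 1] - ubar[n - 1])
    return ubar

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
u = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
ubar = time_filter(u, dt, delta=0.5)      # keeps slow mode, damps fast one
res = u - ubar                            # subfilter-scale fluctuation
print(np.std(u).round(3), np.std(ubar).round(3), np.std(res).round(3))
```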

  18. Postcoalescence evolution of growth stress in polycrystalline films.

    PubMed

    González-González, A; Polop, C; Vasco, E

    2013-02-01

    The growth stress generated once grains coalesce in Volmer-Weber-type thin films is investigated by time-multiscale simulations comprising complementary modules of (i) finite-element modeling to address the interactions between grains occurring at atomic vibration time scales (~0.1 ps), (ii) dynamic scaling to account for the surface stress relaxation via morphology changes at surface diffusion time scales (~μs-ms), and (iii) a mesoscopic rate equation approach to simulate the bulk stress relaxation at deposition time scales (~sec-h). By addressing the main experimental evidence reported so far on this topic, the simulation results provide key findings concerning the interplay between anisotropic grain interactions at complementary space scales, deposition conditions (such as flux and mobility), and the mechanisms of stress accommodation and relaxation that underlie the origin, nature and spatial distribution, and the flux dependence of the postcoalescence growth stress.

  19. Intercalibration of radioisotopic and astrochronologic time scales for the Cenomanian-Turonian boundary interval, western interior Basin, USA

    USGS Publications Warehouse

    Meyers, S.R.; Siewert, S.E.; Singer, B.S.; Sageman, B.B.; Condon, D.J.; Obradovich, J.D.; Jicha, B.R.; Sawyer, D.A.

    2012-01-01

    We develop an intercalibrated astrochronologic and radioisotopic time scale for the Cenomanian-Turonian boundary (CTB) interval near the Global Stratotype Section and Point in Colorado, USA, where orbitally influenced rhythmic strata host bentonites that contain sanidine and zircon suitable for 40Ar/39Ar and U-Pb dating. Paired 40Ar/39Ar and U-Pb ages are determined from four bentonites that span the Vascoceras diartianum to Pseudaspidoceras flexuosum ammonite biozones, utilizing both newly collected material and legacy sanidine samples of J. Obradovich. Comparison of the 40Ar/39Ar and U-Pb results underscores the strengths and limitations of each system, and supports an astronomically calibrated Fish Canyon sanidine standard age of 28.201 Ma. The radioisotopic data and published astrochronology are employed to develop a new CTB time scale, using two statistical approaches: (1) a simple integration that yields a CTB age of 93.89 ± 0.14 Ma (2σ; total radioisotopic uncertainty), and (2) a Bayesian intercalibration that explicitly accounts for orbital time scale uncertainty, and yields a CTB age of 93.90 ± 0.15 Ma (95% credible interval; total radioisotopic and orbital time scale uncertainty). Both approaches firmly anchor the floating orbital time scale, and the Bayesian technique yields astronomically recalibrated radioisotopic ages for individual bentonites, with analytical uncertainties at the permil level of resolution, and total uncertainties below 2‰. Using our new results, the duration between the Cenomanian-Turonian and the Cretaceous-Paleogene boundaries is 27.94 ± 0.16 Ma, with an uncertainty of less than one-half of a long eccentricity cycle. © 2012 Geological Society of America.

  20. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
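
    The extrapolation step can be illustrated with a toy version of a scaling ansatz of the form psi(t, N) ≈ psi_inf + a/t + b/N, fitted by least squares over a grid of simulation times and population sizes; the specific functional form and all numbers below are assumptions for illustration, not the paper's data.

```python
# Fit the finite-time/finite-size scaling of a large-deviation estimator
# and read off the infinite-time, infinite-size intercept.
import numpy as np

rng = np.random.default_rng(6)
psi_inf, a, b = -0.25, 1.7, 40.0
ts = np.array([50, 100, 200, 400, 800], dtype=float)
Ns = np.array([100, 200, 400, 800], dtype=float)
T, N = np.meshgrid(ts, Ns)
psi = psi_inf + a / T + b / N + rng.normal(0, 1e-3, T.shape)  # estimators

# Linear model in (1, 1/t, 1/N); the intercept is the extrapolated limit
X = np.column_stack([np.ones(T.size), 1.0 / T.ravel(), 1.0 / N.ravel()])
coef, *_ = np.linalg.lstsq(X, psi.ravel(), rcond=None)
print("extrapolated psi_inf:", coef[0].round(4))   # ~ -0.25
```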

  1. Quantitative Tracking of Combinatorially Engineered Populations with Multiplexed Binary Assemblies.

    PubMed

    Zeitoun, Ramsey I; Pines, Gur; Grau, William C; Gill, Ryan T

    2017-04-21

    Advances in synthetic biology and genomics have enabled full-scale genome engineering efforts on laboratory time scales. However, the absence of sufficient approaches for mapping engineered genomes at system-wide scales onto performance has limited the adoption of more sophisticated algorithms for engineering complex biological systems. Here we report on the development and application of a robust approach to quantitatively map combinatorially engineered populations at scales up to several dozen target sites. This approach works by assembling genome engineered sites with cell-specific barcodes into a format compatible with high-throughput sequencing technologies. This approach, called barcoded-TRACE (bTRACE) was applied to assess E. coli populations engineered by recursive multiplex recombineering across both 6-target sites and 31-target sites. The 31-target library was then tracked throughout growth selections in the presence and absence of isopentenol (a potential next-generation biofuel). We also use the resolution of bTRACE to compare the influence of technical and biological noise on genome engineering efforts.

  2. Characterizing and understanding the climatic determinism of high- to low-frequency variations in precipitation in northwestern France using a coupled wavelet multiresolution/statistical downscaling approach

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Hannah, David; Lavers, David; Fossa, Manuel; Laignel, Benoit; Debret, Maxime

    2017-04-01

    Geophysical signals oscillate over several time scales that explain different amounts of their overall variability and may be related to different physical processes. Characterizing and understanding such variability in hydrological variations and investigating its determinism is an important issue in a context of climate change, as these variabilities can occasionally be superimposed on a long-term trend possibly due to climate change. It is also important to refine our understanding of time-scale-dependent linkages between large-scale climatic variations and hydrological responses at the regional or local scale. Here we investigate such links by conducting a wavelet multiresolution statistical downscaling approach of precipitation in northwestern France (Seine river catchment) over 1950-2016, using sea level pressure (SLP) and sea surface temperature (SST) as indicators of atmospheric and oceanic circulations, respectively. Previous results demonstrated that including multiresolution decomposition in a statistical downscaling model (a so-called multiresolution ESD model) using SLP as large-scale predictor greatly improved simulation of the low-frequency, i.e. interannual to interdecadal, fluctuations observed in precipitation. Building on these results, continuous wavelet transform of precipitation simulated using multiresolution ESD confirmed the good performance of the model in explaining variability at all time scales. A sensitivity analysis of the model to the choice of the scale and wavelet function used was also performed; whatever the wavelet used, the model performed similarly. The spatial patterns of SLP found as the best predictors for all time scales, which resulted from the wavelet decomposition, revealed different structures according to time scale, suggesting possibly different determinisms. In particular, some low-frequency components (3.2-yr and 19.3-yr) showed a much more widespread spatial extension across the Atlantic. Moreover, in accordance with other previous studies, the wavelet components detected in SLP and precipitation on interannual to interdecadal time scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation. Work is now under way including SST over the Atlantic in order to gain further insight into this mechanism.
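
    A hedged sketch of the multiresolution-plus-regression idea using PyWavelets (not the authors' ESD model): decompose predictand and predictor into per-scale components and fit one regression per scale. Variable names and the toy linkage are illustrative assumptions.

```python
# Wavelet multiresolution downscaling sketch: one regression per time
# scale, then sum the per-scale predictions.
import numpy as np
import pywt

def components(x, wavelet="db4", level=4):
    """Reconstruct one series per wavelet level (plus the smooth trend)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    out = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        out.append(pywt.waverec(keep, wavelet)[:len(x)])
    return out

rng = np.random.default_rng(7)
n = 1024
slp_index = np.sin(2 * np.pi * np.arange(n) / 128) + rng.normal(0, 0.5, n)
precip = 0.8 * slp_index + rng.normal(0, 0.5, n)     # toy linkage

pc, sc = components(precip), components(slp_index)
pred = np.zeros(n)
for p_k, s_k in zip(pc, sc):
    beta = np.dot(s_k, p_k) / np.dot(s_k, s_k)       # per-scale regression
    pred += beta * s_k
print("corr:", np.corrcoef(pred, precip)[0, 1].round(3))
```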

  3. Physics in space-time with scale-dependent metrics

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2013-10-01

    We construct three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. Local derivatives based on scale-dependent metrics are defined and differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales much smaller than ℓ0 to D_H = 4 in the infrared limit of scales much larger than ℓ0, where ℓ0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.

  4. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for two levels of dispersion, morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  5. Accurate traveltime computation in complex anisotropic media with discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.

    2017-12-01

    Travel time computation is of major interest for a large range of geophysical applications, among which source localization and characterization, phase identification, data windowing and tomography, from decametric scale up to global Earth scale. Ray-tracing tools, being essentially 1D Lagrangian integration along a path, have been used for their efficiency but present some drawbacks, such as a rather difficult control of the medium sampling. Moreover, they do not provide answers in shadow zones. Eikonal solvers, based on an Eulerian approach, have attracted attention in seismology with the pioneering work of Vidale (1988), while such an approach had been proposed earlier by Riznichenko (1946). They are now used for first-arrival travel-time tomography at various scales (Podvin & Lecomte, 1991). The framework for solving this non-linear partial differential equation is now well understood and various finite-difference approaches have been proposed, essentially for smooth media. We propose a novel finite element approach which builds a precise solution for strongly heterogeneous anisotropic media (still within the limit of Eikonal validity). The discontinuous Galerkin method we have developed allows local refinement of the mesh and locally high orders of interpolation inside elements. High precision of the travel times and their spatial derivatives is obtained through this formulation. This finite element method also honors boundary conditions, such as complex topographies and absorbing boundaries mimicking an infinite medium. Applications to travel-time tomography and slope tomography are expected, but also to migration and take-off angle estimation, thanks to the accuracy obtained when computing first-arrival times. References: Podvin, P. and Lecomte, I., 1991. Finite difference computation of traveltimes in very contrasted velocity models: a massively parallel approach and its associated tools, Geophys. J. Int., 105, 271-284. Riznichenko, Y., 1946. Geometrical seismics of layered media, Trudy Inst. Theor. Geophysics, Vol. II, Moscow (in Russian). Vidale, J., 1988. Finite-difference calculation of travel times, Bull. seism. Soc. Am., 78, 2062-2076.
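
    For contrast with the discontinuous Galerkin solver described above, first-arrival travel times in an isotropic medium can be computed with an off-the-shelf fast marching eikonal solver; the sketch below assumes the scikit-fmm package and a toy two-layer velocity model, and is not the authors' method.

```python
# First-arrival travel times from a point source with the fast marching
# method (isotropic Eikonal equation only).
import numpy as np
import skfmm

nx, nz, dx = 201, 201, 10.0                  # 2 km x 2 km grid
speed = np.full((nz, nx), 2000.0)            # background 2000 m/s
speed[100:, :] = 3500.0                      # faster half-space below 1 km

phi = np.ones((nz, nx))
phi[0, 100] = -1.0                           # point source at the surface

t = skfmm.travel_time(phi, speed, dx=dx)     # first-arrival times (s)
print(t[0, 0].round(3), t[200, 100].round(3))
```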

  6. Multiscaling for systems with a broad continuum of characteristic lengths and times: Structural transitions in nanocomposites.

    PubMed

    Pankavich, S; Ortoleva, P

    2010-06-01

    The multiscale approach to N-body systems is generalized to address the broad continuum of long time and length scales associated with collective behaviors. A technique is developed based on the concept of an uncountable set of time variables and of order parameters (OPs) specifying major features of the system. We adopt this perspective as a natural extension of the commonly used discrete set of time scales and OPs which is practical when only a few, widely separated scales exist. The existence of a gap in the spectrum of time scales for such a system (under quasiequilibrium conditions) is used to introduce a continuous scaling and perform a multiscale analysis of the Liouville equation. A functional-differential Smoluchowski equation is derived for the stochastic dynamics of the continuum of Fourier component OPs. A continuum of spatially nonlocal Langevin equations for the OPs is also derived. The theory is demonstrated via the analysis of structural transitions in a composite material, as occurs for viral capsids and molecular circuits.

  7. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.

  8. More Time or Better Tools? A Large-Scale Retrospective Comparison of Pedagogical Approaches to Teach Programming

    ERIC Educational Resources Information Center

    Silva-Maceda, Gabriela; Arjona-Villicaña, P. David; Castillo-Barrera, F. Edgar

    2016-01-01

    Learning to program is a complex task, and the impact of different pedagogical approaches to teach this skill has been hard to measure. This study examined the performance data of seven cohorts of students (N = 1168) learning programming under three different pedagogical approaches. These pedagogical approaches varied either in the length of the…

  9. Investigating lithium-ion battery materials during overcharge-induced thermal runaway: an operando and multi-scale X-ray CT study.

    PubMed

    Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R

    2016-11-16

    Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.

  10. Adaptive neural network decentralized backstepping output-feedback control for nonlinear large-scale systems with time delays.

    PubMed

    Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang

    2011-07-01

    In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.

  11. Reactive Gas transport in soil: Kinetics versus Local Equilibrium Approach

    NASA Astrophysics Data System (ADS)

    Geistlinger, Helmut; Jia, Ruijan

    2010-05-01

    Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phase, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions are studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time-scales under which the local equilibrium approach is justified. The time-scale of investigation was limited to the day-scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time a generalized mass transfer coefficient is proposed that justifies the often used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm) the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm) and for convective gas transport, the non-equilibrium effect becomes more and more important if the hydraulic residence time and the Damköhler number decrease. Relative errors can range up to 100% and more. While for medium radii the local equilibrium approach describes the main features both of the spatial concentration profile and the time-dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm). From the comparative study of relevant scenarios with and without biodegradation it can be concluded that, under realistic field conditions, biodegradation within the immobile water phase is often mass-transfer limited and the local equilibrium approach assuming instantaneous mass transfer becomes rather questionable. References Geistlinger, H., Ruiyan Jia, D. Eisermann, and C.-F. Stange (2008): Spatial and temporal variability of dissolved nitrous oxide in near-surface groundwater and bubble-mediated mass transfer to the unsaturated zone, J. Plant Nutrition and Soil Science, in press. Geistlinger, H. (2009) Vapor transport in soil: concepts and mathematical description. In: Eds.: S. Saponari, E. Sezenna, and L. Bonoma, Vapor emission to outdoor air and enclosed spaces for human health risk assessment: Site characterization, monitoring, and modeling. Nova Science Publisher. Milano. Accepted for publication.
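
    A minimal two-region sketch of the kinetic-versus-equilibrium comparison (not the paper's analytical solution): gas-phase and immobile-water concentrations exchange mass at a first-order rate k, and the local equilibrium limit is recovered as k grows. All parameter values are illustrative, not site-specific.

```python
# Two-region mass transfer: gas concentration c_g exchanges mass with
# immobile water c_w at rate k; local equilibrium means c_w = H * c_g.
import numpy as np
from scipy.integrate import solve_ivp

H, theta_g, theta_w = 0.03, 0.3, 0.2          # Henry constant, phase contents

def kinetic(t, y, k):
    cg, cw = y
    transfer = k * (H * cg - cw)              # gas -> water when above eq.
    return [-transfer / theta_g, transfer / theta_w]

t_eval = np.linspace(0.0, 5.0, 200)           # days
for k in (0.1, 1.0, 100.0):                   # slow ... near-equilibrium
    sol = solve_ivp(kinetic, (0, 5), [1.0, 0.0], args=(k,),
                    t_eval=t_eval, rtol=1e-8)
    cg, cw = sol.y[:, -1]
    print(k, round(cw / (H * cg), 3))         # -> 1 means local equilibrium
```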

  12. A study of flame spread in engineered cardboard fuelbeds: Part II: Scaling law approach

    Treesearch

    Brittany A. Adam; Nelson K. Akafuah; Mark Finney; Jason Forthofer; Kozo Saito

    2013-01-01

    In this second part of a two-part exploration of dynamic behavior observed in wildland fires, the time scales differentiating convective and radiative heat transfer are further explored. Scaling laws are considered for the two different types of heat transfer: radiation-driven fire spread and convection-driven fire spread, both of which can occur during wildland fires. A new...

  13. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  14. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  15. Direct measurement of sub-surface mass change using the variable-baseline gravity gradient method

    USGS Publications Warehouse

    Kennedy, Jeffrey; Ferré, Ty P.A.; Güntner, Andreas; Abe, Maiko; Creutzfeldt, Benjamin

    2014-01-01

    Time-lapse gravity data provide a direct, non-destructive method to monitor mass changes at scales from cm to km. But, the effectively infinite spatial sensitivity of gravity measurements can make it difficult to isolate the signal of interest. The variable-baseline gravity gradient method, based on the difference of measurements between two gravimeters, is an alternative to the conventional approach of individually modeling all sources of mass and elevation change. This approach can improve the signal-to-noise ratio for many applications by removing the contributions of Earth tides, loading, and other signals that have the same effect on both gravimeters. At the same time, this approach can focus the support volume within a relatively small user-defined region of the subsurface. The method is demonstrated using paired superconducting gravimeters to make for the first time a large-scale, non-invasive measurement of infiltration wetting front velocity and change in water content above the wetting front.
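
    The common-mode cancellation at the heart of the method is easy to illustrate with synthetic records, as sketched below; amplitudes and noise levels are arbitrary, not instrument-calibrated.

```python
# Toy illustration of the variable-baseline gradient idea: two gravimeter
# records share tide/loading signals, so their difference cancels the
# common part and isolates the local mass change seen by only one of them.
import numpy as np

t = np.linspace(0.0, 10.0, 2000)                      # days
tide = 80.0 * np.sin(2 * np.pi * t / 0.5175)          # common tidal signal
local = 5.0 * (1.0 - np.exp(-np.maximum(t - 4.0, 0))) # wetting front at g1

g1 = tide + local + np.random.default_rng(8).normal(0, 0.5, t.size)
g2 = tide + np.random.default_rng(9).normal(0, 0.5, t.size)

gradient = g1 - g2        # tides/loading cancel; local signal remains
print(gradient[:5].round(2), gradient[-5:].round(2))
```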

  16. Nutritional Systems Biology Modeling: From Molecular Mechanisms to Physiology

    PubMed Central

    de Graaf, Albert A.; Freidig, Andreas P.; De Roos, Baukje; Jamshidi, Neema; Heinemann, Matthias; Rullmann, Johan A.C.; Hall, Kevin D.; Adiels, Martin; van Ommen, Ben

    2009-01-01

    The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales. PMID:19956660

  17. Understanding the source of multifractality in financial markets

    NASA Astrophysics Data System (ADS)

    Barunik, Jozef; Aste, Tomaso; Di Matteo, T.; Liu, Ruipeng

    2012-09-01

    In this paper, we use the generalized Hurst exponent approach to study the multi-scaling behavior of different financial time series. We show that this approach is robust and powerful in detecting different types of multi-scaling. We observe a puzzling phenomenon where an apparent increase in multifractality is measured in time series generated from shuffled returns, where all time-correlations are destroyed, while the return distributions are conserved. This effect is robust and it is reproduced in several real financial data including stock market indices, exchange rates and interest rates. In order to understand the origin of this effect we investigate different simulated time series by means of the Markov switching multifractal model, autoregressive fractionally integrated moving average processes with stable innovations, fractional Brownian motion and Levy flights. Overall we conclude that the multifractality observed in financial time series is mainly a consequence of the characteristic fat-tailed distribution of the returns, and that time-correlations act to decrease the measured multifractality.
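
    A minimal sketch of the generalized Hurst exponent estimation used in such studies: fit the scaling of the q-th order structure function and check whether H(q) varies with q. The demonstration runs on Brownian motion, where H(q) ≈ 0.5 for all q; lag choices are arbitrary.

```python
# Generalized Hurst exponent: assume K_q(tau) = <|X(t+tau) - X(t)|^q>
# scales as tau^(q*H(q)); H(q) varying with q signals multi-scaling.
import numpy as np

def generalized_hurst(x, q, taus):
    logK = [np.log(np.mean(np.abs(x[tau:] - x[:-tau]) ** q)) for tau in taus]
    slope = np.polyfit(np.log(taus), logK, 1)[0]
    return slope / q

rng = np.random.default_rng(10)
bm = np.cumsum(rng.normal(size=20000))            # Brownian motion, H = 0.5
taus = np.unique(np.logspace(0, 2, 12).astype(int))
for q in (1, 2, 3):
    print(q, round(generalized_hurst(bm, q, taus), 3))  # ~0.5 for all q
```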

  18. Quantifying Ecological Memory of Plant and Ecosystem Processes in Variable Environments

    NASA Astrophysics Data System (ADS)

    Ogle, K.; Barron-Gafford, G. A.; Bentley, L.; Cable, J.; Lucas, R.; Huxman, T. E.; Loik, M. E.; Smith, S. D.; Tissue, D.

    2010-12-01

    Precipitation, soil water, and other factors affect plant and ecosystem processes at multiple time scales. A common assumption is that water availability at a given time directly affects processes at that time. Recent work, especially in pulse-driven, semiarid systems, shows that antecedent water availability, averaged over several days to a couple of weeks, can be just as important as, or more important than, current water status. Precipitation patterns of previous seasons or past years can also impact plant and ecosystem functioning in many systems. However, we lack an analytical framework for quantifying the importance of past conditions and the time-scale over which they affect current processes. This study explores the ecological memory of a variety of plant and ecosystem processes. We use memory as a metaphor to describe the time-scale over which antecedent conditions affect the current process. Existing approaches for incorporating antecedent effects arbitrarily select the antecedent integration period (e.g., the past 2 weeks) and the relative importance of past conditions (e.g., assign equal or linearly decreasing weights to past events). In contrast, we utilize a hierarchical Bayesian approach to integrate field data with process-based models, yielding posterior distributions for model parameters, including the duration of the ecological memory (integration period) and the relative importance of past events (weights) to this memory. We apply our approach to data spanning diverse temporal scales and four semiarid sites in the western US: leaf-level stomatal conductance (gs, sub-hourly scale), soil respiration (Rs, hourly to daily scale), and net primary productivity (NPP) and tree-ring widths (annual scale). For gs, antecedent factors (daily rainfall and temperature, hourly vapor pressure deficit) and current soil water explained up to 72% of the variation in gs in the Chihuahuan Desert, with a memory of 10 hours for a grass and 4 days for a shrub. Antecedent factors (past soil water, temperature, photosynthesis rates) explained 73-80% of the variation in sub-daily and daily Rs. Rs beneath shrubs had a moisture and temperature memory of a few weeks, while Rs in open space and beneath grasses had a memory of 6 weeks. For pinyon pine ring widths, the current and previous year accounted for 85% of the precipitation memory; for the current year, precipitation received between February and June was most important. A similar result emerged for NPP in the short grass steppe. In both sites, tree growth and NPP had a memory of 3 years such that precipitation received >3 years ago had little influence. Understanding ecosystem dynamics requires knowledge of the temporal scales over which environmental factors influence ecological processes, and our approach to quantifying ecological memory provides a means to identify underlying mechanisms.
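
    The core covariate in such models is a weighted average of past conditions, with the weights (and hence the memory length) estimated rather than fixed. The sketch below constructs that covariate for given weights; the full hierarchical Bayesian estimation of the weights is beyond this illustration, and all data are hypothetical.

    ```python
    import numpy as np

    def antecedent_index(driver, weights):
        """Weighted antecedent condition: past driver values (e.g. daily
        rain) convolved with importance weights.  In the paper's
        framework the weights are model parameters with posterior
        distributions; here they are fixed inputs."""
        w = np.asarray(weights, float)
        w = w / w.sum()                      # normalize to sum to 1
        lag = w.size
        out = np.full(driver.size, np.nan)
        for t in range(lag - 1, driver.size):
            # reverse so w[0] weights today, w[1] yesterday, etc.
            out[t] = np.dot(w, driver[t - lag + 1:t + 1][::-1])
        return out

    rain = np.random.default_rng(1).gamma(0.3, 5.0, size=120)  # hypothetical
    equal = antecedent_index(rain, np.ones(14))                 # flat memory
    decaying = antecedent_index(rain, 0.7 ** np.arange(14))     # short memory
    ```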

  19. Multifractal Approach to Time Clustering of Earthquakes. Application to Mt. Vesuvio Seismicity

    NASA Astrophysics Data System (ADS)

    Codano, C.; Alonzo, M. L.; Vilardo, G.

    The temporal clustering of Vesuvian earthquakes is investigated by means of statistical tools: the inter-event time distribution, the running mean, and multifractal analysis. The first cannot clearly distinguish a Poissonian process from a clustered one, owing to the difficulty of discriminating between an exponential distribution and a power-law one. The running-mean test reveals the clustering of the earthquakes but loses information about the structure of the distribution at global scales. The multifractal approach can reveal the clustering at small scales, while the global behaviour remains Poissonian. The clustering of the events is then interpreted in terms of diffusive processes of stress in the Earth's crust.
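
    A quick quantitative complement to these tests is the coefficient of variation of inter-event times, which is close to 1 for a Poisson process and larger for clustered catalogs. The sketch below contrasts synthetic unclustered and clustered sequences; it illustrates the idea and is not the paper's procedure.

    ```python
    import numpy as np

    def interevent_cv(event_times):
        """Coefficient of variation of inter-event times: ~1 for a
        Poisson process, >1 for temporally clustered seismicity."""
        dt = np.diff(np.sort(event_times))
        return dt.std() / dt.mean()

    rng = np.random.default_rng(2)
    poisson = np.cumsum(rng.exponential(1.0, 500))        # unclustered
    clustered = np.concatenate(
        [c + rng.exponential(0.05, 20).cumsum()           # bursts of events
         for c in rng.uniform(0, 500, 25)])
    print(interevent_cv(poisson), interevent_cv(clustered))
    ```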

  20. Estimation of viscoelastic shear properties of vocal-fold tissues based on time-temperature superposition.

    PubMed

    Chan, R W

    2001-09-01

    Empirical data on the viscoelastic shear properties of human vocal-fold mucosa (cover) were recently reported at relatively low frequency (0.01-15 Hz). For the data to become relevant to voice production, attempts have been made to parametrize and extrapolate the data to higher frequencies using constitutive modeling [Chan and Titze, J. Acoust. Soc. Am. 107, 565-580 (2000)]. This study investigated the feasibility of an alternative approach for data extrapolation, namely the principle of time-temperature superposition (TTS). TTS is a hybrid theoretical-empirical approach widely used by rheologists to estimate the viscoelastic properties of polymeric systems at time or frequency scales not readily accessible experimentally. It is based on the observation that for many polymers, the molecular configurational changes that occur in a given time scale at a low temperature correspond to those that occur in a shorter time scale at a higher temperature. Using a rotational rheometer, the elastic shear modulus (G') and viscous shear modulus (G'') of vocal-fold cover (superficial layer of lamina propria) tissue samples were measured at 0.01-15 Hz at relatively low temperatures (5-37 °C). Data were empirically shifted according to TTS, yielding composite "master curves" for predicting the magnitude of the shear moduli at higher frequencies at 37 °C. Results showed that TTS may be a feasible approach for estimating the viscoelastic shear properties of vocal-fold tissues at frequencies of phonation (on the order of 100-1000 Hz).
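
    The mechanics of superposition can be sketched as follows: data measured at a lower temperature are shifted along the frequency axis by a factor a_T to predict higher-frequency behavior at the reference temperature. The study shifted its curves empirically, so the WLF functional form and all constants below are hypothetical placeholders used only to illustrate the procedure.

    ```python
    import numpy as np

    def log10_shift_factor(T, Tref=37.0, C1=5.0, C2=120.0):
        """Williams-Landel-Ferry (WLF) form for the horizontal shift
        factor a_T.  C1 and C2 are hypothetical placeholders; the study
        determined its shifts empirically."""
        return -C1 * (T - Tref) / (C2 + (T - Tref))

    def to_master_curve(freq_hz, modulus, T, Tref=37.0):
        """Shift data measured at temperature T onto the reduced-frequency
        axis of the master curve at Tref: f_reduced = a_T * f."""
        aT = 10.0 ** log10_shift_factor(T, Tref)
        return freq_hz * aT, modulus

    # Toy G' data at 5 C over 0.01-15 Hz, shifted to 37 C: low-temperature,
    # low-frequency measurements predict higher-frequency behavior.
    f = np.logspace(-2, np.log10(15.0), 30)
    Gp = 60.0 + 25.0 * np.log10(f) + 50.0          # hypothetical moduli (Pa)
    f_reduced, Gp_master = to_master_curve(f, Gp, T=5.0)
    print(f_reduced.min(), f_reduced.max())        # extends toward ~1000 Hz
    ```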

  1. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders of magnitude in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD, for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Alternatively, replacing the explicit microscopic simulation with a Markov State Model (MSM) avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  2. Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches

    PubMed Central

    Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand

    2018-01-01

    Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
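
    For the first of these challenges, the canonical two-period, two-group difference-in-differences estimator can be written in a few lines; the sketch below uses fabricated smoking-rate data purely for illustration.

    ```python
    import numpy as np

    def difference_in_differences(y_treat_pre, y_treat_post,
                                  y_ctrl_pre, y_ctrl_post):
        """Canonical 2x2 difference-in-differences estimate: the change in
        the treated group minus the change in the comparison group, which
        nets out shared time trends and fixed pre-existing differences
        (under the parallel-trends assumption)."""
        return ((np.mean(y_treat_post) - np.mean(y_treat_pre))
                - (np.mean(y_ctrl_post) - np.mean(y_ctrl_pre)))

    # Hypothetical smoking rates (%) before/after a tobacco tax:
    rng = np.random.default_rng(3)
    effect = difference_in_differences(
        y_treat_pre=rng.normal(22.0, 2.0, 500),
        y_treat_post=rng.normal(19.0, 2.0, 500),
        y_ctrl_pre=rng.normal(21.0, 2.0, 500),
        y_ctrl_post=rng.normal(20.5, 2.0, 500))
    print(f"policy effect: {effect:+.2f} percentage points")
    ```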

  3. Scale relativity: from quantum mechanics to chaotic dynamics.

    NASA Astrophysics Data System (ADS)

    Nottale, L.

    Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists of generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to a new complex time-derivative operator that allows one to recover, and then generalize, the Schrödinger equation. In high-energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  4. Disentangling WTP per QALY data: different analytical approaches, different answers.

    PubMed

    Gyrd-Hansen, Dorte; Kjaer, Trine

    2012-03-01

    A large random sample of the Danish general population was asked to value health improvements by way of both the time trade-off elicitation technique and willingness-to-pay (WTP) using contingent valuation methods. The data demonstrate a high degree of heterogeneity across respondents in their relative valuations on the two scales. This has implications for data analysis. We show that the estimates of WTP per QALY are highly sensitive to the analytical strategy. For both open-ended and dichotomous choice data we demonstrate that choice of aggregated approach (ratios of means) or disaggregated approach (means of ratios) affects estimates markedly as does the interpretation of the constant term (which allows for disproportionality across the two scales) in the regression analyses. We propose that future research should focus on why some respondents are unwilling to trade on the time trade-off scale, on how to interpret the constant value in the regression analyses, and on how best to capture the heterogeneity in preference structures when applying mixed multinomial logit. Copyright © 2011 John Wiley & Sons, Ltd.
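
    The aggregated/disaggregated distinction is purely arithmetic but consequential; the toy numbers below (invented, not the study's data) show how the two strategies diverge when preferences are heterogeneous.

    ```python
    import numpy as np

    # WTP for a health gain and the corresponding QALY gain elicited on
    # the time trade-off scale, for four hypothetical respondents.
    wtp = np.array([500.0, 800.0, 1200.0, 4000.0])
    qaly = np.array([0.010, 0.020, 0.010, 0.008])

    ratio_of_means = wtp.mean() / qaly.mean()     # aggregated approach
    mean_of_ratios = (wtp / qaly).mean()          # disaggregated approach
    print(f"ratio of means: {ratio_of_means:9.0f}")   # ~135,417
    print(f"mean of ratios: {mean_of_ratios:9.0f}")   # ~177,500
    ```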

  5. Multi-fluid Dynamics for Supersonic Jet-and-Crossflows and Liquid Plug Rupture

    NASA Astrophysics Data System (ADS)

    Hassan, Ezeldin A.

    Multi-fluid dynamics simulations require appropriate numerical treatments based on the main flow characteristics, such as flow speed, turbulence, thermodynamic state, and time and length scales. In this thesis, two distinct problems are investigated: supersonic jet and crossflow interactions; and liquid plug propagation and rupture in an airway. Gaseous non-reactive ethylene jet and air crossflow simulation represents essential physics for fuel injection in SCRAMJET engines. The regime is highly unsteady, involving shocks, turbulent mixing, and large-scale vortical structures. An eddy-viscosity-based multi-scale turbulence model is proposed to resolve turbulent structures consistent with grid resolution and turbulence length scales. Predictions of the time-averaged fuel concentration from the multi-scale model are improved over Reynolds-averaged Navier-Stokes models originally derived from stationary flow. The benefit of the multi-scale model alone is, however, limited in cases where the vortical structures are small and scattered, which would require prohibitively expensive grids to resolve the flow field accurately. Statistical information related to turbulent fluctuations is utilized to estimate an effective turbulent Schmidt number, which is shown to be highly varying in space. Accordingly, an adaptive turbulent Schmidt number approach is proposed, by allowing the resolved field to adaptively influence the value of the turbulent Schmidt number in the multi-scale turbulence model. The proposed model estimates a time-averaged turbulent Schmidt number adapted to the computed flowfield, instead of the constant value common to eddy-viscosity-based Navier-Stokes models. This approach is assessed using a grid-refinement study for the normal injection case, and tested with 30-degree injection, showing improved results over the constant-turbulent-Schmidt-number model in both the mean and variance of fuel concentration predictions. For the incompressible liquid plug propagation and rupture study, numerical simulations are conducted using an Eulerian-Lagrangian approach with a continuous-interface method. A reconstruction scheme is developed to allow topological changes during plug rupture by altering the connectivity information of the interface mesh. Rupture time is shown to be delayed as the initial precursor film thickness increases. During the plug rupture process, a sudden increase of mechanical stresses on the tube wall is recorded, which can cause tissue damage.

  6. Maximum one-shot dissipated work from Rényi divergences

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
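
    The order-infinity Rényi divergence that bounds the one-shot dissipated work has a simple closed form for discrete distributions, D_inf(P||Q) = log max_i p_i/q_i. The sketch below evaluates Rényi divergences of several orders for made-up distributions; it illustrates the quantity, not the paper's derivations.

    ```python
    import numpy as np

    def renyi_divergence(p, q, alpha):
        """Renyi divergence D_alpha(P||Q) for discrete distributions,
        including the order-infinity limit D_inf = log max_i p_i/q_i.
        Base-e logarithms; p and q are finite probability vectors."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        if np.isinf(alpha):
            return np.log(np.max(p[p > 0] / q[p > 0]))
        return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

    p = [0.5, 0.3, 0.2]          # e.g. an actual work distribution
    q = [0.4, 0.4, 0.2]          # e.g. a quasistatic reference
    for a in (0.5, 2, np.inf):
        print(a, renyi_divergence(p, q, a))
    ```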

  7. Maximum one-shot dissipated work from Rényi divergences.

    PubMed

    Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  8. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provide a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.

  9. Estimating time-dependent connectivity in marine systems

    USGS Publications Warehouse

    Defne, Zafer; Ganju, Neil K.; Aretxabaleta, Alfredo

    2016-01-01

    Hydrodynamic connectivity describes the sources and destinations of water parcels within a domain over a given time. When combined with biological models, it can be a powerful concept to explain the patterns of constituent dispersal within marine ecosystems. However, providing connectivity metrics for a given domain is a three-dimensional problem: two dimensions in space to define the sources and destinations and a time dimension to evaluate connectivity at varying temporal scales. If the time scale of interest is not predefined, then a general approach is required to describe connectivity over different time scales. For this purpose, we have introduced the concept of a “retention clock” that highlights the change in connectivity through time. Using the example of connectivity between protected areas within Barnegat Bay, New Jersey, we show that a retention clock matrix is an informative tool for multitemporal analysis of connectivity.

  10. Multiscale decoding for reliable brain-machine interface performance over time.

    PubMed

    Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M

    2017-07-01

    Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.

  11. The effects of different dry roast parameters on peanut quality using an industrial, belt-type roaster simulator

    USDA-ARS?s Scientific Manuscript database

    Recent lab scale experiments demonstrated that peanuts roasted to equivalent surface colors at different temperature/time combinations can vary substantially in chemical and physical properties related to product quality. This study expanded that approach to a pilot plant scale roaster that simulate...

  12. Validating the use of MODIS time series for salinity assessment over agricultural soils in California, USA

    USDA-ARS?s Scientific Manuscript database

    Testing soil salinity assessment methodologies over different regions is important for future continental and global scale applications. A novel regional-scale soil salinity modeling approach using plant-performance metrics was proposed by Zhang et al. (2015) for farmland in the Yellow River Delta, ...

  13. Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Matthaeus, William H.

    1990-01-01

    A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.

  14. Upscaling heterogeneity in aquifer reactivity via exposure-time concept: forward model.

    PubMed

    Seeboonruang, Uma; Ginn, Timothy R

    2006-03-20

    Reactive properties of aquifer solid phase materials play an important role in solute fate and transport in the natural subsurface on time scales ranging from years in contaminant remediation to millennia in dynamics of aqueous geochemistry. Quantitative tools for dealing with the impact of natural heterogeneity in solid phase reactivity on solute fate and transport are limited. Here we describe the use of a structural variable to keep track of solute flux exposure to reactive surfaces. With this approach, we develop a non-reactive tracer model that is useful for determining the signature of multi-scale reactive solid heterogeneity in terms of solute flux distributions at the field scale, given realizations of three-dimensional reactive site density fields. First, a governing Eulerian equation for the non-reactive tracer model is determined by an upscaling technique in which it is found that the exposure time of solution to reactive surface areas evolves via both a macroscopic velocity and a macroscopic dispersion in the artificial dimension of exposure time. Second, we focus on the Lagrangian approach in the context of a streamtube ensemble and demonstrate the use of the distribution of solute flux over the exposure time dimension in modeling two-dimensional transport of a solute undergoing simplified linear reversible reactions, in hypothetical conditions following prior laboratory experiments. The distribution of solute flux over exposure time in a given case is a signature of the impact of heterogeneous aquifer reactivity coupled with a particular physical heterogeneity, boundary conditions, and hydraulic gradient. Rigorous application of this approach in a simulation sense is limited here to linear kinetically controlled reactions.

  15. Revisiting the time until fixation of a neutral mutant in a finite population - A coalescent theory approach.

    PubMed

    Greenbaum, Gili

    2015-09-07

    Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
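
    The small-population comparison can be reproduced directly: for a neutral Wright-Fisher chain, the mean fixation time of a single new mutant, conditioned on fixation, follows from an exact Markov chain calculation and can be set against the diffusion-limit value of roughly 4N generations. The sketch below (assuming NumPy and SciPy are available) implements that calculation; it illustrates the comparison rather than the paper's specific derivations.

    ```python
    import numpy as np
    from scipy.stats import binom

    def conditional_fixation_time(N):
        """Exact mean time to fixation of a single new neutral mutant in
        a Wright-Fisher population of 2N haploid copies, conditioned on
        eventual fixation (Doob h-transform with h(i) = i/2N), obtained
        by solving (I - Q) t = 1 over the transient states."""
        n = 2 * N
        states = np.arange(n + 1)
        P = np.array([binom.pmf(states, n, i / n) for i in states])
        i = states[1:-1]
        Q = P[1:-1, 1:-1] * (i[None, :] / i[:, None])   # conditioned chain
        t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        return t[0]                                      # start: one copy

    for N in (10, 25, 50):
        print(N, round(conditional_fixation_time(N), 1), "vs 4N =", 4 * N)
    ```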

  16. Static and Dynamic Frequency Scaling on Multicore CPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Hong, Changwan; Chunduri, Sudheer

    2016-12-28

    Dynamic voltage and frequency scaling (DVFS) adapts CPU power consumption by modifying a processor's operating frequency (and the associated voltage). Typical approaches employing DVFS involve default strategies such as running at the lowest or the highest frequency, or observing the CPU's runtime behavior and dynamically adapting the voltage/frequency configuration based on CPU usage. In this paper, we argue that many previous approaches suffer from inherent limitations, such as not accounting for the processor-specific impact of frequency changes on energy for different workload types. We first propose a lightweight runtime-based approach to automatically adapt the frequency based on the CPU workload, one that is agnostic of the processor characteristics. We then show that further improvements can be achieved for affine kernels in the application, using a compile-time characterization instead of run-time monitoring to select the frequency and the number of CPU cores to use. Our framework relies on a one-time energy characterization of CPU-specific DVFS profiles followed by a compile-time categorization of loop-based code segments in the application. These are combined to determine a priori the frequency and the number of cores to use to execute the application so as to optimize energy or energy-delay product, outperforming the runtime approach. Extensive evaluation on 60 benchmarks and five multi-core CPUs shows that our approach systematically outperforms the powersave Linux governor, while improving overall performance.
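
    Conceptually, the compile-time selection reduces to a lookup over a per-CPU characterization table. The sketch below minimizes energy or energy-delay product over hypothetical (frequency, cores) profiles; all values are invented, not from the paper.

    ```python
    # One-time characterization table for one kernel class:
    # (freq_GHz, cores) -> (energy_J, time_s).  Values are hypothetical.
    profiles = {
        (1.2, 4): (40.0, 10.0),
        (2.0, 4): (55.0, 6.5),
        (2.8, 4): (80.0, 5.0),
        (2.0, 8): (70.0, 3.8),
        (2.8, 8): (105.0, 3.0),
    }

    def best_config(profiles, metric="edp"):
        """Pick the configuration minimizing energy-delay product (EDP)
        or plain energy, a priori, from the characterization table."""
        score = (lambda e, t: e * t) if metric == "edp" else (lambda e, t: e)
        return min(profiles, key=lambda k: score(*profiles[k]))

    print(best_config(profiles))              # (2.0, 8) minimizes EDP
    print(best_config(profiles, "energy"))    # (1.2, 4) minimizes energy
    ```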

  17. Scaling Watershed Models: Modern Approaches to Science Computation with MapReduce, Parallelization, and Cloud Optimization

    EPA Science Inventory

    Environmental models are products of the computer architecture and software tools available at the time of development. Scientifically sound algorithms may persist in their original state even as system architectures and software development approaches evolve and progress. Dating...

  18. The Trapping Index: How to integrate the Eulerian and the Lagrangian approach for the computation of the transport time scales of semi-enclosed basins.

    PubMed

    Cucco, Andrea; Umgiesser, Georg

    2015-09-15

    In this work, we investigated whether the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed both for simplified test cases and for a realistic domain: the Venice Lagoon. The results confirmed that the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful in avoiding the misleading interpretations that can arise from evaluating basin renewal features from an Eulerian-only or a Lagrangian-only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Use of soil moisture dynamics and patterns at different spatio-temporal scales for the investigation of subsurface flow processes

    NASA Astrophysics Data System (ADS)

    Blume, T.; Zehe, E.; Bronstert, A.

    2009-07-01

    Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic and thus temporally scarce moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques, and is at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, while continuously monitoring soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scale) and binary indicator maps (for the long-term and hillslope scale). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved to be successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small-scale heterogeneity in rainfall due to redistribution in the canopy, and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.

  20. Characteristic time scales for diffusion processes through layers and across interfaces

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.

    2018-04-01

    This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.
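
    As a concrete (if brute-force) companion to the algebraic expressions, mean action time (MAT) can also be estimated by time-stepping the layered diffusion equation and integrating 1 - F(x,t), then taking max_x MAT(x) as the characteristic time scale. The parameters and discretization below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical two-layer medium: c(x,0) = 0, c = 1 imposed at x = 0,
    # no flux at the far end, so the steady state is c = 1 everywhere
    # and the transition function is F(x,t) = c(x,t).
    D1, D2, ell = 1.0, 0.1, 0.5          # layer diffusivities and length
    nx = 40
    dx = 2 * ell / nx
    x = (np.arange(nx) + 0.5) * dx
    D = np.where(x < ell, D1, D2)
    Dface = 2 * D[:-1] * D[1:] / (D[:-1] + D[1:])  # harmonic mean at faces

    c = np.zeros(nx)
    dt = 0.25 * dx**2 / D.max()          # explicit-scheme stability
    mat, t = np.zeros(nx), 0.0
    while t < 20.0:                      # long enough to reach steady state
        q = np.empty(nx + 1)             # fluxes D * dc/dx at cell faces
        q[0] = 2 * D[0] * (c[0] - 1.0) / dx      # Dirichlet c = 1 at x = 0
        q[1:-1] = Dface * (c[1:] - c[:-1]) / dx
        q[-1] = 0.0                               # no-flux right boundary
        c += dt * (q[1:] - q[:-1]) / dx
        mat += dt * (1.0 - c)            # accumulate integral of 1 - F
        t += dt

    print("characteristic time scale (max MAT):", mat.max())
    ```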

  1. Characteristic time scales for diffusion processes through layers and across interfaces.

    PubMed

    Carr, Elliot J

    2018-04-01

    This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.

  2. Time-dependent shock acceleration of particles. Effect of the time-dependent injection, with application to supernova remnants

    NASA Astrophysics Data System (ADS)

    Petruk, O.; Kopytko, B.

    2016-11-01

    Three approaches are considered to solve the equation which describes the time-dependent diffusive shock acceleration of test particles at non-relativistic shocks. First, the solution of Drury for the particle distribution function at the shock is generalized to any relation between the acceleration time-scales upstream and downstream and to a time-dependent injection efficiency. Three alternative solutions for the spatial dependence of the distribution function are derived. Then, two other approaches to solve the time-dependent equation are presented, one of which does not require the Laplace transform. Finally, our more general solution is discussed, with particular attention to the time-dependent injection in supernova remnants. It is shown that, compared to the case with a dominant upstream acceleration time-scale, the maximum momentum of accelerated particles shifts towards smaller momenta as the downstream acceleration time-scale increases. The time-dependent injection affects the shape of the particle spectrum. In particular, (I) the power-law index is not solely determined by the shock compression, in contrast to the stationary solution; (II) the larger the injection efficiency during the first decades after the supernova explosion, the harder the particle spectrum around the high-energy cutoff at later times. This is important, in particular, for the interpretation of radio and gamma-ray observations of supernova remnants, as demonstrated on a number of examples.

  3. Wavelet analysis and scaling properties of time series

    NASA Astrophysics Data System (ADS)

    Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
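
    A minimal sketch of wavelet-based scaling analysis, assuming the PyWavelets package: decompose the series with a Daubechies wavelet and fit the level-wise moments of the detail coefficients, S_q(j) = mean(|d_j|^q) ~ 2^(j*zeta(q)). The paper's procedure additionally detrends profiles in variable windows, which this illustration omits.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_scaling_exponents(x, wavelet="db4", qs=(1, 2, 4)):
        """Scaling exponents zeta(q) from level-wise moments of
        Daubechies detail coefficients."""
        coeffs = pywt.wavedec(x, wavelet)       # [cA_n, cD_n, ..., cD_1]
        details = coeffs[1:][::-1]              # reorder: level j = 1..n
        zeta = {}
        for q in qs:
            j, logS = [], []
            for lev, d in enumerate(details, start=1):
                if d.size >= 8:                 # skip poorly sampled levels
                    j.append(lev)
                    logS.append(np.log2(np.mean(np.abs(d) ** q)))
            zeta[q] = np.polyfit(j, logS, 1)[0]
        return zeta

    rng = np.random.default_rng(4)
    walk = np.cumsum(rng.standard_normal(2**14))   # monofractal test signal
    print(wavelet_scaling_exponents(walk))         # zeta(q) ~ linear in q
    ```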

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent, and we implement it within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.

  5. Physicochemical heterogeneity controls on uranium bioreduction rates at the field scale.

    PubMed

    Li, Li; Gawande, Nitin; Kowalsky, Michael B; Steefel, Carl I; Hubbard, Susan S

    2011-12-01

    It has been demonstrated in laboratory systems that U(VI) can be reduced to immobile U(IV) by bacteria in natural environments. The ultimate efficacy of bioreduction at the field scale, however, is often challenging to quantify and depends on site characteristics. In this work, uranium bioreduction rates at the field scale are quantified, for the first time, using an integrated approach. The approach combines field data, inverse and forward hydrological and reactive transport modeling, and quantification of reduction rates at different spatial scales. The approach is used to explore the impact of local scale (tens of centimeters) parameters and processes on field scale (tens of meters) system responses to biostimulation treatments and the controls of physicochemical heterogeneity on bioreduction rates. Using the biostimulation experiments at the Department of Energy Old Rifle site, our results show that the spatial distribution of hydraulic conductivity and solid phase mineral (Fe(III)) play a critical role in determining the field-scale bioreduction rates. Due to the dependence on Fe-reducing bacteria, field-scale U(VI) bioreduction rates were found to be largely controlled by the abundance of Fe(III) minerals at the vicinity of the injection wells and by the presence of preferential flow paths connecting injection wells to down gradient Fe(III) abundant areas.

  6. Concurrent systems and time synchronization

    NASA Astrophysics Data System (ADS)

    Burgin, Mark; Grathoff, Annette

    2018-05-01

    In the majority of scientific fields, system dynamics is described assuming the existence of a unique time for the whole system. However, it is established theoretically, for example in relativity theory and in the system theory of time, and validated experimentally, that there are different times and time scales in a variety of real systems - physical, chemical, biological, social, etc. In spite of this, there are no wide-ranging scientific approaches to the exploration of such systems. Therefore, the goal of this paper is to study systems with this property. We call them concurrent systems because processes in them can run, events can happen, and actions can be performed on different time scales. The problem of time synchronization is specifically explored.

  7. Real-time detection of antibiotic activity by measuring nanometer-scale bacterial deformation.

    PubMed

    Iriya, Rafael; Syal, Karan; Jing, Wenwen; Mo, Manni; Yu, Hui; Haydel, Shelley E; Wang, Shaopeng; Tao, Nongjian

    2017-12-01

    Diagnosing antibiotic-resistant bacteria currently requires sensitive detection of phenotypic changes associated with antibiotic action on bacteria. Here, we present an optical imaging-based approach to quantify bacterial membrane deformation as a phenotypic feature in real-time with a nanometer scale (∼9 nm) detection limit. Using this approach, we found two types of antibiotic-induced membrane deformations in different bacterial strains: polymyxin B induced relatively uniform spatial deformation of Escherichia coli O157:H7 cells leading to change in cellular volume and ampicillin-induced localized spatial deformation leading to the formation of bulges or protrusions on uropathogenic E. coli CFT073 cells. We anticipate that the approach will contribute to understanding of antibiotic phenotypic effects on bacteria with a potential for applications in rapid antibiotic susceptibility testing. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).

  8. Self-interaction-corrected time-dependent density-functional-theory calculations of x-ray-absorption spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, Guangde; Rinkevicius, Zilvinas; Vahtras, Olav

    We outline an approach within time-dependent density functional theory that predicts x-ray spectra on an absolute scale. The approach rests on a recent formulation of the resonant-convergent first-order polarization propagator [P. Norman et al., J. Chem. Phys. 123, 194103 (2005)] and corrects for the self-interaction energy of the core orbital. This polarization propagator approach makes it possible to directly calculate the x-ray absorption cross section at a particular frequency without explicitly addressing the excited-state spectrum. The self-interaction correction for the employed density functional accounts for an energy shift of the spectrum, and fully correlated absolute-scale x-ray spectra are thereby obtained based solely on optimization of the electronic ground state. The procedure is benchmarked against experimental spectra of a set of small organic molecules at the carbon, nitrogen, and oxygen K edges.

  9. Accounting for interannual variability: A comparison of options for water resources climate change impact assessments

    NASA Astrophysics Data System (ADS)

    Johnson, Fiona; Sharma, Ashish

    2011-04-01

    Empirical scaling approaches for constructing rainfall scenarios from general circulation model (GCM) simulations are commonly used in water resources climate change impact assessments. However, these approaches have a number of limitations, not the least of which is that they cannot account for changes in variability or persistence at annual and longer time scales. Bias correction of GCM rainfall projections offers an attractive alternative to scaling methods as it has similar advantages to scaling in that it is computationally simple, can consider multiple GCM outputs, and can be easily applied to different regions or climatic regimes. In addition, it also allows for interannual variability to evolve according to the GCM simulations, which provides additional scenarios for risk assessments. This paper compares two scaling and four bias correction approaches for estimating changes in future rainfall over Australia and for a case study for water supply from the Warragamba catchment, located near Sydney, Australia. A validation of the various rainfall estimation procedures is conducted on the basis of the latter half of the observational rainfall record. It was found that the method leading to the lowest prediction errors varies depending on the rainfall statistic of interest. The flexibility of bias correction approaches in matching rainfall parameters at different frequencies is demonstrated. The results also indicate that for Australia, the scaling approaches lead to smaller estimates of uncertainty associated with changes to interannual variability for the period 2070-2099 compared to the bias correction approaches. These changes are also highlighted using the case study for the Warragamba Dam catchment.
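
    One widely used bias-correction variant, empirical quantile mapping, can be sketched in a few lines; unlike simple mean scaling, it lets the GCM's own simulated variability carry through to the corrected series. The implementation and data below are illustrative assumptions, not the paper's specific method.

    ```python
    import numpy as np

    def quantile_map(gcm_hist, obs, gcm_future):
        """Empirical quantile mapping: locate each future GCM value on
        the historical GCM CDF, then read off the observed quantile at
        that probability."""
        ranks = np.searchsorted(np.sort(gcm_hist), gcm_future) / len(gcm_hist)
        ranks = np.clip(ranks, 0.0, 1.0)
        return np.quantile(obs, ranks)

    rng = np.random.default_rng(5)
    obs = rng.gamma(2.0, 30.0, 1000)          # observed monthly rainfall (mm)
    gcm_hist = rng.gamma(2.0, 22.0, 1000)     # GCM underestimates rainfall
    gcm_future = rng.gamma(2.0, 25.0, 1000)   # raw future simulation
    corrected = quantile_map(gcm_hist, obs, gcm_future)
    print(gcm_future.mean(), corrected.mean())
    ```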

  10. The origins of modern biodiversity on land

    PubMed Central

    Benton, Michael J.

    2010-01-01

    Comparative studies of large phylogenies of living and extinct groups have shown that most biodiversity arises from a small number of highly species-rich clades. To understand biodiversity, it is important to examine the history of these clades on geological time scales. This is part of a distinct ‘phylogenetic expansion’ view of macroevolution, and contrasts with the alternative, non-phylogenetic ‘equilibrium’ approach to the history of biodiversity. The latter viewpoint focuses on density-dependent models in which all life is described by a single global-scale model, and a case is made here that this approach may be less successful at representing the shape of the evolution of life than the phylogenetic expansion approach. The terrestrial fossil record is patchy, but is adequate for coarse-scale studies of groups such as vertebrates that possess fossilizable hard parts. New methods in phylogenetic analysis, morphometrics and the study of exceptional biotas allow new approaches. Models for diversity regulation through time range from the entirely biotic to the entirely physical, with many intermediates. Tetrapod diversity has risen as a result of the expansion of ecospace, rather than niche subdivision or regional-scale endemicity resulting from continental break-up. Tetrapod communities on land have been remarkably stable and have changed only when there was a revolution in floras (such as the demise of the Carboniferous coal forests, or the Cretaceous radiation of angiosperms) or following particularly severe mass extinction events, such as that at the end of the Permian. PMID:20980315

  11. Catchment Storage and Transport on Timescales from Minutes to Millennia

    NASA Astrophysics Data System (ADS)

    Kirchner, J. W.

    2017-12-01

    Landscapes are characterized by preferential flow and pervasive heterogeneity on all scales. They therefore store and transmit water and solutes over a wide spectrum of time scales, with important implications for contaminant transport, weathering rates, and runoff chemistry. Theoretical analyses predict, and syntheses of age tracer data confirm, that waters in aquifers are older - often by orders of magnitude - than in the rivers that flow from them, and that this disconnect between water ages arises from aquifer heterogeneity. Recent theoretical studies also suggest that catchment transit time distributions are nonstationary, reflecting temporal variability in precipitation forcing, structural heterogeneity in catchments themselves, and the nonlinearity of the mechanisms controlling storage and transport in the subsurface. The challenge of empirically estimating these nonstationary transit time distributions in real-world catchments, however, has only begun to be explored. In recent years, long-term isotope time series have been collected in many research catchments, and new technologies have emerged that allow quasi-continuous measurements of isotopes in precipitation and streamflow. These new data streams create new opportunities to study how rainfall becomes streamflow following the onset of precipitation. Here I present novel methods for quantifying the fraction of current rainfall in streamflow across ensembles of precipitation events. Benchmark tests with nonstationary catchment models demonstrate that this approach quantitatively measures the short tail of the transit time distribution for a wide range of catchment response characteristics. In combination with reactive tracer time series, this approach can potentially be extended to measure short-term chemical reaction rates at the catchment scale. Applications using high-frequency tracer time series from several experimental catchments demonstrate the utility of the new approach outlined here.

  12. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
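
    A minimal sketch of the ATS idea, assuming a known activation-energy parameter: rescale each observation's time axis by the Arrhenius factor relative to a reference temperature, then fit a single polynomial across all temperatures by ordinary least squares. In the paper, Ea/R is the single nonlinear covariate to be estimated; the fixed value and the data below are hypothetical.

    ```python
    import numpy as np

    def ats_design(time, temp_K, Ea_over_R, Tref_K=278.15, degree=2):
        """Arrhenius time-scaled design matrix: scale time by
        exp(-Ea/R * (1/T - 1/Tref)) and build polynomial columns
        [1, s, s^2] in the scaled time s."""
        s = time * np.exp(-Ea_over_R * (1.0 / temp_K - 1.0 / Tref_K))
        return np.vander(s, degree + 1, increasing=True)

    # Hypothetical accelerated-stability data: % purity at 5/25/40 C.
    t = np.array([0, 1, 3, 6, 0, 1, 3, 6, 0, 1, 3, 6], float)  # months
    T = np.repeat([278.15, 298.15, 313.15], 4)                  # kelvin
    y = np.array([99.8, 99.7, 99.5, 99.2, 99.9, 99.2, 97.9, 96.1,
                  99.8, 97.6, 93.4, 87.9])
    X = ats_design(t, T, Ea_over_R=10000.0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta        # one shared curve on the scaled time axis
    ```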

  13. The Systems Revolution

    ERIC Educational Resources Information Center

    Ackoff, Russell L.

    1974-01-01

    The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…

  14. Putting scales into evolutionary time: the divergence of major scale insect lineages (Hemiptera) predates the radiation of modern angiosperm hosts

    PubMed Central

    Vea, Isabelle M.; Grimaldi, David A.

    2016-01-01

    The radiation of flowering plants in the mid-Cretaceous transformed landscapes and is widely believed to have fuelled the radiations of major groups of phytophagous insects. An excellent group to test this assertion is the scale insects (Coccomorpha: Hemiptera), with some 8,000 described Recent species and probably the most diverse fossil record of any phytophagous insect group preserved in amber. We used here a total-evidence approach (by tip-dating) employing 174 morphological characters of 73 Recent and 43 fossil taxa (48 families) and DNA sequences of three gene regions, to obtain divergence time estimates and compare the chronology of the most diverse lineage of scale insects, the neococcoid families, with the timing of the main angiosperm radiation. An estimated origin of the Coccomorpha occurred at the beginning of the Triassic, about 245 Ma [228–273], and of the neococcoids 60 million years later [210–165 Ma]. A total-evidence approach allows the integration of extinct scale insects into a phylogenetic framework, resulting in slightly younger median estimates than analyses using Recent taxa, calibrated with fossil ages only. From these estimates, we hypothesise that most major lineages of coccoids shifted from gymnosperms onto angiosperms when the latter became diverse and abundant in the mid- to Late Cretaceous. PMID:27000526

  15. A statistical characterization of the Galileo-to-GPS inter-system bias

    NASA Astrophysics Data System (ADS)

    Gioia, Ciro; Borio, Daniele

    2016-11-01

    Global navigation satellite systems operate using independent time scales, and thus inter-system time offsets have to be determined to enable multi-constellation navigation solutions. The GPS/Galileo inter-system bias and drift are evaluated here using different types of receivers: two mass-market and two professional receivers. Moreover, three different approaches are considered for the inter-system bias determination: in the first one, the broadcast Galileo-to-GPS time offset is used to align the GPS and Galileo time scales. In the second, the inter-system bias is included in the multi-constellation navigation solution and is estimated using the available measurements. Finally, an enhanced algorithm using constraints on the inter-system bias time evolution is proposed. The inter-system bias estimates obtained with the different approaches are analysed and their stability is experimentally evaluated using the Allan deviation. The impact of the inter-system bias on the position-velocity-time solution is also considered, and the performance of the approaches analysed is evaluated in terms of standard deviation and mean errors for both the horizontal and vertical components. From the experiments, it emerges that the inter-system bias is very stable and that the use of constraints modelling the GPS/Galileo inter-system bias behaviour significantly improves the performance of multi-constellation navigation.
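
    The second approach, estimating the inter-system bias within the navigation solution, amounts to adding one unknown to the single-epoch least-squares problem. The sketch below shows that augmentation with a synthetic epoch; the geometry and values are invented, and inputs are assumed pre-linearized.

    ```python
    import numpy as np

    def pvt_with_isb(H_geom, is_galileo, residuals):
        """Single-epoch least squares with the GPS/Galileo inter-system
        bias (ISB) as a fifth unknown: columns are [dx, dy, dz,
        c*dt_gps, isb], where the ISB column is 1 only for Galileo
        measurements."""
        clk_col = np.ones((H_geom.shape[0], 1))
        isb_col = is_galileo.astype(float)[:, None]
        A = np.hstack([H_geom, clk_col, isb_col])
        sol, *_ = np.linalg.lstsq(A, residuals, rcond=None)
        return sol                      # sol[-1]: ISB estimate (meters)

    # Hypothetical epoch: 5 GPS + 4 Galileo satellites.
    rng = np.random.default_rng(6)
    H = rng.normal(size=(9, 3))                    # line-of-sight rows
    is_gal = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1], bool)
    truth = np.array([1.0, -2.0, 0.5, 30.0, 8.0])  # includes an 8 m ISB
    rho = np.hstack([H, np.ones((9, 1)), is_gal[:, None]]) @ truth
    print(pvt_with_isb(H, is_gal, rho + 0.3 * rng.normal(size=9)))
    ```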

  16. Evaluating the status of individuals and populations: Advantages of multiple approaches and time scales: Chapter 6

    USGS Publications Warehouse

    Monson, Daniel H.; Bowen, Lizabeth

    2015-01-01

    Overall, a variety of indices used to measure population status throughout the sea otter's range have provided insights into the mechanisms driving the trajectories of various sea otter populations that no single index could, and we suggest using multiple methods to measure a population's status at multiple spatial and temporal scales. The work described here also illustrates the usefulness of long-term data sets and of approaches that can be used to assess population status retrospectively, providing information otherwise not available. While not all systems will be as amenable to all the approaches presented here, we expect innovative researchers could adapt analogous multi-scale methods to a broad range of habitats and species, including apex predators occupying the top trophic levels, which are often of conservation concern.

  17. Inflation in a Scale Invariant Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira, Pedro G.; Hill, Christopher T.; Noller, Johannes

    A scale-invariant universe can have a period of accelerated expansion at early times: inflation. We use a frame-invariant approach to calculate inflationary observables in a scale-invariant theory of gravity involving two scalar fields: the spectral indices, the tensor-to-scalar ratio, the level of isocurvature modes, and non-Gaussianity. We show that scale symmetry leads to an exact cancellation of isocurvature modes and that, in the scale-symmetry broken phase, this theory is well described by a single scalar field theory. We find the predictions of this theory strongly compatible with current observations.

  18. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered for calculating the cosine factor, which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined by considering instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to treat the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important, and it is therefore more reliable to select one of the recommended time-averaged methods to optimize the field layout. The optimum annual weighted efficiencies for the small, medium, and large heliostat fields, containing 350, 1460, and 3450 mirrors, are 66.14%, 60.87%, and 54.04%, respectively.
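
    The underlying geometry is simple: a heliostat's cosine factor is the incidence cosine between the sun direction and the mirror normal, where the normal bisects the sun vector and the mirror-to-receiver vector. The sketch below computes it for a hypothetical geometry; Campo combines this term with blocking, shadowing, and attenuation factors, which are omitted here.

    ```python
    import numpy as np

    def cosine_factor(sun_dir, heliostat_pos, receiver_pos):
        """Incidence cosine for one heliostat: the normal bisects the
        unit sun vector s and the unit mirror-to-receiver vector r."""
        s = sun_dir / np.linalg.norm(sun_dir)
        r = receiver_pos - heliostat_pos
        r = r / np.linalg.norm(r)
        n = (s + r) / np.linalg.norm(s + r)   # mirror normal
        return float(np.dot(n, s))            # = cos(incidence angle)

    # Instantaneous vs time-averaged design values (hypothetical geometry):
    helio = np.array([50.0, 0.0, 0.0])
    tower = np.array([0.0, 0.0, 80.0])
    sun_noon = np.array([0.0, -0.5, 0.87])
    sun_morning = np.array([0.7, -0.3, 0.65])
    instantaneous = cosine_factor(sun_noon, helio, tower)
    averaged = np.mean([cosine_factor(s, helio, tower)
                        for s in (sun_noon, sun_morning)])
    ```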

  19. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms.

    PubMed

    Alsmadi, Othman M K; Abo-Hammour, Zaer S

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
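
    A minimal sketch of the fitness idea described above: a candidate reduced-order model is scored by how closely its step response tracks the full-order model, so a GA can maximize this score over parameter sets. The example systems and the error-to-fitness mapping are illustrative assumptions.

    ```python
    # Sketch: GA fitness from the step-response deviation of full vs reduced model.
    import numpy as np
    from scipy import signal

    def fitness(full_sys, reduced_sys, n=200):
        """Higher fitness = smaller deviation between the two step responses."""
        _, y_full = signal.dstep(full_sys, n=n)
        _, y_red = signal.dstep(reduced_sys, n=n)
        err = np.sum((np.squeeze(y_full) - np.squeeze(y_red)) ** 2)
        return 1.0 / (1.0 + err)    # the GA maximizes this

    # e.g. a stable 4th-order two-time-scale plant vs a candidate 2nd-order model
    full = signal.dlti([0.1, 0.05], [1.0, -1.3, 0.4, 0.0, 0.0], dt=1.0)
    reduced = signal.dlti([0.1], [1.0, -0.8], dt=1.0)
    print(fitness(full, reduced))
    ```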

  20. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    PubMed

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series of protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity, which makes large-scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse-engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time-dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large-scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  1. Simulating mesoscale coastal evolution for decadal coastal management: A new framework integrating multiple, complementary modelling approaches

    NASA Astrophysics Data System (ADS)

    van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.

    2016-03-01

    Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order of 10^1 to 10^2 years and 10^1 to 10^2 km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models. Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve decadal- to centennial-scale behaviour of diverse coupled open coast, estuary and inner shelf settings. This vision is illustrated through an idealised composition of models for a ~ 70 km stretch of the Suffolk coast, eastern England. A key advantage of model linking is that it allows a wide range of real-world situations to be simulated from a small set of model components. However, this process involves more than just the development of software that allows for flexible model coupling. The compatibility of radically different modelling assumptions remains to be carefully assessed, and the testing and evaluation of uncertainties in model compositions are areas that require further attention.

  2. Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.

    2003-01-01

    "Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.

  3. Simultaneous estimation of local-scale and flow path-scale dual-domain mass transfer parameters using geoelectrical monitoring

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.

    2013-01-01

    Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate that the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point, and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of the fluid conductivity and geophysical data, demonstrating that multiple scales of mass transfer parameters can be estimated simultaneously. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating a greater range than earlier work using temporal moments and a Lagrangian-based Damköhler number. The introduced Eulerian-based Damköhler number is useful for estimating the tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.
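
    As a concrete reference for the model being calibrated, the sketch below integrates the standard first-order dual-domain mass transfer exchange between mobile and immobile concentrations; the parameter values are illustrative, not those estimated at the Naturita site.

    ```python
    # Explicit-Euler sketch of dC_im/dt = alpha * (C_m - C_im), with beta the
    # immobile-to-mobile capacity ratio controlling the mass taken from C_m.
    import numpy as np

    def dual_domain(c_m0, alpha=0.05, beta=0.3, dt=0.1, n=1000):
        c_m, c_im = c_m0, 0.0
        history = np.empty((n, 2))
        for i in range(n):
            dc_im = alpha * (c_m - c_im) * dt   # exchange into immobile domain
            c_m -= beta * dc_im                 # mobile loses beta * that mass
            c_im += dc_im
            history[i] = c_m, c_im
        return history

    traj = dual_domain(c_m0=1.0)   # both concentrations relax to a common value
    ```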

  4. Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.

    2015-12-01

    Bush encroachment describes a global problem that especially affects the savanna ecosystems of Africa. Livestock is directly affected by the loss of grassland to inedible invasive woody species, the process that defines bush encroachment. For many small-scale farmers in developing countries, livestock represents a type of insurance in times of crop failure or drought. Beyond that, bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment have so far focused on small scales using high-resolution data and rarely provide information beyond the national level. Therefore, a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground-truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information to different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to map bush encroachment on the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment observations serving as the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data, an innovative approach can be presented showing areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
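
    The supervised up-scaling step can be pictured with a few lines of scikit-learn: expert GPS points yield labelled pixels whose spectral features train a Random Forest that is then applied to every pixel of a scene. Shapes, feature counts and the random data are purely illustrative.

    ```python
    # Sketch of the classification step: labelled pixels -> Random Forest -> map.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.random((500, 6))            # e.g. 6 spectral bands per labelled pixel
    y_train = rng.integers(0, 2, 500)         # 1 = bush encroachment, 0 = other

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    X_scene = rng.random((10000, 6))          # all pixels of a Landsat/MODIS tile
    encroachment_map = clf.predict(X_scene)   # per-pixel class labels
    ```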

  5. Multiscale soil moisture estimates using static and roving cosmic-ray soil moisture sensors

    NASA Astrophysics Data System (ADS)

    McJannet, David; Hawdon, Aaron; Baker, Brett; Renzullo, Luigi; Searle, Ross

    2017-12-01

    Soil moisture plays a critical role in land surface processes, and as such there has been a recent increase in the number and resolution of satellite soil moisture observations and the development of land surface process models of ever increasing resolution. Despite these developments, validation and calibration of these products have been limited by a lack of observations on corresponding scales. A recently developed mobile soil moisture monitoring platform, known as the rover, offers opportunities to overcome this scale issue. This paper describes the methods, results and testing of soil moisture estimates produced using rover surveys on a range of scales commensurate with model and satellite retrievals. Our investigation involved static cosmic-ray neutron sensors and rover surveys across both broad (36 × 36 km at 9 km resolution) and intensive (10 × 10 km at 1 km resolution) scales in a cropping district in the Mallee region of Victoria, Australia. We describe approaches for converting rover survey neutron counts to soil moisture and discuss the factors controlling soil moisture variability. We use independent gravimetric and modelled soil moisture estimates collected across both space and time to validate the rover soil moisture products. Measurements revealed that temporal patterns in soil moisture were preserved through time, and regression modelling approaches were utilised to produce time series of property-scale soil moisture, which may also have applications in calibration and validation studies or local farm management. Intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution, while broad-scale surveys produced soil moisture estimates at 9 km resolution. We conclude that the multiscale soil moisture products produced in this study are well suited to future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.
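
    For the count-to-moisture conversion, a commonly used choice is the shape function of Desilets et al. (2010); whether this exact form was used here is an assumption, and N0 remains a site-specific calibration constant.

    ```python
    # Desilets-type conversion from corrected neutron counts N to soil moisture.
    def neutrons_to_theta(N, N0, bulk_density=1.4, a0=0.0808, a1=0.372, a2=0.115):
        """Volumetric soil moisture (m^3/m^3) from a corrected count rate N."""
        theta_grav = a0 / (N / N0 - a1) - a2   # gravimetric water content (g/g)
        return theta_grav * bulk_density       # to volumetric via bulk density
    ```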

  6. Femtosecond parabolic pulse shaping in normally dispersive optical fibers.

    PubMed

    Sukhoivanov, Igor A; Iakushev, Sergii O; Shulika, Oleksiy V; Díez, Antonio; Andrés, Miguel

    2013-07-29

    Formation of parabolic pulses on the femtosecond time scale by means of passive nonlinear reshaping in normally dispersive optical fibers is analyzed. Two approaches are examined and compared: parabolic waveform formation in the transient propagation regime and parabolic waveform formation in the steady-state propagation regime. It is found that both approaches can produce parabolic pulses as short as a few hundred femtoseconds using commercially available fibers, specially designed all-normal-dispersion photonic crystal fiber, and modern femtosecond lasers for pumping. The ranges of parameters providing parabolic pulse formation on the femtosecond time scale are found as functions of the initial pulse duration, chirp and energy. The applicability of different fibers for femtosecond pulse shaping is analyzed. A recommendation for the shortest parabolic pulse formation is made based on the analysis presented.
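
    A split-step Fourier integrator is the usual numerical tool for this kind of passive reshaping study; the sketch below integrates a standard nonlinear Schrödinger equation with normal dispersion, with grid and fiber parameters chosen only for illustration, not taken from the paper.

    ```python
    # Symmetric split-step Fourier for dA/dz = -i(beta2/2) A_tt + i gamma |A|^2 A.
    import numpy as np

    def propagate(u0, dt, length, nz, beta2=20e-27, gamma=0.01):
        """u0: complex field samples; dt: time step (s); length (m); nz steps."""
        w = 2 * np.pi * np.fft.fftfreq(len(u0), dt)    # angular frequency grid
        dz = length / nz
        half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))  # half dispersion
        u = u0.astype(complex)
        for _ in range(nz):
            u = np.fft.ifft(half_disp * np.fft.fft(u))
            u *= np.exp(1j * gamma * np.abs(u)**2 * dz)         # nonlinear step
            u = np.fft.ifft(half_disp * np.fft.fft(u))
        return u
    ```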

  7. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
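
    For readers who want a numerical counterpart to the theory, the following Euler-Maruyama sketch simulates a leaky integrate-and-fire neuron with a spike-triggered adaptation current; the parameter values are illustrative, not those of the paper.

    ```python
    # Adaptive LIF: dv = (mu - v - a) dt + sqrt(2 D dt) xi; spike at v = 1 resets
    # v and increments the adaptation variable a, which decays with time tau_a.
    import numpy as np

    def simulate_alif(T=10.0, dt=1e-4, mu=1.5, D=0.01, tau_a=0.5, delta_a=0.2):
        n = int(T / dt)
        v, a = 0.0, 0.0
        spikes, v_trace = [], np.empty(n)
        for i in range(n):
            v += dt * (mu - v - a) + np.sqrt(2 * D * dt) * np.random.randn()
            a += dt * (-a / tau_a)
            if v >= 1.0:            # threshold crossing: reset and adapt
                v = 0.0
                a += delta_a
                spikes.append(i * dt)
            v_trace[i] = v
        return np.array(spikes), v_trace
    ```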

  8. DISCO: An object-oriented system for music composition and sound design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaper, H. G.; Tipei, S.; Wright, J. M.

    2000-09-05

    This paper describes an object-oriented approach to music composition and sound design. The approach unifies the processes of music making and instrument building by using similar logic, objects, and procedures. The composition modules use an abstract representation of musical data, which can be easily mapped onto different synthesis languages or a traditionally notated score. An abstract base class is used to derive classes on different time scales. Objects can be related to act across time scales, as well as across an entire piece, and relationships between similar objects can replicate traditional music operations or introduce new ones. The DISCO (Digital Instrument for Sonification and Composition) system is an open-ended work in progress.
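
    The base-class idea can be caricatured in a few lines; the class names below are invented for illustration and are not DISCO's actual interfaces.

    ```python
    # Toy sketch: one abstract base class for time-bounded sound objects,
    # specialized on different time scales (note vs section).
    from abc import ABC, abstractmethod

    class SoundObject(ABC):
        def __init__(self, start, duration):
            self.start, self.duration = start, duration

        @abstractmethod
        def render(self):
            """Map the abstract representation onto a synthesis language or score."""

    class Note(SoundObject):                 # short time scale
        def render(self):
            return f"note@{self.start}s for {self.duration}s"

    class Section(SoundObject):              # long time scale: contains objects
        def __init__(self, start, duration, children):
            super().__init__(start, duration)
            self.children = children

        def render(self):
            return [c.render() for c in self.children]
    ```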

  9. From Coexpression to Coregulation: An Approach to Inferring Transcriptional Regulation Among Gene Classes from Large-Scale Expression Data

    NASA Technical Reports Server (NTRS)

    Mjolsness, Eric; Castano, Rebecca; Mann, Tobias; Wold, Barbara

    2000-01-01

    We provide preliminary evidence that existing algorithms for inferring small-scale gene regulation networks from gene expression data can be adapted to large-scale gene expression data coming from hybridization microarrays. The essential steps are (1) clustering many genes by their expression time-course data into a minimal set of clusters of co-expressed genes, (2) theoretically modeling the various conditions under which the time-courses are measured using a continuous-time analog recurrent neural network for the cluster mean time-courses, (3) fitting such a regulatory model to the cluster mean time-courses by simulated annealing with weight decay, and (4) analysing several such fits for commonalities in the circuit parameter sets, including the connection matrices. This procedure can be used to assess the adequacy of existing and future gene expression time-course data sets for determining transcriptional regulatory relationships such as coregulation.
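
    Step (1) can be sketched directly: cluster the gene time-courses and keep each cluster's mean time-course for the downstream network fit. The synthetic data and the choice of k are assumptions for illustration.

    ```python
    # Clustering gene expression time-courses into co-expressed groups.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    timecourses = rng.random((300, 12))       # 300 genes x 12 time points
    k = 8
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(timecourses)
    cluster_means = np.array([timecourses[labels == j].mean(axis=0) for j in range(k)])
    # cluster_means now feeds the recurrent-network regulatory model of step (2).
    ```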

  10. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations, so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
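
    The partitioning principle can be shown with a first-order IMEX step: a stiff linear "acoustic" operator advanced implicitly and the "advective" part explicitly, so the stable step size follows the slow scale. The operators here are generic stand-ins, not the paper's characteristic-space split or its high-order additive Runge-Kutta schemes.

    ```python
    # One implicit-explicit Euler step for dq/dt = L_acoustic q + f_advective(q):
    # solve (I - dt * L) q_new = q + dt * f_adv(q).
    import numpy as np

    def imex_euler_step(q, L_acoustic, f_advective, dt):
        n = len(q)
        rhs = q + dt * f_advective(q)                     # explicit slow part
        return np.linalg.solve(np.eye(n) - dt * L_acoustic, rhs)  # implicit fast part
    ```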

  11. Series Overview. Sustaining School Turnaround at Scale. Brief 1

    ERIC Educational Resources Information Center

    Education Resource Strategies, 2012

    2012-01-01

    Members of the non-profit organization Education Resource Strategies (ERS) have worked for over a decade with leaders of urban school systems to help them organize talent, time and technology to support great schools at scale. One year into the Federal program they are noticing significant differences in district turnaround approaches, engagement…

  12. Coordinated Approaches to Quantify Long-Term Ecosystem dynamics in Response to Global Change

    USDA-ARS?s Scientific Manuscript database

    Climate change and its impact on ecosystems are usually assessed at decadal and century time scales. Ecological responses to climate change at those scales are strongly regulated by long-term processes, such as changes in species composition, carbon dynamics in soil and by big trees, and nutrient r...

  13. Partitioning evapotranspiration using long-term carbon dioxide and water vapor fluxes: New approach to ET partitioning

    USDA-ARS?s Scientific Manuscript database

    The separate components of evapotranspiration (ET) provide critical information about the pathways and time scales over which water is returned to the atmosphere, but ecosystem-scale measurements of transpiration (T) and evaporation (E) remain elusive. We propose a novel determination of average E a...

  14. Scaling approach in predicting the seatbelt loading and kinematics of vulnerable occupants: How far can we go?

    PubMed

    Nie, Bingbing; Forman, Jason L; Joodaki, Hamed; Wu, Taotao; Kent, Richard W

    2016-09-01

    Occupants with extreme body size and shape, such as small females or the obese, have been reported to sustain a high risk of injury in motor vehicle crashes (MVCs). Dimensional scaling approaches based on the assumption of geometrical similarity are widely used in injury biomechanics research; however, the range over which they remain valid has never been quantified. The objective of this study is to determine the valid range of scaling approaches in predicting the impact response of occupants, with a focus on vulnerable populations. The present analysis was based on a data set consisting of 60 previously reported frontal crash tests in the same sled buck representing a typical mid-size passenger car. The tests included two categories of human surrogates: 9 postmortem human surrogates (PMHS) of different anthropometries (stature range: 147-189 cm; weight range: 27-151 kg) and 5 anthropomorphic test devices (ATDs). The impact responses considered included the restraint loads and the kinematics of multiple body segments. For each category of human surrogate, a mid-size occupant was selected as a baseline and the impact response was scaled to each other subject based on either the body mass (body shape) or stature (overall body size). To identify the valid range of the scaling approach, the scaled response was compared to the experimental results using assessment scores on the peak value, the peak timing (the time when the peak value occurred), and the overall curve shape, ranging from 0 (extremely poor) to 1 (perfect match); scores of 0.7 to 0.8 and 0.8 to 1.0 indicate fair and acceptable prediction, respectively. For both ATDs and PMHS, the scaling factor derived from body mass proved an overall good predictor of the peak timing for the shoulder belt (0.868, 0.829) and the lap belt (0.858, 0.774) and of the peak value of the lap belt force (0.796, 0.869). Scaled kinematics based on body stature provided fair or acceptable prediction of the overall head/shoulder kinematics (0.741, 0.822 for the head; 0.817, 0.728 for the shoulder) regardless of the anthropometry. The scaling approach exhibited poor prediction capability for the curve shape of the restraint force (0.494 and 0.546 for the shoulder belt; 0.585 and 0.530 for the lap belt), and it could not predict well the excursions of the pelvis and the knee. The results revealed that, for the peak lap belt force and the forward motion of the head and shoulder, the underlying linear relationship with body size and shape is valid over a wide anthropometric range. The chaotic nature of the dynamic response cannot be fully recovered by the assumption of whole-body geometrical similarity, especially for the curve shape. The valid range of the scaling approach established in this study can reasonably be referenced in predicting the impact response of a given specific population, with expected deviation. Applications of this knowledge include proposing strategies for restraint configuration and providing reference for ATD and/or human body model (HBM) development for vulnerable occupants.
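
    The dimensional-scaling assumption being tested can be written down compactly: with a characteristic length ratio lambda and equal densities, mass scales as lambda^3, time as lambda, and force as lambda^2, which is how a baseline force-time curve is mapped to another subject. The helper below is a hedged sketch of that standard convention, not the study's exact procedure.

    ```python
    # Map a baseline force-time history to a target subject under geometric
    # similarity: time ~ lambda, force ~ lambda^2 (lambda = stature/length ratio).
    import numpy as np

    def scale_force_curve(t, force, stature_ratio):
        lam = stature_ratio
        return t * lam, force * lam**2

    def lambda_from_mass(mass_ratio):
        """Equal-density assumption: mass ~ lambda^3, so lambda = mass_ratio^(1/3)."""
        return mass_ratio ** (1.0 / 3.0)
    ```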

  15. Bilateral robotic priming before task-oriented approach in subacute stroke rehabilitation: a pilot randomized controlled trial.

    PubMed

    Hsieh, Yu-Wei; Wu, Ching-Yi; Wang, Wei-En; Lin, Keh-Chung; Chang, Ku-Chou; Chen, Chih-Chi; Liu, Chien-Ting

    2017-02-01

    To investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke. A randomized controlled trial. Occupational therapy clinics in medical centers. Thirty-one subacute stroke patients were recruited. Participants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week, for 4 weeks. The primed group began with the bilateral priming technique using a bimanual robot-aided device. Motor impairments were assessed by the Fugl-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale. The primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale (p = 0.012) and a trend for greater improvement on the modified Rankin Scale (p = 0.065) than the unprimed group. Bilateral priming combined with the task-oriented approach elicited more improvement in self-reported strength and disability degree than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.

  16. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

    This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale t << t_H, although various effects associated with external and/or internal irregularities can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.
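
    A local Liapounov exponent of the sort used here can be estimated by repeatedly renormalizing the separation of two nearby orbits over a finite interval; the `flow` stepping function below is an assumed user-supplied integrator for the chosen potential.

    ```python
    # Finite-time (local) Liapounov exponent from two nearby phase-space orbits.
    import numpy as np

    def local_liapounov(flow, x0, dx0, t_span, dt):
        """flow(x, dt) advances a phase-space point by one step of size dt."""
        x = np.array(x0, dtype=float)
        xp = x + dx0
        d0 = np.linalg.norm(dx0)
        log_growth = 0.0
        for _ in range(int(t_span / dt)):
            x, xp = flow(x, dt), flow(xp, dt)
            d = np.linalg.norm(xp - x)
            log_growth += np.log(d / d0)
            xp = x + (xp - x) * (d0 / d)    # renormalize the separation to d0
        return log_growth / t_span
    ```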

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamontov, Eugene; Zolnierczuk, Piotr A.; Ohl, Michael E.

    Using neutron spin-echo and backscattering spectroscopy, we have found that at low temperatures water molecules in an aqueous solution engage in center-of-mass dynamics that are different from both the main structural relaxations and the well-known localized motions in the transient cages of the nearest-neighbor molecules. While the latter localized motions are known to take place on the picosecond time scale and Angstrom length scale, the slower motions that we have observed occur on the nanosecond time scale and nanometer length scale. They are associated with the slow secondary relaxations, or excess wing dynamics, in glass-forming liquids. Our approach, therefore, can be applied to probe the characteristic length scale of the dynamic entities associated with slow dynamics in glass-forming liquids, which presently cannot be studied by other experimental techniques.

  18. Statistical physics approaches to financial fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong

    2009-12-01

    Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
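
    The return-interval analysis at the heart of this study is easy to state in code: collect the waiting times between successive threshold exceedances of the volatility series and inspect their distribution after rescaling by the mean. The snippet below is schematic and uses synthetic data in place of market volatilities.

    ```python
    # Return intervals between volatility exceedances of a threshold q.
    import numpy as np

    def return_intervals(volatility, q):
        exceed = np.flatnonzero(volatility > q)   # indices of exceedance events
        return np.diff(exceed)                    # waiting times between events

    rng = np.random.default_rng(0)
    vol = np.abs(rng.standard_normal(100000))     # stand-in volatility series
    for q in (1.0, 1.5, 2.0):
        tau = return_intervals(vol, q)
        print(q, tau.mean())   # scaling: P(tau) should collapse when plotted vs tau/mean
    ```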

  19. Use of soil moisture dynamics and patterns for the investigation of runoff generation processes with emphasis on preferential flow

    NASA Astrophysics Data System (ADS)

    Blume, T.; Zehe, E.; Bronstert, A.

    2007-08-01

    Spatial patterns as well as temporal dynamics of soil moisture have a major influence on runoff generation. The investigation of these dynamics and patterns can thus yield valuable information on hydrological processes, especially in data-scarce or previously ungauged catchments. The combination of spatially scarce but temporally high-resolution soil moisture profiles with episodic, and thus temporally scarce, moisture profiles at additional locations provides information on spatial as well as temporal patterns of soil moisture at the hillslope transect scale. This approach is better suited to difficult terrain (dense forest, steep slopes) than geophysical techniques and at the same time less cost-intensive than a high-resolution grid of continuously measuring sensors. Rainfall simulation experiments with dye tracers, with continuous monitoring of the soil moisture response, allow for visualization of flow processes in the unsaturated zone at these locations. Data were analyzed at different spatio-temporal scales using various graphical methods, such as space-time colour maps (for the event and plot scales) and indicator maps (for the long-term and hillslope scales). Annual dynamics of soil moisture and decimeter-scale variability were also investigated. The proposed approach proved successful in the investigation of flow processes in the unsaturated zone and showed the importance of preferential flow in the Malalcahuello Catchment, a data-scarce catchment in the Andes of Southern Chile. Fast response times of stream flow indicate that preferential flow observed at the plot scale might also be of importance at the hillslope or catchment scale. Flow patterns were highly variable in space but persistent in time. The most likely explanation for preferential flow in this catchment is a combination of hydrophobicity, small-scale heterogeneity in rainfall due to redistribution in the canopy, and strong gradients in unsaturated conductivities leading to self-reinforcing flow paths.

  20. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and of the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector). This approach consisted of three steps: (1) decomposing the large-scale climate and hydrological signals (SLP field, precipitation or streamflow) using discrete wavelet multiresolution analysis; (2) generating a statistical downscaling model per time-scale; and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand, as sketched below. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which was characterized by particularly high amplitudes of interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation and streamflow on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
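
    The three-step multiresolution ESD can be sketched with PyWavelets: decompose predictor and predictand, fit one regression per scale, and sum the per-scale reconstructions. Real ESD uses an SLP field rather than a single predictor series, so this univariate version is only a schematic of the workflow.

    ```python
    # Per-scale regression ESD sketch using a discrete wavelet transform.
    import numpy as np
    import pywt

    def multiresolution_esd(predictor, predictand, wavelet="db4", level=4):
        """predictor and predictand: 1-D arrays of equal length (long enough
        that every decomposition level still has several coefficients)."""
        px = pywt.wavedec(predictor, wavelet, level=level)
        py = pywt.wavedec(predictand, wavelet, level=level)
        recon = np.zeros(len(predictand))
        for j, (cx, cy) in enumerate(zip(px, py)):
            coeffs = np.polyfit(cx, cy, 1)       # one linear model per scale
            fitted = [np.polyval(coeffs, c) if k == j else np.zeros_like(c)
                      for k, c in enumerate(px)]
            recon += pywt.waverec(fitted, wavelet)[:len(predictand)]
        return recon                             # sum of scale-dependent models
    ```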

  1. Small-scale anomaly detection in panoramic imaging using neural models of low-level vision

    NASA Astrophysics Data System (ADS)

    Casey, Matthew C.; Hickman, Duncan L.; Pavlou, Athanasios; Sadler, James R. E.

    2011-06-01

    Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes to a movement, sound or touch that it detects, even when the stimulus may be on a small-scale; think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to automatically filter noise, yet with the capability of generalizing the desired shape and scale. We will present the results of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique is able to correctly identify targets which span as little as 3 pixels wide while filtering small-scale noise.

  2. Understanding and Managing the Assessment Process

    Treesearch

    Gene Lessard; Scott Archer; John R. Probst; Sandra Clark

    1999-01-01

    Taking an ecological approach to management, or ecosystem management, is a developing approach for managing natural resources within the context of large geographic scales and over multiple time frames. Recently, the Council on Environmental Quality (CEQ) (IEMTF 1995) defined an ecosystem as "...an interconnected community of living things, including humans, and...

  3. A multi-scale approach to designing therapeutics for tuberculosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems, over multiple length and time scales, into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  4. A multi-scale approach to designing therapeutics for tuberculosis

    DOE PAGES

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...

    2015-04-20

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems, over multiple length and time scales, into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  5. Parallel multispot smFRET analysis using an 8-pixel SPAD array

    NASA Astrophysics Data System (ADS)

    Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.

    2012-02-01

    Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and an acceptor dye) on the nanometer scale. The method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantages of not interfering with the molecules and of giving access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition times (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales longer than the burst duration and shorter than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate at the same low concentrations required for individual-molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescent emission of the donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides FRET efficiency values identical to those of a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis of freely diffusing molecules.
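
    Per-burst FRET efficiencies are conventionally computed from the acceptor and donor photon counts; the gamma correction factor below is assumed known from calibration, and the numbers in the usage line are invented.

    ```python
    # Proximity ratio / FRET efficiency from per-burst photon counts.
    def fret_efficiency(n_acceptor, n_donor, gamma=1.0):
        """gamma corrects for unequal detection efficiencies and quantum yields."""
        return n_acceptor / (n_acceptor + gamma * n_donor)

    print(fret_efficiency(320, 180))   # e.g. a mid-FRET burst
    ```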

  6. Parametric motion control of robotic arms: A biologically based approach using neural networks

    NASA Technical Reports Server (NTRS)

    Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.

    1993-01-01

    A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
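
    The parametric idea reduces to storing one prototype torque shape and letting the network output only amplitude and duration parameters; the prototype chosen below is an arbitrary stand-in for a stored motor-program profile.

    ```python
    # One stored prototype torque profile, scaled in amplitude and duration by
    # the parameters a controller (here, a neural network) would supply.
    import numpy as np

    def scaled_torque(t, amplitude, duration, prototype=lambda s: np.sin(np.pi * s)):
        """Torque at time t: prototype stretched to `duration`, scaled by `amplitude`."""
        s = np.clip(t / duration, 0.0, 1.0)   # normalized time in [0, 1]
        return amplitude * prototype(s)

    t = np.linspace(0.0, 2.0, 200)
    torque = scaled_torque(t, amplitude=3.5, duration=1.2)   # one joint's command
    ```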

  7. Supporting the growth of peer-professional workforces in healthcare settings: an evaluation of a targeted training approach for volunteer leaders of the STEPS Program.

    PubMed

    Turner, Benjamin; Kennedy, Areti; Kendall, Melissa; Muenchberger, Heidi

    2014-01-01

    To examine the effectiveness of a targeted training approach to foster and support a peer-professional workforce in the delivery of a community rehabilitation program for adults with acquired brain injury (ABI) and their families. A prospective longitudinal design was used to evaluate the effectiveness of a targeted two-day training forum for peer (n = 25) and professional (n = 15) leaders of the Skills to Enable People and Communities Program. Leaders completed a set of questionnaires (General Self-Efficacy Scale - GSES, Rosenberg Self-Esteem Scale, Volunteer Motivation Inventory - VMI, and Community Involvement Scale - CIS) both prior to and immediately following the forum. Data analysis entailed paired-sample t-tests to explore changes in scores over time, and independent-sample t-tests for comparisons between the two participant groups. The results indicated a significant increase in scores over time for the GSES (p = 0.047). Improvements in leaders' volunteer motivations and community involvement were also observed between the two time intervals. The between-group comparisons highlighted that the peer leader group scored significantly higher than the professional leader group on the CIS and several domains of the VMI at both time intervals. The study provides an enhanced understanding of the utility of innovative workforce solutions for community rehabilitation after ABI, and further highlights the benefits of targeted training approaches to support the development of such workforce configurations.

  8. Improving the distinguishable cluster results: spin-component scaling

    NASA Astrophysics Data System (ADS)

    Kats, Daniel

    2018-06-01

    The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.

  9. Determination of functional collective motions in a protein at atomic resolution using coherent neutron scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Liang; Jain, Nitin; Cheng, Xiaolin

    Protein function often depends on global, collective internal motions. However, the simultaneous quantitative experimental determination of the forms, amplitudes, and time scales of these motions has remained elusive. We demonstrate that a complete description of these large-scale dynamic modes can be obtained using coherent neutron-scattering experiments on perdeuterated samples. With this approach, a microscopic relationship between the structure, dynamics, and function in a protein, cytochrome P450cam, is established. The approach developed here should be of general applicability to protein systems.

  10. Determination of functional collective motions in a protein at atomic resolution using coherent neutron scattering

    DOE PAGES

    Hong, Liang; Jain, Nitin; Cheng, Xiaolin; ...

    2016-10-14

    Protein function often depends on global, collective internal motions. However, the simultaneous quantitative experimental determination of the forms, amplitudes, and time scales of these motions has remained elusive. We demonstrate that a complete description of these large-scale dynamic modes can be obtained using coherent neutron-scattering experiments on perdeuterated samples. With this approach, a microscopic relationship between the structure, dynamics, and function in a protein, cytochrome P450cam, is established. The approach developed here should be of general applicability to protein systems.

  11. Continent-scale global change attribution in European birds - combining annual and decadal time scales.

    PubMed

    Jørgensen, Peter Søgaard; Böhning-Gaese, Katrin; Thorup, Kasper; Tøttrup, Anders P; Chylarecki, Przemysław; Jiguet, Frédéric; Lehikoinen, Aleksi; Noble, David G; Reif, Jiri; Schmid, Hans; van Turnhout, Chris; Burfield, Ian J; Foppen, Ruud; Voříšek, Petr; van Strien, Arco; Gregory, Richard D; Rahbek, Carsten

    2016-02-01

    Species attributes are commonly used to infer impacts of environmental change on multiyear species trends, e.g. decadal changes in population size. However, by themselves attributes are of limited value in global change attribution since they do not measure the changing environment. A broader foundation for attributing species responses to global change may be achieved by complementing an attributes-based approach with one estimating the relationship between repeated measures of organismal and environmental changes over short time scales. To assess the benefit of this multiscale perspective, we investigate the recent impact of multiple environmental changes on European farmland birds, here focusing on climate change and land-use change. We analyze more than 800 time series from 18 countries spanning the past two decades. Analysis of long-term population growth rates documents simultaneous responses that can be attributed to both climate change and land-use change, including long-term increases in populations of hot-dwelling species and declines in long-distance migrants and farmland specialists. In contrast, analysis of annual growth rates yields novel insights into the potential mechanisms driving long-term climate-induced change. In particular, we find that birds are affected by winter, spring, and summer conditions depending on the distinct breeding phenology that corresponds to their migratory strategy. Birds in general benefit from higher temperatures or higher primary productivity early in or at the peak of the breeding season, with the largest effect sizes observed in the cooler parts of species' climatic ranges. Our results document the potential of combining time scales and integrating both species attributes and environmental variables for global change attribution. We suggest such an approach will be of general use when high-resolution time series are available in large-scale biodiversity surveys.

  12. Characteristic dynamics near two coalescing eigenvalues incorporating continuum threshold effects

    NASA Astrophysics Data System (ADS)

    Garmon, Savannah; Ordonez, Gonzalo

    2017-06-01

    It has been reported in the literature that the survival probability P(t) near an exceptional point where two eigenstates coalesce should generally exhibit an evolution P(t) ~ t^2 e^(-Γt), in which Γ is the decay rate of the coalesced eigenstate; this has been verified in a microwave billiard experiment [B. Dietz et al., Phys. Rev. E 75, 027201 (2007)]. However, the heuristic effective Hamiltonian that is usually employed to obtain this result ignores the possible influence of the continuum threshold on the dynamics. By contrast, in this work we employ an analytical approach starting from the microscopic Hamiltonian representing two simple models in order to show that the continuum threshold has a strong influence on the dynamics near exceptional points in a variety of circumstances. To report our results, we divide the exceptional points in Hermitian open quantum systems into two cases: at an EP2A two virtual bound states coalesce before forming a resonance/anti-resonance pair with complex conjugate eigenvalues, while at an EP2B two resonances coalesce before forming two different resonances. For the EP2B, which is the case studied in the microwave billiard experiment, we verify that the survival probability exhibits the previously reported modified exponential decay on intermediate time scales, but this is replaced with an inverse power law on very long time scales. Meanwhile, for the EP2A the influence from the continuum threshold is so strong that the evolution is non-exponential on all time scales and the heuristic approach fails completely. When the EP2A appears very near the threshold, we obtain the novel evolution P(t) ~ 1 - C_1 √t on intermediate time scales, while further away the parabolic decay (Zeno dynamics) on short time scales is enhanced.

  13. FOREWORD: IV International Time-Scale Algorithms Symposium, BIPM, Sèvres, 18-19 March 2002

    NASA Astrophysics Data System (ADS)

    Leschiutta, Sigfrido

    2003-06-01

    Time-scale formation, along with atomic time/frequency standards and time comparison techniques, is one of the three basic ingredients of Time Metrology. Before summarizing this Symposium and the relevant outcomes, let me make a couple of very general remarks. Clocks and comparison methods have today reached a very high level of accuracy: the nanosecond level. Some applications in the real world are now challenging the capacity of the National Metrological Laboratories. It is therefore essential that the algorithms dealing with clocks and comparison techniques should be such as to make the most of existing technologies. The comfortable margin of accuracy we were used to, between Laboratories and the Field, is gone forever. While clock makers and time-comparison experts meet regularly (FCS, PTTI, EFTF, CPEM, URSI, UIT, etc.), the somewhat secluded community of experts in time-scale formation lacks a similar point of contact, with the exception of the CCTF meeting. This venue must consequently be welcomed. Let me recall some highlights from this Symposium: there were about 60 attendees from 15 nations, plus international institutions, such as the host BIPM, and a supranational one, ESA. About 30 papers, prepared in some 20 laboratories, were received: among these papers, four tutorials were offered; descriptions of local time scales including the local algorithms were presented; four papers considered the algorithms applied to the results of time-comparison methods; and six papers covered the special requirements of some specialized time-scale 'users'. The four basic ingredients of time-scale formation: models, noise, filtering and steering, received attention and were also discussed, not just during the sessions. The most demanding applications for time scales now come from Global Navigation Satellite systems; in six papers the progress of some programmes was described and the present and future needs were presented and documented. The lively discussion on future navigation systems led to the following four points: an overall accuracy in timing of one nanosecond is a must; the combined 'clock and orbit' effects on the knowledge of satellite position should be less than one metre; a combined solution for positioning and timing should be pursued; a 'new' time window (2 h to 4 h) emerged, in which the accuracy and stability parameters of the clocks forming a time scale for space application are to be optimized. That interval is linked to some criteria and methods for on-board clock corrections. A revival of interest in the time-proven Kalman filter was noted; in the course of a tutorial on past experience, a number of new approaches were discussed. Some further research is in order, but one should heed the comment: 'do not ask too much of a filter'. The Kalman approach is indeed powerful in combining sets of different data, provided that the possible problems of convergence are suitably addressed. Attention was also focused on the possibility of becoming victims of ever-present 'hidden' correlations. The TAI algorithm, ALGOS, is about 30 years old and the fundamental approach remains unchanged and unchallenged. A number of small refinements, all justified, were introduced in the 'constants' and parameters, but the general philosophy holds. In so far as the BIPM Time Section and the CCTF Working Group on Algorithms are concerned, on the basis of the outcome of this Symposium it is clear that they should follow the evolution of TAI and suggest any appropriate action to the CCTF.
This Symposium, which gathered the world's experts on T/F algorithms in Paris for two days, offered a wonderful opportunity for cross-fertilization between researchers operating in different, interdependent, yet loosely connected communities. Thanks are due to Felicitas Arias, Demetrios Matsakis and Patrizia Tavella and their host organizations for having provided the community with this learning experience. One last comment: please do not wait another 14 years for the next Time Scale Algorithm Symposium.

  14. Multi-level molecular modelling for plasma medicine

    NASA Astrophysics Data System (ADS)

    Bogaerts, Annemie; Khosravian, Narjes; Van der Paal, Jonas; Verlackt, Christof C. W.; Yusupov, Maksudbek; Kamaraj, Balu; Neyts, Erik C.

    2016-02-01

    Modelling at the molecular or atomic scale can be very useful for obtaining a better insight into plasma medicine. This paper gives an overview of different atomic/molecular scale modelling approaches that can be used to study the direct interaction of plasma species with biomolecules, or the consequences of these interactions for the biomolecules on a somewhat longer time-scale. These approaches include density functional theory (DFT), density functional based tight binding (DFTB), classical reactive and non-reactive molecular dynamics (MD) and united-atom or coarse-grained MD, as well as hybrid quantum mechanics/molecular mechanics (QM/MM) methods. Specific examples are given for three important types of biomolecules present in human cells, i.e. proteins, DNA and the phospholipids found in the cell membrane. The results show that each of these modelling approaches has its specific strengths and limitations and is particularly useful for certain applications. A multi-level approach is therefore most suitable for obtaining a global picture of the plasma-biomolecule interactions.

  15. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  16. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
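
    As a quick illustration of how such exponents can be measured in practice, the following sketch estimates the Hurst exponent H from the growth of the ensemble width of simulated Brownian paths and checks the increments for a Moses-style trend. It is a minimal illustration of the general idea under simple assumptions, not the authors' analysis pipeline; all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ensemble of Brownian paths: the width of the distribution should
    # scale like t^(1/2), i.e. Hurst exponent H = 1/2.
    n_paths, n_steps = 2000, 1024
    increments = rng.normal(size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)

    # H from a log-log fit of ensemble width against time.
    t = np.arange(1, n_steps + 1)
    width = paths.std(axis=0)
    H = np.polyfit(np.log(t), np.log(width), 1)[0]
    print(f"estimated H = {H:.3f} (expected 0.5)")

    # Moses-style check: for stationary increments, the typical increment
    # size should show no systematic trend with time.
    window = 128
    med = [np.median(np.abs(increments[:, i:i + window]))
           for i in range(0, n_steps, window)]
    print("median |increment| per window:", np.round(med, 3))
    ```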

  17. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task, since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure, since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact; therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics; therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES-resolved) effects are incorporated. This approach is thus uniquely suited for parallel processing and has been implemented on various systems, such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC), using the system-independent Message Passing Interface (MPI). In this paper, timing data on these machines are reported along with some characteristic results.
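
    A schematic of the two-level time stepping described above: one resolved LES step, followed by many independent subgrid substeps in every grid cell. The inner loop has no cell-to-cell coupling, which is what makes the scheme so amenable to distributed-memory decomposition. This is a toy sketch of the control flow only; the stub physics and all parameters are invented for illustration.

    ```python
    import numpy as np

    n_cells, n_substeps = 1000, 50
    Y = np.full(n_cells, 0.05)              # subgrid fuel mass fraction per cell

    def les_step(Y):
        """Stub for resolved (large-scale) convection/diffusion of cell means."""
        return 0.5 * (Y + np.roll(Y, 1))

    def subgrid_step(y, rate=2.0, dt=1e-3):
        """Stub for subgrid mixing plus finite-rate kinetics within one cell."""
        return y - rate * y * dt

    for _ in range(20):                     # outer, resolved LES loop
        Y = les_step(Y)
        for _ in range(n_substeps):         # inner loop: per-cell, embarrassingly
            Y = subgrid_step(Y)             # parallel (here simply vectorized)
    print("mean fuel fraction after burn:", Y.mean())
    ```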

  18. Regression-Based Identification of Behavior-Encoding Neurons During Large-Scale Optical Imaging of Neural Activity at Cellular Resolution

    PubMed Central

    Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre

    2011-01-01

    The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
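
    The core of the method is ordinary linear regression of each fluorescence trace on a behavioral regressor, followed by ranking on goodness of fit. A minimal sketch with synthetic data (cell counts, frame counts and the 0.5 coupling are all invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: fluorescence traces for 500 cells over 1000 frames,
    # plus a simultaneously recorded behavioral signal (e.g. eye position).
    n_cells, n_frames = 500, 1000
    behavior = np.cumsum(rng.normal(size=n_frames))   # slow behavioral drive
    traces = rng.normal(size=(n_cells, n_frames))
    traces[:50] += 0.5 * behavior                     # 50 cells encode behavior

    # Regress each trace on the behavioral regressor (plus intercept) and
    # rank cells by the coefficient of determination R^2.
    X = np.column_stack([np.ones(n_frames), behavior])
    beta, *_ = np.linalg.lstsq(X, traces.T, rcond=None)
    pred = X @ beta
    ss_res = ((traces.T - pred) ** 2).sum(axis=0)
    ss_tot = ((traces.T - traces.mean(axis=1)) ** 2).sum(axis=0)
    r2 = 1.0 - ss_res / ss_tot
    candidates = np.argsort(r2)[::-1][:50]            # putative encoders
    print("top candidate cells:", np.sort(candidates)[:10])
    ```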

  19. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed Central

    Palmer, T. N.

    2014-01-01

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic–dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only. PMID:24842038

  20. More reliable forecasts with less precise computations: a fast-track route to cloud-resolved weather and climate simulators?

    PubMed

    Palmer, T N

    2014-06-28

    This paper sets out a new methodological approach to solving the equations for simulating and predicting weather and climate. In this approach, the conventionally hard boundary between the dynamical core and the sub-grid parametrizations is blurred. This approach is motivated by the relatively shallow power-law spectrum for atmospheric energy on scales of hundreds of kilometres and less. It is first argued that, because of this, the closure schemes for weather and climate simulators should be based on stochastic-dynamic systems rather than deterministic formulae. Second, as high-wavenumber elements of the dynamical core will necessarily inherit this stochasticity during time integration, it is argued that the dynamical core will be significantly over-engineered if all computations, regardless of scale, are performed completely deterministically and if all variables are represented with maximum numerical precision (in practice using double-precision floating-point numbers). As the era of exascale computing is approached, an energy- and computationally efficient approach to cloud-resolved weather and climate simulation is described where determinism and numerical precision are focused on the largest scales only.
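
    A toy numerical illustration of the scale-selective precision idea: keep the energetic large scales in double precision while demoting high-wavenumber spectral coefficients to single precision. This is not Palmer's weather-model implementation, just a minimal demonstration that the induced error stays confined to the small scales.

    ```python
    import numpy as np

    n = 512
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    field = np.sin(x) + 0.01 * np.sin(40 * x)    # large scale + small scale

    spec = np.fft.rfft(field)
    k = np.arange(spec.size)
    cutoff = 10                                  # largest scales stay double
    small = k >= cutoff
    spec_mixed = spec.copy()
    # Demote small-scale coefficients to single precision (complex64),
    # losing the bits that, in Palmer's argument, carry no useful signal.
    spec_mixed[small] = spec_mixed[small].astype(np.complex64)

    recon = np.fft.irfft(spec_mixed, n=n)
    print("max error from demoted small scales:", np.abs(recon - field).max())
    ```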

  1. Molecular dynamics on diffusive time scales from the phase-field-crystal equation.

    PubMed

    Chan, Pak Yuen; Goldenfeld, Nigel; Dantzig, Jon

    2009-03-01

    We extend the phase-field-crystal model to accommodate exact atomic configurations and vacancies by requiring the order parameter to be non-negative. The resulting theory dictates the number of atoms and describes the motion of each of them. By solving the dynamical equation of the model, which is a partial differential equation, we are essentially performing molecular dynamics simulations on diffusive time scales. To illustrate this approach, we calculate the two-point correlation function of a fluid.
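
    For reference, a sketch of the standard phase-field-crystal free energy and its conserved dissipative dynamics (the textbook form; the model above additionally constrains the order parameter to be non-negative):

    ```latex
    \begin{align}
      F[\psi] &= \int \mathrm{d}\mathbf{r}\,
         \left[ \frac{\psi}{2}\left( r + (1+\nabla^2)^2 \right)\psi
                + \frac{\psi^4}{4} \right], \\
      \frac{\partial \psi}{\partial t} &= \nabla^2 \frac{\delta F}{\delta \psi}.
    \end{align}
    ```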

  2. A Multi-Scale Distribution Model for Non-Equilibrium Populations Suggests Resource Limitation in an Endangered Rodent

    PubMed Central

    Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.

    2014-01-01

    Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at the population level. We define "available" habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and with ecologically relevant predictor variables, we improved the predictive ability of SDMs and revealed an unanticipated relationship between population extent and precipitation at multiple scales. At the range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relationship with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining "available" habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807

  3. Large-scale road safety programmes in low- and middle-income countries: an opportunity to generate evidence.

    PubMed

    Hyder, Adnan A; Allen, Katharine A; Peters, David H; Chandran, Aruna; Bishai, David

    2013-01-01

    The growing burden of road traffic injuries, which kill over 1.2 million people yearly, falls mostly on low- and middle-income countries (LMICs). Despite this, evidence on the effectiveness of road safety interventions in LMIC settings remains scarce. This paper explores a scientific approach for evaluating road safety programmes in LMICs and introduces one such multi-country initiative, the Road Safety in 10 Countries Project (RS-10). Building on existing evaluation frameworks, we develop a scientific approach for evaluating large-scale road safety programmes in LMIC settings. This approach also draws on '13 lessons' of large-scale programme evaluation: defining the evaluation scope; selecting study sites; maintaining objectivity; developing an impact model; utilising multiple data sources; using multiple analytic techniques; maximising external validity; ensuring an appropriate time frame; recognising the importance of flexibility and a stepwise approach; monitoring continuously; providing feedback to implementers and policy-makers; promoting the uptake of evaluation results; and understanding evaluation costs. The use of relatively new approaches for the evaluation of real-world programmes allows for the production of relevant knowledge. The RS-10 project affords an important opportunity to scientifically test these approaches on a real-world, large-scale road safety evaluation and to generate new knowledge for the field of road safety.

  4. Next Generation Extended Lagrangian Quantum-based Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Negre, Christian

    2017-06-01

    A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which are necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and the normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
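
    A schematic of the extended-Lagrangian Born-Oppenheimer idea behind such frameworks (notation illustrative, not the exact functional used in this work): an auxiliary electronic degree of freedom n evolves harmonically about the ground-state density, so the self-consistent-field optimization never has to be fully converged at each step.

    ```latex
    \begin{equation}
      \mathcal{L} = \frac{1}{2}\sum_{i} M_i \dot{\mathbf{R}}_i^2
                  - U\!\left(\mathbf{R}; \rho\right)
                  + \frac{\mu}{2}\,\dot{n}^2
                  - \frac{\mu\omega^2}{2}\,\bigl(n - \rho(\mathbf{R})\bigr)^2
    \end{equation}
    ```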

  5. Scaling Relations and Self-Similarity of 3-Dimensional Reynolds-Averaged Navier-Stokes Equations.

    PubMed

    Ercan, Ali; Kavvas, M Levent

    2017-07-25

    Scaling conditions to achieve self-similar solutions of the 3-Dimensional (3D) Reynolds-Averaged Navier-Stokes Equations, posed as an initial and boundary value problem, are obtained by utilizing the Lie Group of Point Scaling Transformations. By means of an open-source Navier-Stokes solver and the derived self-similarity conditions, we demonstrate self-similarity within the time variation of flow dynamics for a rigid-lid cavity problem under both up-scaled and down-scaled domains. The strength of the proposed approach lies in its ability to consider the underlying flow dynamics not only through the governing equations under consideration but also through the initial and boundary conditions, hence allowing one to obtain perfect self-similarity at different time and space scales. The proposed methodology can be a valuable tool for obtaining self-similar flow dynamics at a preferred level of detail, which can be represented by initial and boundary value problems under specific assumptions.
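
    The flavor of a one-parameter point scaling transformation is sketched below (exponents illustrative; self-similarity requires the governing equations, initial conditions and boundary conditions to all be invariant, which fixes the admissible relations among the exponents):

    ```latex
    \begin{equation}
      \tilde{x}_j = \lambda^{\alpha_x} x_j,\qquad
      \tilde{t}   = \lambda^{\alpha_t} t,\qquad
      \tilde{u}_j = \lambda^{\alpha_x - \alpha_t} u_j,\qquad
      \tilde{\nu} = \lambda^{2\alpha_x - \alpha_t} \nu .
    \end{equation}
    ```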

  6. Complementary approaches to diagnosing marine diseases: a union of the modern and the classic

    PubMed Central

    Burge, Colleen A.; Friedman, Carolyn S.; Getchell, Rodman; House, Marcia; Mydlarz, Laura D.; Prager, Katherine C.; Renault, Tristan; Kiryu, Ikunari; Vega-Thurber, Rebecca

    2016-01-01

    Linking marine epizootics to a specific aetiology is notoriously difficult. Recent diagnostic successes show that marine disease diagnosis requires both modern, cutting-edge technology (e.g. metagenomics, quantitative real-time PCR) and more classic methods (e.g. transect surveys, histopathology and cell culture). Here, we discuss how this combination of traditional and modern approaches is necessary for rapid and accurate identification of marine diseases, and emphasize how sole reliance on any one technology or technique may lead disease investigations astray. We present diagnostic approaches at different scales, from the macro (environment, community, population and organismal scales) to the micro (tissue, organ, cell and genomic scales). We use disease case studies from a broad range of taxa to illustrate diagnostic successes from combining traditional and modern diagnostic methods. Finally, we recognize the need for increased capacity of centralized databases, networks, data repositories and contingency plans for diagnosis and management of marine disease. PMID:26880839

  7. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
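
    The textbook form of uniform complex scaling, for orientation: coordinates are dilated by a complex phase, bound states are unaffected, continua rotate into the lower half plane, and resonances appear as isolated complex eigenvalues whose imaginary part gives the width Γ.

    ```latex
    \begin{equation}
      \mathbf{r} \to \mathbf{r}\,e^{i\theta}, \qquad
      H_\theta\,\psi = E\,\psi, \qquad
      E = E_{\mathrm{res}} - \tfrac{i}{2}\Gamma .
    \end{equation}
    ```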

  8. Complementary approaches to diagnosing marine diseases: a union of the modern and the classic

    USGS Publications Warehouse

    Burge, Colleen A.; Friedman, Carolyn S.; Getchell, Rodman G.; House, Marcia; Lafferty, Kevin D.; Mydlarz, Laura D.; Prager, Katherine C.; Sutherland, Kathryn P.; Renault, Tristan; Kiryu, Ikunari; Vega-Thurber, Rebecca

    2016-01-01

    Linking marine epizootics to a specific aetiology is notoriously difficult. Recent diagnostic successes show that marine disease diagnosis requires both modern, cutting-edge technology (e.g. metagenomics, quantitative real-time PCR) and more classic methods (e.g. transect surveys, histopathology and cell culture). Here, we discuss how this combination of traditional and modern approaches is necessary for rapid and accurate identification of marine diseases, and emphasize how sole reliance on any one technology or technique may lead disease investigations astray. We present diagnostic approaches at different scales, from the macro (environment, community, population and organismal scales) to the micro (tissue, organ, cell and genomic scales). We use disease case studies from a broad range of taxa to illustrate diagnostic successes from combining traditional and modern diagnostic methods. Finally, we recognize the need for increased capacity of centralized databases, networks, data repositories and contingency plans for diagnosis and management of marine disease.

  9. A real-time KLT implementation for radio-SETI applications

    NASA Astrophysics Data System (ADS)

    Melis, Andrea; Concu, Raimondo; Pari, Pierpaolo; Maccone, Claudio; Montebugnoli, Stelio; Possenti, Andrea; Valente, Giuseppe; Antonietti, Nicoló; Perrodin, Delphine; Migoni, Carlo; Murgia, Matteo; Trois, Alessio; Barbaro, Massimo; Bocchinu, Alessandro; Casu, Silvia; Lunesu, Maria Ilaria; Monari, Jader; Navarrini, Alessandro; Pisanu, Tonino; Schilliró, Francesco; Vacca, Valentina

    2016-07-01

    SETI, the Search for ExtraTerrestrial Intelligence, is the search for radio signals emitted by alien civilizations living in the Galaxy. Narrow-band FFT-based approaches have been preferred in SETI, since their computation time only grows like N ln N, where N is the number of time samples. By contrast, a wide-band approach based on the Karhunen-Loève Transform (KLT) algorithm would be preferable, but it scales like N². In this paper, we describe a hardware-software infrastructure based on FPGA boards and GPU-based PCs that circumvents this computation-time problem, allowing for a real-time KLT.
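
    The expensive kernel the hardware has to accelerate is, in essence, the eigendecomposition of the data's autocorrelation matrix. A small offline sketch of a KLT used for wide-band signal extraction (all signal parameters invented for illustration; this is the mathematical core only, not the FPGA/GPU pipeline described in the paper):

    ```python
    import numpy as np
    from scipy.linalg import toeplitz, eigh

    rng = np.random.default_rng(2)

    # Toy case: a weak, slowly drifting chirp buried in noise. The KLT
    # expands the data over eigenvectors of its autocorrelation matrix,
    # so a few dominant eigenmodes can capture a wide-band signal that
    # the FFT smears across many bins.
    n = 512
    t = np.arange(n)
    signal = 0.2 * np.sin(2 * np.pi * (0.01 + 5e-5 * t) * t)
    data = signal + rng.normal(size=n)

    # Empirical autocorrelation -> Toeplitz matrix -> eigendecomposition.
    acf = np.correlate(data, data, mode="full")[n - 1:] / n
    R = toeplitz(acf)
    eigvals, eigvecs = eigh(R)               # ascending eigenvalue order

    # Reconstruct from the few most energetic Karhunen-Loeve modes.
    k = 8
    top = eigvecs[:, -k:]
    recon = top @ (top.T @ data)
    print("mean-square residual vs clean signal:",
          np.mean((recon - signal) ** 2))
    ```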

  10. Applying the Pseudo-Panel Approach to International Large-Scale Assessments: A Methodology for Analyzing Subpopulation Trend Data

    ERIC Educational Resources Information Center

    Hooper, Martin

    2017-01-01

    TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…

  11. Regional Environmental Monitoring and Assessment Program Data (REMAP)

    EPA Pesticide Factsheets

    The Regional Environmental Monitoring and Assessment Program (REMAP) was initiated to test the applicability of the Environmental Monitoring and Assessment Program (EMAP) approach to answer questions about ecological conditions at regional and local scales. Using EMAP's statistical design and indicator concepts, REMAP conducts projects at smaller geographic scales and in shorter time frames than the national EMAP program.

  12. Are Madrean ecosystems approaching tipping points? Anticipating interactions of landscape disturbance and climate change

    Treesearch

    Donald A. Falk

    2013-01-01

    Contemporary climate change is driving transitions in many Madrean ecosystems, but the time scale of these changes is accelerated greatly by severe landscape disturbances such as wildfires and insect outbreaks. Landscape-scale disturbance events such as wildfires interact with prior disturbance patterns and landscape structure to catalyze abrupt transitions to novel...

  13. Understanding and responding to earthquake hazards

    NASA Technical Reports Server (NTRS)

    Raymond, C. A.; Lundgren, P. R.; Madsen, S. N.; Rundle, J. B.

    2002-01-01

    Advances in understanding of the earthquake cycle and in assessing earthquake hazards is a topic of great importance. Dynamic earthquake hazard assessments resolved for a range of spatial scales and time scales will allow a more systematic approach to prioritizing the retrofitting of vulnerable structures, relocating populations at risk, protecting lifelines, preparing for disasters, and educating the public.

  14. A conceptual cross-scale approach for linking empirical discharge measurements and regional groundwater models with application to legacy nitrogen transport and coastal nitrogen management

    NASA Astrophysics Data System (ADS)

    Barclay, J. R.; Helton, A. M.; Starn, J. J.; Briggs, M. A.

    2016-12-01

    Despite years of management, seasonal hypoxia from excess nitrogen (N) is a pervasive problem in many coastal waters. Current approaches to managing coastal eutrophication in the United States (USA) focus on surface runoff and river transport of nutrients, and often assume that groundwater N is at steady state. This is not necessarily the case, as terrestrial N inputs are affected by changing land use and nutrient management practices. Furthermore, approximately 70% of surface water in the USA is derived from groundwater and there is widespread N contamination in many of our nation's aquifers. Nitrogen export via groundwater discharge to streams during baseflow may be the reason many impaired coastal systems show little improvement. There is a critical need to develop approaches that consider the effects of groundwater transport on N loading to surface waters. Aquifer transport times, which can be decades or even centuries longer than surface water transport times, introduce lags between changes in terrestrial management and reductions in coastal loads. Ignoring these lags can lead to overly ambitious and unrealistic load reduction goals, or incorrect conclusions regarding the effectiveness of management strategies. Additionally, regional groundwater models typically have a coarse resolution that makes it difficult to incorporate fine-scale processes that drive N transformations, such as groundwater-surface water exchange across steep redox gradients at stream bed interfaces. Despite this challenge, representing these important fine-scale processes well is essential to modeling groundwater transport of N across regional scales and to making informed management decisions. We present 1) a conceptual approach to linking regional models and fine-scale empirical measurements, and 2) preliminary groundwater flow and transport model results for the Housatonic and Farmington Rivers in Connecticut, USA. Our cross-scale approach utilizes thermal infrared imaging and vertical temperature profiling to calculate groundwater discharge and to iteratively refine and downscale the groundwater flow model. Model results may improve management of N loading from groundwater to sensitive coastal systems, such as the Long Island Sound.

  15. Central composite design with the help of multivariate curve resolution in loadability optimization of RP-HPLC to scale-up a binary mixture.

    PubMed

    Taheri, Mohammadreza; Moazeni-Pourasil, Roudabeh Sadat; Sheikh-Olia-Lavasani, Majid; Karami, Ahmad; Ghassempour, Alireza

    2016-03-01

    Chromatographic method development for preparative targets is a time-consuming and subjective process. This can be particularly problematic because of the use of valuable samples for isolation and the large consumption of solvents at preparative scale. These processes could be improved by using statistical computations to save time, solvent and experimental effort. Thus, aided by ESI-MS, and after applying DryLab software to gain an overview of the most effective parameters in the separation of synthesized celecoxib and its co-eluted compounds, design-of-experiment software relying on multivariate modeling as a chemometric approach was used to predict the optimized touching-band overloading conditions by objective functions, according to the relationship between selectivity and stationary phase properties. The loadability of the method was investigated at the analytical and semi-preparative scales, and the performance of this chemometric approach was confirmed by peak shapes as well as the recovery and purity of the products.

  16. The Physics of Boiling at Burnout

    NASA Technical Reports Server (NTRS)

    Theofanous, T. G.; Tu, J. P.; Dinh, T. N.; Salmassi, T.; Dinh, A. T.; Gasljevic, K.

    2000-01-01

    The basic elements of a new experimental approach for the investigation of burnout in pool boiling are presented. The approach consists of the combined use of ultrathin (nano-scale) heaters and high-speed infrared imaging of the heater temperature pattern as a whole, in conjunction with highly detailed control and characterization of heater morphology at the nano and micron scales. It is shown that the burnout phenomenon can be resolved in both space and time. Ultrathin heaters capable of dissipating power levels, at steady state, of over 1 MW/m² are demonstrated. A separation of scales is identified, and it is used to transfer the focus of attention from the complexity of the two-phase mixing layer in the vicinity of the heater to a micron-scale microlayer and the nucleation and associated film-disruption processes within it.

  17. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.

  18. Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis

    DOE PAGES

    Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...

    2016-01-28

    We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
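
    One simple way to build a statistically equivalent surrogate from a wavelet decomposition is to shuffle the detail coefficients within each level, preserving the per-level energy while randomizing the phase. The sketch below uses the PyWavelets package and is a stand-in for the paper's encoding scheme, not its exact algorithm; the signal is invented for illustration.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(3)

    # Hypothetical fine-scale signal standing in for a microscale
    # surface-reaction rate.
    t = np.linspace(0, 10, 1024)
    rate = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=t.size)

    # Decompose, permute detail coefficients level by level, reconstruct.
    coeffs = pywt.wavedec(rate, "db4", level=5)
    surrogate_coeffs = [coeffs[0]] + [rng.permutation(c) for c in coeffs[1:]]
    surrogate = pywt.waverec(surrogate_coeffs, "db4")

    print("variance original vs surrogate:", rate.var(), surrogate.var())
    ```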

  19. Scaling laws and dynamics of bubble coalescence

    NASA Astrophysics Data System (ADS)

    Anthony, Christopher R.; Kamat, Pritish M.; Thete, Sumeet S.; Munro, James P.; Lister, John R.; Harris, Michael T.; Basaran, Osman A.

    2017-08-01

    The coalescence of bubbles and drops plays a central role in nature and industry. During coalescence, two bubbles or drops touch and merge into one as the neck connecting them grows from microscopic to macroscopic scales. The hydrodynamic singularity that arises when two bubbles or drops have just touched and the flows that ensue have been studied thoroughly when two drops coalesce in a dynamically passive outer fluid. In this paper, the coalescence of two identical and initially spherical bubbles, which are idealized as voids that are surrounded by an incompressible Newtonian liquid, is analyzed by numerical simulation. This problem has recently been studied (a) experimentally using high-speed imaging and (b) by asymptotic analysis in which the dynamics is analyzed by determining the growth of a hole in the thin liquid sheet separating the two bubbles. In the latter, advantage is taken of the fact that the flow in the thin sheet of nonconstant thickness is governed by a set of one-dimensional, radial extensional flow equations. While these studies agree on the power law scaling of the variation of the minimum neck radius with time, they disagree with respect to the numerical value of the prefactors in the scaling laws. In order to reconcile these differences and also provide insights into the dynamics that are difficult to probe by either of the aforementioned approaches, simulations are used to access both earlier times than has been possible in the experiments and also later times when asymptotic analysis is no longer applicable. Early times and extremely small length scales are attained in the new simulations through the use of a truncated domain approach. Furthermore, it is shown by direct numerical simulations in which the flow within the bubbles is also determined along with the flow exterior to them that idealizing the bubbles as passive voids has virtually no effect on the scaling laws relating minimum neck radius and time.
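
    For orientation, one commonly quoted form of the inertial neck-growth scaling is given below; the studies discussed above agree on the t^{1/2} exponent but differ on the numerical prefactor A, which is precisely the discrepancy the simulations set out to reconcile (σ is the surface tension, ρ the liquid density, R the bubble radius).

    ```latex
    \begin{equation}
      r_{\min}(t) \simeq A \left( \frac{\sigma R}{\rho} \right)^{1/4} t^{1/2} .
    \end{equation}
    ```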

  20. Residents' values and fuels management approaches

    Treesearch

    Gwo-Bao Liou; Christine Vogt; Greg Winter; Sarah McCaffrey

    2008-01-01

    The research utilizes the Forest Value and Salient Value Similarity Scales to examine homeowners' value orientations and relate them to attitudes toward and support for fuels management approaches. Data were collected from homeowners living in the wildland-urban interface of the Huron- Manistee National Forest at two time periods, in 2002 and 2006. The panel data...

  1. Training and Maintaining System-Wide Reliability in Outcome Management.

    PubMed

    Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E

    2014-01-01

    The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real-time client- and program-level data, and for the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches to CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by the external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for practitioners trained by both the external and internal technical assistance approaches (.909-.995 and .915-.997, respectively). One- and 3-year estimates showed some drift on several scales. High and consistent reliabilities over time and across training methods have implications for CAFAS training of behavioral health care practitioners, and for the maintenance of the CAFAS as a global outcome management tool in systems of care.

  2. Large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as that accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and comparing it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying a strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
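
    The underlying test is Granger's: the past of x "causes" y if adding lagged copies of x improves the prediction of y over y's own past. The sketch below shows the plain bivariate version (lsGC additionally works in a dimension-reduced space so that it scales to thousands of voxel time series); the coupled AR process and its coefficients are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def granger_stat(x, y, p=2):
        """Crude Granger statistic: does the past of x help predict y?
        Compares residual sums of squares of an AR(p) model for y with
        and without lagged copies of x."""
        n = len(y)
        Y = y[p:]
        own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
        both = np.column_stack([own] +
                               [x[p - k:n - k, None] for k in range(1, p + 1)])
        def rss(A):
            beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
            return ((Y - A @ beta) ** 2).sum()
        return np.log(rss(own) / rss(both))

    # Synthetic pair: x drives y with a one-step lag, not vice versa.
    x = rng.normal(size=2000)
    y = np.empty_like(x)
    y[0] = 0.0
    for i in range(1, len(x)):
        y[i] = 0.5 * y[i - 1] + 0.4 * x[i - 1] + 0.1 * rng.normal()

    print("x -> y:", granger_stat(x, y))   # clearly positive
    print("y -> x:", granger_stat(y, x))   # near zero
    ```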

  3. Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems.

    PubMed

    Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei

    2015-01-01

    The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and the auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors to those of the full time series, i.e., these characteristics are rather robust across multiple time scales. However, the cross-correlation between individual stocks and the return-volatility correlation are time-scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled over a couple of days, and by the medium mode when returns are sampled over dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode.
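
    A minimal sketch of the decompose-then-group workflow, assuming the PyEMD package (pip package EMD-signal) for the empirical mode decomposition; the signal and the equal-thirds grouping boundaries are illustrative, not the paper's choices.

    ```python
    import numpy as np
    from PyEMD import EMD   # from the EMD-signal package

    rng = np.random.default_rng(5)

    # Hypothetical "return" series: fast noise plus a slow component.
    n = 2000
    slow = np.sin(2 * np.pi * np.arange(n) / 500)
    returns = 0.5 * slow + rng.normal(size=n)

    imfs = EMD()(returns)                 # IMFs ordered fast -> slow

    # Group IMFs into fast / medium / slow modes, mimicking the
    # three-mode split described above.
    m = imfs.shape[0]
    fast = imfs[: m // 3].sum(axis=0)
    medium = imfs[m // 3: 2 * m // 3].sum(axis=0)
    slow_mode = imfs[2 * m // 3:].sum(axis=0)
    print("variance share fast/medium/slow:",
          [round(g.var() / returns.var(), 2)
           for g in (fast, medium, slow_mode)])
    ```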

  4. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations.

    PubMed

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-07

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.

  5. Overcoming time scale and finite size limitations to compute nucleation rates from small scale well tempered metadynamics simulations

    NASA Astrophysics Data System (ADS)

    Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele

    2016-12-01

    Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
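
    The time-scale problem is typically handled in this family of enhanced-sampling rate calculations by rescaling the biased simulation time with the instantaneous bias (the Tiwary-Parrinello acceleration factor); a sketch of the relation, with V(s,t) the metadynamics bias acting on the collective variable s:

    ```latex
    \begin{equation}
      t_{\mathrm{phys}} \;=\; \sum_{i} \Delta t \;
         e^{\beta V\!\left(s(t_i),\,t_i\right)},
      \qquad \beta = 1/k_B T .
    \end{equation}
    ```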

  6. Assessment of flow regime alterations over a spectrum of temporal scales using wavelet-based approaches

    NASA Astrophysics Data System (ADS)

    Wu, Fu-Chun; Chang, Ching-Fu; Shiau, Jenq-Tzong

    2015-05-01

    The full range of natural flow regime is essential for sustaining the riverine ecosystems and biodiversity, yet there are still limited tools available for assessment of flow regime alterations over a spectrum of temporal scales. Wavelet analysis has proven useful for detecting hydrologic alterations at multiple scales via the wavelet power spectrum (WPS) series. The existing approach based on the global WPS (GWPS) ratio tends to be dominated by the rare high-power flows so that alterations of the more frequent low-power flows are often underrepresented. We devise a new approach based on individual deviations between WPS (DWPS) that are root-mean-squared to yield the global DWPS (GDWPS). We test these two approaches on the three reaches of the Feitsui Reservoir system (Taiwan) that are subjected to different classes of anthropogenic interventions. The GDWPS reveal unique features that are not detected with the GWPS ratios. We also segregate the effects of individual subflow components on the overall flow regime alterations using the subflow GDWPS. The results show that the daily hydropeaking waves below the reservoir not only intensified the flow oscillations at daily scale but most significantly eliminated subweekly flow variability. Alterations of flow regime were most severe below the diversion weir, where the residual hydropeaking resulted in a maximum impact at daily scale while the postdiversion null flows led to large hydrologic alterations over submonthly scales. The smallest impacts below the confluence reveal that the hydrologic alterations at scales longer than 2 days were substantially mitigated with the joining of the unregulated tributary flows, whereas the daily-scale hydrologic alteration was retained because of the hydropeaking inherited from the reservoir releases. The proposed DWPS approach unravels for the first time the details of flow regime alterations at these intermediate scales that are overridden by the low-frequency high-power flows when the long-term averaged GWPS are used.
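
    The contrast between the two diagnostics can be sketched numerically: the GWPS ratio averages power in time before comparing, while the GDWPS root-mean-squares the pointwise spectral deviations. The synthetic "pre" and "post" flows below and the use of the Morlet wavelet from PyWavelets are illustrative; the paper's exact GWPS/GDWPS definitions are followed only schematically.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)

    n = 1024
    t = np.arange(n)
    pre = 10 + np.sin(2 * np.pi * t / 5) + rng.normal(size=n)      # subweekly
    post = 10 + 3 * np.sin(2 * np.pi * t / 30) + 0.2 * rng.normal(size=n)

    scales = np.arange(1, 64)
    wps_pre, _ = pywt.cwt(pre, scales, "morl")
    wps_post, _ = pywt.cwt(post, scales, "morl")
    P_pre, P_post = np.abs(wps_pre) ** 2, np.abs(wps_post) ** 2

    # Global WPS ratio: time-averaged power, then compare per scale...
    gwps_ratio = P_post.mean(axis=1) / P_pre.mean(axis=1)
    # ...vs global DWPS: root-mean-square of pointwise deviations.
    gdwps = np.sqrt(((P_post - P_pre) ** 2).mean(axis=1))

    print("scale of max GWPS ratio:", scales[np.argmax(gwps_ratio)])
    print("scale of max GDWPS:    ", scales[np.argmax(gdwps)])
    ```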

  7. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depend on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel) and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach outperformed contemporary approaches and represents a fast, objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest or heel be spatially continuous, which is important because dune morphology is likely to vary naturally alongshore.
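
    Relative relief itself is easy to state: the position of each elevation within its local min-max range, averaged over several window sizes. The sketch below captures that general idea on a toy 1-D profile (window sizes and the profile are illustrative, not the authors' exact implementation):

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def mean_relative_relief(z, windows=(3, 5, 9, 15)):
        """Average relative relief across several window sizes.
        RR = (z - local_min) / (local_max - local_min); crests score
        near 1, toes and heels show up as breaks in the RR signal."""
        rr = np.zeros_like(z, dtype=float)
        for w in windows:
            zmin = minimum_filter(z, size=w)
            zmax = maximum_filter(z, size=w)
            span = np.where(zmax > zmin, zmax - zmin, 1.0)
            rr += (z - zmin) / span
        return rr / len(windows)

    # Toy cross-shore profile: gently sloping beach with one dune.
    x = np.linspace(0, 200, 400)
    dem = 1.5 * np.exp(-((x - 80) / 15) ** 2) + 0.01 * x
    rr = mean_relative_relief(dem)
    print("RR at dune crest vs beach:", rr[160], rr[40])
    ```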

  8. Novel fine-scale aerial mapping approach quantifies grassland weed cover dynamics and response to management.

    PubMed

    Malmstrom, Carolyn M; Butterfield, H Scott; Planck, Laura; Long, Christopher W; Eviner, Valerie T

    2017-01-01

    Invasive weeds threaten the biodiversity and forage productivity of grasslands worldwide. However, management of these weeds is constrained by the practical difficulty of detecting small-scale infestations across large landscapes and by limits in understanding of landscape-scale invasion dynamics, including the mechanisms that enable patches to expand, contract, or remain stable. While high-end hyperspectral remote sensing systems can effectively map vegetation cover, these systems are currently too costly and limited in availability for most land managers. We demonstrate the application of a more accessible and cost-effective remote sensing approach, based on simple aerial imagery, for quantifying weed cover dynamics over time. In California annual grasslands, the target communities of interest include invasive weedy grasses (Aegilops triuncialis and Elymus caput-medusae) and desirable forage grass species (primarily Avena spp. and Bromus spp.). Detecting invasion of annual grasses into an annual-dominated community is particularly challenging, but we were able to consistently characterize these two communities based on their phenological differences in peak growth and senescence using maximum likelihood supervised classification of imagery acquired twice per year (mid-season and end-of-season). This approach permitted us to map weed-dominated cover at a 1-m scale (correctly detecting 93% of weed patches across the landscape) and to evaluate weed cover change over time. We found that weed cover was more pervasive and persistent in management units that had had no significant grazing for several years than in those that were grazed, whereas forage cover was more abundant and stable in the grazed units. This application demonstrates the power of this method for assessing fine-scale vegetation transitions across heterogeneous landscapes. It thus provides a means for small-scale early detection of invasive species and for testing fundamental questions about landscape dynamics.

  9. Novel fine-scale aerial mapping approach quantifies grassland weed cover dynamics and response to management

    PubMed Central

Malmstrom, Carolyn M.; Butterfield, H. Scott; Planck, Laura; Long, Christopher W.; Eviner, Valerie T.

    2017-01-01

    Invasive weeds threaten the biodiversity and forage productivity of grasslands worldwide. However, management of these weeds is constrained by the practical difficulty of detecting small-scale infestations across large landscapes and by limits in understanding of landscape-scale invasion dynamics, including the mechanisms that enable patches to expand, contract, or remain stable. While high-end hyperspectral remote sensing systems can effectively map vegetation cover, these systems are currently too costly and limited in availability for most land managers. We demonstrate the application of a more accessible and cost-effective remote sensing approach, based on simple aerial imagery, for quantifying weed cover dynamics over time. In California annual grasslands, the target communities of interest include invasive weedy grasses (Aegilops triuncialis and Elymus caput-medusae) and desirable forage grass species (primarily Avena spp. and Bromus spp.). Detecting invasion of annual grasses into an annual-dominated community is particularly challenging, but we were able to consistently characterize these two communities based on their phenological differences in peak growth and senescence using maximum likelihood supervised classification of imagery acquired twice per year (mid-season and end-of-season). This approach permitted us to map weed-dominated cover at a 1-m scale (correctly detecting 93% of weed patches across the landscape) and to evaluate weed cover change over time. We found that weed cover was more pervasive and persistent in management units that had had no significant grazing for several years than in those that were grazed, whereas forage cover was more abundant and stable in the grazed units. This application demonstrates the power of this method for assessing fine-scale vegetation transitions across heterogeneous landscapes. It thus provides a means for small-scale early detection of invasive species and for testing fundamental questions about landscape dynamics. PMID:29016604
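
    The classification step rests on a standard idea: model each class as a Gaussian in the two-date feature space (weeds stay green late in the season, forage senesces early) and assign each pixel to the class with the highest likelihood. A sketch under those assumptions, with fabricated training statistics standing in for training polygons:

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(7)

    # Two-date feature vector per pixel: (mid-season, end-of-season) greenness.
    train_weed = rng.normal([0.6, 0.5], 0.05, size=(200, 2))    # green late
    train_forage = rng.normal([0.7, 0.2], 0.05, size=(200, 2))  # senesces early

    # Fit a Gaussian per class (maximum likelihood classifier).
    classes = {}
    for name, X in [("weed", train_weed), ("forage", train_forage)]:
        classes[name] = multivariate_normal(X.mean(axis=0), np.cov(X.T))

    pixels = rng.normal([0.62, 0.45], 0.08, size=(10, 2))
    labels = [max(classes, key=lambda c: classes[c].logpdf(p)) for p in pixels]
    print(labels)
    ```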

  10. Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence

    NASA Astrophysics Data System (ADS)

    van der Straeten, Erik; Beck, Christian

    2009-09-01

    We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
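
    The operational core of such an analysis is simple: carve the series into windows long enough to average over the fast dynamics but short compared to the fluctuations of β, and read off β from the local variance. A minimal sketch on synthetic superstatistical data (all time scales and the log-normal β process are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Local Gaussians with a slowly fluctuating inverse variance beta.
    n_windows, window = 200, 500
    beta_true = np.exp(0.5 * rng.normal(size=n_windows))   # slow fluctuations
    data = np.concatenate([rng.normal(0, 1 / np.sqrt(b), size=window)
                           for b in beta_true])

    # Recover the beta series from windowed variances.
    beta_est = np.array([1.0 / data[i * window:(i + 1) * window].var()
                         for i in range(n_windows)])
    print("correlation of recovered and true beta:",
          np.corrcoef(beta_true, beta_est)[0, 1])
    ```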

  11. A multi-scale approach to monitor urban carbon-dioxide emissions in the atmosphere over Vancouver, Canada

    NASA Astrophysics Data System (ADS)

    Christen, A.; Crawford, B.; Ketler, R.; Lee, J. K.; McKendry, I. G.; Nesic, Z.; Caitlin, S.

    2015-12-01

    Measurements of long-lived greenhouse gases in the urban atmosphere are potentially useful to constrain and validate urban emission inventories, or space-borne remote-sensing products. We summarize and compare three different approaches, operating at different scales, that directly or indirectly identify, attribute and quantify emissions (and uptake) of carbon dioxide (CO2) in urban environments. All three approaches are illustrated using in-situ measurements in the atmosphere in and over Vancouver, Canada. Mobile sensing may be a promising way to quantify and map CO2 mixing ratios at fine scales across heterogeneous and complex urban environments. We developed a system for monitoring CO2 mixing ratios at street level using a network of mobile CO2 sensors deployable on vehicles and bikes. A total of 5 prototype sensors were built and used simultaneously in a measurement campaign across a range of urban land use types and densities within a short time frame (3 hours). The dataset is used to aid fine-scale emission mapping in combination with simultaneous tower-based flux measurements. Overall, calculated CO2 emissions are realistic when compared against a spatially disaggregated emission inventory. The second approach is based on mass flux measurements of CO2 using a tower-based eddy covariance (EC) system. We present a continuous 7-year dataset of CO2 fluxes measured by EC at the 28 m tall flux tower 'Vancouver-Sunset'. We show how this dataset can be combined with turbulent source area models to quantify and partition different emission processes at the neighborhood scale. The long-term EC measurements are within 10% of a spatially disaggregated emission inventory. Thirdly, at the urban scale, we present a dataset of CO2 mixing ratios measured using a tethered balloon system in the urban boundary layer above Vancouver. Using a simple box model, net city-scale CO2 emissions can be determined from the measured rate of change of CO2 mixing ratios and estimated CO2 advection and entrainment fluxes. Daily city-scale emission totals predicted by the model are within 32% of a spatially scaled municipal greenhouse gas inventory. In summary, combining information from different approaches and scales is a promising way to establish long-term emission monitoring networks in cities.
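
    The box-model bookkeeping at the urban scale amounts to solving the column budget for the surface flux. A back-of-the-envelope sketch with illustrative numbers (not the Vancouver values), converting a ppm-per-hour budget over the boundary layer into a surface flux:

    ```python
    # Box model: emissions balance storage, advection and entrainment over
    # the urban boundary layer. All numbers below are illustrative.
    h = 500.0          # boundary-layer height, m
    dC_dt = 8.0        # observed rate of change of mixing ratio, ppm per hour
    adv = -2.0         # advection term, ppm per hour (dilution by cleaner air)
    ent = -1.5         # entrainment of free-tropospheric air, ppm per hour

    # 1 ppm = 1 umol CO2 per mol of air; air is ~1/0.0224 mol m^-3 at 0 C, 1 atm.
    ppm_per_h_to_umol_m2_s = h * (1.0 / 0.0224) / 3600.0
    E = (dC_dt - adv - ent) * ppm_per_h_to_umol_m2_s
    print(f"implied surface flux: {E:.0f} umol CO2 m^-2 s^-1")
    ```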

  12. Large-Scale Bi-Level Strain Design Approaches and Mixed-Integer Programming Solution Techniques

    PubMed Central

    Kim, Joonhoon; Reed, Jennifer L.; Maravelias, Christos T.

    2011-01-01

    The use of computational models in metabolic engineering has been increasing as more genome-scale metabolic models and computational approaches become available. Various computational approaches have been developed to predict how genetic perturbations affect metabolic behavior at a systems level, and have been successfully used to engineer microbial strains with improved primary or secondary metabolite production. However, identification of metabolic engineering strategies involving a large number of perturbations is currently limited by computational resources due to the size of genome-scale models and the combinatorial nature of the problem. In this study, we present (i) two new bi-level strain design approaches using mixed-integer programming (MIP), and (ii) general solution techniques that improve the performance of MIP-based bi-level approaches. The first approach (SimOptStrain) simultaneously considers gene deletion and non-native reaction addition, while the second approach (BiMOMA) uses minimization of metabolic adjustment to predict knockout behavior in a MIP-based bi-level problem for the first time. Our general MIP solution techniques significantly reduced the CPU times needed to find optimal strategies when applied to an existing strain design approach (OptORF) (e.g., from ∼10 days to ∼5 minutes for metabolic engineering strategies with 4 gene deletions), and identified strategies for producing compounds where previous studies could not (e.g., malate and serine). Additionally, we found novel strategies using SimOptStrain with higher predicted production levels (for succinate and glycerol) than could have been found using an existing approach that considers network additions and deletions in sequential steps rather than simultaneously. Finally, using BiMOMA we found novel strategies involving large numbers of modifications (for pyruvate and glutamate), which sequential search and genetic algorithms were unable to find. The approaches and solution techniques developed here will facilitate the strain design process and extend the scope of its application to metabolic engineering. PMID:21949695

  14. Bias correction of satellite-based rainfall data

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Biswa; Solomatine, Dimitri

    2015-04-01

    Limited availability of hydro-meteorological data in many catchments restricts the possibility of reliable hydrological analyses, especially for near-real-time predictions. However, the variety of satellite-based and meteorological-model rainfall products provides new opportunities. Often the accuracy of these rainfall products, when compared to rain gauge measurements, is not impressive. The systematic differences of these rainfall products from gauge observations can be partially compensated by adopting a bias (error) correction. Many such methods correct the satellite-based rainfall data by comparing their mean value to the mean value of rain gauge data. Refined approaches first identify a suitable time scale at which the different data products are more comparable and then employ a bias correction at that time scale. More elegant methods use quantile-to-quantile bias correction, which, however, assumes that the available (often limited) sample size is sufficient for comparing probabilities of different rainfall products. Analysis of rainfall data and understanding of the process of its generation reveal that the bias in different rainfall data varies in space and time. The time aspect is sometimes taken into account by considering seasonality. In this research we have adopted a bias correction approach that takes into account the variation of rainfall in space and time. A clustering-based approach is employed in which every new data point (e.g. of the Tropical Rainfall Measuring Mission (TRMM)) is first assigned to a specific cluster of that data product; then, by identifying the corresponding cluster of gauge data, the bias correction specific to that cluster is adopted. The presented approach considers the space-time variation of rainfall and as a result the corrected data are more realistic. Keywords: bias correction, rainfall, TRMM, satellite rainfall
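
    A minimal sketch of a cluster-wise bias correction of this general kind, assuming scikit-learn's KMeans for the clustering and simple multiplicative per-cluster factors; the feature choice, factor form, and function names are illustrative, not the authors' exact procedure:

        import numpy as np
        from sklearn.cluster import KMeans

        def fit_cluster_bias(sat_feats, sat_rain, gauge_rain, n_clusters=5, seed=0):
            """Cluster satellite data points by feature vectors (e.g. location,
            season, intensity) and fit one multiplicative factor per cluster."""
            km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(sat_feats)
            factors = np.ones(n_clusters)
            for c in range(n_clusters):
                m = km.labels_ == c
                if sat_rain[m].sum() > 0:
                    factors[c] = gauge_rain[m].sum() / sat_rain[m].sum()
            return km, factors

        def correct(km, factors, new_feats, new_rain):
            """Assign each new satellite estimate to a cluster, apply its factor."""
            return new_rain * factors[km.predict(new_feats)]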

  15. The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields

    USGS Publications Warehouse

    Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.

    2004-01-01

    A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. © 2004 American Meteorological Society.
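
    The core of the Schaake shuffle is a rank reordering: sorted ensemble values are assigned the rank order of observations from similar historical dates. A minimal single-station, single-lead-time sketch (in the multi-site case the same historical dates are used at every station, which is what restores the spatial rank structure):

        import numpy as np

        def schaake_shuffle(ensemble, historical):
            """Reorder ensemble values so their ranks match the ranks of an
            equally sized sample of historical observations.

            ensemble   : (n,) forecast ensemble for one site and lead time
            historical : (n,) observations from similar dates in the record
            """
            ensemble = np.asarray(ensemble, dtype=float)
            ranks = np.argsort(np.argsort(historical))  # rank of each historical value
            return np.sort(ensemble)[ranks]             # output i gets historical[i]'s rank

        # Toy check: output is a permutation of the ensemble in historical rank order.
        out = schaake_shuffle([3.1, 0.0, 7.4, 1.2], [10., 2., 5., 8.])
        # historical ranks are [3, 0, 1, 2] -> out = [7.4, 0.0, 1.2, 3.1]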

  16. Dynamic ocean management increases the efficiency and efficacy of fisheries management.

    PubMed

    Dunn, Daniel C; Maxwell, Sara M; Boustany, Andre M; Halpin, Patrick N

    2016-01-19

    In response to the inherent dynamic nature of the oceans and continuing difficulty in managing ecosystem impacts of fisheries, interest in the concept of dynamic ocean management, or real-time management of ocean resources, has accelerated in the last several years. However, scientists have yet to quantitatively assess the efficiency of dynamic management over static management. Of particular interest is how scale influences effectiveness, both in terms of how it reflects underlying ecological processes and how this relates to potential efficiency gains. Here, we address the empirical evidence gap and further the ecological theory underpinning dynamic management. We illustrate, through the simulation of closures across a range of spatiotemporal scales, that dynamic ocean management can address previously intractable problems at scales associated with coactive and social patterns (e.g., competition, predation, niche partitioning, parasitism, and social aggregations). Furthermore, it can significantly improve the efficiency of management: as the resolution of the closures used increases (i.e., as the closures become more targeted), the percentage of target catch forgone or displaced decreases, the reduction ratio (bycatch/catch) increases, and the total time-area required to achieve the desired bycatch reduction decreases. In the scenario examined, coarser scale management measures (annual time-area closures and monthly full-fishery closures) would displace up to four to five times the target catch and require 100-200 times more square kilometer-days of closure than dynamic measures (grid-based closures and move-on rules). To achieve similar reductions in juvenile bycatch, the fishery would forgo or displace between USD 15-52 million in landings using a static approach over a dynamic management approach.

  17. Scaling forecast models for wind turbulence and wind turbine power intermittency

    NASA Astrophysics Data System (ADS)

    Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy

    2017-04-01

    The intermittency of wind turbine power remains an important issue for the massive development of this renewable energy. The energy peaks injected into the electric grid produce difficulties in energy distribution management. Hence, a correct forecast of wind power in the short and middle term is needed, given the high unpredictability of the intermittency phenomenon. We consider a statistical approach through the analysis and characterization of stochastic fluctuations. The theoretical framework is the multifractal modeling of wind velocity fluctuations. Here, we consider data from three wind turbines, two of which use direct-drive technology. These turbines produce energy under real operating conditions and allow us to test our forecast models of power production at different time horizons. Two forecast models were developed based on two physical principles observed in the wind and power time series: the scaling properties on the one hand and the intermittency in the wind power increments on the other. The first tool is related to the intermittency through a multifractal lognormal fit of the power fluctuations. The second tool is based on an analogy of the power scaling properties with a fractional Brownian motion. Indeed, long-term memory is found in both time series. Both models show encouraging results, since the tendency of the signal is correctly reproduced over different time scales. These tools are first steps toward efficient forecasting approaches for grid adaptation in the face of wind energy fluctuations.
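
    Both tools rest on scaling diagnostics of the power signal. A minimal sketch of one standard such diagnostic: estimating a Hurst-like scaling exponent from the second-order structure function, assuming power-law behaviour S2(tau) ~ tau^(2H); the multifractal lognormal fit and fBm analogy of the paper build on exponents of this kind:

        import numpy as np

        def hurst_from_structure_function(x, taus):
            """Fit S2(tau) = <|x(t+tau) - x(t)|^2> ~ tau^(2H) on log-log axes."""
            s2 = np.array([np.mean((x[t:] - x[:-t]) ** 2) for t in taus])
            slope, _ = np.polyfit(np.log(taus), np.log(s2), 1)
            return slope / 2.0

        # Sanity check on ordinary Brownian motion, where H should be near 0.5.
        rng = np.random.default_rng(1)
        bm = np.cumsum(rng.normal(size=50_000))
        H = hurst_from_structure_function(bm, taus=np.arange(1, 200))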

  18. SWARM : a scientific workflow for supporting Bayesian approaches to improve metabolic models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, X.; Stevens, R.; Mathematics and Computer Science

    2008-01-01

    With the exponential growth of complete genome sequences, the analysis of these sequences is becoming a powerful approach to build genome-scale metabolic models. These models can be used to study individual molecular components and their relationships, and eventually to study cells as systems. However, constructing genome-scale metabolic models manually is time-consuming and labor-intensive. As a result, far fewer genome-scale metabolic models are available compared to the hundreds of available genome sequences. To tackle this problem, we designed SWARM, a scientific workflow that can be utilized to improve genome-scale metabolic models in a high-throughput fashion. SWARM deals with a range of issues including the integration of data across distributed resources, data format conversions, data update, and data provenance. Taken together, SWARM streamlines the whole modeling process, which includes extracting data from various resources, deriving training datasets to train a set of predictors, applying Bayesian techniques to assemble the predictors, inferring on the ensemble of predictors to insert missing data, and eventually improving draft metabolic networks automatically. By enhancing metabolic model construction, SWARM enables scientists to generate many genome-scale metabolic models within a short period of time and with less effort.

  19. Galaxy Zoo: evidence for diverse star formation histories through the green valley

    NASA Astrophysics Data System (ADS)

    Smethurst, R. J.; Lintott, C. J.; Simmons, B. D.; Schawinski, K.; Marshall, P. J.; Bamford, S.; Fortson, L.; Kaviraj, S.; Masters, K. L.; Melvin, T.; Nichol, R. C.; Skibba, R. A.; Willett, K. W.

    2015-06-01

    Does galaxy evolution proceed through the green valley via multiple pathways or as a single population? Motivated by recent results highlighting radically different evolutionary pathways between early- and late-type galaxies, we present results from a simple Bayesian approach to this problem wherein we model the star formation history (SFH) of a galaxy with two parameters, [t, τ], and compare the predicted and observed optical and near-ultraviolet colours. We use a novel method to investigate the morphological differences between the most probable SFHs for both disc-like and smooth-like populations of galaxies, using a sample of 126 316 galaxies (0.01 < z < 0.25) with probabilistic estimates of morphology from Galaxy Zoo. We find a clear difference between the quenching time-scales preferred by smooth- and disc-like galaxies, with three possible routes through the green valley dominated by smooth- (rapid time-scales, attributed to major mergers), intermediate- (intermediate time-scales, attributed to minor mergers and galaxy interactions) and disc-like (slow time-scales, attributed to secular evolution) galaxies. We hypothesize that morphological changes occur in systems which have undergone quenching with an exponential time-scale τ < 1.5 Gyr, in order for the evolution of galaxies in the green valley to match the ratio of smooth to disc galaxies observed in the red sequence. These rapid time-scales are instrumental in the formation of the red sequence at earlier times; however, we find that galaxies currently passing through the green valley typically do so at intermediate time-scales.

  20. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
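
    A minimal sketch of the two ingredients described: a mixed-type fine-scale process (discrete wet/dry occurrence times a continuous depth) adjusted to match a given coarse-scale total. Simple proportional rescaling is used here as the most elementary adjustment; the paper's actual procedure is more elaborate and preserves the model's stochastic structure, and all parameter values below are illustrative:

        import numpy as np

        def disaggregate(coarse_total, n_fine, p_wet=0.4, shape=0.7, scale=2.0, seed=0):
            """Generate n_fine intermittent rainfall values (Bernoulli occurrence
            times gamma-distributed depth), rescaled to sum to the coarse total."""
            rng = np.random.default_rng(seed)
            wet = rng.random(n_fine) < p_wet               # discrete component
            depth = rng.gamma(shape, scale, n_fine) * wet  # continuous component
            if depth.sum() == 0:                           # all-dry draw: one wet step
                depth[rng.integers(n_fine)] = 1.0
            return depth * (coarse_total / depth.sum())    # consistency with total

        fine = disaggregate(coarse_total=12.0, n_fine=24)  # e.g. daily total into hours
        assert abs(fine.sum() - 12.0) < 1e-9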

  1. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    NASA Astrophysics Data System (ADS)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-10-01

    We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.
    Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1
    Licensing provisions: GNU General Public License
    Programming language: Fortran 90
    External routines/libraries: BLAS, LAPACK, MPI (optional)
    Nature of problem: Direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time-consuming even on modern computers.
    Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N^3.
    Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems with up to 15 atoms per unit cell.

  2. Exposure Time Distributions reveal Denitrification Rates along Groundwater Flow Path of an Agricultural Unconfined Aquifer

    NASA Astrophysics Data System (ADS)

    Kolbe, T.; Abbott, B. W.; Thomas, Z.; Labasque, T.; Aquilina, L.; Laverman, A.; Babey, T.; Marçais, J.; Fleckenstein, J. H.; Peiffer, S.; De Dreuzy, J. R.; Pinay, G.

    2016-12-01

    Groundwater contamination by nitrate is nearly ubiquitous in agricultural regions. Nitrate is highly mobile in groundwater and, though it can be denitrified in the aquifer (reduced to inert N2 gas), this process requires the simultaneous occurrence of anoxia, an electron donor (e.g. organic carbon, pyrite), nitrate, and microorganisms capable of denitrification. In addition, the ratio of the time groundwater spends in a denitrifying environment (the exposure time) to the characteristic denitrification reaction time plays an important role, because denitrification can only occur if the exposure time is longer than the characteristic reaction time. Despite a long history of field studies and numerical models, it remains exceedingly difficult to measure or model exposure times in the subsurface at the catchment scale. To approach this problem, we developed a unified modeling approach combining measured environmental proxies with an exposure-time-based reactive transport model. We measured groundwater age, nitrogen and sulfur isotopes, and water chemistry from agricultural wells in an unconfined aquifer in Brittany, France, to quantify changes in nitrate concentration due to dilution and denitrification. Field data showed large differences in nitrate concentrations among wells, associated with differences in the exposure time distributions. By constraining a catchment-scale characteristic reaction time for denitrification with water chemistry proxies and exposure times, we were able to assess rates of denitrification along groundwater flow paths. This unified modeling approach is transferable to other catchments and could be further used to investigate how catchment structure and flow dynamics interact with biogeochemical processes such as denitrification.

  3. Nanomedical science and laser-driven particle acceleration: promising approaches in the prethermal regime

    NASA Astrophysics Data System (ADS)

    Gauduel, Y. A.

    2017-05-01

    A major challenge of spatio-temporal radiation biomedicine concerns the understanding of biophysical events triggered by an initial energy deposition inside confined ionization tracks. This contribution deals with an interdisciplinary approach to cutting-edge advances in real-time radiation events, considering the potentialities of innovative strategies based on ultrafast laser science, from femtosecond photon sources to advanced techniques of ultrafast TW laser-plasma accelerators. Recent advances of powerful TW laser sources (~10^19 W cm^-2) and laser-plasma interactions providing ultra-short relativistic particle beams in the energy domain 5-200 MeV open promising opportunities for the development of high-energy radiation femtochemistry (HERF) in the prethermal regime of secondary low-energy electrons and for the real-time imaging of radiation-induced biomolecular alterations at the nanoscopic scale. New developments would permit correlating early radiation events triggered by ultrashort radiation sources with a molecular approach to Relative Biological Effectiveness (RBE). These emerging research developments are crucial to understand simultaneously, at the sub-picosecond and nanometric scales, the early consequences of ultra-short-pulsed radiation on biomolecular environments or integrated biological entities. This innovative approach could be applied to biomedically relevant concepts such as the emerging domain of real-time nanodosimetry for targeted pro-drug activation and pulsed radio-chemotherapy of cancers.

  4. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion.

    PubMed

    Gautestad, Arild O

    2012-09-07

    Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (the time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox, arising from a composite Brownian motion consisting of a superposition of independent movement processes at different scales, may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated.

  5. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation discusses the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted on double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions are equally likely to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaling modeling solutions.
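
    One simple version of such a model comparison is to fit competing tail models by maximum likelihood and compare their log-likelihoods, in the spirit of the statistical tests discussed above. A minimal sketch assuming scipy; the synthetic tail sample and the 70th-percentile threshold are illustrative choices, not the study's data:

        import numpy as np
        from scipy import stats

        def compare_tail_models(tail):
            """Fit Pareto (power-law) and lognormal models by MLE and return
            their log-likelihoods; similar values mean the data cannot
            distinguish the two tail models."""
            b, loc, scale = stats.pareto.fit(tail, floc=0.0)
            ll_pl = stats.pareto.logpdf(tail, b, loc, scale).sum()
            s, loc2, scale2 = stats.lognorm.fit(tail, floc=0.0)
            ll_ln = stats.lognorm.logpdf(tail, s, loc2, scale2).sum()
            return ll_pl, ll_ln

        rng = np.random.default_rng(2)
        sample = rng.lognormal(mean=0.0, sigma=1.2, size=2000)
        tail = sample[sample > np.quantile(sample, 0.7)]  # late-time tail only
        ll_pl, ll_ln = compare_tail_models(tail)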

  6. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks down the watershed into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), while the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.

  7. Collective motion of macroscopic spheres floating on capillary ripples: Dynamic heterogeneity and dynamic criticality

    NASA Astrophysics Data System (ADS)

    Sanlı, Ceyda; Saitoh, Kuniyasu; Luding, Stefan; van der Meer, Devaraj

    2014-09-01

    When a densely packed monolayer of macroscopic spheres floats on chaotic capillary Faraday waves, a coexistence of large scale convective motion and caging dynamics typical for glassy systems is observed. We subtract the convective mean flow using a coarse graining (homogenization) method and reveal subdiffusion for the caging time scales followed by a diffusive regime at later times. We apply the methods developed to study dynamic heterogeneity and show that the typical time and length scales of the fluctuations due to rearrangements of observed particle groups significantly increase when the system approaches its largest experimentally accessible packing concentration. To connect the system to the dynamic criticality literature, we fit power laws to our results. The resultant critical exponents are consistent with those found in densely packed suspensions of colloids.

  9. Spectral analysis of temporal non-stationary rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2018-04-01

    This study treats the catchment as a black-box system, considering the rainfall input and runoff output as stochastic processes. The temporal rainfall-runoff relationship at the catchment scale is described by a convolution integral on a continuous time scale. Using the Fourier-Stieltjes representation approach, a frequency-domain solution to the convolution integral is developed for the spectral analysis of runoff processes generated by temporally non-stationary rainfall events. It is shown that the characteristic time scale of the rainfall process increases the runoff discharge variability, while the catchment mean travel time acts to reduce it. Similar to the behavior of groundwater aquifers, catchments act as a low-pass filter in the frequency domain for the rainfall input signal.
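
    For the simplest concrete case, an exponential (linear reservoir) travel-time kernel h(tau) = exp(-tau/tc)/tc, the convolution becomes multiplication in the frequency domain, S_Q(w) = |H(w)|^2 S_P(w) with |H(w)|^2 = 1/(1 + w^2 tc^2), which is exactly the low-pass behaviour described. A minimal sketch of that gain (the linear reservoir kernel is assumed here for illustration; the paper's formulation is more general):

        import numpy as np

        def lowpass_gain(omega, tc):
            """Squared gain |H(omega)|^2 of an exponential (linear reservoir)
            travel-time kernel with mean travel time tc."""
            return 1.0 / (1.0 + (omega * tc) ** 2)

        omega = np.logspace(-3, 1, 200)           # frequency, rad per unit time
        gain_fast = lowpass_gain(omega, tc=2.0)   # short mean travel time
        gain_slow = lowpass_gain(omega, tc=50.0)  # longer travel time damps
                                                  # high frequencies more strongly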

  10. Bi-scale analysis of multitemporal land cover fractions for wetland vegetation mapping

    NASA Astrophysics Data System (ADS)

    Michishita, Ryo; Jiang, Zhiben; Gong, Peng; Xu, Bing

    2012-08-01

    Land cover fractions (LCFs) derived through spectral mixture analysis are useful in understanding sub-pixel information. However, few studies have been conducted on the analysis of time-series LCFs. Although multi-scale comparisons of spectral index, hard classification, and land surface temperature images have received attention, rarely have these approaches been applied to LCFs. This study compared the LCFs derived through Multiple Endmember Spectral Mixture Analysis (MESMA) using the time-series Landsat Thematic Mapper (TM) and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) data acquired in the Poyang Lake area, China, between 2004 and 2005. Specifically, we aimed to: (1) propose an approach for optimal endmember (EM) selection in time-series MESMA; (2) understand the trends in time-series LCFs derived from the TM and MODIS data; and (3) examine the trends in the correlation between the bi-scale LCFs derived from the time-series TM and MODIS data. Our results indicated that: (1) the EM spectra chosen according to the proposed hierarchical three-step approach (overall, seasonal, and individual) accurately modeled both the TM and MODIS images; (2) the green vegetation (GV) and NPV/soil/impervious surface (N/S/I) classes followed sine-curve trends in the overall area, while the two water classes displayed the water-level change pattern in the areas primarily covered with wetland vegetation; and (3) the GV, N/S/I, and bright water classes indicated a moderately high agreement between the TM and MODIS LCFs in the whole area (adjusted R² ⩾ 0.6). However, low levels of correlation were found in the areas primarily dominated by wetland vegetation for all land cover classes.
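
    Spectral mixture analysis solves, per pixel, for non-negative endmember fractions that reproduce the observed spectrum, usually with a sum-to-one constraint. A minimal single-pixel sketch assuming scipy's non-negative least squares, with the constraint enforced softly via an appended row; MESMA additionally iterates over candidate endmember sets per pixel, which is omitted here:

        import numpy as np
        from scipy.optimize import nnls

        def unmix(pixel, endmembers, weight=100.0):
            """Non-negative least-squares unmixing with a soft sum-to-one constraint.

            pixel      : (n_bands,) observed reflectance spectrum
            endmembers : (n_bands, n_em) endmember spectra as columns
            """
            A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
            b = np.append(pixel, weight)      # appended row enforces sum(f) ~ 1
            fractions, residual = nnls(A, b)
            return fractions, residual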

  11. Amp: A modular approach to machine learning in atomistic simulations

    NASA Astrophysics Data System (ADS)

    Khorshidi, Alireza; Peterson, Andrew A.

    2016-10-01

    Electronic structure calculations, such as those employing Kohn-Sham density functional theory or ab initio wavefunction theories, have allowed for atomistic-level understanding of a wide variety of phenomena and properties of matter at small scales. However, the computational cost of electronic structure methods drastically increases with length and time scales, which makes these methods difficult to apply to long time-scale molecular dynamics simulations or large systems. Machine-learning techniques can provide accurate potentials that can match the quality of electronic structure calculations, provided sufficient training data. These potentials can then be used to rapidly simulate large and long time-scale phenomena at similar quality to the parent electronic structure approach. Machine-learning potentials usually take a bias-free mathematical form and can be readily developed for a wide variety of systems. Electronic structure calculations have favorable properties, namely that they are noiseless and that targeted training data can be produced on demand, which make them particularly well-suited for machine learning. This paper discusses our modular approach to atomistic machine learning through the development of the open-source Atomistic Machine-learning Package (Amp), which allows for representations of both the total and atom-centered potential energy surface, in both periodic and non-periodic systems. Potentials developed through the atom-centered approach are simultaneously applicable to systems of various sizes. Interpolation can be enhanced by introducing custom descriptors of the local environment. We demonstrate this in the current work for Gaussian-type, bispectrum, and Zernike-type descriptors. Amp has an intuitive and modular structure with an interface through the Python scripting language, yet has parallelizable Fortran components for demanding tasks; it is designed to integrate closely with the widely used Atomic Simulation Environment (ASE), which makes it compatible with a wide variety of commercial and open-source electronic structure codes. We finally demonstrate that the neural network model inside Amp can accurately interpolate electronic structure energies as well as forces of thousands of multi-species atomic systems.

  12. The fractal-multifractal method and temporal resolution: Application to precipitation and streamflow

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Puente, C. E.; Sivakumar, B.

    2017-12-01

    In the past, we have established that the deterministic fractal-multifractal (FM) method is a promising geometric tool to analyze hydro-climatic variables, such as precipitation, river flow, and temperature. In this study, we address the issue of temporal resolution to advance the suitability and usefulness of the FM approach in hydro-climate. Specifically, we elucidate the evolution of FM geometric parameters as computed at different time scales ranging from a day to a month (30 days) in increments of a day. For this purpose, both rainfall and river discharge records at Sacramento, California, gathered over a year are encoded at different time scales. The analysis reveals that: (a) the FM approach yields faithful encodings of both kinds of data sets at the resolutions considered, with reasonably small errors; and (b) the "best" FM parameters ultimately converge when the resolution is increased, thus allowing visualization of both hydrologic attributes. By addressing the scalability of the geometric patterns, these results further advance the suitability of the FM approach.

  13. Scaling properties of foreign exchange volatility

    NASA Astrophysics Data System (ADS)

    Gençay, Ramazan; Selçuk, Faruk; Whitcher, Brandon

    2001-01-01

    In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale-by-scale basis through the application of a discrete wavelet transformation. It is shown that foreign exchange rate volatilities follow different scaling laws at different horizons. In particular, there is a smaller degree of persistence in intra-day volatility as compared to volatility at one day and higher scales. Therefore, a common practice in the risk management industry of converting risk measures calculated at shorter horizons into longer horizons through a global scaling parameter may not be appropriate. This paper also demonstrates that correlation between foreign exchange volatilities is the lowest at the intra-day scales but exhibits a gradual increase up to a daily scale. The correlation coefficient stabilizes at scales of one day and higher. Therefore, the benefit of currency diversification is the greatest at the intra-day scales and diminishes gradually at higher scales (lower frequencies). The wavelet cross-correlation analysis also indicates that the association between two volatilities is stronger at lower frequencies.
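
    The scale-by-scale variance decomposition can be reproduced with a discrete wavelet transform: the variance of the detail coefficients at each level estimates the contribution of the corresponding horizon. A minimal sketch assuming the PyWavelets package; the paper's methodology uses a maximal-overlap style decomposition, so the plain DWT here is only an approximation:

        import numpy as np
        import pywt

        def wavelet_variance(x, wavelet="db4", level=6):
            """Variance of detail coefficients per decomposition level; level j
            corresponds roughly to horizons of 2**j observations."""
            coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=level)
            details = coeffs[1:]          # coeffs[0] is the coarse approximation
            return {level - j: np.var(d) for j, d in enumerate(details)}

        # e.g. wavelet_variance(returns) on a series of absolute or squared
        # returns gives the variance contribution of each horizon.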

  14. A derivation and scalable implementation of the synchronous parallel kinetic Monte Carlo method for simulating long-time dynamics

    NASA Astrophysics Data System (ADS)

    Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2017-10-01

    Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
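
    The serial algorithm that SPKMC parallelizes is the standard rejection-free (residence-time) KMC loop: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed increment, which is why the simulated time scale shrinks as the total rate, and hence the system size, grows. A minimal serial sketch with illustrative rates:

        import numpy as np

        def kmc_step(rates, rng):
            """One rejection-free KMC step: choose event i with probability
            rates[i]/R and advance time by dt ~ Exp(R), where R = sum(rates)."""
            R = rates.sum()
            i = np.searchsorted(np.cumsum(rates), rng.random() * R)
            dt = -np.log(rng.random()) / R
            return i, dt

        rng = np.random.default_rng(3)
        rates = np.array([0.1, 2.0, 0.5])  # illustrative event rates
        t = 0.0
        for _ in range(1000):
            event, dt = kmc_step(rates, rng)
            t += dt                        # larger systems -> larger R -> smaller dt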

  15. Factors Influencing the Sahelian Paradox at the Local Watershed Scale: Causal Inference Insights

    NASA Astrophysics Data System (ADS)

    Van Gordon, M.; Groenke, A.; Larsen, L.

    2017-12-01

    While the existence of paradoxical rainfall-runoff and rainfall-groundwater correlations is well established in the West African Sahel, the hydrologic mechanisms involved are poorly understood. In pursuit of mechanistic explanations, we perform a causal inference analysis on hydrologic variables in three watersheds in Benin and Niger. Using an ensemble of techniques, we compute the strength of relationships between observational soil moisture, runoff, precipitation, and temperature data at seasonal and event timescales. Performing analysis over a range of time lags allows dominant time scales to emerge from the relationships between variables. By determining the time scales of hydrologic connectivity over vertical and lateral space, we show differences in the importance of overland and subsurface flow over the course of the rainy season and between watersheds. While previous work on the paradoxical hydrologic behavior in the Sahel focuses on surface processes and infiltration, our results point toward the importance of subsurface flow to rainfall-runoff relationships in these watersheds. The hypotheses generated from our ensemble approach suggest that subsequent explorations of mechanistic hydrologic processes in the region include subsurface flow. Further, this work highlights how an ensemble approach to causal analysis can reveal nuanced relationships between variables even in poorly understood hydrologic systems.

  16. From Single-Cell Dynamics to Scaling Laws in Oncology

    NASA Astrophysics Data System (ADS)

    Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo

    We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.

  17. Assessing Sustainability When Data Availability Limits Real-Time Estimates: Using Near-Time Indicators to Extend Sustainability Metrics

    EPA Science Inventory

    We produced a scientifically defensible methodology to assess whether a regional system is on a sustainable path. The approach required readily available data, metrics applicable to the relevant scale, and results useful to decision makers. We initiated a pilot project to test ...

  18. Bridging Empirical and Physical Approaches for Landslide Monitoring and Early Warning

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Kumar, Sujay; Harrison, Ken

    2011-01-01

    Rainfall-triggered landslides typically occur and are evaluated at local scales, using slope-stability models to calculate coincident changes in driving and resisting forces at the hillslope level in order to anticipate slope failures. Over larger areas, detailed high resolution landslide modeling is often infeasible due to difficulties in quantifying the complex interaction between rainfall infiltration and surface materials as well as the dearth of available in situ soil and rainfall estimates and accurate landslide validation data. This presentation will discuss how satellite precipitation and surface information can be applied within a landslide hazard assessment framework to improve landslide monitoring and early warning by considering two disparate approaches to landslide hazard assessment: an empirical landslide forecasting algorithm and a physical slope-stability model. The goal of this research is to advance near real-time landslide hazard assessment and early warning at larger spatial scales. This is done by employing high resolution surface and precipitation information within a probabilistic framework to provide more physically-based grounding to empirical landslide triggering thresholds. The empirical landslide forecasting tool, running in near real-time at http://trmm.nasa.gov, considers potential landslide activity at the global scale and relies on Tropical Rainfall Measuring Mission (TRMM) precipitation data and surface products to provide a near real-time picture of where landslides may be triggered. The physical approach considers how rainfall infiltration on a hillslope affects the in situ hydro-mechanical processes that may lead to slope failure. Evaluation of these empirical and physical approaches are performed within the Land Information System (LIS), a high performance land surface model processing and data assimilation system developed within the Hydrological Sciences Branch at NASA's Goddard Space Flight Center. LIS provides the capabilities to quantify uncertainty from model inputs and calculate probabilistic estimates for slope failures. Results indicate that remote sensing data can provide many of the spatiotemporal requirements for accurate landslide monitoring and early warning; however, higher resolution precipitation inputs will help to better identify small-scale precipitation forcings that contribute to significant landslide triggering. Future missions, such as the Global Precipitation Measurement (GPM) mission will provide more frequent and extensive estimates of precipitation at the global scale, which will serve as key inputs to significantly advance the accuracy of landslide hazard assessment, particularly over larger spatial scales.

  19. Gyrokinetic theory for particle and energy transport in fusion plasmas

    NASA Astrophysics Data System (ADS)

    Falessi, Matteo Valerio; Zonca, Fulvio

    2018-03-01

    A set of equations is derived describing the macroscopic transport of particles and energy in a thermonuclear plasma on the energy confinement time. The equations thus derived allow studying collisional and turbulent transport self-consistently, retaining the effect of magnetic field geometry without postulating any scale separation between the reference state and fluctuations. Previously, assuming scale separation, transport equations have been derived from kinetic equations by means of multiple-scale perturbation analysis and spatio-temporal averaging. In this work, the evolution equations for the moments of the distribution function are obtained following the standard approach; meanwhile, gyrokinetic theory has been used to explicitly express the fluctuation induced fluxes. In this way, equations for the transport of particles and energy up to the transport time scale can be derived using standard first order gyrokinetics.

  20. Multiscale recurrence analysis of spatio-temporal data

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Marwan, N.; Kurths, J.

    2015-12-01

    The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales, which can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as to more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The complicated examples in particular show the usefulness of the multi-scale consideration, which takes spatial patterns of different scales and with different rhythms into account. This mapogram approach therefore promises new insights into problems in climatology, ecology, and medicine.
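
    Whatever the similarity measure, the construction is the same: compare all pairs of time points and threshold. A minimal sketch of building a recurrence matrix from spatially distributed snapshots, with a plain Euclidean distance standing in for the paper's mapogram similarity; the test field and threshold are illustrative:

        import numpy as np

        def recurrence_matrix(snapshots, eps):
            """snapshots: (n_times, n_sites) array, one spatial field per row.
            Returns R[i, j] = 1 if the fields at times i and j are closer than
            eps (Euclidean distance here; a mapogram similarity would replace it)."""
            d = np.linalg.norm(snapshots[:, None, :] - snapshots[None, :, :], axis=-1)
            return (d < eps).astype(int)

        # Illustrative field: a common oscillation plus site-level noise.
        rng = np.random.default_rng(4)
        x = np.sin(np.linspace(0, 20, 300))[:, None] + 0.1 * rng.normal(size=(300, 50))
        R = recurrence_matrix(x, eps=2.0)
        recurrence_rate = R.mean()   # simplest recurrence quantification measure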

  2. Evaluating complementary networks of restoration plantings for landscape-scale occurrence of temporally dynamic species.

    PubMed

    Ikin, Karen; Tulloch, Ayesha; Gibbons, Philip; Ansell, Dean; Seddon, Julian; Lindenmayer, David

    2016-10-01

    Multibillion dollar investments in land restoration make it critical that conservation goals are achieved cost-effectively. Approaches developed for systematic conservation planning offer opportunities to evaluate landscape-scale, temporally dynamic biodiversity outcomes from restoration and improve on traditional approaches that focus on the most species-rich plantings. We investigated whether it is possible to apply a complementarity-based approach to evaluate the extent to which an existing network of restoration plantings meets representation targets. Using a case study of woodland birds of conservation concern in southeastern Australia, we compared complementarity-based selections of plantings based on temporally dynamic species occurrences with selections based on static species occurrences and selections based on ranking plantings by species richness. The dynamic complementarity approach, which incorporated species occurrences over 5 years, resulted in higher species occurrences and proportion of targets met compared with the static complementarity approach, in which species occurrences were taken at a single point in time. For equivalent cost, the dynamic complementarity approach also always resulted in higher average minimum percent occurrence of species maintained through time and a higher proportion of the bird community meeting representation targets compared with the species-richness approach. Plantings selected under the complementarity approaches represented the full range of planting attributes, whereas those selected under the species-richness approach were larger in size. Our results suggest that future restoration policy should not attempt to achieve all conservation goals within individual plantings, but should instead capitalize on restoration opportunities as they arise to achieve collective value of multiple plantings across the landscape. Networks of restoration plantings with complementary attributes of age, size, vegetation structure, and landscape context lead to considerably better outcomes than conventional restoration objectives of site-scale species richness and are crucial for allocating restoration investment wisely to reach desired conservation goals. © 2016 Society for Conservation Biology.
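
    Complementarity-based selection contrasts with richness ranking: instead of taking the plantings with the most species, one greedily adds the planting with the largest marginal gain toward unmet representation targets per unit cost. A minimal sketch of that generic greedy heuristic from systematic conservation planning; the occurrence matrix, targets, and costs are illustrative, not the study's data or exact algorithm:

        import numpy as np

        def greedy_complementarity(occ, targets, costs):
            """occ: (n_sites, n_species) 0/1 occurrences; targets: required
            representations per species; costs: per-site cost. Greedily selects
            sites by marginal shortfall reduction per unit cost."""
            covered = np.zeros(occ.shape[1])
            chosen, remaining = [], set(range(occ.shape[0]))
            while remaining and (covered < targets).any():
                def gain(s):
                    shortfall = np.maximum(targets - covered, 0)
                    return np.minimum(occ[s], shortfall).sum() / costs[s]
                best = max(remaining, key=gain)
                if gain(best) == 0:
                    break
                chosen.append(best)
                covered += occ[best]
                remaining.remove(best)
            return chosen

        occ = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 0]])  # 3 plantings x 3 species
        sites = greedy_complementarity(occ, targets=np.ones(3),
                                       costs=np.array([1.0, 1.0, 0.5]))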

  3. COREST: A FORTRAN computer program to analyze paralinear oxidation behavior and its application to chromic oxide forming alloys

    NASA Technical Reports Server (NTRS)

    Barrett, C. E.; Presler, A. F.

    1976-01-01

    A FORTRAN computer program (COREST) was developed to analyze the high-temperature paralinear oxidation behavior of metals. It is based on a mass-balance approach and uses typical gravimetric input data. COREST was applied to predominantly Cr2O3-forming alloys tested isothermally for long times. These alloys behaved paralinearly above 1100 °C as a result of simultaneous scale formation and scale vaporization. Output includes the pertinent formation and vaporization constants and kinetic values of interest. COREST also estimates specific sample weight and specific scale weight as a function of time. Most importantly, from a corrosion standpoint, it estimates specific metal loss.
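
    Paralinear kinetics are commonly written as dW/dt = kp/(2W) - kv for the retained scale weight W, combining parabolic scale formation with linear vaporization. A minimal sketch integrating this rate law forward in time; COREST itself works backwards from gravimetric data to the constants, and the constants and unit stoichiometry below are illustrative assumptions:

        import numpy as np
        from scipy.integrate import solve_ivp

        def paralinear(t, y, kp, kv):
            """y[0]: retained scale weight W; y[1]: cumulative metal consumed.
            dW/dt = kp/(2W) - kv couples parabolic growth and vaporization."""
            W = max(y[0], 1e-9)              # avoid the 1/W singularity at t = 0
            growth = kp / (2.0 * W)          # parabolic scale-formation rate
            return [growth - kv, growth]     # metal loss tracks total scale formed

        kp, kv = 0.5, 0.05                   # illustrative rate constants
        sol = solve_ivp(paralinear, (0.0, 200.0), [1e-3, 0.0],
                        args=(kp, kv), rtol=1e-8)
        W_final = sol.y[0, -1]               # approaches steady value kp / (2 kv)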

  5. Scaling prospects in mechanical energy harvesting with piezo nanowires

    NASA Astrophysics Data System (ADS)

    Ardila, Gustavo; Hinchet, Ronan; Mouis, Mireille; Montès, Laurent

    2013-07-01

    The combination of 3D processing technologies, low power circuits and new materials integration makes it conceivable to build autonomous integrated systems, which would harvest their energy from the environment. In this paper, we focus on mechanical energy harvesting and discuss its scaling prospects toward the use of piezoelectric nanostructures, able to be integrated in a CMOS environment. It is shown that direct scaling of present MEMS-based methodologies would be beneficial for high-frequency applications only. For the range of applications which is presently foreseen, a different approach is needed, based on energy harvesting from direct real-time deformation instead of energy harvesting from vibration modes at or close to resonance. We discuss the prospects of such an approach based on simple scaling rules. (Contribution to the Topical Issue "International Semiconductor Conference Dresden-Grenoble - ISCDG 2012", edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.)

  6. The sensitivity of the atmospheric branch of the global water cycle to temperature fluctuations at synoptic to decadal time-scales in different satellite- and model-based products

    NASA Astrophysics Data System (ADS)

    Nogueira, Miguel

    2018-02-01

    Spectral analysis of global-mean precipitation, P, evaporation, E, precipitable water, W, and surface temperature, Ts, revealed significant variability from sub-daily to multi-decadal time-scales, superposed on high-amplitude diurnal and yearly peaks. Two distinct regimes emerged from a transition in the spectral exponents, β: a weather regime covering time-scales < 10 days with β ≥ 1, and a macroweather regime extending from a few months to a few decades with 0 < β < 1. Additionally, the spectra showed generally good statistical agreement amongst several different model- and satellite-based datasets. Detrended cross-correlation analysis (DCCA) revealed three important results which are robust across all datasets: (1) the Clausius-Clapeyron (C-C) relationship is the dominant mechanism of W non-periodic variability at multi-year time-scales; (2) C-C is not the dominant control of W, P or E non-periodic variability at time-scales below about 6 months, where the weather regime is approached and other mechanisms become important; (3) C-C is not a dominant control for P or E over land throughout the entire time-scale range considered. Furthermore, it is suggested that the atmosphere and oceans start to act as a single coupled system at time-scales > 1-2 years, while at time-scales < 6 months they are not the dominant drivers of each other. For global-ocean and full-globe averages, the DCCA cross-correlation coefficient ρDCCA showed a large spread in the C-C importance for P and E variability amongst different datasets at multi-year time-scales, ranging from negligible (< 0.3) to high (0.6-0.8) values. Hence, state-of-the-art climate datasets have significant uncertainties in the representation of macroweather precipitation and evaporation variability and its governing mechanisms.
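
    The DCCA coefficient used above can be computed by integrating each series into a profile, detrending the profiles in boxes of a given size, and taking the ratio of the detrended covariance to the detrended fluctuation functions. A minimal Python sketch on synthetic series sharing a slow driver (all data illustrative, not the reanalysis products):

        import numpy as np

        def dcca_rho(x, y, n):
            """Detrended cross-correlation coefficient at box size n."""
            X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())   # profiles
            t = np.arange(n)
            f2xy = f2xx = f2yy = 0.0
            for b in range(len(X) // n):
                xs, ys = X[b*n:(b+1)*n], Y[b*n:(b+1)*n]
                rx = xs - np.polyval(np.polyfit(t, xs, 1), t)   # local detrending
                ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
                f2xy += np.mean(rx * ry)
                f2xx += np.mean(rx ** 2)
                f2yy += np.mean(ry ** 2)
            return f2xy / np.sqrt(f2xx * f2yy)

        rng = np.random.default_rng(0)
        driver = np.cumsum(rng.normal(size=50_000))        # shared slow "Ts-like" driver
        w = driver + rng.normal(scale=100.0, size=50_000)  # noisy "W-like" series
        p = driver + rng.normal(scale=100.0, size=50_000)  # noisy "P-like" series
        for n in (20, 200, 5000):
            print(f"box size {n:5d}: rho_DCCA = {dcca_rho(w, p, n):.2f}")
        # coupling to the common driver emerges only at long time-scales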

  7. Scale and the representation of human agency in the modeling of agroecosystems

    DOE PAGES

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...

    2015-07-17

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration of how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches, however, should be accompanied by greater recognition of the meta-agency of model users and the need for more critical evaluation of model selection and application.

  8. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.
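
    The structure of such a simulation can be sketched as follows: draw age at onset from a Weibull hazard with a covariate associated with age, then fit Cox models using either age (with delayed entry) or time-on-study as the time-scale. A minimal Python sketch for a fixed dichotomous covariate, assuming the lifelines package (not the software used by the authors):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter   # assumed available

        rng = np.random.default_rng(1)
        n = 20_000
        age0 = rng.uniform(40, 60, n)                 # age at cohort entry
        x = (rng.uniform(size=n) < 0.3 + 0.01 * (age0 - 50)).astype(float)  # tied to age
        beta, k, lam = 0.5, 4.0, 1e-8                 # true log-HR; Weibull shape/rate in age

        # age at onset by inverse-CDF sampling, conditional on being event-free at entry
        u = rng.uniform(size=n)
        age_onset = (age0**k - np.log(u) / (lam * np.exp(beta * x))) ** (1 / k)
        end = age0 + 10.0                             # 10 years of follow-up
        event = (age_onset <= end).astype(int)
        age_exit = np.minimum(age_onset, end)

        df = pd.DataFrame({"x": x, "age0": age0, "entry": age0,
                           "age_exit": age_exit, "tos": age_exit - age0, "event": event})

        # (a) age as the time-scale, with delayed entry at age0
        m_age = CoxPHFitter().fit(df[["x", "entry", "age_exit", "event"]],
                                  duration_col="age_exit", event_col="event",
                                  entry_col="entry")
        # (b) time-on-study as the time-scale, adjusting for age at entry
        m_tos = CoxPHFitter().fit(df[["x", "age0", "tos", "event"]],
                                  duration_col="tos", event_col="event")
        print("age time-scale     :", round(m_age.params_["x"], 3))  # close to 0.5
        print("time-on-study scale:", round(m_tos.params_["x"], 3))  # may deviate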

  9. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the local variances thus obtained.
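
    The windowing step can be sketched directly: divide the signal into windows short enough that the variance is locally constant, estimate the variance in each window, and use the spread of those estimates as the parameter distribution for compounding. A minimal Python sketch on a synthetic, locally Gaussian signal (all parameters illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        T, win = 200_000, 500
        # locally Gaussian signal whose variance itself fluctuates slowly in time
        log_var = np.repeat(rng.normal(0.0, 0.5, T // win), win)
        x = rng.normal(size=T) * np.exp(0.5 * log_var)

        local_var = x.reshape(-1, win).var(axis=1)   # one variance estimate per window
        print("mean local variance:", local_var.mean().round(3))

        # Short horizons are near-Gaussian; compounding over the variance
        # distribution produces heavy tails (positive excess kurtosis) overall.
        def ex_kurt(a):
            return ((a - a.mean()) ** 4).mean() / a.var() ** 2 - 3.0
        print("excess kurtosis, one window :", round(ex_kurt(x[:win]), 2))
        print("excess kurtosis, full series:", round(ex_kurt(x), 2))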

  10. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, average over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the local variances thus obtained.

  11. Multi-scale computational modeling of developmental biology.

    PubMed

    Setty, Yaki

    2012-08-01

    Normal development of multicellular organisms is regulated by a highly complex process in which a set of precursor cells proliferate, differentiate and move, forming over time a functioning tissue. To handle their complexity, developmental systems can be studied over distinct scales. The dynamics of each scale is determined by the collective activity of entities at the scale below it. I describe a multi-scale computational approach for modeling developmental systems and detail the methodology through a synthetic example of a developmental system that retains key features of real developmental systems. I discuss the simulation of the system as it emerges from cross-scale and intra-scale interactions and describe how an in silico study can be carried out by modifying these interactions in a way that mimics in vivo experiments. I highlight biological features of the results through a comparison with findings in Caenorhabditis elegans germline development, and finally I discuss applications of the approach to real developmental systems and propose future extensions. The source code of the model of the synthetic developmental system can be found at www.wisdom.weizmann.ac.il/~yaki/MultiScaleModel. Contact: yaki.setty@gmail.com. Supplementary data are available at Bioinformatics online.

  12. Heterogeneous dynamics of ionic liquids: A four-point time correlation function approach

    NASA Astrophysics Data System (ADS)

    Liu, Jiannan; Willcox, Jon A. L.; Kim, Hyung J.

    2018-05-01

    Many ionic liquids show behavior similar to that of glassy systems, e.g., large and long-lasting deviations from Gaussian dynamics and clustering of "mobile" and "immobile" groups of ions. Herein a time-dependent four-point density correlation function, typically used to characterize glassy systems, is implemented for the ionic liquids choline acetate and 1-butyl-3-methylimidazolium acetate. Dynamic correlation beyond the first ionic solvation shell on the time scale of nanoseconds is found in the ionic liquids, revealing the cooperative nature of ion motions. The traditional solvent acetonitrile, on the other hand, shows a much shorter-ranged dynamic correlation that decays after a few picoseconds.
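
    The four-point approach measures fluctuations of a two-point quantity: with the self-overlap Q(t) between configurations at times 0 and t, the susceptibility chi_4(t) = N * (<Q(t)^2> - <Q(t)>^2) grows when many particles move, or stay put, together. A toy Python sketch with artificially imposed mobile and immobile clusters (not an ionic-liquid simulation; all parameters are invented):

        import numpy as np

        rng = np.random.default_rng(3)
        N, steps, runs, block = 200, 400, 100, 20
        a = 1.0                                   # overlap probe length
        q_mean = np.zeros(steps)
        q_sq = np.zeros(steps)
        for _ in range(runs):                     # independent systems for the ensemble
            # toy dynamic heterogeneity: clusters of `block` particles are jointly
            # assigned a mobile or an immobile displacement amplitude
            mobile = np.repeat(rng.uniform(size=N // block) < 0.5, block)
            sigma = np.where(mobile, 0.20, 0.05)
            disp = np.cumsum(rng.normal(size=(steps, N)) * sigma, axis=0)
            q = np.exp(-disp ** 2 / (2 * a ** 2)).mean(axis=1)   # overlap with t = 0
            q_mean += q
            q_sq += q ** 2
        q_mean /= runs
        chi4 = N * (q_sq / runs - q_mean ** 2)    # four-point susceptibility
        print("chi_4 at t = 10, 100, 400:", np.round(chi4[[9, 99, 399]], 2))
        # chi_4 well above zero signals spatially correlated (cooperative) dynamics;
        # the larger the jointly moving clusters, the larger chi_4 grows.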

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and on recently developed measures, the finite-amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence of extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  14. Management applications of discontinuity theory | Science ...

    EPA Pesticide Factsheets

    1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. 2. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. 3. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions, and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation. This manuscript provides insight on using discontinuity approaches to aid in managing complex ecological systems.

  15. Management applications of discontinuity theory

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance H.; Knutson, Melinda; Nash, Kirsty L.; Nelson, R. John; Nystrom, Magnus; Spanbauer, Trisha; Stow, Craig A.; Sundstrom, Shana M.

    2015-01-01

    Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation.

  16. Scaling up paediatric HIV care with an integrated, family-centred approach: an observational case study from Uganda.

    PubMed

    Luyirika, Emmanuel; Towle, Megan S; Achan, Joyce; Muhangi, Justus; Senyimba, Catherine; Lule, Frank; Muhe, Lulu

    2013-01-01

    Family-centred HIV care models have emerged as an approach to better target children and their caregivers for HIV testing and care, and further provide integrated health services for the family unit's range of care needs. While there is significant international interest in family-centred approaches, there is a dearth of research on operational experiences in implementation and scale-up. Our retrospective case study examined best practices and enabling factors during scale-up of family-centred care in ten health facilities and ten community clinics supported by a non-governmental organization, Mildmay, in Central Uganda. Methods included key informant interviews with programme management and families, and a desk review of hospital management information systems (HMIS) uptake data. In the 84 months following the scale-up of the family-centred approach in HIV care, Mildmay experienced a 50-fold increase of family units registered in HIV care, a 40-fold increase of children enrolled in HIV care, and nearly universal coverage of paediatric cotrimoxazole prophylaxis. The Mildmay experience emphasizes the importance of streamlining care to maximize paediatric capture. This includes integrated service provision, incentivizing care-seeking as a family, creating child-friendly service environments, and minimizing missed paediatric testing opportunities by institutionalizing early infant diagnosis and provider-initiated testing and counselling. Task-shifting towards nurse-led clinics with community outreach support enabled rapid scale-up, as did an active management structure that allowed for real-time review and corrective action. The Mildmay experience suggests that family-centred approaches are operationally feasible, produce strong coverage outcomes, and can be well-managed during rapid scale-up.

  17. Decoding the spatial signatures of multi-scale climate variability - a climate network perspective

    NASA Astrophysics Data System (ADS)

    Donner, R. V.; Jajcay, N.; Wiedermann, M.; Ekhtiari, N.; Palus, M.

    2017-12-01

    During the last years, the application of complex networks as a versatile tool for analyzing complex spatio-temporal data has gained increasing interest. Establishing this approach as a new paradigm in climatology has already provided valuable insights into key spatio-temporal climate variability patterns across scales, including novel perspectives on the dynamics of the El Niño Southern Oscillation or the emergence of extreme precipitation patterns in monsoonal regions. In this work, we report first attempts to employ network analysis for disentangling multi-scale climate variability. Specifically, we introduce the concept of scale-specific climate networks, which comprises a sequence of networks representing the statistical association structure between variations at distinct time scales. For this purpose, we consider global surface air temperature reanalysis data and subject the corresponding time series at each grid point to a complex-valued continuous wavelet transform. From this time-scale decomposition, we obtain three types of signals per grid point and scale (amplitude, phase and reconstructed signal), the statistical similarity of which is then represented by three complex networks associated with each scale. We provide a detailed analysis of the resulting connectivity patterns reflecting the spatial organization of climate variability at each chosen time-scale. Global network characteristics like transitivity or network entropy are shown to provide a new view on the (global average) relevance of different time scales in climate dynamics. Beyond expected trends originating from the increasing smoothness of fluctuations at longer scales, network-based statistics reveal different degrees of fragmentation of spatial co-variability patterns at different scales, as well as zonal shifts among the key players of climate variability from tropically to extra-tropically dominated patterns when moving from inter-annual to decadal scales and beyond. The obtained results demonstrate the potential usefulness of systematically exploiting scale-specific climate networks, whose general patterns are in line with existing climatological knowledge but which provide vast opportunities for further quantification at local, regional and global scales that are yet to be explored.
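
    The construction of a scale-specific network can be sketched with a Fourier band-pass standing in for the paper's complex-valued wavelet transform: filter each grid-point series around a chosen period, correlate the filtered signals, threshold into a graph, and compute network measures. A minimal Python sketch on synthetic data, assuming the networkx package is available:

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(4)
        n_nodes, T = 60, 2048
        t = np.arange(T)
        # synthetic "grid points": half share a slow mode, half share a fast mode
        slow, fast = np.sin(2 * np.pi * t / 256.0), np.sin(2 * np.pi * t / 16.0)
        data = np.array([(slow if i < n_nodes // 2 else fast) + rng.normal(0, 1, T)
                         for i in range(n_nodes)])

        def bandpass(row, period, width=1.3):
            """Keep Fourier components near 1/period (stand-in for a wavelet filter)."""
            f = np.fft.rfftfreq(T)
            X = np.fft.rfft(row)
            keep = (f > 1.0 / (period * width)) & (f < width / period)
            return np.fft.irfft(X * keep, n=T)

        for period in (16, 256):
            filt = np.array([bandpass(row, period) for row in data])
            C = np.corrcoef(filt)
            A = ((np.abs(C) > 0.5) & ~np.eye(n_nodes, dtype=bool)).astype(int)
            G = nx.from_numpy_array(A)
            deg = A.sum(axis=1)
            print(f"period {period:3d}: slow-group degree {deg[:30].mean():5.1f}, "
                  f"fast-group degree {deg[30:].mean():5.1f}, "
                  f"transitivity {nx.transitivity(G):.2f}")

    At each scale, connectivity concentrates in the group whose variability lives at that scale, which is the kind of scale-dependent spatial organization the networks are meant to expose.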

  18. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dyson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high-performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse, which is ideal for resolving the varying wavelength scales that occur in noise generation simulations. Finally, the sources of round-off error that affect the very high order methods are examined, and remedies are provided that effectively increase the accuracy of the MESA schemes while using current computer technology.

  19. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
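
    The Hamiltonian Monte Carlo component can be illustrated on a toy target. The paper's full scheme additionally maps the SDE posterior onto a polymer-like partition function, uses multiple time-scale integration, and treats fast harmonic modes analytically; none of that is reproduced here. A minimal Python sketch of plain HMC (leapfrog integration plus Metropolis correction):

        import numpy as np

        rng = np.random.default_rng(5)
        A = np.array([[2.0, 1.2], [1.2, 2.0]])   # toy target: correlated 2-D Gaussian

        def neg_log_post(q):
            return 0.5 * q @ A @ q

        def grad(q):
            return A @ q

        def hmc_step(q, eps=0.15, L=20):
            p = rng.normal(size=q.shape)                  # resample momenta
            H0 = neg_log_post(q) + 0.5 * p @ p
            qn, pn = q.copy(), p - 0.5 * eps * grad(q)    # leapfrog: initial half kick
            for _ in range(L):
                qn += eps * pn                            # full drift
                pn -= eps * grad(qn)                      # full kick
            pn += 0.5 * eps * grad(qn)                    # undo half of the last kick
            H1 = neg_log_post(qn) + 0.5 * pn @ pn
            return qn if rng.uniform() < np.exp(H0 - H1) else q   # Metropolis step

        q, samples = np.zeros(2), []
        for _ in range(5000):
            q = hmc_step(q)
            samples.append(q)
        print("sample covariance:\n", np.round(np.cov(np.array(samples).T), 2))
        # should approach inv(A) = [[0.78, -0.47], [-0.47, 0.78]]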

  20. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project that uses NEXUS as the core of an oceanographic anomaly detection service and web portal, which we call OceanXtremes.

  1. Quantification of pathogen inactivation efficacy by free chlorine disinfection of drinking water for QMRA.

    PubMed

    Petterson, S R; Stenström, T A

    2015-09-01

    To support the implementation of quantitative microbial risk assessment (QMRA) for managing infectious risks associated with drinking water systems, a simple modeling approach for quantifying Log10 reduction across a free chlorine disinfection contactor was developed. The study was undertaken in three stages: firstly, review of the laboratory studies published in the literature; secondly, development of a conceptual approach to apply the laboratory studies to full-scale conditions; and finally implementation of the calculations for a hypothetical case study system. The developed model explicitly accounted for variability in residence time and pathogen specific chlorine sensitivity. Survival functions were constructed for a range of pathogens relying on the upper bound of the reported data transformed to a common metric. The application of the model within a hypothetical case study demonstrated the importance of accounting for variable residence time in QMRA. While the overall Log10 reduction may appear high, small parcels of water with short residence time can compromise the overall performance of the barrier. While theoretically simple, the approach presented is of great value for undertaking an initial assessment of a full-scale disinfection contactor based on limited site-specific information.
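
    The effect of variable residence time can be sketched by averaging survival over a residence-time distribution: because survival is exponential in exposure, short-residence parcels dominate the average, and the effective Log10 reduction falls well below the plug-flow value. A minimal Python sketch with an assumed Chick-Watson-type sensitivity and a gamma-distributed residence time (all values hypothetical):

        import numpy as np

        rng = np.random.default_rng(6)
        k = 0.2        # log10 reduction per (mg/L x min); hypothetical pathogen sensitivity
        C = 1.0        # free chlorine residual, mg/L
        t_mean = 30.0  # mean hydraulic residence time, min

        # gamma-distributed residence times: short-circuiting creates a low-t tail
        shape = 3.0
        t = rng.gamma(shape, t_mean / shape, size=100_000)

        surviving = np.mean(10.0 ** (-k * C * t))    # average survival over all parcels
        print(f"plug-flow Log10 reduction at t_mean: {k * C * t_mean:.1f}")
        print(f"effective Log10 reduction with RTD : {-np.log10(surviving):.1f}")
        # parcels with short residence times dominate the average, so the effective
        # barrier performance is far below the plug-flow estimate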

  2. Computational and Spectroscopic Investigations of the Molecular Scale Structure and Dynamics of Geologically Important Fluids and Mineral-Fluid Interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. James Kirkpatrick; Andrey G. Kalinichev

    2008-11-25

    Research supported by this grant focuses on molecular scale understanding of central issues related to the structure and dynamics of geochemically important fluids, fluid-mineral interfaces, and confined fluids using computational modeling and experimental methods. Molecular scale knowledge about fluid structure and dynamics, how these are affected by mineral surfaces and molecular-scale (nano-) confinement, and how water molecules and dissolved species interact with surfaces is essential to understanding the fundamental chemistry of a wide range of low-temperature geochemical processes, including sorption and geochemical transport. Our principal efforts are devoted to continued development of relevant computational approaches, application of these approaches to important geochemical questions, relevant NMR and other experimental studies, and application of computational modeling methods to understanding the experimental results. The combination of computational modeling and experimental approaches is proving highly effective in addressing otherwise intractable problems. In 2006-2007 we have advanced significantly in new, highly promising research directions along with completion of on-going projects and final publication of work completed in previous years. New computational directions are focusing on modeling proton exchange reactions in aqueous solutions using ab initio molecular dynamics (AIMD), metadynamics (MTD), and empirical valence bond (EVB) approaches. Proton exchange is critical to understanding the structure, dynamics, and reactivity at mineral-water interfaces and for oxy-ions in solution, but has traditionally been difficult to model with molecular dynamics (MD). Our ultimate objective is to develop this capability, because MD is much less computationally demanding than quantum-chemical approaches. We have also extended our previous MD simulations of metal binding to natural organic matter (NOM) to a much longer time scale (up to 10 ns) for significantly larger systems. These calculations have allowed us, for the first time, to study the effects of metal cations with different charges and charge densities on NOM aggregation in aqueous solutions. Other computational work has looked at the longer-time-scale dynamical behavior of aqueous species at mineral-water interfaces investigated simultaneously by NMR spectroscopy. Our experimental NMR studies have focused on understanding the structure and dynamics of water and dissolved species at mineral-water interfaces and in two-dimensional nano-confinement within clay interlayers. A combined NMR and MD study of H2O, Na+, and Cl- interactions with the surface of quartz has direct implications for the interpretation of sum frequency vibrational spectroscopic experiments on this phase and will be an important reference for future studies. We also used NMR to examine the behavior of K+ and H2O in the interlayer and at the surfaces of the clay minerals hectorite and illite-rich illite-smectite. This is the first time that K+ dynamics have been characterized spectroscopically in geochemical systems. Preliminary experiments were also performed to evaluate the potential of 75As NMR as a probe of arsenic geochemical behavior. The 75As NMR study used advanced signal enhancement methods, introduced a new data acquisition approach to minimize the time investment in ultra-wide-line NMR experiments, and provides the first evidence of a strong relationship between the chemical shift and structural parameters for this experimentally challenging nucleus. We have also initiated a series of inelastic and quasi-elastic neutron scattering measurements of water dynamics in the interlayers of clays and layered double hydroxides. The objective of these experiments is to probe the correlations of water molecular motions in confined spaces over the scale of times and distances most directly comparable to our MD simulations and on a time scale different from that probed by NMR. This work is being done in collaboration with Drs. C.-K. Loong, N. de Souza, and A.I. Kolesnikov at the Intense Pulsed Neutron Source facility of the Argonne National Lab, and Dr. A. Faraone at the NIST Center for Neutron Research. A manuscript reporting the first results of these experiments, which are highly complementary to our previous NMR, X-ray, and infra-red results for these phases, is currently in preparation. In total, in 2006-2007 our work has resulted in the publication of 14 peer-reviewed research papers. We also devoted considerable effort to making our work known to a wide range of researchers, as indicated by the 24 contributed abstracts and 14 invited presentations.

  3. Decadal-Scale Forecasting of Climate Drivers for Marine Applications.

    PubMed

    Salinger, J; Hobday, A J; Matear, R J; O'Kane, T J; Risbey, J S; Dunstan, P; Eveson, J P; Fulton, E A; Feng, M; Plagányi, É E; Poloczanska, E S; Marshall, A G; Thompson, P A

    Climate influences marine ecosystems on a range of time scales, from weather-scale (days) through to climate-scale (hundreds of years). Understanding of interannual to decadal climate variability and its impacts on marine industries has received less attention. Predictability up to 10 years ahead may come from large-scale climate modes in the ocean that can persist over these time scales. In Australia the key drivers of climate variability affecting the marine environment are the Southern Annular Mode, the Indian Ocean Dipole, the El Niño/Southern Oscillation, and the Interdecadal Pacific Oscillation, each of which has phases associated with different ocean circulation patterns and regional environmental variables. The roles of these drivers are illustrated with three case studies of extreme events: a marine heatwave in Western Australia, a coral bleaching of the Great Barrier Reef, and flooding in Queensland. Statistical and dynamical approaches are described to generate forecasts of climate drivers that can subsequently be translated into useful information for marine end users making decisions at these time scales. Considerable investment is still needed to support decadal forecasting, including improvement of ocean-atmosphere models, enhancement of observing systems on all scales to support initialization of forecasting models, collection of important biological data, and integration of forecasts into decision support tools. Collaboration between forecast developers and marine resource sectors (fisheries, aquaculture, tourism, biodiversity management, infrastructure) is needed to support forecast-based tactical and strategic decisions that reduce environmental risk over annual to decadal time scales. © 2016 Elsevier Ltd. All rights reserved.

  4. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    PubMed

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

    The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e., the bond and torsional relaxation times, and two global ones, i.e., the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the statistical associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times collapse onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.

  5. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  6. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fingersh, Lee J; Loth, Eric; Kaminski, Meghan

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows sub-scale turbines to capture the key blade dynamics and aeroelastic deflections of their full-scale counterparts. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust, and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
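
    The headline numbers are consistent with Froude-number matching, under which velocities scale with the square root of the geometric scale factor. A minimal Python sketch of that bookkeeping follows; the full-scale blade mass is an assumed placeholder, and the published methodology involves further constraints beyond this simple rule.

        # Back-of-the-envelope gravo-aeroelastic scaling, assuming Froude-number
        # matching (V ~ sqrt(g * L)), which reproduces the factor-of-two wind-speed
        # reduction quoted in the abstract.
        s = 0.25                # geometric scale factor (25 % of full scale)
        L_full = 100.0          # SNL100-03 blade length, m
        m_full = 50_000.0       # full-scale blade mass, kg (assumed placeholder)

        L_sub = s * L_full      # 25 m sub-scale blade
        V_ratio = s ** 0.5      # velocity scale factor = sqrt(0.25) = 0.5
        # gravity cannot be scaled, so mass must follow s^3 to preserve the ratio
        # of gravitational to aerodynamic loads on the sub-scale blade
        m_sub = m_full * s ** 3

        print(f"sub-scale blade length : {L_sub:.0f} m")
        print(f"wind-speed scale factor: {V_ratio:.2f} (test wind speeds halved)")
        print(f"scaled blade mass      : {m_sub:,.0f} kg")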

  7. Scale-by-scale contributions to Lagrangian particle acceleration

    NASA Astrophysics Data System (ADS)

    Lalescu, Cristian C.; Wilczek, Michael

    2017-11-01

    Fluctuations on a wide range of scales in both space and time are characteristic of turbulence. Lagrangian particles, advected by the flow, probe these fluctuations along their trajectories. In an effort to isolate the influence of the different scales on Lagrangian statistics, we employ direct numerical simulations (DNS) combined with a filtering approach. Specifically, we study the acceleration statistics of tracers advected in filtered fields to characterize the smallest temporal scales of the flow. Emphasis is put on the acceleration variance as a function of filter scale, along with the scaling properties of the relevant terms of the Navier-Stokes equations. We furthermore discuss scaling ranges for higher-order moments of the tracer acceleration, as well as the influence of the choice of filter on the results. Starting from the Lagrangian tracer acceleration as the short time limit of the Lagrangian velocity increment, we also quantify the influence of filtering on Lagrangian intermittency. Our work complements existing experimental results on intermittency and accelerations of finite-sized, neutrally-buoyant particles: for the passive tracers used in our DNS, feedback effects are neglected such that the spatial averaging effect is cleanly isolated.

  8. Effective pore-scale dispersion upscaling with a correlated continuous time random walk approach

    NASA Astrophysics Data System (ADS)

    Le Borgne, T.; Bolster, D.; Dentz, M.; de Anna, P.; Tartakovsky, A.

    2011-12-01

    We investigate the upscaling of dispersion from a pore-scale analysis of Lagrangian velocities. A key challenge in the upscaling procedure is to relate the temporal evolution of spreading to the pore-scale velocity field properties. We test the hypothesis that one can represent Lagrangian velocities at the pore scale as a Markov process in space. The resulting effective transport model is a continuous time random walk (CTRW) characterized by a correlated random time increment, here denoted as correlated CTRW. We consider a simplified sinusoidal wavy channel model as well as a more complex heterogeneous pore space. For both systems, the predictions of the correlated CTRW model, with parameters defined from the velocity field properties (both distribution and correlation), are found to be in good agreement with results from direct pore-scale simulations over preasymptotic and asymptotic times. In this framework, the nontrivial dependence of dispersion on the pore boundary fluctuations is shown to be related to the competition between distribution and correlation effects. In particular, explicit inclusion of spatial velocity correlation in the effective CTRW model is found to be important to represent incomplete mixing in the pore throats.
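
    A correlated CTRW of this kind can be sketched as a Markov chain over velocity classes with one transition per pore length, so that the random time increment dx/v is correlated along a trajectory. A minimal Python sketch with an illustrative transition matrix, not derived from any real pore-scale velocity field:

        import numpy as np

        rng = np.random.default_rng(7)
        v_class = np.array([0.05, 0.2, 1.0, 3.0])    # velocity classes (arbitrary units)
        # weight on the diagonal encodes spatial velocity correlation (persistence)
        P = np.array([[0.70, 0.20, 0.08, 0.02],
                      [0.20, 0.60, 0.15, 0.05],
                      [0.05, 0.15, 0.60, 0.20],
                      [0.02, 0.08, 0.20, 0.70]])
        dx, n_steps, n_part = 1.0, 200, 5000         # pore length, steps, particles

        arrival = np.zeros(n_part)
        for i in range(n_part):
            state, t = rng.integers(4), 0.0
            for _ in range(n_steps):
                t += dx / v_class[state]             # correlated time increment
                state = rng.choice(4, p=P[state])    # Markov transition in space
            arrival[i] = t

        print(f"mean arrival time at x = {n_steps * dx:.0f}: {arrival.mean():.1f}")
        print(f"arrival-time variance (spreading)  : {arrival.var():.1f}")
        # replacing P with i.i.d. velocity draws removes the correlation and
        # shrinks the late-time tail, which is the effect the correlated CTRW captures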

  9. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable sized grid focused on the area of interest. MODFLOW-LGR, local grid refinement, is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (The Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered. They include: computation time, water balance (as compared to the variable sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve local resolution of regional scale models. While performance metrics, such as computation time, are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification. In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  10. Multi-scale Visualization of Molecular Architecture Using Real-Time Ambient Occlusion in Sculptor.

    PubMed

    Wahle, Manuel; Wriggers, Willy

    2015-10-01

    The modeling of large biomolecular assemblies relies on an efficient rendering of their hierarchical architecture across a wide range of spatial levels of detail. We describe a paradigm shift currently under way in computer graphics towards the use of more realistic global illumination models, and we apply the so-called ambient occlusion approach to our open-source multi-scale modeling program, Sculptor. While there are many other higher-quality global illumination approaches, going all the way up to full GPU-accelerated ray tracing, they do not provide size-specificity of the features they shade. Ambient occlusion is an aspect of global lighting that offers great visual benefits and powerful user customization. By estimating how other molecular shape features affect the reception of light at some surface point, it effectively simulates indirect shadowing. This effect occurs between molecular surfaces that are close to each other, or in pockets such as protein or ligand binding sites. By adding ambient occlusion, large macromolecular systems look much more natural, and the perception of characteristic surface features is strongly enhanced. In this work, we present a real-time implementation of screen space ambient occlusion that delivers realistic cues about tunable spatial scale characteristics of macromolecular architecture. Heretofore, the visualization of large biomolecular systems, comprising, e.g., hundreds of thousands of atoms or megadalton-size electron microscopy maps, did not take into account the length scales of interest or the spatial resolution of the data. Our approach has been uniquely customized with shading that is tuned for pockets and cavities of a user-defined size, making it useful for visualizing molecular features at multiple scales of interest. This is a feature that none of the conventional ambient occlusion approaches provide. Actual Sculptor screen shots illustrate how our implementation supports the size-dependent rendering of molecular surface features.

  11. TOMOGRAPHY OF PLASMA FLOWS IN THE UPPER SOLAR CONVECTION ZONE USING TIME-DISTANCE INVERSION COMBINING RIDGE AND PHASE-SPEED FILTERING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svanda, Michal, E-mail: michal@astronomie.cz; Astronomical Institute, Charles University in Prague, Faculty of Mathematics and Physics, V Holesovickach 2, CZ-18000 Prague 8

    2013-09-20

    The consistency of time-distance inversions for horizontal components of the plasma flow on supergranular scales in the upper solar convection zone is checked by comparing the results derived using two k-ω filtering procedures, ridge filtering and phase-speed filtering, commonly used in time-distance helioseismology. I show that both approaches result in similar flow estimates when finite-frequency sensitivity kernels are used. I further demonstrate that the performance of the inversion improves (in terms of a simultaneously better averaging kernel and a lower noise level) when the two approaches are combined together in one inversion. Using the combined inversion, I invert for horizontal flows in the upper 10 Mm of the solar convection zone. The flows connected with supergranulation seem to be coherent only for the top approximately 5 Mm; deeper down there is a hint of a change of the convection scales toward structures larger than supergranules.

  12. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Shinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games, exhibiting time-scale separation properties, can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach is proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  13. Lagrangian Statistics and Intermittency in Gulf of Mexico.

    PubMed

    Lin, Liru; Zhuang, Wei; Huang, Yongxiang

    2017-12-12

    Due to nonlinear interactions between different flow patterns (for instance, ocean currents, meso-scale eddies, and waves), the movement of the ocean is extremely complex, and a multiscale statistical description is therefore relevant. In this work, a high-time-resolution velocity record with a time step of 15 minutes, obtained from Lagrangian drifters deployed in the Gulf of Mexico (GoM) from July 2012 to October 2012, is considered. The measured Lagrangian velocity correlation function shows a strong daily cycle due to the diurnal tide. The estimated Fourier power spectrum E(f) implies a dual-power-law behavior separated by the daily cycle. The corresponding scaling exponents are close to -1.75 for time scales larger than 1 day (0.1 ≤ f ≤ 0.4 day^-1) and -2.75 for time scales smaller than 1 day (2 ≤ f ≤ 8 day^-1). A Hilbert-based approach is then applied to this data set to identify the possible multifractal property of the cascade process. The results show intermittent dynamics for time scales larger than 1 day and less intermittent dynamics for time scales smaller than 1 day. It is speculated that the energy is partially injected via the diurnal tidal movement and then transferred to larger and smaller scales through a complex cascade process, which needs more study in the near future.
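
    The dual-power-law fit can be sketched by shaping a synthetic series to the quoted exponents and recovering them by log-log regression of the periodogram over the two frequency bands. A minimal Python sketch (synthetic data, not the GoM drifter record):

        import numpy as np

        rng = np.random.default_rng(8)
        dt_days = 15.0 / (60 * 24)               # 15-minute sampling, in days
        n = 2 ** 16
        f = np.fft.rfftfreq(n, d=dt_days)        # frequencies in cycles per day

        # shape the spectrum to E(f) ~ f^-1.75 below 1/day and f^-2.75 above it
        amp = np.zeros_like(f)
        amp[1:] = np.where(f[1:] < 1.0, f[1:] ** (-1.75 / 2), f[1:] ** (-2.75 / 2))
        phase = np.exp(2j * np.pi * rng.uniform(size=f.size))
        x = np.fft.irfft(amp * phase, n=n)       # synthetic "velocity" series

        E = np.abs(np.fft.rfft(x)) ** 2          # periodogram

        def log_slope(f_lo, f_hi):
            sel = (f >= f_lo) & (f <= f_hi)
            return np.polyfit(np.log(f[sel]), np.log(E[sel]), 1)[0]

        print("slope, 0.1 <= f <= 0.4 /day:", round(log_slope(0.1, 0.4), 2))  # ~ -1.75
        print("slope, 2 <= f <= 8 /day    :", round(log_slope(2.0, 8.0), 2))  # ~ -2.75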

  14. Sound radiation from a subsonic rotor subjected to turbulence

    NASA Technical Reports Server (NTRS)

    Sevik, M.

    1974-01-01

    The broadband sound radiated by a subsonic rotor subjected to turbulence in the approach stream has been analyzed. The power spectral density of the sound intensity has been found to depend on a characteristic time scale, namely the integral scale of the turbulence divided by the axial flow velocity, as well as on several length-scale ratios. These consist of the ratios of the integral scale to the acoustic wavelength, rotor radius, and blade chord. Due to the simplified model chosen, only a limited number of cascade parameters appear. Limited comparisons with experimental data indicate good agreement with predicted values.

  15. Land use change impacts on floods at the catchment scale: Challenges and opportunities for future research

    NASA Astrophysics Data System (ADS)

    Rogger, M.; Agnoletti, M.; Alaoui, A.; Bathurst, J. C.; Bodner, G.; Borga, M.; Chaplot, V.; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, J. N.; Robinson, M.; Salinas, J. L.; Santoro, A.; Szolgay, J.; Tron, S.; van den Akker, J. J. H.; Viglione, A.; Blöschl, G.

    2017-07-01

    Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes across time scales, long-term experiments on physical-chemical-biological process interactions, and a focus on connectivity and patterns across spatial scales. It is suggested that these strategies will stimulate new research that coherently addresses the issues across hydrology, soil and agricultural sciences, forest engineering, forest ecology, and geomorphology.

  16. Land use change impacts on floods at the catchment scale: Challenges and opportunities for future research

    PubMed Central

    Agnoletti, M.; Alaoui, A.; Bathurst, J. C.; Bodner, G.; Borga, M.; Chaplot, V.; Gallart, F.; Glatzel, G.; Hall, J.; Holden, J.; Holko, L.; Horn, R.; Kiss, A.; Kohnová, S.; Leitinger, G.; Lennartz, B.; Parajka, J.; Perdigão, R.; Peth, S.; Plavcová, L.; Quinton, J. N.; Robinson, M.; Salinas, J. L.; Santoro, A.; Szolgay, J.; Tron, S.; van den Akker, J. J. H.; Viglione, A.; Blöschl, G.

    2017-01-01

    Abstract Research gaps in understanding flood changes at the catchment scale caused by changes in forest management, agricultural practices, artificial drainage, and terracing are identified. Potential strategies in addressing these gaps are proposed, such as complex systems approaches to link processes across time scales, long‐term experiments on physical‐chemical‐biological process interactions, and a focus on connectivity and patterns across spatial scales. It is suggested that these strategies will stimulate new research that coherently addresses the issues across hydrology, soil and agricultural sciences, forest engineering, forest ecology, and geomorphology. PMID:28919651

  17. GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales

    USGS Publications Warehouse

    He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.

    2000-01-01

    To reconstruct the forest landscape of the pre-European settlement period, we developed a GIS interpolation approach to convert witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better describe continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million ha landscape in northern Wisconsin, U.S.A. at different scales, and we discuss the implications of the processing results at each scale. Compared with traditional GLO mapping, which has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species, and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 x 1-section resolution preserved spatial information but yielded the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 x 2-section resolution were on the order of three to four times for the least common species, two to three times for the medium to most common species, and one to two times for the most common or highly contagious species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on species importance values derived for all species. The results provide a unique basis for further study of land cover changes occurring after European settlement.

  18. The Role of Time-Scales in Socio-hydrology

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter; Sivapalan, Murugesu

    2016-04-01

    Much of the interest in hydrological modeling in the past decades revolved around resolving spatial variability. With the rapid changes brought about by human impacts on the hydrologic cycle, there is now an increasing need to refocus on time dependency. We present a co-evolutionary view of hydrologic systems, in which every part of the system including human systems, co-evolve, albeit at different rates. The resulting coupled human-nature system is framed as a dynamical system, characterized by interactions of fast and slow time scales and feedbacks between environmental and social processes. This gives rise to emergent phenomena such as the levee effect, adaptation to change and system collapse due to resource depletion. Changing human values play a key role in the emergence of these phenomena and should therefore be considered as internal to the system in a dynamic way. The co-evolutionary approach differs from the traditional view of water resource systems analysis as it allows for path dependence, multiple equilibria, lock-in situations and emergent phenomena. The approach may assist strategic water management for long time scales through facilitating stakeholder participation, exploring the possibility space of alternative futures, and helping to synthesise the observed dynamics of different case studies. Future research opportunities include the study of how changes in human values are connected to human-water interactions, historical analyses of trajectories of system co-evolution in individual places and comparative analyses of contrasting human-water systems in different climate and socio-economic settings. Reference Sivapalan, M. and G. Blöschl (2015) Time scale interactions and the coevolution of humans and water. Water Resour. Res., 51, 6988-7022, doi:10.1002/2015WR017896.

  19. Multiscale Modeling of Human-Water Interactions: The Role of Time-Scales

    NASA Astrophysics Data System (ADS)

    Bloeschl, G.; Sivapalan, M.

    2015-12-01

    Much of the interest in hydrological modeling in the past decades revolved around resolving spatial variability. With the rapid changes brought about by human impacts on the hydrologic cycle, there is now an increasing need to refocus on time dependency. We present a co-evolutionary view of hydrologic systems, in which all parts of the system, including human subsystems, co-evolve, albeit at different rates. The resulting coupled human-nature system is framed as a dynamical system, characterized by interactions of fast and slow time scales and feedbacks between environmental and social processes. This gives rise to emergent phenomena such as the levee effect, adaptation to change and system collapse due to resource depletion. Changing human values play a key role in the emergence of these phenomena and should therefore be considered as internal to the system in a dynamic way. The co-evolutionary approach differs from the traditional view of water resource systems analysis as it allows for path dependence, multiple equilibria, lock-in situations and emergent phenomena. The approach may assist strategic water management for long time scales through facilitating stakeholder participation, exploring the possibility space of alternative futures, and helping to synthesise the observed dynamics of different case studies. Future research opportunities include the study of how changes in human values are connected to human-water interactions, historical analyses of trajectories of system co-evolution in individual places and comparative analyses of contrasting human-water systems in different climate and socio-economic settings. Reference: Sivapalan, M. and G. Blöschl (2015) Time Scale Interactions and the Co-evolution of Humans and Water. Water Resour. Res., 51, in press.

  20. An approach for estimating item sensitivity to within-person change over time: An illustration using the Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog).

    PubMed

    Dowling, N Maritza; Bolt, Daniel M; Deng, Sien

    2016-12-01

    When assessments are primarily used to measure change over time, it is important to evaluate items according to their sensitivity to change, specifically. Items that demonstrate good sensitivity to between-person differences at baseline may not show good sensitivity to change over time, and vice versa. In this study, we applied a longitudinal factor model of change to a widely used cognitive test designed to assess global cognitive status in dementia, and contrasted the relative sensitivity of items to change. Statistically nested models were estimated introducing distinct latent factors related to initial status differences between test-takers and within-person latent change across successive time points of measurement. Models were estimated using all available longitudinal item-level data from the Alzheimer's Disease Assessment Scale-Cognitive subscale, including participants representing the full spectrum of disease status who were enrolled in the multisite Alzheimer's Disease Neuroimaging Initiative. Five of the 13 Alzheimer's Disease Assessment Scale-Cognitive items demonstrated noticeably higher loadings with respect to sensitivity to change. Attending to performance change on only these 5 items yielded a clearer picture of cognitive decline more consistent with theoretical expectations in comparison to the full 13-item scale. Items that show good psychometric properties in cross-sectional studies are not necessarily the best items for measuring change over time, such as cognitive decline. Applications of the methodological approach described and illustrated in this study can advance our understanding regarding the types of items that best detect fine-grained early pathological changes in cognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
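
    The study's approach is model-based (nested longitudinal factor models with separate status and change factors). As a much cruder correlational proxy for the same idea, one can simulate items driven by distinct status and change factors and check which items track the change composite; everything below (loadings, sample sizes, the proxy index itself) is an illustrative assumption, not the authors' method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, n_items, n_waves = 500, 13, 3

    # Simulate item scores driven by an initial-status factor and a change factor.
    status = rng.normal(size=n_persons)
    change = rng.normal(size=n_persons)
    lam_status = rng.uniform(0.3, 0.9, n_items)              # baseline loadings
    lam_change = np.where(np.arange(n_items) < 5, 0.8, 0.1)  # 5 change-sensitive items

    scores = np.empty((n_waves, n_persons, n_items))
    for t in range(n_waves):
        scores[t] = (status[:, None] * lam_status
                     + t * change[:, None] * lam_change
                     + rng.normal(scale=0.5, size=(n_persons, n_items)))

    # Crude sensitivity-to-change index: correlate each item's first-to-last-wave
    # difference with the composite change score.
    diff = scores[-1] - scores[0]
    composite = diff.mean(axis=1)
    sensitivity = [np.corrcoef(diff[:, j], composite)[0, 1] for j in range(n_items)]
    print(np.round(sensitivity, 2))  # the 5 change-sensitive items should stand out
    ```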

  1. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1994-01-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.

  2. Large Eddy simulation of turbulence: A subgrid scale model including shear, vorticity, rotation, and buoyancy

    NASA Astrophysics Data System (ADS)

    Canuto, V. M.

    1994-06-01

    The Reynolds numbers that characterize geophysical and astrophysical turbulence (Re approximately equals 10^8 for the planetary boundary layer and Re approximately equals 10^14 for the Sun's interior) are too large to allow a direct numerical simulation (DNS) of the fundamental Navier-Stokes and temperature equations. In fact, the spatial number of grid points N ~ Re^(9/4) exceeds the computational capability of today's supercomputers. Alternative treatments are the ensemble-time average approach, and/or the volume average approach. Since the first method (Reynolds stress approach) is largely analytical, the resulting turbulence equations entail manageable computational requirements and can thus be linked to a stellar evolutionary code or, in the geophysical case, to general circulation models. In the volume average approach, one carries out a large eddy simulation (LES) which resolves numerically the largest scales, while the unresolved scales must be treated theoretically with a subgrid scale model (SGS). Contrary to the ensemble average approach, the LES+SGS approach has considerable computational requirements. Even if this prevents (for the time being) a LES+SGS model from being linked to stellar or geophysical codes, it is still of the greatest relevance as an 'experimental tool' to be used, inter alia, to improve the parameterizations needed in the ensemble average approach. Such a methodology has been successfully adopted in studies of the convective planetary boundary layer. Experience with the LES+SGS approach from different fields has shown that its reliability depends on the healthiness of the SGS model for numerical stability as well as for physical completeness. At present, the most widely used SGS model, the Smagorinsky model, accounts for the effect of the shear induced by the large resolved scales on the unresolved scales but does not account for the effects of buoyancy, anisotropy, rotation, and stable stratification. The latter phenomenon, which affects both geophysical and astrophysical turbulence (e.g., oceanic structure and convective overshooting in stars), has been singularly difficult to account for in turbulence modeling. For example, the widely used model of Deardorff has not been confirmed by recent LES results. As of today, there is no SGS model capable of incorporating buoyancy, rotation, shear, anisotropy, and stable stratification (gravity waves). In this paper, we construct such a model which we call CM (complete model). We also present a hierarchy of simpler algebraic models (called AM) of varying complexity. Finally, we present a set of models which are simplified even further (called SM), the simplest of which is the Smagorinsky-Lilly model. The incorporation of these models into the presently available LES codes should begin with the SM, to be followed by the AM and finally by the CM.
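
    For reference, the baseline closure that the CM/AM/SM hierarchy generalizes is the Smagorinsky model, with eddy viscosity nu_t = (C_s * Delta)^2 * |S|, where |S| is the resolved strain-rate magnitude. A minimal 2D sketch (grid, velocity fields, and the value of C_s are illustrative):

    ```python
    import numpy as np

    def smagorinsky_nu_t(u, v, dx, cs=0.17):
        """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S| on a 2D grid."""
        dudx = np.gradient(u, dx, axis=1)
        dudy = np.gradient(u, dx, axis=0)
        dvdx = np.gradient(v, dx, axis=1)
        dvdy = np.gradient(v, dx, axis=0)
        s11, s22 = dudx, dvdy
        s12 = 0.5 * (dudy + dvdx)
        s_mag = np.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))  # |S| = sqrt(2 Sij Sij)
        return (cs * dx) ** 2 * s_mag

    # Illustrative resolved velocity fields on a 64x64 grid.
    u = np.random.default_rng(1).normal(size=(64, 64))
    v = np.random.default_rng(2).normal(size=(64, 64))
    print(smagorinsky_nu_t(u, v, dx=0.1).mean())
    ```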

  3. A New Dimensionless Number for Redox Conditions within the Hyporheic Zone: Morphological and Biogeochemical Controls

    NASA Astrophysics Data System (ADS)

    Marzadri, A.; Tonina, D.; Bellin, A.

    2012-12-01

    We introduce a new Damköhler number, Da, to quantify the biogeochemical status of the hyporheic zone and to upscale local hyporheic processes to the reach scale. Da is defined as the ratio between the median hyporheic residence time, τup,50, which is a representative time scale of the hyporheic flow, and a representative time scale of biogeochemical reactions, which we define as the time τlim needed to consume dissolved oxygen to a prescribed threshold concentration below which reducing reactions are activated: Da = τup,50/τlim. This approach accounts for streambed topography and surface hydraulics via the hyporheic residence time, and for biogeochemical reaction via the time limit τlim. Da readily indicates the redox status of the hyporheic zone: values of Da larger than 1 indicate prevailing anaerobic conditions, whereas values smaller than 1 indicate prevailing aerobic conditions. This new Damköhler number can quantify the efficiency of the hyporheic zone in transforming dissolved inorganic nitrogen species such as ammonium and nitrate, whose transformation depends on the redox condition of the hyporheic zone. We define a particular value of Da, Das, that indicates when the hyporheic zone is a source or a sink of nitrate. This index depends only on the relative abundance of ammonium and nitrate. The approach can be applied to any hyporheic zone for which the median hyporheic residence time is known. Application to streams with pool-riffle morphology shows that Da increases from small to large streams, implying that the fraction of the hyporheic zone in anaerobic conditions increases with stream size.
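
    The index itself is a one-line computation; the sketch below applies the Da = τup,50/τlim definition and the Da > 1 (anaerobic) / Da < 1 (aerobic) classification from the abstract, with purely illustrative time-scale values.

    ```python
    def hyporheic_damkohler(tau_up_50, tau_lim):
        """Da = median hyporheic residence time / time to deplete dissolved O2.

        Da > 1: prevailing anaerobic conditions; Da < 1: prevailing aerobic.
        """
        da = tau_up_50 / tau_lim
        regime = "anaerobic" if da > 1 else "aerobic"
        return da, regime

    # Illustrative numbers only (hours): a short vs a long residence time.
    for tau_res in (2.0, 40.0):
        da, regime = hyporheic_damkohler(tau_res, tau_lim=10.0)
        print(f"residence time {tau_res:5.1f} h -> Da = {da:4.1f} ({regime})")
    ```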

  4. Methane Emissions from Reservoirs: Assessing the Magnitude and Developing Mitigation Approaches

    EPA Science Inventory

    Although methane can be emitted from a number of natural sources, it is the second most important greenhouse gas emitted from human-related activities and has a heat trapping capacity 34 times greater than that of carbon dioxide on a 100 year time scale. The U.S. Greenhouse Gas I...

  5. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
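
    The paper's double-seasonal formulation is a seasonal time series model; a simpler way to see the daily-plus-weekly structure it exploits is harmonic regression with Fourier terms at periods of 24 h and 168 h, sketched here on synthetic hourly demands. The data, harmonic counts, and the log transform (which the study found helpful for smaller-scale aggregated demands) are illustrative, not the authors' exact model.

    ```python
    import numpy as np

    def fourier_features(t_hours, periods=(24.0, 168.0), n_harmonics=2):
        """Sine/cosine regressors for daily (24 h) and weekly (168 h) cycles."""
        cols = [np.ones_like(t_hours)]
        for p in periods:
            for k in range(1, n_harmonics + 1):
                cols.append(np.sin(2 * np.pi * k * t_hours / p))
                cols.append(np.cos(2 * np.pi * k * t_hours / p))
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    t = np.arange(24 * 7 * 8, dtype=float)                      # 8 weeks, hourly
    demand = (50 + 10 * np.sin(2 * np.pi * t / 24)              # daily cycle
              + 5 * np.sin(2 * np.pi * t / 168)                 # weekly cycle
              + rng.normal(scale=2, size=t.size))               # noise

    X = fourier_features(t)
    beta, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)   # log-transformed fit
    t_new = np.arange(t[-1] + 1, t[-1] + 49)                    # forecast 48 h ahead
    forecast = np.exp(fourier_features(t_new) @ beta)
    print(forecast[:5].round(1))
    ```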

  6. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  7. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
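
    A minimal sketch of the hybrid idea: the promoter state stays discrete and evolves by Markov jumps, while the protein level is treated as a continuous variable between jumps (here integrated exactly for a linear rate law). All rates are illustrative, and this is far simpler than the partial Kramers-Moyal construction used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    k_on, k_off = 0.1, 0.2        # promoter switching rates (discrete jumps)
    k_prod, k_deg = 5.0, 0.05     # protein production/degradation (continuous part)

    t, t_end, gene_on, protein = 0.0, 500.0, 0, 0.0
    times, levels = [t], [protein]
    while t < t_end:
        rate = k_on if gene_on == 0 else k_off
        dt_jump = rng.exponential(1.0 / rate)      # time to the next discrete event
        # Integrate the continuous protein dynamics up to the jump
        # (exact solution of dp/dt = prod - k_deg * p over dt_jump).
        prod = k_prod * gene_on
        protein = prod / k_deg + (protein - prod / k_deg) * np.exp(-k_deg * dt_jump)
        t += dt_jump
        gene_on = 1 - gene_on                      # execute the jump
        times.append(t)
        levels.append(protein)

    print(f"{len(times) - 1} switching events, final protein level {protein:.1f}")
    ```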

  8. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  9. Frequency Preference Response to Oscillatory Inputs in Two-dimensional Neural Models: A Geometric Approach to Subthreshold Amplitude and Phase Resonance.

    PubMed

    Rotstein, Horacio G

    2014-01-01

    We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
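
    The amplitude and phase resonance described above can be read directly off the linear impedance: for a linearized 2D model dx/dt = A x + b I(t), Z(ω) is the voltage component of (iωI − A)⁻¹ b. A sketch with illustrative parameters (units arbitrary; the zero-phase search is an approximation that should be checked for an actual sign change):

    ```python
    import numpy as np

    # Linearized 2D model: dv/dt = (-gL*v - g*w + I)/C ; dw/dt = (v - w)/tau.
    C, gL, g, tau = 1.0, 0.1, 0.5, 50.0       # illustrative parameters
    A = np.array([[-gL / C, -g / C],
                  [1.0 / tau, -1.0 / tau]])
    b = np.array([1.0 / C, 0.0])

    freqs = np.linspace(0.01, 5.0, 2000)       # arbitrary frequency units
    omegas = 2 * np.pi * freqs
    Z = np.array([np.linalg.solve(1j * w * np.eye(2) - A, b)[0] for w in omegas])

    f_res = freqs[np.argmax(np.abs(Z))]            # amplitude resonance peak
    f_phase = freqs[np.argmin(np.abs(np.angle(Z)))]  # zero-phase crossing (approx.)
    print(f"resonant frequency ~ {f_res:.3f}, phase-resonant ~ {f_phase:.3f}")
    ```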

  10. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.
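
    For orientation, the kernel that the DiamondTorre algorithm reorganizes for memory locality is the standard Yee update; below is a textbook 1D vacuum sketch of that baseline (not the paper's GPGPU implementation), with an illustrative Gaussian soft source.

    ```python
    import numpy as np

    n, steps = 400, 600
    Ez = np.zeros(n)              # electric field on integer grid points
    Hy = np.zeros(n - 1)          # magnetic field on staggered half points
    c = 0.5                       # Courant number (c0*dt/dx), <= 1 for 1D stability

    for step in range(steps):
        # Staggered Yee updates: H from the curl of E, then E from the curl of H.
        Hy += c * (Ez[1:] - Ez[:-1])
        Ez[1:-1] += c * (Hy[1:] - Hy[:-1])
        Ez[n // 4] += np.exp(-((step - 60) / 20.0) ** 2)   # soft Gaussian source

    print(f"field energy proxy: {np.sum(Ez**2) + np.sum(Hy**2):.3f}")
    ```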

  11. Therapeutic conversation to improve mood in nursing home residents with Alzheimer's disease.

    PubMed

    Tappen, Ruth M; Williams, Christine L

    2009-10-01

    Few studies have tested strategies to address the mental health needs of individuals with Alzheimer's disease (AD). To test a newly developed, empirically based modified counseling approach, 30 nursing home residents with AD were randomly assigned to a modified counseling (Therapeutic Conversation) treatment group or usual care control group. Mini-Mental State Examination mean scores were 10.60 (SD = 6.99) for the treatment group and 12.26 (SD = 7.43) for the control group. Individual treatment was provided three times per week for 16 weeks. On the posttest, treatment group participants evidenced significantly less negative mood than the control group on the Montgomery-Asberg Depression Rating Scale and the Sadness and Apathy subscales of the Alzheimer's Disease and Related Disorders Mood Scale. The differences approached significance on the Dementia Mood Assessment Scale. Results suggest that a therapeutic counseling approach can be effective in treating the dysphoria commonly found in individuals with AD. Copyright 2009, SLACK Incorporated.

  12. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  13. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  14. On the development and global oscillations of cometary ionospheres

    NASA Technical Reports Server (NTRS)

    Houpis, H. L. F.; Mendis, D. A.

    1981-01-01

    Representing the cometary ionosphere by a single fluid model characterized by an average ionization time scale, both the ionosphere's development as a comet approaches the sun and its response to sudden changes in solar wind conditions are investigated. Three different nuclear sizes (small, average, very large) and three different modes of energy addition to the atmosphere (adiabatic, isothermal, suprathermal) are considered. It is found that the crucial parameter determining both the nature and the size of the ionosphere is the average ionization time scale within the ionosphere. Two different scales are identified. It is noted that the ionosphere can also be characterized by the relative sizes of three different scale lengths: the neutral standoff distance from the nucleus, the ion standoff distance from the nucleus, and the nuclear distance at which the ions and the neutrals decouple collisionally.

  15. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
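
    A sketch of the scale-invariance idea underlying the disaggregation model: fit a GPD to exceedances at a long duration, then scale the GPD scale parameter to a shorter duration with a power law. The scaling exponent, thresholds, and the choice to hold the shape parameter duration-invariant are illustrative assumptions here, whereas the study estimates these quantities (and their uncertainty) in a Bayesian framework.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Synthetic exceedances over a 24-h threshold (mm), GPD-distributed.
    daily_excess = stats.genpareto.rvs(c=0.1, scale=8.0, size=2000, random_state=rng)

    # Fit a GPD to the exceedances (location fixed at 0).
    c_hat, _, scale_hat = stats.genpareto.fit(daily_excess, floc=0)

    # Scale-invariance assumption: sigma_D ~ D**H; shape c kept duration-invariant.
    H = 0.6                                   # illustrative scaling exponent
    D_long, D_short = 24.0, 1.0               # durations in hours
    scale_short = scale_hat * (D_short / D_long) ** H

    rl = stats.genpareto.ppf(0.99, c_hat, loc=0, scale=scale_short)
    print(f"fitted shape {c_hat:.2f}; 99th-percentile 1-h exceedance ~ {rl:.1f} mm")
    ```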

  16. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.
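
    For context, the standard Lagrangian-averaged dynamic model (Meneveau et al.) relaxes the Germano-identity products along pathlines with time scale T = θΔ(I_LM I_MM)^(-1/8), which contains the adjustable parameter θ that the present model replaces with a dynamically estimated time scale. A sketch of one relaxation step of that standard model (the upstream pathline interpolation is omitted and the fields are illustrative):

    ```python
    import numpy as np

    def lagrangian_average_update(I_LM, I_MM, LM, MM, dt, delta, theta=1.5):
        """One exponential-relaxation step of Lagrangian-averaged Germano terms.

        Standard model time scale T = theta*delta*(I_LM*I_MM)**(-1/8); the
        upstream (pathline) interpolation of I_LM, I_MM is omitted here.
        """
        T = theta * delta * np.maximum(I_LM * I_MM, 1e-12) ** (-0.125)
        eps = (dt / T) / (1.0 + dt / T)
        I_LM_new = eps * LM + (1.0 - eps) * I_LM
        I_MM_new = eps * MM + (1.0 - eps) * I_MM
        cs2 = np.maximum(I_LM_new, 0.0) / np.maximum(I_MM_new, 1e-12)  # clipped Cs^2
        return I_LM_new, I_MM_new, cs2

    # Illustrative scalar fields on a small grid.
    rng = np.random.default_rng(3)
    I_LM, I_MM = rng.uniform(0.1, 1.0, (8, 8)), rng.uniform(0.5, 2.0, (8, 8))
    LM, MM = rng.uniform(0.0, 1.0, (8, 8)), rng.uniform(0.5, 2.0, (8, 8))
    _, _, cs2 = lagrangian_average_update(I_LM, I_MM, LM, MM, dt=1e-3, delta=0.1)
    print(cs2.mean())
    ```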

  17. Synthetic Approaches to the New Drugs Approved During 2015.

    PubMed

    Flick, Andrew C; Ding, Hong X; Leverett, Carolyn A; Kyne, Robert E; Liu, Kevin K-C; Fink, Sarah J; O'Donnell, Christopher J

    2017-08-10

    New drugs introduced to the market every year represent privileged structures for particular biological targets. These new chemical entities (NCEs) provide insight into molecular recognition while serving as leads for designing future new drugs. This annual review describes the most likely process-scale synthetic approaches to 29 new chemical entities (NCEs) that were approved for the first time in 2015.

  18. Intrinsic Multi-Scale Dynamic Behaviors of Complex Financial Systems

    PubMed Central

    Ouyang, Fang-Yan; Zheng, Bo; Jiang, Xiong-Fei

    2015-01-01

    The empirical mode decomposition is applied to analyze the intrinsic multi-scale dynamic behaviors of complex financial systems. In this approach, the time series of the price returns of each stock is decomposed into a small number of intrinsic mode functions, which represent the price motion from high frequency to low frequency. These intrinsic mode functions are then grouped into three modes, i.e., the fast mode, medium mode and slow mode. The probability distribution of returns and auto-correlation of volatilities for the fast and medium modes exhibit similar behaviors as those of the full time series, i.e., these characteristics are rather robust in multi time scale. However, the cross-correlation between individual stocks and the return-volatility correlation are time scale dependent. The structure of business sectors is mainly governed by the fast mode when returns are sampled at a couple of days, while by the medium mode when returns are sampled at dozens of days. More importantly, the leverage and anti-leverage effects are dominated by the medium mode. PMID:26427063
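
    The decomposition-and-grouping step can be sketched in a few lines, assuming the PyEMD package (pip install EMD-signal). The thirds-based grouping rule below is only illustrative; the paper groups intrinsic mode functions by their characteristic time scales.

    ```python
    import numpy as np
    from PyEMD import EMD  # assumes the PyEMD package (pip install EMD-signal)

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    # Synthetic "returns": fast noise + medium oscillation + slow trend.
    returns = (rng.normal(scale=1.0, size=t.size)
               + 0.5 * np.sin(2 * np.pi * t / 50)
               + 0.2 * np.sin(2 * np.pi * t / 500))

    imfs = EMD().emd(returns)     # IMFs ordered from high to low frequency
    n = len(imfs)
    fast = imfs[: n // 3].sum(axis=0)                 # grouping rule illustrative;
    medium = imfs[n // 3 : 2 * n // 3].sum(axis=0)    # the paper groups by time scale
    slow = imfs[2 * n // 3 :].sum(axis=0)
    print(f"{n} IMFs; variance split:",
          [round(x.var() / returns.var(), 2) for x in (fast, medium, slow)])
    ```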

  19. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    NASA Astrophysics Data System (ADS)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  20. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  1. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer high-speed and resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
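
    The NEF principle behind each neural core — encode a value with fixed tuning curves, decode it with linear decoders solved by regularized least squares — fits in a few lines. A sketch with rectified-linear neurons (the parameter ranges and regularization constant are illustrative assumptions; this is the framework's idea, not the FPGA implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons = 64                       # matches the size of one neural core
    x = np.linspace(-1, 1, 200)          # scalar value to be represented

    # Encoding: rectified-linear tuning curves with random encoders/gains/biases.
    encoders = rng.choice([-1.0, 1.0], n_neurons)
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-1.0, 1.0, n_neurons)
    A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)  # activities

    # Decoding: regularized least squares for linear decoders (here for f(x) = x).
    reg = 0.1 * A.max()
    G = A.T @ A + reg**2 * len(x) * np.eye(n_neurons)
    d = np.linalg.solve(G, A.T @ x)

    x_hat = A @ d
    print(f"RMS decoding error: {np.sqrt(np.mean((x - x_hat) ** 2)):.4f}")
    ```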

  2. Learning about the scale of the solar system using digital planetarium visualizations

    NASA Astrophysics Data System (ADS)

    Yu, Ka Chun; Sahami, Kamran; Dove, James

    2017-07-01

    We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (with N = 973 students participating in total). Students who visited the planetarium had not only the greatest learning gains, but their performance increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the students who visited the planetarium reveal that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.

  3. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    NASA Astrophysics Data System (ADS)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectrums, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in case of multiple image pairs.
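
    For reference, the standard single-spectrum feature-based registration pipeline that the paper's descriptor makes robust across spectra looks as follows in OpenCV; the image file names are hypothetical placeholders, and SIFT plus Lowe's ratio test plus RANSAC homography is the conventional baseline, not the authors' interspectral method.

    ```python
    import cv2
    import numpy as np

    # Hypothetical input files; substitute your own visual/thermal image pair.
    visual = cv2.imread("visual.png", cv2.IMREAD_GRAYSCALE)
    thermal = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
    assert visual is not None and thermal is not None, "placeholder images missing"

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(visual, None)
    kp2, des2 = sift.detectAndCompute(thermal, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust estimate
    registered = cv2.warpPerspective(visual, H, thermal.shape[::-1])
    ```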

  4. Integration of Biological and Physical Sciences to Advance Ecological Understanding of Aquatic Ecosystems

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Buffington, J. M.; Rieman, B. E.; Dunham, J. B.; McKean, J. A.; Thurow, R. F.; Gutierrez-Teira, B.; Rosenberger, A. E.

    2005-05-01

    Conservation and restoration of freshwater stream and river habitats are important goals for land management and natural resources research. Several examples of research have emerged showing that many species are adapted to temporary habitat disruptions, but that these adaptations are sensitive to the spatial grain and extent of disturbance as well as to its duration. When viewed from this perspective, questions of timing, spatial pattern, and relevant scales emerge as critical issues. In contrast, much regulation, management, and research remains tied to pollutant loading paradigms that are insensitive to either time or space scales. It is becoming clear that research is needed to examine questions and hypotheses about how physical processes affect ecological processes. Two overarching questions concisely frame the scientific issues: 1) How do we quantify physical watershed processes in a way that is meaningful to biological and ecological processes, and 2) how does the answer to that question vary with changing spatial and temporal scales? A joint understanding of scaling characteristics of physical process and the plasticity of aquatic species will be needed to accomplish this research; hence a strong need exists for integrative and collaborative development. Considering conservation biology problems in this fashion can lead to creative and non-obvious solutions because the integrated system has important non-linearities and feedbacks related to a biological system that has responded to substantial natural variability in the past. We propose that research beginning with ecological theories and principles followed with a structured examination of each physical process as related to the specific ecological theories is a strong approach to developing the necessary science, and such an approach may form a basis for development of scaling theories of hydrologic and geomorphic process. We illustrate the approach with several examples.

  5. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it

    2012-12-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group, and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.

  6. An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH

    NASA Astrophysics Data System (ADS)

    Lee, D.; Gopal, S.; Mohapatra, P.

    2012-07-01

    We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce a preliminary implementation of a time-accurate JFNK-based implicit solver in the framework of FLASH's unsplit hydro solver.
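
    The core JFNK idea is that the Newton linear systems are solved by a Krylov method (e.g., GMRES) that only needs Jacobian-vector products, which can be approximated by finite differences of the residual, so the Jacobian is never formed. A minimal, unpreconditioned sketch on a toy nonlinear system (the test problem and tolerances are illustrative, not FLASH's solver):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_solve(F, u0, newton_tol=1e-8, max_newton=20, fd_eps=1e-7):
        """Jacobian-Free Newton-Krylov: Newton outer loop, GMRES inner solves,
        with J*v approximated by finite differences of the residual F."""
        u = u0.copy()
        for _ in range(max_newton):
            Fu = F(u)
            if np.linalg.norm(Fu) < newton_tol:
                break
            def jv(v):  # matrix-free Jacobian-vector product
                return (F(u + fd_eps * v) - Fu) / fd_eps
            J = LinearOperator((u.size, u.size), matvec=jv)
            du, info = gmres(J, -Fu)      # unpreconditioned for brevity
            u = u + du
        return u

    # Toy "stiff" nonlinear system: F(u) = A u + u**3 - b = 0.
    rng = np.random.default_rng(0)
    A = np.diag(np.linspace(1.0, 1e3, 50))   # wide range of scales
    b = rng.normal(size=50)
    F = lambda u: A @ u + u**3 - b
    u = jfnk_solve(F, np.zeros(50))
    print(f"residual norm: {np.linalg.norm(F(u)):.2e}")
    ```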

  7. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  8. Airframe Noise Prediction of a Full Aircraft in Model and Full Scale Using a Lattice Boltzmann Approach

    NASA Technical Reports Server (NTRS)

    Fares, Ehab; Duda, Benjamin; Khorrami, Mehdi R.

    2016-01-01

    Unsteady flow computations are presented for a Gulfstream aircraft model in landing configuration, i.e., flap deflected 39 deg and main landing gear deployed. The simulations employ the lattice Boltzmann solver PowerFLOW(Trademark) to simultaneously capture the flow physics and acoustics in the near field. Sound propagation to the far field is obtained using a Ffowcs Williams and Hawkings acoustic analogy approach. Two geometry representations of the same aircraft are analyzed: an 18% scale, high-fidelity, semi-span model at wind tunnel Reynolds number and a full-scale, full-span model at half-flight Reynolds number. Previously published and newly generated model-scale results are presented; all full-scale data are disclosed here for the first time. Reynolds number and geometrical fidelity effects are carefully examined to discern aerodynamic and aeroacoustic trends with a special focus on the scaling of surface pressure fluctuations and farfield noise. An additional study of the effects of geometrical detail on farfield noise is also documented. The present investigation reveals that, overall, the model-scale and full-scale aeroacoustic results compare rather well. Nevertheless, the study also highlights that finer geometrical details that are typically not captured at model scales can have a non-negligible contribution to the farfield noise signature.

  9. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences from deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.

  10. Continuous Precipitation of Ceria Nanoparticles from a Continuous Flow Micromixer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, Chih Heng; Paul, Brian; Chang, Chih-hung

    2013-01-01

    Cerium oxide nanoparticles were continuously precipitated from a solution of cerium(III) nitrate and ammonium hydroxide using a micro-scale T-mixer. Findings show that the method of mixing is important in the ceria precipitation process. In batch mixing and deposition, disintegration and agglomeration dominate the deposited film. In T-mixing and deposition, more uniform nanorod particles are attainable. In addition, it was found that the micromixing approach reduced the exposure of the Ce(OH)3 precipitates to oxygen, yielding hydroxide precipitates in place of CeO2 precipitates. Advantages of the micro-scale T-mixing approach include shorter mixing times, better control of nanoparticle shape and less agglomeration.

  11. Downscaling near-surface soil moisture from field to plot scale: A comparative analysis under different environmental conditions

    NASA Astrophysics Data System (ADS)

    Nasta, Paolo; Penna, Daniele; Brocca, Luca; Zuecco, Giulia; Romano, Nunzio

    2018-02-01

    Indirect measurements of field-scale (hectometer grid-size) spatial-average near-surface soil moisture are becoming increasingly available by exploiting new-generation ground-based and satellite sensors. Nonetheless, modeling applications for water resources management require knowledge of plot-scale (1-5 m grid-size) soil moisture, normally obtained from spatially distributed sensor networks. Since such measurement efforts are not always possible due to time and budget constraints, alternative approaches are desirable. In this study, we explore the feasibility of determining spatial-average soil moisture and soil moisture patterns given long-term records of climate forcing data and topographic attributes. A downscaling approach is proposed that couples two different models: the Eco-Hydrological Bucket and Equilibrium Moisture from Topography. This approach helps identify the relative importance of two compound topographic indexes in explaining the spatial variation of soil moisture patterns, indicating valley- and hillslope-dependence controlled by lateral flow and radiative processes, respectively. The integrated model also detects temporal instability if the dominant type of topographic dependence changes with spatial-average soil moisture. Model application was carried out at three sites in different parts of Italy, each characterized by different environmental conditions. Prior calibration was performed by using sparse and sporadic soil moisture values measured by portable time domain reflectometry devices. Cross-site comparisons yield different interpretations of the explained spatial variation of soil moisture patterns, with time-invariant valley-dependence (site in northern Italy) and hillslope-dependence (site in southern Italy). The sources of soil moisture spatial variation at the site in central Italy are time-variant within the year, and the seasonal change of topographic dependence can be conveniently correlated to a climate indicator such as the aridity index.
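
    The study's compound indexes come from the two coupled models named above; a commonly used relative of the lateral-flow (valley-dependence) index is the topographic wetness index, TWI = ln(a / tan β). The sketch below computes it from illustrative flow-accumulation and slope grids, purely to show the mechanics of a topography-based downscaling predictor.

    ```python
    import numpy as np

    def topographic_wetness_index(flow_acc, slope_rad, cell_size=5.0):
        """TWI = ln(a / tan(beta)), with a = specific catchment area proxy."""
        a = (flow_acc + 1.0) * cell_size            # upslope area per unit width
        tan_beta = np.maximum(np.tan(slope_rad), 1e-6)
        return np.log(a / tan_beta)

    rng = np.random.default_rng(1)
    flow_acc = rng.integers(0, 500, (50, 50)).astype(float)  # cells draining in
    slope = np.deg2rad(rng.uniform(0.5, 30.0, (50, 50)))
    twi = topographic_wetness_index(flow_acc, slope)
    print(f"TWI range: {twi.min():.1f} to {twi.max():.1f} (valleys score high)")
    ```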

  12. On the Gompertzian growth in the fractal space-time.

    PubMed

    Molski, Marcin; Konarski, Jerzy

    2008-06-01

    An analytical approach to determination of the time-dependent temporal fractal dimension b_t(t) and scaling factor a_t(t) for the Gompertzian growth in the fractal space-time is presented. The derived formulae take into account the proper boundary conditions and permit a calculation of the mean values of b_t(t) and a_t(t) at any period of time. The formulae derived have been tested on experimental data obtained by Schrek for the Brown-Pearce rabbit's tumor growth. The results obtained confirm a possibility of successful mapping of the experimental Gompertz curve onto the fractal power-law scaling function y(t) = a_t(t) t^(b_t(t)) and support a thesis that Gompertzian growth is a self-similar and allometric process of a holistic nature.
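
    The mapping itself follows from reading b_t(t) as the local log-log slope: b_t(t) = d ln y / d ln t, with a_t(t) = y / t^(b_t) so that y(t) = a_t(t) t^(b_t(t)) holds pointwise. For a Gompertz curve y(t) = y0 exp((A/α)(1 − e^(−αt))) this gives b_t(t) = t A e^(−αt) analytically; the sketch below verifies this numerically (parameters illustrative, not the paper's fitted values).

    ```python
    import numpy as np

    # Gompertz growth y(t) = y0 * exp((A/alpha) * (1 - exp(-alpha*t))).
    y0, A, alpha = 1.0, 0.6, 0.05          # illustrative parameters
    t = np.geomspace(0.1, 100, 1000)       # log-spaced; avoid t = 0
    y = y0 * np.exp((A / alpha) * (1 - np.exp(-alpha * t)))

    # Local power-law exponent b_t(t) = d ln y / d ln t and factor a_t(t).
    b_t = np.gradient(np.log(y), np.log(t))
    a_t = y / t**b_t                        # so that y = a_t * t**b_t pointwise

    # Analytic check: d ln y / d ln t = t * A * exp(-alpha*t), peaking at t = 1/alpha.
    assert np.allclose(b_t[5:-5], t[5:-5] * A * np.exp(-alpha * t[5:-5]), atol=0.02)
    print(f"b_t peaks at t ~ {t[np.argmax(b_t)]:.1f} (analytic 1/alpha = {1/alpha:.1f})")
    ```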

  13. Progress testing in the medical curriculum: students' approaches to learning and perceived stress.

    PubMed

    Chen, Yan; Henning, Marcus; Yielder, Jill; Jones, Rhys; Wearn, Andy; Weller, Jennifer

    2015-09-11

    Progress Tests (PTs) draw on a common question bank to assess all students in a programme against graduate outcomes. Theoretically PTs drive deep approaches to learning and reduce assessment-related stress. In 2013, PTs were introduced to two year groups of medical students (Years 2 and 4), whereas students in Years 3 and 5 were taking traditional high-stakes assessments. Staged introduction of PTs into our medical curriculum provided a time-limited opportunity for a comparative study. The main purpose of the current study was to compare the impact of PTs on undergraduate medical students' approaches to learning and perceived stress with that of traditional high-stakes assessments. We also aimed to investigate the associations between approaches to learning, stress and PT scores. Undergraduate medical students (N = 333 and N = 298 at Time 1 and Time 2 respectively) answered the Revised Study Process Questionnaire (R-SPQ-2F) and the Perceived Stress Scale (PSS) at two time points to evaluate change over time. The R-SPQ-2F generated a surface approach and a deep approach score; the PSS generated an overall perceived stress score. We found no significant differences between the two groups in approaches to learning at either time point, and no significant changes in approaches to learning over time in either cohort. Levels of stress increased significantly at the end of the year (Time 2) for students in the traditional assessment cohort, but not in the PT cohort. In the PT cohort, surface approach to learning, but not stress, was a significant negative predictor of students' PT scores. While confirming an association between surface approaches to learning and lower PT scores, we failed to demonstrate an effect of PTs on approaches to learning. However, a reduction in assessment-associated stress is an important finding.

  14. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
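
    To illustrate the state-conditioned binning idea (a toy sketch, not the authors' code; the state-dependence of the error below is invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "previous model state" and model-error realisations whose statistics
    # depend on that state.
    x_prev = rng.uniform(-5.0, 5.0, 10_000)
    eta = 0.5 * np.tanh(x_prev) + 0.1 * rng.standard_normal(10_000)

    # Bin the realisations conditional on the previous state, then estimate
    # the conditional mean and variance of the model error within each bin.
    edges = np.linspace(-5.0, 5.0, 21)
    idx = np.digitize(x_prev, edges)
    cond_mean = np.array([eta[idx == i].mean() for i in range(1, len(edges))])
    cond_var = np.array([eta[idx == i].var() for i in range(1, len(edges))])
    ```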

  15. A multiple scales approach to sound generation by vibrating bodies

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Pope, Dennis S.

    1992-01-01

    The problem of determining the acoustic field in an inviscid, isentropic fluid generated by a solid body whose surface executes prescribed vibrations is formulated and solved as a multiple scales perturbation problem, using the Mach number M based on the maximum surface velocity as the perturbation parameter. Following the idea of multiple scales, new 'slow' spatial scales are introduced, which are defined as the usual physical spatial scale multiplied by powers of M. The governing nonlinear differential equations lead to a sequence of linear problems for the perturbation coefficient functions. However, it is shown that the higher order perturbation functions obtained in this manner will dominate the lower order solutions unless their dependence on the slow spatial scales is chosen in a certain manner. In particular, it is shown that the perturbation functions must satisfy an equation similar to Burgers' equation, with a slow spatial scale playing the role of the time-like variable. The method is illustrated by a simple one-dimensional example, as well as by three different cases of a vibrating sphere. The results are compared with solutions obtained by purely numerical methods and some insights provided by the perturbation approach are discussed.
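
    Schematically, the construction looks as follows (a LaTeX sketch with illustrative notation, not copied from the paper):

    ```latex
    % Expand the field in the Mach number M, appending slow spatial scales
    % X_n = M^n x to the fast physical scale x:
    \phi = \phi_0(x, X_1, X_2, \ldots) + M\,\phi_1(x, X_1, X_2, \ldots) + O(M^2),
    \qquad X_n = M^n x .
    % Suppressing secular growth of the higher-order terms forces a
    % Burgers-type equation in which a slow scale plays the time-like role:
    \frac{\partial u}{\partial X_1} + u\,\frac{\partial u}{\partial \theta}
      = \delta\,\frac{\partial^2 u}{\partial \theta^2} .
    ```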

  16. Assessing sufficiency of thermal riverscapes for resilient ...

    EPA Pesticide Factsheets

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures.

  17. An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D

    NASA Technical Reports Server (NTRS)

    Thompson, Kyle B.; Gnoffo, Peter A.

    2016-01-01

    An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.

  18. Ash deposits - Initiating the change from empiricism to generic engineering. Part 1: The generic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagoner, C.L.; Wessel, R.A.

    1986-01-01

    Empiricism has traditionally been used to relate laboratory and pilot-scale measurements of fuel characteristics with the design, performance, and the slagging and fouling behavior of steam generators. Currently, a new engineering approach is being evaluated. The goal is to develop and use calculations and measurements from several engineering disciplines that exceed the demonstrated limitations of present empirical techniques for predicting slagging/fouling behavior. In Part 1 of this paper, the generic approach to deposits and boiler performance is defined and a matrix of engineering concepts is described. General relationships are presented for assessing the effects of deposits and sootblowing on the real-time performance of heat transfer surfaces in pilot- and commercial-scale steam generators.

  19. Accurate and efficient calculation of response times for groundwater flow

    NASA Astrophysics Data System (ADS)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
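
    A minimal sketch of the homogeneous 1-D case (assumed boundary conditions: a step change at x = 0 and no flow at x = L), where mean action time theory reduces to D T''(x) = -1; this recovers the L^2/D proportionality mentioned above with an explicit constant of 1/2 at x = L:

    ```python
    import numpy as np

    def mean_action_time(L=100.0, D=10.0, n=201):
        """Mean action time T(x) for 1-D flow h_t = D h_xx with a step change
        at x = 0 and a no-flow boundary at x = L: solves D T'' = -1 with
        T(0) = 0 and T'(L) = 0, giving T(x) = (2 L x - x^2) / (2 D)."""
        x = np.linspace(0.0, L, n)
        return x, (2.0 * L * x - x ** 2) / (2.0 * D)

    x, T = mean_action_time()
    print(f"Response-time scale at x = L: {T[-1]:.1f}  (= L^2 / 2D)")
    ```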

  20. Getting It Right Matters: Climate Spectra and Their Estimation

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor; Yushkov, Vladislav

    2018-06-01

    In many recent publications, climate spectra estimated with different methods from observed, GCM-simulated, and reconstructed time series contain many peaks at time scales from a few years to many decades and even centuries. However, respective spectral estimates obtained with the autoregressive (AR) and multitapering (MTM) methods showed that spectra of climate time series are smooth and contain no evidence of periodic or quasi-periodic behavior. Four order selection criteria for the autoregressive models were studied and proven sufficiently reliable for 25 time series of climate observations at individual locations or spatially averaged at local-to-global scales. As time series of climate observations are short, an alternative reliable nonparametric approach is Thomson's MTM. These results agree with both the earlier climate spectral analyses and the Markovian stochastic model of climate.
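
    For concreteness, a minimal Yule-Walker AR fit and its parametric spectrum (the order p would be chosen by selection criteria such as those studied in the paper; this sketch assumes p is given):

    ```python
    import numpy as np

    def ar_spectrum(x, p, nfreq=512):
        """Fit an AR(p) model by Yule-Walker and return its power spectrum
        S(f) = sigma^2 / |1 - sum_k a_k e^{-2 pi i f k}|^2 (unit sampling)."""
        x = np.asarray(x, float) - np.mean(x)
        n = len(x)
        r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
        a = np.linalg.solve(R, r[1:])        # AR coefficients
        sigma2 = r[0] - np.dot(a, r[1:])     # innovation variance
        f = np.linspace(0.0, 0.5, nfreq)
        phasors = np.exp(-2j * np.pi * np.outer(f, np.arange(1, p + 1)))
        return f, sigma2 / np.abs(1.0 - phasors @ a) ** 2
    ```

    A smooth AR spectrum of a climate series, as argued above, then shows no spurious peaks beyond what the low-order model supports.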

  1. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
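
    The residual computation itself is elementary once units are reconciled; a sketch assuming annual totals and negligible net storage change (function and argument names are illustrative):

    ```python
    def basin_et_mm(precip_mm, discharge_m3, basin_area_km2):
        """Annual basin-scale ET (mm) as the water-balance residual
        ET = P - Q, assuming negligible net storage change over the year."""
        runoff_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0
        return precip_mm - runoff_mm

    # Example: 800 mm precipitation, 3e8 m^3 discharge, 1000 km^2 basin.
    print(basin_et_mm(800.0, 3e8, 1000.0))  # -> 500.0 mm
    ```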

  2. Extracting surface waves, hum and normal modes: time-scale phase-weighted stack and beyond

    NASA Astrophysics Data System (ADS)

    Ventosa, Sergi; Schimmel, Martin; Stutzmann, Eleonore

    2017-10-01

    Stacks of ambient noise correlations are routinely used to extract empirical Green's functions (EGFs) between station pairs. The time-frequency phase-weighted stack (tf-PWS) is a physically intuitive nonlinear denoising method that uses the phase coherence to improve EGF convergence when the performance of conventional linear averaging methods is not sufficient. The high computational cost of a continuous approach to the time-frequency transformation is currently a main limitation in ambient noise studies. We introduce the time-scale phase-weighted stack (ts-PWS) as an alternative extension of the phase-weighted stack that uses complex frames of wavelets to build a time-frequency representation that is much more efficient and faster to compute and that preserves the performance and flexibility of the tf-PWS. In addition, we propose two strategies, the unbiased phase coherence and the two-stage ts-PWS methods, to further improve noise attenuation, the quality of the extracted signals, and convergence speed. We demonstrate that these approaches enable the extraction of minor- and major-arc Rayleigh waves (up to the sixth Rayleigh wave train) from many years of data from the GEOSCOPE global network. Finally, we also show that fundamental spheroidal modes can be extracted from these EGFs.
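
    For intuition, a minimal time-domain phase-weighted stack using the analytic signal (the ts-PWS of the paper applies the same phase-coherence weighting per wavelet scale, which this sketch does not implement):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_weighted_stack(traces, nu=2.0):
        """Phase-weighted stack of an array of shape (n_traces, n_samples):
        the linear stack is modulated by the instantaneous-phase coherence
        (0 = incoherent, 1 = fully coherent), raised to the power nu."""
        analytic = hilbert(traces, axis=1)
        phasors = analytic / np.abs(analytic)          # unit phasors e^{i phi}
        coherence = np.abs(phasors.mean(axis=0)) ** nu
        return traces.mean(axis=0) * coherence
    ```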

  3. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations confirm transferability of the relationships in space and time contingent upon full representation of validation conditions in the calibration data set. The ability of the top-down space-for-time models to predict in new time periods and locations lends confidence to their application for assessments and for improving finer time scale models.
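
    For reference, the Nash-Sutcliffe efficiency used to score these validations is computed as follows (a small sketch):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model
        is no better than predicting the mean of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    ```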

  4. Scaling Effects on Materials Tribology: From Macro to Micro Scale.

    PubMed

    Stoyanov, Pantcho; Chromik, Richard R

    2017-05-18

    The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose to understand how tribological processes are different at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale.

  5. Scaling Effects on Materials Tribology: From Macro to Micro Scale

    PubMed Central

    Stoyanov, Pantcho; Chromik, Richard R.

    2017-01-01

    The tribological study of materials inherently involves the interaction of surface asperities at the micro to nanoscopic length scales. This is the case for large scale engineering applications with sliding contacts, where the real area of contact is made up of small contacting asperities that make up only a fraction of the apparent area of contact. This is why researchers have sought to create idealized experiments of single asperity contacts in the field of nanotribology. At the same time, small scale engineering structures known as micro- and nano-electromechanical systems (MEMS and NEMS) have been developed, where the apparent area of contact approaches the length scale of the asperities, meaning the real area of contact for these devices may be only a few asperities. This is essentially the field of microtribology, where the contact size and/or forces involved have pushed the nature of the interaction between two surfaces towards the regime where the scale of the interaction approaches that of the natural length scale of the features on the surface. This paper provides a review of microtribology with the purpose to understand how tribological processes are different at the smaller length scales compared to macrotribology. Studies of the interfacial phenomena at the macroscopic length scales (e.g., using in situ tribometry) will be discussed and correlated with new findings and methodologies at the micro-length scale. PMID:28772909

  6. Simulation of all-scale atmospheric dynamics on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng

    2016-10-01

    The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.

  7. Integrating terrestrial through aquatic processing of water, carbon and nitrogen over hot, cold and lukewarm moments in mixed land use catchments

    NASA Astrophysics Data System (ADS)

    Band, L. E.; Lin, L.; Duncan, J. M.

    2017-12-01

    A major challenge in understanding and managing freshwater volumes and quality in mixed land use catchments is the detailed heterogeneity of topography, soils, canopy, and inputs of water and biogeochemicals. The short space and time scale dynamics of sources, transport and processing of water, carbon and nitrogen in natural and built environments can have a strong influence on the timing and magnitude of watershed runoff and nutrient production, ecosystem cycling and export. Hydroclimate variability induces a functional interchange of terrestrial and aquatic environments across their transition zone with the temporal and spatial expansion and contraction of soil wetness, standing and flowing water over seasonal, diurnal and storm event time scales. Variation in sources and retention of nutrients at these scales need to be understood and represented to design optimal mitigation strategies. This paper discusses the conceptual framework used to design both simulation and measurement approaches, and explores these dynamics using an integrated terrestrial-aquatic watershed model of coupled water-carbon-nitrogen processes at resolutions necessary to resolve "hot spot/hot moment" phenomena in two well studied catchments in Long Term Ecological Research sites. The potential utility of this approach for design and assessment of urban green infrastructure and stream restoration strategies is illustrated.

  8. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. For quantifying scale invariant parameters of physiological signals several methods have been proposed. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility for the method to falsely detect long-range dependence in artificially generated short-range-dependent series was investigated.
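
    A minimal sketch of the embedding step (the dimension and lag below are arbitrary illustrative choices, not the paper's settings):

    ```python
    import numpy as np

    def delay_embed(x, dim=5, tau=1):
        """Map a scalar series into a dim-dimensional pseudo-phase space using
        delay coordinates (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
        x = np.asarray(x, float)
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    ```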

  9. Time-dependent Schrödinger equation for molecular core-hole dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picón, A.

    2017-02-01

    X-ray spectroscopy is an important tool for the investigation of matter. X rays primarily interact with inner-shell electrons, creating core (inner-shell) holes that will decay on the time scale of attoseconds to a few femtoseconds through electron relaxations involving the emission of a photon or an electron. Furthermore, the advent of femtosecond x-ray pulses expands x-ray spectroscopy to the time domain and will eventually allow the control of core-hole population on time scales comparable to core-vacancy lifetimes. For both cases, a theoretical approach that accounts for the x-ray interaction while the electron relaxations occur is required. We describe a time-dependent framework, based on solving the time-dependent Schrödinger equation, that is suitable for describing the induced electron and nuclear dynamics.

  10. Nonlinearities of heart rate variability in animal models of impaired cardiac control: contribution of different time scales.

    PubMed

    Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; Silva, Carlos Alberto Aguiar; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto

    2017-08-01

    Heart rate variability (HRV) has been extensively explored by traditional linear approaches (e.g., spectral analysis); however, several studies have pointed to the presence of nonlinear features in HRV, suggesting that linear tools might fail to account for the complexity of the HRV dynamics. Even though the prevalent notion is that HRV is nonlinear, the actual presence of nonlinear features is rarely verified. In this study, the presence of nonlinear dynamics was checked as a function of time scales in three experimental models of rats with different impairment of the cardiac control: namely, rats with heart failure (HF), spontaneously hypertensive rats (SHRs), and sinoaortic denervated (SAD) rats. Multiscale entropy (MSE) and refined MSE (RMSE) were chosen as the discriminating statistic for the surrogate test utilized to detect nonlinearity. Nonlinear dynamics is less present in HF animals at both short and long time scales compared with controls. A similar finding was found in SHR only at short time scales. SAD increased the presence of nonlinear dynamics exclusively at short time scales. Those findings suggest that a working baroreflex contributes to linearize HRV and to reduce the likelihood to observe nonlinear components of the cardiac control at short time scales. In addition, an increased sympathetic modulation seems to be a source of nonlinear dynamics at long time scales. Testing nonlinear dynamics as a function of the time scales can provide a characterization of the cardiac control complementary to more traditional markers in time, frequency, and information domains. NEW & NOTEWORTHY Although heart rate variability (HRV) dynamics is widely assumed to be nonlinear, nonlinearity tests are rarely used to check this hypothesis. By adopting multiscale entropy (MSE) and refined MSE (RMSE) as the discriminating statistic for the nonlinearity test, we show that nonlinear dynamics varies with time scale and the type of cardiac dysfunction. Moreover, as complexity metrics and nonlinearities provide complementary information, we strongly recommend using the test for nonlinearity as an additional index to characterize HRV.
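
    For illustration, a compact sketch of the two ingredients of MSE, coarse-graining and sample entropy (the O(N^2) pairwise computation restricts this to short series; the defaults m = 2 and r = 0.2 SD are conventional choices, not necessarily the study's):

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Non-overlapping averages of length `scale` (MSE coarse-graining)."""
        n = len(x) // scale
        return np.asarray(x[: n * scale], float).reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_frac=0.2):
        """Sample entropy with tolerance r = r_frac * SD (Chebyshev distance)."""
        x = np.asarray(x, float)
        r, N = r_frac * x.std(), len(x)

        def pairs_within_r(mm):
            templates = np.array([x[i : i + mm] for i in range(N - m)])
            d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                       axis=2)
            return (np.sum(d <= r) - len(templates)) / 2.0  # drop self-matches

        return -np.log(pairs_within_r(m + 1) / pairs_within_r(m))

    # MSE curve: sample entropy of the coarse-grained series at each scale.
    # mse = [sample_entropy(coarse_grain(x, s)) for s in range(1, 21)]
    ```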

  11. A Call to Action for Research in Digital Learning: Learning without Limits of Time, Place, Path, Pace…or Evidence

    ERIC Educational Resources Information Center

    Cavanaugh, Cathy; Sessums, Christopher; Drexler, Wendy

    2015-01-01

    This essay is a call for rethinking our approach to research in digital learning. It plots a path founded in social trends and advances in education. A brief review of these trends and advances is followed by discussion of what flattened research might look like at scale. Scaling research in digital learning is crucial to advancing understanding…

  12. Fractal rigidity in migraine

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Glaubic-Latka, Marta; Latka, Dariusz; West, Bruce J.

    2004-04-01

    We study the middle cerebral artery blood flow velocity (MCAfv) in humans using transcranial Doppler ultrasonography (TCD). Scaling properties of time series of the axial flow velocity averaged over a cardiac beat interval may be characterized by two exponents. The short time scaling exponent (STSE) determines the statistical properties of fluctuations of blood flow velocities in short-time intervals while the Hurst exponent describes the long-term fractal properties. In many migraineurs the value of the STSE is significantly reduced and may approach that of the Hurst exponent. This change in dynamical properties reflects the significant loss of short-term adaptability and the overall hyperexcitability of the underlying cerebral blood flow control system. We call this effect fractal rigidity.
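
    Scaling exponents of this kind are commonly estimated by fitting log F(s) against log s over short- and long-scale ranges of a detrended fluctuation analysis; a minimal DFA sketch (first-order detrending assumed):

    ```python
    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: fluctuation function F(s) for each
        window size s; slopes of log F vs log s over short and long scale
        ranges estimate short-time and Hurst-like exponents, respectively."""
        y = np.cumsum(np.asarray(x, float) - np.mean(x))  # integrated profile
        F = []
        for s in scales:
            n = len(y) // s
            segments = y[: n * s].reshape(n, s)
            t = np.arange(s)
            residuals = [
                np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
                for seg in segments
            ]
            F.append(np.sqrt(np.mean(residuals)))
        return np.array(F)
    ```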

  13. Information transfer across the scales of climate data variability

    NASA Astrophysics Data System (ADS)

    Palus, Milan; Jajcay, Nikola; Hartman, David; Hlinka, Jaroslav

    2015-04-01

    The multitude of scales characteristic of climate system variability requires innovative approaches to the analysis of instrumental time series. We present a methodology which starts with a wavelet decomposition of a multi-scale signal into quasi-oscillatory modes of limited bandwidth, described using their instantaneous phases and amplitudes. Then their statistical associations are tested in order to search for interactions across time scales. In particular, an information-theoretic formulation of the generalized, nonlinear Granger causality is applied together with surrogate data testing methods [1]. The method [2] uncovers causal influence (in the Granger sense) and information transfer from large-scale modes of climate variability, with characteristic time scales from years to almost a decade, to regional temperature variability on short time scales. In analyses of daily mean surface air temperature from various European locations, an information transfer from larger to smaller scales has been observed as the influence of the phase of slow oscillatory phenomena with periods around 7-8 years on the amplitudes of variability characterized by smaller temporal scales, from a few months to annual and quasi-biennial scales [3]. In sea surface temperature data from the tropical Pacific area, an influence of quasi-oscillatory phenomena with periods around 4-6 years on the variability on and near the annual scale has been observed. This study is supported by the Ministry of Education, Youth and Sports of the Czech Republic within the Program KONTAKT II, Project No. LH14001. [1] M. Palus, M. Vejmelka, Phys. Rev. E 75, 056211 (2007) [2] M. Palus, Entropy 16(10), 5263-5289 (2014) [3] M. Palus, Phys. Rev. Lett. 112, 078702 (2014)

  14. Scaling and design of landslide and debris-flow experiments

    USGS Publications Warehouse

    Iverson, Richard M.

    2015-01-01

    Scaling plays a crucial role in designing experiments aimed at understanding the behavior of landslides, debris flows, and other geomorphic phenomena involving grain-fluid mixtures. Scaling can be addressed by using dimensional analysis or – more rigorously – by normalizing differential equations that describe the evolving dynamics of the system. Both of these approaches show that, relative to full-scale natural events, miniaturized landslides and debris flows exhibit disproportionately large effects of viscous shear resistance and cohesion as well as disproportionately small effects of excess pore-fluid pressure that is generated by debris dilation or contraction. This behavioral divergence grows in proportion to H^3, where H is the thickness of a moving mass. Therefore, to maximize geomorphological relevance, experiments with wet landslides and debris flows must be conducted at the largest feasible scales. Another important consideration is that, unlike stream flows, landslides and debris flows accelerate from statically balanced initial states. Thus, no characteristic macroscopic velocity exists to guide experiment scaling and design. On the other hand, macroscopic gravity-driven motion of landslides and debris flows evolves over a characteristic time scale (L/g)^(1/2), where g is the magnitude of gravitational acceleration and L is the characteristic length of the moving mass. Grain-scale stress generation within the mass occurs on a shorter time scale, H/(gL)^(1/2), which is inversely proportional to the depth-averaged material shear rate. A separation of these two time scales exists if the criterion H/L << 1 is satisfied, as is commonly the case. This time scale separation indicates that steady-state experiments can be used to study some details of landslide and debris-flow behavior but cannot be used to study macroscopic landslide or debris-flow dynamics.
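
    The two time scales and their separation criterion are simple to evaluate; a small sketch with illustrative (not source) numbers:

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def debris_flow_timescales(L, H):
        """Macroscopic motion time scale (L/g)^(1/2), grain-scale stress
        generation time scale H/(gL)^(1/2), and their ratio, which is H/L."""
        t_macro = np.sqrt(L / G)
        t_grain = H / np.sqrt(G * L)
        return t_macro, t_grain, t_grain / t_macro

    # Illustrative: a 100 m long, 1 m thick flow gives a ratio H/L = 0.01,
    # i.e., well-separated time scales (H/L << 1).
    print(debris_flow_timescales(L=100.0, H=1.0))
    ```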

  15. Wafer-size free-standing single-crystalline graphene device arrays

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jing, Gaoshan; Zhang, Bo; Sando, Shota; Cui, Tianhong

    2014-08-01

    We report an approach for wafer-scale growth of addressable single-crystalline graphene (SCG) arrays using pre-patterned seeds to control nucleation. The growth mechanism and superb properties of SCG were studied. Large arrays of free-standing SCG devices were realized. Characterization of SCG nano switches shows excellent performance, with a lifetime (>22 000 switching cycles) two orders of magnitude longer than that of other graphene nano switches reported so far. This work not only shows the possibility of producing wafer-scale high-quality SCG device arrays but also explores the superb performance of SCG nano devices.

  16. Physics of ultra-high bioproductivity in algal photobioreactors

    NASA Astrophysics Data System (ADS)

    Greenwald, Efrat; Gordon, Jeffrey M.; Zarmi, Yair

    2012-04-01

    Cultivating algae at high densities in thin photobioreactors engenders time scales for random cell motion that approach photosynthetic rate-limiting time scales. This synchronization allows bioproductivity above that achieved with conventional strategies. We show that a diffusion model for cell motion (1) accounts for high bioproductivity at irradiance values previously deemed restricted by photoinhibition, (2) predicts the existence of optimal culture densities and their dependence on irradiance, consistent with available data, (3) accounts for the observed degree to which mixing improves bioproductivity, and (4) provides an estimate of effective cell diffusion coefficients, in accord with independent hydrodynamic estimates.
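
    The synchronization argument reduces to comparing a transport time scale with a photosynthetic one; a toy sketch (all values are assumptions for illustration only):

    ```python
    def timescale_ratio(h_m, D_m2_per_s, tau_photo_s):
        """Ratio of the diffusive transport time across a light path of
        thickness h (t = h^2 / D) to a photosynthetic rate-limiting time;
        ratios of order 1 correspond to the synchronization regime above."""
        return (h_m ** 2 / D_m2_per_s) / tau_photo_s

    # Hypothetical numbers: 1 cm light path, effective cell diffusivity
    # 1e-5 m^2/s, 10 s rate-limiting photosynthetic time scale.
    print(timescale_ratio(h_m=0.01, D_m2_per_s=1e-5, tau_photo_s=10.0))  # 1.0
    ```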

  17. Herbal hepatotoxicity: Challenges and pitfalls of causality assessment methods

    PubMed Central

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Eickhoff, Axel

    2013-01-01

    The diagnosis of herbal hepatotoxicity or herb induced liver injury (HILI) represents a particular clinical and regulatory challenge with major pitfalls for the causality evaluation. On the day HILI is suspected in a patient, physicians should start assessing the quality of the used herbal product, optimizing the clinical data for completeness, and applying the Council for International Organizations of Medical Sciences (CIOMS) scale for initial causality assessment. This scale is structured, quantitative, liver specific, and validated for hepatotoxicity cases. Its items provide individual scores, which together yield causality levels of highly probable, probable, possible, unlikely, and excluded. After completion by additional information including raw data, this scale with all items should be reported to regulatory agencies and manufacturers for further evaluation. The CIOMS scale is preferred as the tool for assessing causality in hepatotoxicity cases, compared to numerous other causality assessment methods, which are inferior on various grounds. Among these disputed methods are the Maria and Victorino scale, an insufficiently qualified, shortened version of the CIOMS scale, as well as various liver unspecific methods such as the ad hoc causality approach, the Naranjo scale, the World Health Organization (WHO) method, and the Karch and Lasagna method. An expert panel is required for the Drug Induced Liver Injury Network method, the WHO method, and other approaches based on expert opinion, which provide retrospective analyses with a long delay and thereby prevent a timely assessment of the illness in question by the physician. In conclusion, HILI causality assessment is challenging and is best achieved by the liver specific CIOMS scale, avoiding pitfalls commonly observed with other approaches. PMID:23704820

  18. Acoustic travel time gauges for in-situ determination of pressure and temperature in multi-anvil apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xuebing; Chen, Ting; Qi, Xintong

    In this study, we developed a new method for in-situ pressure determination in multi-anvil, high-pressure apparatus using an acoustic travel time approach within the framework of acoustoelasticity. The ultrasonic travel times of polycrystalline Al₂O₃ were calibrated against the NaCl pressure scale up to 15 GPa and 900 °C in a Kawai-type double-stage multi-anvil apparatus in conjunction with synchrotron X-radiation, thereby providing a convenient and reliable gauge for pressure determination at ambient and high temperatures. The pressures derived from this new travel time method are in excellent agreement with those from the fixed-point methods. Application of this new pressure gauge in an offline experiment revealed a remarkable agreement of the densities of coesite with those from previous single crystal compression studies under hydrostatic conditions, thus providing strong validation for the current travel time pressure scale. The travel time approach not only can be used for continuous in-situ pressure determination at room temperature, at high temperatures, and during compression and decompression, but also bears a unique capability that none of the previous scales can deliver, i.e., simultaneous pressure and temperature determination with high accuracy (±0.16 GPa in pressure and ±17 °C in temperature). Therefore, the new in-situ Al₂O₃ pressure gauge is expected to enable new and expanded opportunities for offline laboratory studies of solid and liquid materials under high pressure and high temperature in multi-anvil apparatus.

  19. Sample-independent approach to normalize two-dimensional data for orthogonality evaluation using whole separation space scaling.

    PubMed

    Jáčová, Jaroslava; Gardlo, Alžběta; Friedecký, David; Adam, Tomáš; Dimandja, Jean-Marie D

    2017-08-18

    Orthogonality is a key parameter that is used to evaluate the separation power of chromatography-based two-dimensional systems. It is necessary to scale the separation data before the assessment of the orthogonality. Current scaling approaches are sample-dependent, and the extent of the retention space that is converted into a normalized retention space is set according to the retention times of the first and last analytes contained in a unique sample to elute. The presence or absence of a highly retained analyte in a sample can thus significantly influence the amount of information (in terms of the total amount of separation space) contained in the normalized retention space considered for the calculation of the orthogonality. We propose a Whole Separation Space Scaling (WOSEL) approach that accounts for the whole separation space delineated by the analytical method, and not the sample. This approach enables an orthogonality-based evaluation of the efficiency of the analytical system that is independent of the sample selected. The WOSEL method was compared to two currently used orthogonality approaches through the evaluation of in silico-generated chromatograms and real separations of human biofluids and petroleum samples. WOSEL exhibits sample-to-sample stability values of 3.8% on real samples, compared to 7.0% and 10.1% for the two other methods, respectively. Using real analyses, we also demonstrate that some previously developed approaches can provide misleading conclusions on the overall orthogonality of a two-dimensional chromatographic system.
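
    The essential contrast with sample-dependent scaling fits in a few lines; a sketch in which the normalization bounds are fixed by the analytical method rather than by a given sample's first and last eluting analytes (names hypothetical):

    ```python
    import numpy as np

    def normalize_sample_dependent(rt):
        """Conventional scaling: bounds come from the sample itself, so one
        highly retained analyte stretches the normalized space."""
        rt = np.asarray(rt, float)
        return (rt - rt.min()) / (rt.max() - rt.min())

    def normalize_wosel_like(rt, t_start, t_end):
        """WOSEL-style scaling: bounds are the whole separation space defined
        by the method (e.g., hold-up time to end of the programmed run)."""
        return (np.asarray(rt, float) - t_start) / (t_end - t_start)
    ```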

  20. Time-marching multi-grid seismic tomography

    NASA Astrophysics Data System (ADS)

    Tong, P.; Yang, D.; Liu, Q.

    2016-12-01

    From classic ray-based traveltime tomography to state-of-the-art full waveform inversion, the nonlinearity of seismic inverse problems makes a good starting model essential for preventing convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows can provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multiple grids in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.

  1. An Integrated Approach to Characterizing Bypassed Oil in Heterogeneous and Fractured Reservoirs Using Partitioning Tracers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhil Datta-Gupta

    2006-12-31

    We explore the use of efficient streamline-based simulation approaches for modeling partitioning interwell tracer tests in hydrocarbon reservoirs. Specifically, we utilize the unique features of streamline models to develop an efficient approach for interpretation and history matching of field tracer response. A critical aspect here is the underdetermined and highly ill-posed nature of the associated inverse problems. We have investigated the relative merits of the traditional history matching ('amplitude inversion') and a novel travel time inversion in terms of robustness of the method and convergence behavior of the solution. We show that the traditional amplitude inversion is orders of magnitude more non-linear and that its solution is likely to get trapped in a local minimum, leading to an inadequate history match. The proposed travel time inversion is shown to be extremely efficient and robust for practical field applications. The streamline approach is generalized to model water injection in naturally fractured reservoirs through the use of a dual media approach. The fractures and matrix are treated as separate continua that are connected through a transfer function, as in conventional finite difference simulators for modeling fractured systems. A detailed comparison with a commercial finite difference simulator shows very good agreement. Furthermore, an examination of the scaling behavior of the computation time indicates that the streamline approach is likely to result in significant savings for large-scale field applications. We also propose a novel approach to history matching finite-difference models that combines the advantage of the streamline models with the versatility of finite-difference simulation. In our approach, we utilize the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. The approach is very fast and avoids much of the subjective judgment and time-consuming trial-and-error associated with manual history matching. We demonstrate the power and utility of our approach using a synthetic example and two field examples. We have also explored the use of a finite difference reservoir simulator, UTCHEM, for field-scale design and optimization of partitioning interwell tracer tests. The finite-difference model allows us to include detailed physics associated with reactive tracer transport, particularly those related to transverse and cross-streamline mechanisms. We have investigated the potential use of downhole tracer samplers and also the use of natural tracers for the design of partitioning tracer tests. Finally, we discuss several alternative ways of using partitioning interwell tracer tests (PITTs) in oil fields for the calculation of oil saturation, swept pore volume and sweep efficiency, and assess the accuracy of such tests under a variety of reservoir conditions.

  2. Multi-Scale Long-Range Magnitude and Sign Correlations in Vertical Upward Oil-Gas-Water Three-Phase Flow

    NASA Astrophysics Data System (ADS)

    Zhao, An; Jin, Ning-de; Ren, Ying-yu; Zhu, Lei; Yang, Xia

    2016-01-01

    In this article we apply an approach to identify oil-gas-water three-phase flow patterns in a vertical upward 20-mm-inner-diameter pipe based on conductance fluctuation signals. We use the approach to analyse signals with long-range correlations by decomposing the signal increment series into magnitude and sign series and extracting their scaling properties. We find that the magnitude series relates to the nonlinear properties of the original time series, whereas the sign series relates to the linear properties. The research shows that oil-gas-water three-phase flows (slug flow, churn flow, bubble flow) can be classified by a combination of the scaling exponents of the magnitude and sign series. This study provides a new way of characterising the linear and nonlinear properties embedded in oil-gas-water three-phase flows.
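
    The decomposition itself is a one-liner per series; a sketch (the scaling exponents would then be estimated separately, e.g., by DFA):

    ```python
    import numpy as np

    def magnitude_sign_series(x):
        """Split the increment series of x into magnitude and sign series,
        whose scaling exponents capture nonlinear and linear correlation
        properties, respectively."""
        inc = np.diff(np.asarray(x, float))
        return np.abs(inc), np.sign(inc)
    ```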

  3. High resolution mapping of development in the wildland-urban interface using object based image extraction.

    PubMed

    Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J

    2016-10-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes while achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.

  4. High resolution mapping of development in the wildland-urban interface using object based image extraction

    USGS Publications Warehouse

    Caggiano, Michael D.; Tinkham, Wade T.; Hoffman, Chad; Cheng, Antony S.; Hawbaker, Todd J.

    2016-01-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m2) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes while achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.

  5. Order reduction for a model of marine bacteriophage evolution

    NASA Astrophysics Data System (ADS)

    Pagliarini, Silvia; Korobeinikov, Andrei

    2017-02-01

    A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of a model is highly desirable when handling it. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of the evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
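
    Schematically, in generic notation (not the Beretta-Kuang variables):

    ```latex
    % Slow-fast (singularly perturbed) system; \varepsilon \ll 1 is the ratio
    % of the fast to the slow time scale:
    \dot{x} = f(x, y), \qquad \varepsilon\,\dot{y} = g(x, y), \qquad 0 < \varepsilon \ll 1 .
    % Quasi-steady-state approximation: let \varepsilon \to 0, solve
    % g(x, y) = 0 for y = \varphi(x), and reduce the order of the model:
    \dot{x} = f\bigl(x, \varphi(x)\bigr).
    ```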

  6. Revealing the Link between Structural Relaxation and Dynamic Heterogeneity in Glass-Forming Liquids

    NASA Astrophysics Data System (ADS)

    Wang, Lijin; Xu, Ning; Wang, W. H.; Guan, Pengfei

    2018-03-01

    Despite the use of glasses for thousands of years, the nature of the glass transition is still mysterious. On approaching the glass transition, the growth of dynamic heterogeneity has long been thought to play a key role in explaining the abrupt slowdown of structural relaxation. However, it still remains elusive whether there is an underlying link between structural relaxation and dynamic heterogeneity. Here, we unravel the link by introducing a characteristic time scale hiding behind an identical dynamic heterogeneity for various model glass-forming liquids. We find that the time scale corresponds to the kinetic fragility of liquids. Moreover, it leads to scaling collapse of both the structural relaxation time and dynamic heterogeneity for all liquids studied, together with a characteristic temperature associated with the same dynamic heterogeneity. Our findings imply that studying the glass transition from the viewpoint of dynamic heterogeneity is more informative than expected.

  7. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
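
    For reference, generic nudging (Newtonian relaxation) adds a term of the form below, with τ the nudging time discussed above (notation generic, not the authors'):

    ```latex
    % Model state X relaxed toward driving fields X_drv with nudging time \tau:
    \frac{\partial X}{\partial t} = M(X) + \frac{X_{\mathrm{drv}} - X}{\tau} .
    % Indiscriminate nudging applies the relaxation at every grid point;
    % spectral nudging applies it only to the large-scale spectral modes of X.
    ```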

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
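
    The paper's validated models are more detailed, but the core trade-off can be captured by a toy cost model (all parameters hypothetical):

    ```python
    def wall_time_hours(n_jobs, t_job_h, n_cores, data_gb, bandwidth_gbps):
        """Toy cluster cost model: embarrassingly parallel compute time plus
        serialized data transfer from shared storage (the NFS bottleneck).
        Transfer dominates for short jobs and/or large datasets, which is
        where a data-local framework such as Hadoop pays off."""
        compute_h = n_jobs * t_job_h / n_cores
        transfer_h = data_gb * 8.0 / bandwidth_gbps / 3600.0
        return compute_h + transfer_h
    ```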

  9. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  10. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time-scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others in that it is physically inspired. The more transparent formulation allows the data to be interpreted at each step, and it is assumed that these physical models, based upon flight dynamics theory, will therefore facilitate certification for future real-life applications.

  11. Tidal dissipation in a viscoelastic planet

    NASA Technical Reports Server (NTRS)

    Ross, M.; Schubert, G.

    1986-01-01

    Tidal dissipation is examined using Maxwell, standard linear solid (SLS), and Kelvin-Voigt models, and viscosity parameters are derived from the models that yield the amount of dissipation previously calculated for a moon model with Q = 100 in a hypothetical orbit closer to the earth. The relevance of these models is then assessed for simulating planetary tidal responses. Viscosities of 10^14 and 10^18 Pa s for the Kelvin-Voigt and Maxwell rheologies, respectively, are needed to match the dissipation rate calculated using the Q approach with a quality factor Q = 100. The SLS model requires a short-time viscosity of 3 x 10^17 Pa s to match the Q = 100 dissipation rate, independent of the model's relaxation strength. Since Q = 100 is considered a representative value for the interiors of terrestrial planets, it is proposed that the derived viscosities should characterize planetary materials. However, it is shown that neither the Kelvin-Voigt nor the SLS model simulates the behavior of real planetary materials on long time scales. The Maxwell model, by contrast, behaves realistically on both long and short time scales. The inferred Maxwell viscosity, corresponding to the time scale of days, is several times smaller than the longer time scale (greater than or equal to 10^14 years) viscosity of the earth's mantle.
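
    As a hedged consistency check using the standard Maxwell rheology (a textbook relation, not quoted from the paper), the loss factor of a Maxwell element with rigidity mu and viscosity eta ties Q to the forcing frequency:

        % Maxwell element: Q ~ omega * tau_M, with tau_M the Maxwell time
        \[
          Q^{-1} \simeq \tan\delta = \frac{1}{\omega\,\tau_M},
          \qquad \tau_M = \frac{\eta}{\mu}
          \quad\Longrightarrow\quad
          \eta \simeq \frac{Q\,\mu}{\omega}.
        \]
        % With Q = 100, a tidal frequency omega ~ 1e-5 1/s (period of days)
        % and a rigidity mu ~ 1e11 Pa, this gives eta ~ 1e18 Pa s, the order
        % quoted above for the Maxwell rheology.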

  12. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
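
    A minimal, hypothetical sketch of the underlying idea (plain Python rather than the XL language the authors use): nodes carry a scale, within-scale edges encode topology, and between-scale "refinement" edges let information flow in both directions:

        class Node:
            def __init__(self, name, scale):
                self.name, self.scale = name, scale
                self.successors = []     # within-scale topology edges
                self.refines_to = []     # links to the next finer scale
                self.coarsens_to = None  # reverse link for micro -> macro flow

        def refine(coarse, fine_nodes):
            for f in fine_nodes:
                coarse.refines_to.append(f)
                f.coarsens_to = coarse

        plant = Node("plant", scale=0)
        axis = Node("axis", scale=1)
        organs = [Node("internode_%d" % i, scale=2) for i in range(3)]
        refine(plant, [axis])
        refine(axis, organs)

        # Reverse information flow: aggregate a fine-scale quantity upward.
        for o in organs:
            o.biomass = 1.0
        axis.biomass = sum(o.biomass for o in axis.refines_to)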

  13. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  14. General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae).

    PubMed

    Camacho, Ernesto Robayo; Chong, Juang-Horng

    We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance.

  15. Multi-scaling modelling in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.

    2007-12-01

    In recent years, a new wave of interest has brought complexity science into finance, where it may provide a guideline for understanding the mechanisms of financial markets, and researchers from different backgrounds have made increasing contributions, introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have also computed H(q) for data simulated from MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
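
    A compact sketch of the generalized Hurst exponent estimate via the scaling of q-th order absolute increments, <|X(t+tau) - X(t)|^q> ~ tau^(qH(q)); the range of lags and other details vary between studies and are chosen here only for illustration:

        import numpy as np

        def generalized_hurst(x, q=2.0, taus=range(1, 20)):
            x = np.asarray(x, dtype=float)
            logk = [np.log(np.mean(np.abs(x[t:] - x[:-t]) ** q)) for t in taus]
            slope = np.polyfit(np.log(list(taus)), logk, 1)[0]
            return slope / q   # since K_q(tau) ~ tau^(q H(q))

        # Brownian motion is uni-scaling: H(q) ~ 0.5 for all q.
        bm = np.cumsum(np.random.randn(20000))
        print(generalized_hurst(bm, q=1.0), generalized_hurst(bm, q=2.0))

    A q-dependent H(q), by contrast, is the multi-scaling signature the MSM models are designed to reproduce.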

  16. Solute-defect interactions in Al-Mg alloys from diffusive variational Gaussian calculations

    NASA Astrophysics Data System (ADS)

    Dontsova, E.; Rottler, J.; Sinclair, C. W.

    2014-11-01

    Resolving atomic-scale defect topologies and energetics with accurate atomistic interaction models provides access to the nonlinear phenomena inherent at atomic length and time scales. Coarse graining the dynamics of such simulations to look at the migration of, e.g., solute atoms, while retaining the rich atomic-scale detail required to properly describe defects, is a particular challenge. In this paper, we present an adaptation of the recently developed "diffusive molecular dynamics" model to describe the energetics and kinetics of binary alloys on diffusive time scales. The potential of the technique is illustrated by applying it to the classic problems of solute segregation to a planar boundary (stacking fault) and edge dislocation in the Al-Mg system. Our approach provides fully dynamical solutions in situations with an evolving energy landscape in a computationally efficient way, where atomistic kinetic Monte Carlo simulations are difficult or impractical to perform.

  17. Active Learning of Classification Models with Likert-Scale Feedback.

    PubMed

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
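
    A simplified sketch of the combination described above, with plain uncertainty sampling standing in for the paper's Bayesian expected-model-change criterion and sklearn's batch SVM standing in for the incremental solver; Likert confidence enters as a label weight, and all data and weight values here are hypothetical:

        import numpy as np
        from sklearn.svm import SVC

        def query_most_uncertain(clf, X_pool, labeled):
            margins = np.abs(clf.decision_function(X_pool))
            margins[list(labeled)] = np.inf      # never re-query labeled points
            return int(np.argmin(margins))

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        likert_weight = {1: 0.25, 2: 0.5, 3: 0.75, 4: 1.0}  # confidence -> weight

        labeled = [int(np.argmin(X.sum(1))), int(np.argmax(X.sum(1)))]  # one per class
        clf = SVC(kernel="linear")
        for _ in range(10):
            w = np.array([likert_weight[4]] * len(labeled))  # oracle fully confident
            clf.fit(X[labeled], y[labeled], sample_weight=w)
            labeled.append(query_most_uncertain(clf, X, labeled))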

  18. Active Learning of Classification Models with Likert-Scale Feedback

    PubMed Central

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone. PMID:28979827

  19. Sub-second thermoplastic forming of bulk metallic glasses by ultrasonic beating

    PubMed Central

    Ma, Jiang; Liang, Xiong; Wu, Xiaoyu; Liu, Zhiyuan; Gong, Feng

    2015-01-01

    This work proposes a novel thermoplastic forming approach, the ultrasonic beating forming (UBF) method, for bulk metallic glasses (BMGs). The rapid forming approach can complete the thermoplastic forming of BMGs in less than one second, avoiding time-dependent crystallization and oxidation to the greatest extent. The UBF method is also shown to be capable of fabricating structures with length scales ranging from the macro scale to the nano scale. Our results suggest a novel route for the thermoplastic forming of BMGs, with promising applications in the rapid fabrication of macro- to nano-scale products and devices. PMID:26644149

  20. Testing new approaches to carbonate system simulation at the reef scale: the ReefSam model first results, application to a question in reef morphology and future challenges.

    NASA Astrophysics Data System (ADS)

    Barrett, Samuel; Webster, Jody

    2016-04-01

    Numerical simulation of the stratigraphy and sedimentology of carbonate systems (carbonate forward stratigraphic modelling - CFSM) provides significant insight into both the physical nature of these systems and the processes which control their development. It also provides the opportunity to quantitatively test conceptual models concerning stratigraphy, sedimentology or geomorphology, and allows us to extend our knowledge either spatially (e.g. between bore holes) or temporally (forwards or backwards in time). The latter is especially important in determining the likely future development of carbonate systems, particularly regarding the effects of climate change. This application, by its nature, requires successful simulation of carbonate systems on short time scales and at high spatial resolutions. Previous modelling attempts have typically focused on scales of kilometers and kilo-years or greater (the scale of entire carbonate platforms), rather than on the scale of centuries or decades and tens to hundreds of meters (the scale of individual reefs). Previous work has identified limitations in common approaches to simulating important reef processes. We present a new CFSM, the Reef Sedimentary Accretion Model (ReefSAM), which is designed to test new approaches to simulating reef-scale processes, with the aim of better simulating the past and future development of coral reefs. Four major features have been tested: 1. A simulation of wave-based hydrodynamic energy with multiple simultaneous directions and intensities, including wave refraction, interaction, and lateral sheltering. 2. Sediment transport simulated as sediment being moved from cell to cell in an iterative fashion until complete deposition. 3. A coral growth model that considers local wave energy and the composition of the basement substrate (as well as depth). 4. A highly quantitative model-testing approach where dozens of output parameters describing the reef morphology and development are compared with observational data. Despite being a test-bed and a work in progress, ReefSAM was able to simulate the Holocene development of One Tree Reef in the Southern Great Barrier Reef (Australia) and improved upon previous modelling attempts in terms of both quantitative measures and qualitative outputs, such as the presence of previously un-simulated reef features. Given the success of the model in simulating the Holocene development of One Tree Reef, we used it to quantitatively explore the effect of basement substrate depth and morphology on reef maturity and lagoonal filling (as discussed by Purdy and Gischler 2005). Initial results show a number of non-linear relationships between basement substrate depth, lagoonal filling and the volume of sand produced on the reef rims and deposited in the lagoon. Lastly, further testing of the model has revealed new challenges which are likely to arise in any attempt at reef-scale simulation. Subtly different sets of energy direction and magnitude input parameters (different in each time step but with identical probability distributions across the entire model run) resulted in a wide range of quantitative model outputs. Time step length is a likely contributing factor, and the results of further testing to address this challenge will be presented.

  1. VO-ESD: a virtual observatory approach to describe the geomagnetic field temporal variations with application to Swarm data

    NASA Astrophysics Data System (ADS)

    Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Mandea, Mioara; Civet, François; Beucler, Éric

    2017-04-01

    A complete description of the main geomagnetic field temporal variation is crucial to understand dynamics in the core. This variation, termed secular variation (SV), is known with high accuracy at ground magnetic observatory locations. However, the description of its spatial variability is hampered by the globally uneven distribution of the observatories. For the past two decades a global coverage of the field changes has been provided by satellites. Their surveys of the geomagnetic field have been used to derive and improve global spherical harmonic (SH) models through strict data selection schemes that minimise external field contributions. But discrepancies remain between ground measurements and field predictions by these models. Indeed, the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose a modified Virtual Observatory (VO) approach by defining a globally homogeneous mesh of VOs at satellite altitude. With this approach we directly extract time series of the field and its temporal variation from satellite measurements, as is done at observatory locations. As satellite measurements are acquired at different altitudes, a correction for altitude is needed. Therefore, we apply an Equivalent Source Dipole (ESD) technique for each VO and each given time interval to reduce all measurements to a unique location, leading to time series similar to those available at ground magnetic observatories. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to measurements from the Swarm mission. For the first time, a 2.5 degree resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. The approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are also used to derive global SH models. Without regularization, these models describe well the secular trend of the magnetic field. The derivation of longer VO-ESD time series, as more data become available, will allow the study of features of the field's temporal variation such as geomagnetic jerks.

  2. Efficiency and cross-correlation in equity market during global financial crisis: Evidence from China

    NASA Astrophysics Data System (ADS)

    Ma, Pengcheng; Li, Daye; Li, Shuo

    2016-02-01

    Using one-minute high-frequency data of the Shanghai Composite Index (SHCI) and the Shenzhen Composite Index (SZCI) (2007-2008), we employ detrended fluctuation analysis (DFA) and detrended cross-correlation analysis (DCCA) with a rolling window approach to observe the evolution of market efficiency and cross-correlation in the pre-crisis and crisis periods. Considering the fat-tailed distribution of the return time series, a statistical test based on a shuffling method is conducted to verify the null hypothesis of no long-term dependence. Our empirical research displays three main findings. First, Shanghai equity market efficiency deteriorated while Shenzhen equity market efficiency improved with the advent of the financial crisis. Second, the highly positive dependence between the SHCI and SZCI varies with time scale. Third, the financial crisis saw a significant increase in dependence between the SHCI and SZCI at shorter time scales but no significant change at longer time scales, providing evidence of contagion and an absence of interdependence during the crisis.
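
    A compact detrended fluctuation analysis (DFA) sketch; the rolling-window refinement and the shuffling-based significance test used in the paper are omitted, and the window sizes are illustrative:

        import numpy as np

        def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
            y = np.cumsum(x - np.mean(x))               # profile of the series
            flucts = []
            for s in scales:
                f2 = []
                for i in range(y.size // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    f2.append(np.mean((seg - trend) ** 2))
                flucts.append(np.sqrt(np.mean(f2)))
            # F(s) ~ s^alpha; alpha ~ 0.5 marks an efficient (uncorrelated) market
            return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

        print(dfa_exponent(np.random.randn(5000)))      # close to 0.5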

  3. Flexible versus common technology to estimate economies of scale and scope in the water and sewerage industry: an application to England and Wales.

    PubMed

    Molinos-Senante, María; Maziotis, Alexandros

    2018-05-01

    The structure of the water industry varies across and within countries. Several studies have therefore been conducted to evaluate the presence of economies of scope and scale in the water industry, leading to inconclusive results. The lack of a common methodology has been identified as an important factor contributing to the divergent conclusions. This paper evaluates, for the first time, the presence of economies of scale and scope in the water industry using a flexible technology approach, integrating operational and exogenous variables of the water companies in the cost functions. The empirical application carried out for the English and Welsh water industry showed that the inclusion of exogenous variables accounts for significant differences in economies of scale and scope. Moreover, completely different results were obtained when the economies of scale and scope were estimated using the common and flexible technology methodological approaches. The findings of this study reveal the importance of using an appropriate methodology to support policy decision-making processes that promote sustainable urban water activities.
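
    For reference, the standard measures such cost-function studies report (textbook definitions, not the paper's estimates), with C(y) a cost function over outputs y_i such as water supply and sewerage:

        \[
          S = \Big( \sum_i \frac{\partial \ln C}{\partial \ln y_i} \Big)^{-1},
          \qquad
          SC = \frac{C(y_1,0) + C(0,y_2) - C(y_1,y_2)}{C(y_1,y_2)},
        \]
        % S > 1 indicates economies of scale, SC > 0 economies of scope; the
        % flexible-technology approach lets the parameters of C differ across
        % company types rather than imposing one common technology on all firms.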

  4. Rapid growing clay coatings to reduce the fire threat of furniture.

    PubMed

    Kim, Yeon Seok; Li, Yu-Chin; Pitts, William M; Werrel, Martin; Davis, Rick D

    2014-02-12

    Layer-by-layer (LbL) assembly coatings reduce the flammability of textiles and polyurethane foam but require extensive repetitive processing steps to produce the desired coating thickness and nanoparticle fire retardant content that translates into a fire retardant coating. Reported here is a new hybrid bi-layer (BL) approach to fabricate fire retardant coatings on polyurethane foam. Utilizing hydrogen bonding and electrostatic attraction along with the pH adjustment, a fast growing coating with significant fire retardant clay content was achieved. This hybrid BL coating exhibits significant fire performance improvement in both bench scale and real scale tests. Cone calorimetry bench scale tests show a 42% and 71% reduction in peak and average heat release rates, respectively. Real scale furniture mockups constructed using the hybrid LbL coating reduced the peak and average heat release rates by 53% and 63%, respectively. This is the first time that the fire safety in a real scale test has been reported for any LbL technology. This hybrid LbL coating is the fastest approach to develop an effective fire retardant coating for polyurethane foam.

  5. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    NASA Astrophysics Data System (ADS)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of the dynamics of complex systems, such as the Earth's climate, is one of the key challenges for contemporary science and society. The large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for a better understanding of the underlying mechanisms and a key to modeling and prediction of such systems. This study introduces and applies information-theoretic diagnostics to the phase and amplitude time series of different wavelet components of the observed data that characterize El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical, climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to improved prediction. Moreover, the statistical framework we apply here is suitable for directly inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.
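
    A sketch of the cross-scale analysis idea, assuming a complex Morlet filter for phase extraction and a histogram estimate of mutual information; the study's surrogate-based significance testing and conditional (directional) measures are omitted:

        import numpy as np

        def morlet_phase(x, period, w0=6.0):
            # approximate scale for a given centre period of the Morlet wavelet
            s = w0 * period / (2 * np.pi)
            t = np.arange(-4 * period, 4 * period + 1, dtype=float)
            wavelet = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2 * s**2))
            return np.angle(np.convolve(x, wavelet, mode="same"))

        def mutual_info(a, b, bins=16):
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        x = np.random.randn(4096)                # stand-in monthly index
        phi_a = morlet_phase(x, period=12.0)     # e.g. annual component
        phi_b = morlet_phase(x, period=30.0)     # e.g. quasi-biennial component
        print(mutual_info(phi_a, phi_b))         # ~0 for this white-noise input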

  6. Fast Atomic-Scale Chemical Imaging of Crystalline Materials and Dynamic Phase Transformations

    DOE PAGES

    Lu, Ping; Yuan, Ren Liang; Ihlefeld, Jon F.; ...

    2016-03-04

    Chemical imaging at the atomic scale provides a useful real-space approach to chemically investigate solid crystal structures, and has recently been demonstrated in aberration-corrected scanning transmission electron microscopy (STEM). Atomic-scale chemical imaging by STEM using energy-dispersive X-ray spectroscopy (EDS) offers easy data interpretation with a one-to-one correspondence between image and structure, but has a severe shortcoming due to the poor efficiency of X-ray generation and collection. As a result, it requires a long acquisition time, typically more than a few hundred seconds, limiting its potential applications. Here we describe the development of an atomic-scale STEM EDS chemical imaging technique that cuts the acquisition time to one or a few seconds, reducing it by more than 100 times. This method was demonstrated using LaAlO3 (LAO) as a model crystal. Applying this method to the study of a phase transformation induced by electron-beam radiation in a layered lithium transition-metal (TM) oxide, i.e., Li[Li0.2Ni0.2Mn0.6]O2 (LNMO), a cathode material for lithium-ion batteries, we obtained a time series of atomic-scale chemical images showing the transformation progressing by preferential jumping of Ni atoms from the TM layers into the Li layers. The new capability offers an opportunity for temporal, atomic-scale chemical mapping of crystal structures for the investigation of materials susceptible to electron irradiation, as well as of phase transformations and dynamics at the atomic scale.

  7. Modeling Bimolecular Reactive Transport With Mixing-Limitation: Theory and Application to Column Experiments

    NASA Astrophysics Data System (ADS)

    Ginn, T. R.

    2018-01-01

    The challenge of determining the mixing extent of solutions undergoing advective-dispersive-diffusive transport is well known. In particular, the reaction extent between displacing and displaced solutes depends on mixing at the pore scale, which is generally smaller than the continuum scale at which quantification relies on dispersive fluxes. Here a novel mobile-mobile mass transfer approach is developed to distinguish diffusive mixing from dispersive spreading in one-dimensional transport involving small-scale velocity variations with some correlation, such as occurs in hydrodynamic dispersion, in which short-range ballistic transports give rise to dispersed but not mixed segregation zones, termed here ballisticules. When considering transport of a single solution, this approach distinguishes self-diffusive mixing from spreading; in the case of displacement of one solution by another, each containing a participant reactant of an irreversible bimolecular reaction, it results in time-delayed diffusive mixing of reactants. The approach generates models for both kinetically controlled and equilibrium irreversible reaction cases, while honoring independently measured reaction rates and dispersivities. The mathematical solution for the equilibrium case is a simple analytical expression. The approach is applied, with good results, to published experimental data on bimolecular reactions for homogeneous porous media under postasymptotic dispersive conditions.
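
    A generic illustration (not the paper's ballisticule model) of why delayed mixing delays reaction: A and B start in two mobile compartments, exchange by first-order mass transfer, and react bimolecularly only where they coexist:

        import numpy as np
        from scipy.integrate import solve_ivp

        k_mix, k_rxn = 0.5, 1.0     # transfer and reaction constants (hypothetical)

        def rhs(t, y):
            a1, b1, a2, b2 = y      # A and B in compartments 1 and 2
            ex_a = k_mix * (a2 - a1)
            ex_b = k_mix * (b2 - b1)
            r1 = k_rxn * a1 * b1
            r2 = k_rxn * a2 * b2
            return [ex_a - r1, ex_b - r1, -ex_a - r2, -ex_b - r2]

        # A only in compartment 1, B only in compartment 2: segregated start.
        sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 1.0])
        produced = (2.0 - sol.y[:, -1].sum()) / 2.0   # moles of product formed
        print(produced)   # lags the perfectly premixed case at early times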

  8. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the face, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach has high robustness and fast speed, with wide application prospects in human-computer interaction, visual telephony, etc.
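
    A sketch of the level-1 step only (a skin-likelihood mask from hue/saturation plus normalized red/green); the numeric thresholds are placeholders, not the paper's values:

        import numpy as np
        from matplotlib.colors import rgb_to_hsv

        img = np.random.rand(64, 64, 3)    # stand-in RGB image, values in [0, 1]
        hsv = rgb_to_hsv(img)
        rgb_norm = img / np.clip(img.sum(axis=-1, keepdims=True), 1e-9, None)

        # Placeholder thresholds: low hue, moderate saturation, reddish pixels.
        mask = ((hsv[..., 0] < 0.14) & (hsv[..., 1] > 0.2)
                & (rgb_norm[..., 0] > 0.36) & (rgb_norm[..., 1] > 0.28))

    Connected regions of the mask would then be passed to the scale-independent eye model in level 2.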

  9. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
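
    A minimal sketch of the compression mechanics, with sklearn's randomized_svd standing in for the authors' implementation; the random matrix below only illustrates shapes and memory savings, since a real MRF dictionary is highly correlated and therefore well captured by a small rank k:

        import numpy as np
        from sklearn.utils.extmath import randomized_svd

        D = np.random.randn(50_000, 1_000)   # dictionary: entries x time points (stand-in)
        k = 25
        U, S, Vt = randomized_svd(D, n_components=k, random_state=0)

        D_c = D @ Vt.T                       # compressed dictionary, 50_000 x k
        print(D.nbytes / D_c.nbytes)         # memory ratio: 1000 / k = 40x here

        signal = D[123]                      # a measured fingerprint (stand-in)
        scores = D_c @ (Vt @ signal)         # matching via rank-k inner products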

  10. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging via illuminating specimens with a separate thin sheet of laser. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of low power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.

  11. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
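
    A compact version of the Kernel-Adatron update at the core of the approach (the evolutionary layer the paper adds on top, and the bias term, are omitted here):

        import numpy as np

        def kernel_adatron(K, y, lr=0.1, epochs=200):
            """K: kernel matrix; y: labels in {-1, +1}. Returns multipliers."""
            alpha = np.zeros_like(y, dtype=float)
            for _ in range(epochs):
                margins = y * (K @ (alpha * y))
                alpha = np.maximum(alpha + lr * (1.0 - margins), 0.0)  # clip at 0
            return alpha

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 2))
        y = np.sign(X[:, 0] + X[:, 1])
        K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))   # RBF kernel
        alpha = kernel_adatron(K, y)
        print((np.sign(K @ (alpha * y)) == y).mean())        # training accuracy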

  12. A 100,000 Scale Factor Radar Range.

    PubMed

    Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser

    2017-12-19

    The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000 and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
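
    The underlying electromagnetic scaling argument (standard, and consistent with the near-infrared choice above, though the numbers are illustrative): shrinking the geometry by a factor s leaves Maxwell's equations invariant if the wavelength shrinks by s, and the cross section, being an area, scales as s^2:

        \[
          \lambda_{\text{model}} = \frac{\lambda_{\text{full}}}{s},
          \qquad
          \sigma_{\text{full}} = s^{2}\,\sigma_{\text{model}},
          \qquad s = 10^{5}.
        \]
        % Example: a 3 GHz radar (lambda = 10 cm) maps to 10 cm / 1e5 = 1 micron,
        % i.e. a near-infrared laboratory source.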

  13. Earth Observation and Indicators Pertaining to Determinants of Health- An Approach to Support Local Scale Characterization of Environmental Determinants of Vector-Borne Diseases

    NASA Astrophysics Data System (ADS)

    Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Pilippe

    2016-08-01

    Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at local scale and for large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, the EO image analysis methods commonly used to estimate EVDs are time and resource consuming. Moreover, variations in microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we describe DoH and EVDs, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at local scale together with its challenges; finally, we propose an approach based on EO images to estimate, at local scale, indicators pertaining to EVDs of vector-borne diseases.

  14. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas

    2009-01-01

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to 30k cores, producing 30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.
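
    The reaction-field pair energy in its standard form (as implemented in common MD codes; quoted here as background, not from the paper), which treats the medium beyond the cutoff r_c as a dielectric continuum of permittivity eps_rf:

        \[
          U_{ij}(r) = \frac{q_i q_j}{4\pi\varepsilon_0}
          \left( \frac{1}{r} + k_{rf}\, r^{2} - c_{rf} \right),
          \quad
          k_{rf} = \frac{\varepsilon_{rf} - 1}{(2\varepsilon_{rf} + 1)\, r_c^{3}},
          \quad
          c_{rf} = \frac{1}{r_c} + k_{rf}\, r_c^{2},
        \]
        % so the potential vanishes at the cutoff and no global communication
        % is needed, which is what makes the method scale to many cores.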

  15. Scaling of Multimillion-Atom Biological Molecular Dynamics Simulation on a Petascale Supercomputer.

    PubMed

    Schulz, Roland; Lindner, Benjamin; Petridis, Loukas; Smith, Jeremy C

    2009-10-13

    A strategy is described for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers. The strategy is developed using benchmark systems of particular interest to bioenergy research, comprising models of cellulose and lignocellulosic biomass in an aqueous solution. The approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald method. With RF, three million- and five million-atom biological systems scale well up to ∼30k cores, producing ∼30 ns/day. Atomistic simulations of very large systems for time scales approaching the microsecond would, therefore, appear now to be within reach.

  16. Mean-cluster approach indicates cell sorting time scales are determined by collective dynamics

    NASA Astrophysics Data System (ADS)

    Beatrici, Carine P.; de Almeida, Rita M. C.; Brunnet, Leonardo G.

    2017-03-01

    Cell migration is essential to cell segregation, playing a central role in tissue formation, wound healing, and tumor evolution. Considering random mixtures of two cell types, it is still not clear which cell characteristics define clustering time scales. The mass of diffusing clusters merging with one another is expected to grow as t^(d/(d+2)) when the diffusion constant scales with the inverse of the cluster mass. Cell segregation experiments deviate from that behavior. Explanations could arise from specific microscopic mechanisms or from collective effects typical of active matter. Here we consider a power law connecting the diffusion constant and cluster mass to propose an analytic approach to modeling cell segregation in which we explicitly take into account finite-size corrections. The results are compared with active matter model simulations and experiments available in the literature. To investigate the role played by different mechanisms we considered different hypotheses describing cell-cell interaction: the differential adhesion hypothesis and the different velocities hypothesis. We find that the simulations yield normal diffusion for long time intervals. Analytic and simulation results show that (i) cluster evolution clearly tends to a scaling regime, disrupted only at finite-size limits; (ii) cluster diffusion is greatly enhanced by cell collective behavior, such that for a high enough tendency to follow the neighbors, cluster diffusion may become independent of cluster size; (iii) the scaling exponent for cluster growth depends only on the mass-diffusion relation, not on the detailed local segregation mechanism. These results apply to active matter systems in general and, in particular, the mechanisms found underlying the increase in cell sorting speed certainly have deep implications in biological evolution as a selection mechanism.
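
    A hedged reconstruction of the standard argument behind the growth law quoted above: assume cluster diffusivity D proportional to M^(-gamma) and a fixed overall density, so the typical spacing between clusters of mass M is L proportional to M^(1/d); the time for neighbouring clusters to meet and merge is then

        \[
          t \sim \frac{L^{2}}{D} \propto M^{2/d}\, M^{\gamma}
          \quad\Longrightarrow\quad
          M(t) \propto t^{\, d/(2 + \gamma d)} .
        \]
        % gamma = 1 recovers M ~ t^(d/(d+2)) as quoted above; gamma = 0
        % (size-independent diffusion, the strongly collective limit) gives
        % the faster growth M ~ t^(d/2).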

  17. From Points to Patterns - Functional Relations between Groundwater Connectivity and Catchment-scale Streamflow Response

    NASA Astrophysics Data System (ADS)

    Rinderer, M.; McGlynn, B. L.; van Meerveld, I. H. J.

    2016-12-01

    Groundwater measurements can help us to improve our understanding of runoff generation at the catchment-scale but typically only provide point-scale data. These measurements, therefore, need to be interpolated or upscaled in order to obtain information about catchment scale groundwater dynamics. Our approach used data from 51 spatially distributed groundwater monitoring sites in a Swiss pre-alpine catchment and time series clustering to define six groundwater response clusters. Each of the clusters was characterized by distinctly different site characteristics (i.e., Topographic Wetness Index and curvature), which allowed us to assign all unmonitored locations to one of these clusters. Time series modeling and the definition of response thresholds (i.e., the depth of more transmissive soil layers) allowed us to derive maps of the spatial distribution of active (i.e., responding) locations across the catchment at 15 min time intervals. Connectivity between all active locations and the stream network was determined using a graph theory approach. The extent of the active and connected areas differed during events and suggests that not all active locations directly contributed to streamflow. Gate keeper sites prevented connectivity of upslope locations to the channel network. Streamflow dynamics at the catchment outlet were correlated to catchment average connectivity dynamics. In a sensitivity analysis we tested six different groundwater levels for a site to be considered "active", which showed that the definition of the threshold did not significantly influence the conclusions drawn from our analysis. This study is the first one to derive patterns of groundwater dynamics based on empirical data (rather than interpolation) and provides insight into the spatio-temporal evolution of the active and connected runoff source areas at the catchment-scale that is critical to understanding the dynamics of water quantity and quality in streams.
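
    A minimal sketch of the graph-theoretic connectivity step, assuming a hypothetical site layout, adjacency and activity threshold: sites are nodes, and a site counts as connected only if a path of currently active sites reaches the stream:

        from collections import deque

        def connected_to_stream(adj, active, stream_nodes):
            """Active nodes reachable from the stream through active nodes only;
            an inactive 'gate keeper' site cuts off everything upslope of it."""
            seen = set()
            queue = deque(n for n in stream_nodes if n in active)
            seen.update(queue)
            while queue:
                n = queue.popleft()
                for m in adj[n]:
                    if m in active and m not in seen:
                        seen.add(m)
                        queue.append(m)
            return seen

        adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # hillslope chain, 0 at stream
        gw_level = {0: -0.2, 1: -0.9, 2: -0.1, 3: -0.3}
        active = {n for n, h in gw_level.items() if h > -0.5}  # response threshold
        print(connected_to_stream(adj, active, [0]))  # {0}: site 1 gates off 2 and 3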

  18. The nitrate response of a lowland catchment and groundwater travel times

    NASA Astrophysics Data System (ADS)

    van der Velde, Ype; Rozemeijer, Joachim; de Rooij, Gerrit; van Geer, Frans

    2010-05-01

    Intensive agriculture in lowland catchments causes eutrophication of downstream waters. To determine effective measures to reduce the nutrient loads from upstream lowland catchments, we need to understand the origin of long-term and daily variations in surface water nutrient concentrations. Surface water concentrations are often linked to travel time distributions of water passing through the saturated and unsaturated soil of the contributing catchment. This distribution represents the contact time over which sorption, desorption and degradation take place. However, travel time distributions are strongly influenced by processes like tube drain flow, overland flow and the dynamics of draining ditches and streams, and therefore exhibit strong daily and seasonal variations. The study we will present is situated in the 6.6 km2 Hupsel brook catchment in The Netherlands. In this catchment nitrate and chloride concentrations have been intensively monitored for the past 26 years under steadily decreasing agricultural inputs. We described the complicated dynamics of subsurface water fluxes, as streams, ditches and tube drains locally switch between active and passive depending on the ambient groundwater level, with a groundwater model of high spatial and temporal resolution. A transient particle tracking approach is used to derive a unique catchment-scale travel time distribution for each day during the 26-year model period. These transient travel time distributions are not smooth, but strongly spiked, reflecting the contribution of past rainfall events to the current discharge. We will show that a catchment-scale mass response function approach that describes only catchment-scale mixing and degradation suffices to accurately reproduce observed chloride and nitrate surface water concentrations, as long as the mass response functions include the dynamics of travel time distributions caused by the highly variable connectivity of the surface water network.
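
    A simplified, time-invariant sketch of the mass response function idea: stream concentration as the convolution of the input history with a travel time distribution, with first-order degradation en route. The paper's distributions are transient and strongly spiked; the exponential here is purely illustrative:

        import numpy as np

        dt, n = 1.0, 400                        # daily steps (hypothetical)
        tau = np.arange(n) * dt
        h = np.exp(-tau / 100.0)
        h /= h.sum()                            # travel time distribution, mean ~100 d
        k = 0.005                               # first-order degradation rate (1/d)

        c_in = np.where(tau < 200, 1.0, 0.3)    # step reduction in agricultural input
        response = h * np.exp(-k * tau)         # mass response function
        c_out = np.convolve(c_in, response)[:n] # stream concentration lags the input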

  19. Elementary dispersion analysis of some mimetic discretizations on triangular C-grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow

    2017-02-01

    Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales, indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of full primitive equations when these are excited by nonlinear dynamics.

  20. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information extraction from remote sensing data, especially land cover, can be obtained by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information, but it is highly influenced by the subjectivity and knowledge of the interpreter, and the process takes time. Digital classification can be done in several ways, depending on the chosen mapping approach and the assumptions on the data distribution. This study compared several classification methods for several data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 map scale for Landsat, 1:25,000 for SPOT and 1:5,000 for orthophotos, but using visual interpretation to retrieve the information. A maximum likelihood classifier (MLC), a pixel-based parametric approach, was applied to these data, as was an artificial neural network classifier, a pixel-based non-parametric approach. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. The classification was applied to each data source in order to recognize the patterns and to assess the consistency of the land cover maps produced from each data source. Furthermore, the study analyses the benefits and limitations of the use of each method.

  1. Evaluation of ultrasound based sterilization approaches in terms of shelf life and quality parameters of fruit and vegetable juices.

    PubMed

    Khandpur, Paramjeet; Gogate, Parag R

    2016-03-01

    The present work evaluates the performance of ultrasound-based sterilization approaches for the processing of different fruit and vegetable juices, in terms of microbial growth and changes in the quality parameters during storage. A comparison with conventional thermal processing has also been presented. A novel approach based on the combination of ultrasound with ultraviolet irradiation and a crude extract of essential oil from orange peels has been used for the first time. The microbial growth (total bacteria and yeast content) in the juices during subsequent storage and the safety for human consumption, along with the changes in the quality parameters (Brix, titratable acidity, pH, ORP, salt, conductivity, TSS and TDS), have been investigated in detail. The optimized ultrasound parameters for juice sterilization were established as an ultrasound power of 100 W and a treatment time of 15 min for constant-frequency operation (20 kHz). More than a 5 log reduction was achieved using the novel combined approaches based on ultrasound. The treated juices also showed lower microbial growth and improved quality characteristics as compared to the thermally processed juice. Scale-up studies were also performed, using spinach juice as the test sample with processing at 5 L volume for the first time. The ultrasound-treated juice satisfied the microbiological and physiochemical safety limits under refrigerated storage conditions for 20 days for the large-scale processing. Overall, the present work conclusively established the usefulness of combined treatment approaches based on ultrasound for maintaining the microbiological safety of beverages, with enhanced shelf life and excellent quality parameters as compared to the untreated and thermally processed juices. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. At the Limits of Criticality-Based Quantum Metrology: Apparent Super-Heisenberg Scaling Revisited

    NASA Astrophysics Data System (ADS)

    Rams, Marek M.; Sierant, Piotr; Dutta, Omyoti; Horodecki, Paweł; Zakrzewski, Jakub

    2018-04-01

    We address the question of whether the super-Heisenberg scaling for quantum estimation is indeed realizable. We unify the results of two approaches. In the first one, the original system is compared with its copy rotated by the parameter-dependent dynamics. If the parameter is coupled to the one-body part of the Hamiltonian, the precision of its estimation is known to scale at most as N^-1 (Heisenberg scaling) in terms of the number of elementary subsystems used, N. The second approach compares the overlap between the ground states of the parameter-dependent Hamiltonian in critical systems, often leading to an apparent super-Heisenberg scaling. However, we point out that if one takes into account the scaling of time needed to perform the necessary operations, i.e., ensuring adiabaticity of the evolution, the Heisenberg limit given by the rotation scenario is recovered. We illustrate the general theory on a ferromagnetic Heisenberg spin chain example and show that it exhibits such super-Heisenberg scaling of ground-state fidelity around the critical value of the parameter (magnetic field) governing the one-body part of the Hamiltonian. Even an elementary estimator represented by a single-site magnetization already outperforms the Heisenberg behavior, providing the N^-1.5 scaling. In this case, Fisher information sets the ultimate scaling as N^-1.75, which can be saturated by measuring magnetization on all sites simultaneously. We discuss universal scaling predictions of the estimation precision offered by such observables, both at zero and finite temperatures, and support them with numerical simulations in the model. We provide an experimental proposal of realization of the considered model via mapping the system to ultracold bosons in a periodically shaken optical lattice. We explicitly derive that the Heisenberg limit is recovered when the time needed for preparation of the quantum states involved is taken into account.

  3. Neutron Spectrometer Prospecting During the Mojave Volatiles Project Analog Field Test

    NASA Technical Reports Server (NTRS)

    Elphic, R. C.; Heldmann, J. L.; Colaprete, A.; Hunt, D. R.; Deans, M. C.; Lim, D. S.; Foil, G.; Fong, T.

    2015-01-01

    We know there are volatiles sequestered at the poles of the Moon. While we have evidence of water ice and a number of other compounds based on remote sensing, the detailed distribution, and physical and chemical form are largely unknown. Additional orbital studies of lunar polar volatiles may yield further insights, but the most important next step is to use landed assets to fully characterize the volatile composition and distribution at scales of tens to hundreds of meters. To achieve this range of scales, mobility is needed. Because of the proximity of the Moon, near real-time operation of the surface assets is possible, with an associated reduction in risk and cost. This concept of operations is very different from that of rovers on Mars, and new operational approaches are required to carry out such real-time robotic exploration. The Mojave Volatiles Project (MVP) is a Moon-Mars Analog Mission Activities (MMAMA) program effort aimed at (1) determining effective approaches to operating a real-time but short-duration lunar surface robotic mission, and (2) performing prospecting science in a natural setting, as a test of these approaches. Here we describe some results from the first such test, carried out in the Mojave Desert between 16 and 24 October 2014. The test site was an alluvial fan just E of the Soda Mountains, SW of Baker, California. This site contains desert pavements, ranging from late Pleistocene to early Holocene in age. These pavements are undergoing dissection by the ongoing development of washes. A principal objective was to determine the hydration state of different types of desert pavement and bare ground features. The mobility element of the test was provided by the KREX-2 rover, designed and operated by the Intelligent Robotics Group at NASA Ames Research Center.

  4. Posterior Bilateral Intermuscular Approach for Upper Cervical Spine Injuries.

    PubMed

    Xu, Yong; Xiong, Wei; Han, Sung I I; Fang, Zhong; Li, Feng

    2017-08-01

    To investigate a novel intermuscular surgical approach for posterior upper cervical spine fixation. Twenty-three healthy volunteers underwent magnetic resonance imaging. Using the magnetic resonance imaging scans in transverse view at the level of the lower edge of the atlas, the distances from the posterior midline to the lateral margin of trapezius, to the medial margin of splenius capitis, and to the midline of semispinalis capitis were recorded. The angle between the posterior midline and the line crossing the lateral margin of trapezius and the middle point of the ipsilateral pedicles was also measured. From October 2009 to May 2013, 12 patients with upper cervical spine injuries were operated on via the bilateral intermuscular approach. The time required for surgery, blood loss, and pre- and postoperative visual analogue scale scores were analyzed. The average O-T distance was 39.2 ± 7.5 mm, and the angle between the approach and the posterior midline was 33.2 ± 8.4°. The surgical time was 78.3 ± 22.5 minutes (45-140 minutes), and the mean intraoperative blood loss was 87.5 ± 44.2 mL (30-200 mL). Preoperative and postoperative visual analogue scale scores were 6.4 ± 0.8 and 1.8 ± 0.7, respectively. The average follow-up time was 19.7 ± 11.5 months (9-48 months). The posterior bilateral intermuscular approach for upper cervical spine injuries is a valid alternative for Hangman's fractures type I, type II, and type IIa according to the Levine and Edwards classification, as well as for atlantoaxial subluxation caused by upper cervical spine trauma. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. High resolution fossil fuel combustion CO2 emission fluxes for the United States.

    PubMed

    Gurney, Kevin R; Mendoza, Daniel L; Zhou, Yuyu; Fischer, Marc L; Miller, Chris C; Geethakumar, Sarath; de la Rue du Can, Stephane

    2009-07-15

    Quantification of fossil fuel CO2 emissions at fine space and time resolution is emerging as a critical need in carbon cycle and climate change research. As atmospheric CO2 measurements expand with the advent of a dedicated remote sensing platform and denser in situ measurements, the ability to close the carbon budget at spatial scales of approximately 100 km2 and daily time scales requires fossil fuel CO2 inventories at commensurate resolution. Additionally, the growing interest in U.S. climate change policy measures is best served by emissions estimates that are tied to the driving processes in space and time. Here we introduce a high resolution data product (the "Vulcan" inventory: www.purdue.edu/eas/carbon/vulcan/) that has quantified fossil fuel CO2 emissions for the contiguous U.S. at spatial scales less than 100 km2 and temporal scales as small as hours. This data product, completed for the year 2002, includes detail on combustion technology and 48 fuel types through all sectors of the U.S. economy. The Vulcan inventory is built from decades of local/regional air pollution monitoring and complements these data with census, traffic, and digital road data sets. The Vulcan inventory shows excellent agreement with national-level Department of Energy inventories, despite the different approach taken by the DOE to quantify U.S. fossil fuel CO2 emissions. Comparison to the global 1° x 1° fossil fuel CO2 inventory, used widely by the carbon cycle and climate change community prior to the construction of the Vulcan inventory, highlights the space/time biases inherent in the population-based approach.
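
    To illustrate the kind of space/time aggregation such an inventory supports, the sketch below bins hypothetical point-source emissions onto a 0.1° grid (roughly the <100 km2 scale cited above) for a single hour. All names and values are illustrative assumptions, not part of the Vulcan data product:

        import numpy as np

        # Hypothetical point sources: longitude, latitude, kgC emitted in one hour
        lon = np.array([-87.65, -87.90, -87.70])
        lat = np.array([41.85, 41.95, 41.88])
        kg_c = np.array([1200.0, 800.0, 450.0])

        # 0.1-degree cells over a small window (~85 km2 per cell at this latitude)
        lon_edges = np.arange(-88.0, -87.39, 0.1)
        lat_edges = np.arange(41.7, 42.11, 0.1)

        grid, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges],
                                    weights=kg_c)
        print(grid.T)  # kgC per cell for this hour; rows follow latitude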

  6. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

    Background: Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results: We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3], which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion: Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554
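
    A minimal sketch of the hybrid idea under assumptions of ours (a two-species birth-death network, not one of the paper's examples): the low-copy mRNA remains a discrete jump process simulated exactly, while the abundant protein is propagated by a chemical Langevin (diffusion) approximation between jumps.

        import numpy as np

        rng = np.random.default_rng(0)

        k_m, g_m = 2.0, 0.1    # mRNA birth/degradation rates (jump part)
        k_p, g_p = 10.0, 0.05  # protein production/degradation (continuous part)

        def hybrid_step(m, p, dt_max=0.01):
            """Advance to the next mRNA jump, integrating the protein SDE meanwhile."""
            a_birth, a_death = k_m, g_m * m
            a0 = a_birth + a_death
            tau = rng.exponential(1.0 / a0)       # exact jump time (Gillespie)
            t = 0.0
            while t < tau:                        # Euler-Maruyama for the protein
                dt = min(dt_max, tau - t)
                drift = k_p * m - g_p * p
                noise = np.sqrt(max(k_p * m + g_p * p, 0.0) * dt)
                p = max(p + drift * dt + noise * rng.standard_normal(), 0.0)
                t += dt
            m = m + 1 if rng.random() < a_birth / a0 else m - 1
            return m, p, tau

        m, p, t = 5, 500.0, 0.0
        while t < 100.0:
            m, p, tau = hybrid_step(m, p)
            t += tau
        print(m, round(p, 1))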

  7. Scalable Performance Measurement and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
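
    A hedged sketch of the wavelet-compression idea (using PyWavelets as a stand-in; the dissertation's own implementation lives in Libra): decompose a per-timestep load signal, discard small coefficients, and reconstruct.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        # Stand-in for one process's time-varying load measurements
        load = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)

        coeffs = pywt.wavedec(load, 'db4', level=5)          # multi-level DWT
        thresh = 0.1 * max(np.abs(c).max() for c in coeffs)  # crude global threshold
        sparse = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]
        recon = pywt.waverec(sparse, 'db4')

        kept = sum(int(np.count_nonzero(c)) for c in sparse)
        total = sum(c.size for c in coeffs)
        print(f"kept {kept}/{total} coefficients, "
              f"max error {np.abs(recon[:load.size] - load).max():.3f}")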

  8. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion

    PubMed Central

    Gautestad, Arild O.

    2012-01-01

    Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, one that accounts for both the effects of spatial memory and scale-free space use, and puts them into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox - arising from a composite Brownian motion consisting of a superposition of independent movement processes at different scales - may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated. PMID:22456456

  9. Using lumped modelling for providing simple metrics and associated uncertainties of catchment response to agricultural-derived nitrates pollutions

    NASA Astrophysics Data System (ADS)

    RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.

    2013-12-01

    The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climatic or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales, from sub-daily to several decades [Kirchner et al., 2001], makes their deconvolution very difficult. A large range of modeling approaches intend to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need to use models that are able to simulate the observed variability of river signatures at different time scales, while being as parsimonious as possible. The lumped model ETNA (modified from [Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity, and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, are underlain by the same granitic bedrock yet display contrasting chemical signatures. The model was able to simulate the two contrasting observed patterns in stream and groundwater, for both hydrology and chemistry, at seasonal and pluri-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960. The output variables of the model were used to compute the nitrate residence time in both catchments. We used the Generalized Likelihood Uncertainty Estimation (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating simultaneously the two paired catchments with the two outlets' time series of stream flow and nitrate concentrations. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. This approach therefore provided a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany: II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.
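
    For readers unfamiliar with GLUE, a minimal sketch (with a toy single-linear-store model and thresholds of ours, far simpler than ETNA): sample parameter sets, score each against observations with an informal likelihood, and keep the "behavioral" sets as the uncertainty envelope.

        import numpy as np

        rng = np.random.default_rng(2)

        def linear_store(k, rain):
            """Toy single linear store: outflow q = k * storage."""
            s, q = 0.0, np.empty_like(rain)
            for i, r in enumerate(rain):
                s += r
                q[i] = k * s
                s -= q[i]
            return q

        rain = rng.exponential(2.0, 200)
        q_obs = linear_store(0.3, rain) + rng.normal(0.0, 0.05, 200)  # synthetic obs

        def nse(sim):  # Nash-Sutcliffe efficiency as the informal likelihood
            return 1 - np.sum((sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

        samples = rng.uniform(0.05, 0.9, 5000)  # Monte Carlo over the parameter k
        scores = np.array([nse(linear_store(k, rain)) for k in samples])
        behavioral = samples[scores > 0.7]      # informal acceptance threshold
        print(f"{behavioral.size} behavioral sets, "
              f"k in [{behavioral.min():.2f}, {behavioral.max():.2f}]")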

  10. Reducing Error Bars through the Intercalibration of Radioisotopic and Astrochronologic Time Scales for the Cenomanian/Turonian Boundary Interval, Western Interior Basin, USA

    NASA Astrophysics Data System (ADS)

    Meyers, S. R.; Siewert, S. E.; Singer, B. S.; Sageman, B. B.; Condon, D. J.; Obradovich, J. D.; Jicha, B.; Sawyer, D. A.

    2010-12-01

    We develop a new intercalibrated astrochronologic and radioisotopic time scale for the Cenomanian/Turonian (C/T) boundary interval near the GSSP in Colorado, where orbitally-influenced rhythmic strata host bentonites that contain sanidine and zircon suitable for 40Ar/39Ar and U-Pb dating. This provides a rare opportunity to directly intercalibrate two independent radioisotopic chronometers against an astrochronologic age model. We present paired 40Ar/39Ar and U-Pb ages from four bentonites spanning the Vascoceras diartianum to Pseudaspidoceras flexuosum biozones, utilizing both newly collected material and legacy sanidine samples of Obradovich (1993). Full 2σ uncertainties (decay constant, standard age, analytical sources) for the 40Ar/39Ar ages, using a weighted mean of 33-103 concordant age determinations and an age of 28.201 Ma for Fish Canyon sanidine (FCs), range from ±0.15 to 0.19 Ma, with ages from 93.67 to 94.43 Ma. The traditional FCs age of 28.02 Ma yields ages from 93.04 to 93.78 Ma with full uncertainties of ±1.58 Ma. Using the ET535 tracer, single-zircon CA-TIMS 206Pb/238U ages determined from each bentonite record a range of ages (up to 2.1 Ma); however, in three of the four bentonites the youngest single-crystal ages are statistically indistinguishable from the 40Ar/39Ar ages calculated relative to 28.201 Ma FCs, supporting this calibration. Using the new radioisotopic data and published astrochronology (Sageman et al., 2006) we develop an integrated C/T boundary time scale using a Bayesian statistical approach that builds upon the strength of each geochronologic method. Whereas the radioisotopic data provide an age with a well-defined uncertainty for each bentonite, the orbital time scale yields a more highly resolved estimate of the duration between stratigraphic horizons, including the radioisotopically dated beds. The Bayesian algorithm yields a C/T time scale that is statistically compatible with the astrochronologic and radioisotopic data, but with smaller uncertainty than either method could achieve alone. The results firmly anchor the floating orbital time scale and yield astronomically-recalibrated radioisotopic ages with full uncertainties that approach the EARTHTIME goal of permil resolution.

  11. Confinement time exceeding one second for a toroidal electron plasma.

    PubMed

    Marler, J P; Stoneking, M R

    2008-04-18

    Nearly steady-state electron plasmas are trapped in a toroidal magnetic field for the first time. We report the first results from a new toroidal electron plasma experiment, the Lawrence Non-neutral Torus II, in which electron densities on the order of 10^7 cm^-3 are trapped in a 270-degree toroidal arc (670 G toroidal magnetic field) by application of trapping potentials to segments of a conducting shell. The total charge inferred from measurements of the frequency of the m=1 diocotron mode is observed to decay on a 3 s time scale, a time scale that approaches the predicted limit due to magnetic pumping transport. Three seconds represents approximately 10^5 periods of the lowest-frequency plasma mode, indicating that nearly steady-state conditions are achieved.

  12. Conservation of northern bobwhite on private lands in Georgia, USA under uncertainty about landscape-level habitat effects

    USGS Publications Warehouse

    Howell, J.E.; Moore, C.T.; Conroy, M.J.; Hamrick, R.G.; Cooper, R.J.; Thackston, R.E.; Carroll, J.P.

    2009-01-01

    Large-scale habitat enhancement programs for birds are becoming more widespread; however, most lack monitoring to resolve uncertainties and enhance program impact over time. Georgia's Bobwhite Quail Initiative (BQI) is a competitive, proposal-based system that provides incentives to landowners to establish habitat for northern bobwhites (Colinus virginianus). Using data from monitoring conducted in the program's first years (1999-2001), we developed alternative hierarchical models to predict bobwhite abundance in response to program habitat modifications on local and regional scales. Effects of habitat and habitat management on bobwhite population response varied among geographical scales, but high measurement variability rendered the specific nature of these scaled effects equivocal. Under some models, BQI had positive impact at both local and farm scales (1 and 9 km2), particularly when practice acres were clustered, whereas other credible models indicated that bird response did not depend on spatial arrangement of practices. Thus, uncertainty about landscape-level effects of management presents a challenge to program managers who must decide which proposals to accept. We demonstrate that optimal selection decisions can be made despite this uncertainty and that uncertainty can be reduced over time, with consequent improvement in management efficacy. However, such an adaptive approach to BQI program implementation would require the reestablishment of monitoring of bobwhite abundance, an effort for which funding was discontinued in 2002. For landscape-level conservation programs generally, our approach demonstrates the value in assessing multiple scales of impact of habitat modification programs, and it reveals the utility of addressing management uncertainty through multiple decision models and system monitoring.

  13. GIS-Based Sub-Basin Scale Identification of Dominant Runoff Processes for Soil and Water Management in Anambra Area of Nigeria

    NASA Astrophysics Data System (ADS)

    Fagbohun, Babatunde Joseph; Olabode, Oluwaseun Franklin; Adebola, Abiodun Olufemi; Akinluyi, Francis Omowonuola

    2017-12-01

    Identifying landscapes having comparable hydrological characteristics is valuable for the determination of dominant runoff processes (DRPs) and the prediction of floods. The approaches used for DRP mapping vary in their data and time requirements. Manual approaches, which are based on field investigation and expert knowledge, are time demanding and difficult to implement at regional scale. An automatic GIS-based approach, on the other hand, requires simplified data but is easier to implement and applicable at regional scale. In this study, a GIS-based automated approach was used to identify the DRPs in the Anambra area. The results showed that Hortonian overland flow (HOF) has the highest coverage of 1508.3 km2 (33.5%), followed by deep percolation (DP) with coverage of 1455.3 km2 (32.3%). Subsurface flow (SSF) is the third dominant runoff process, covering 920.6 km2 (20.4%), while saturated overland flow (SOF) covers the least area, 618.4 km2 (13.7%) of the study area. The results reveal that a considerable amount of precipitated water would infiltrate into the subsurface through deep percolation, contributing to groundwater recharge in the study area. However, it is envisaged that HOF and SOF will continue to increase due to the continuous expansion of the built-up area. With the expected increase in HOF and SOF, and the change in rainfall pattern associated with the perpetual problem of climate change, it is paramount that groundwater conservation practices be considered to ensure continued sustainable utilization of groundwater in the study area.
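
    In essence, the automatic approach reduces to map-algebra rules applied to gridded catchment attributes. A schematic sketch under assumptions of ours (the input layers and thresholds are illustrative, not those of the study):

        import numpy as np

        # Illustrative rasters: slope (%), permeability class (1=low..3=high),
        # depth to water table (m); real inputs come from DEM/soil/land-use layers.
        slope = np.array([[12, 3, 8, 1], [2, 15, 6, 4], [9, 1, 2, 18], [5, 7, 3, 2]])
        perm = np.array([[1, 3, 2, 3], [3, 1, 2, 2], [1, 3, 3, 1], [2, 2, 3, 3]])
        depth = np.array([[5, 8, 2, 9], [7, 4, 1, 6], [3, 9, 8, 2], [1, 6, 7, 8]])

        drp = np.full(slope.shape, "DP ", dtype="U3")  # default: deep percolation
        drp[perm == 1] = "HOF"                         # low permeability: Hortonian
        drp[(perm > 1) & (depth < 2)] = "SOF"          # shallow water table: saturation
        drp[(perm == 2) & (slope > 5) & (depth >= 2)] = "SSF"  # sloping, permeable
        print(drp)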

  14. A Multi-scale Cognitive Approach to Intrusion Detection and Response

    DTIC Science & Technology

    2015-12-28

    the behavior of the traffic on the network, either by using mathematical formulas or by replaying packet streams. As a result, simulators depend...large scale. Summary of the most important results: We obtained a powerful machine, which has 768 cores and 1.25 TB memory. RBG has been...time. Each client is configured with 1 GB memory, 10 GB disk space, and one 100M Ethernet interface. The server nodes include web servers

  15. Well-being measurement and the WHO health policy Health 2010: systematic review of measurement scales.

    PubMed

    Lindert, Jutta; Bain, Paul A; Kubzansky, Laura D; Stein, Claudia

    2015-08-01

    Subjective well-being (SWB) contributes to health and mental health. It is a major objective of the new World Health Organization health policy framework, 'Health 2020'. Various approaches to defining and measuring well-being exist. We aimed to identify, map and analyse the contents of self-reported well-being measurement scales for use with individuals more than 15 years of age to help researchers and politicians choose appropriate measurement tools. We conducted a systematic literature search in PubMed for studies published between 2007 and 2012, with additional hand-searching, to identify empirical studies that investigated well-being using a measurement scale. For each eligible study, we identified the measurement tool and reviewed its components, number of items, administration time, validity, reliability, responsiveness and sensitivity. The literature review identified 60 unique measurement scales. Measurement scales were either multidimensional (n = 33) or unidimensional (n = 14) and assessed multiple domains. The most frequently encountered domains were affects (39 scales), social relations (17 scales), life satisfaction (13 scales), physical health (13 scales), meaning/achievement (9 scales) and spirituality (6 scales). The scales included between 1 and 100 items; the administration time varied from 1 to 15 min. Well-being is a higher order construct. Measures seldom reported testing for gender or cultural sensitivity. The content and format of scales varied considerably. Effective monitoring and comparison of SWB over time and across geographic regions will require further work to refine definitions of SWB. We recommend concurrent evaluation of at least three self-reported SWB measurement scales, including evaluation for gender or cultural sensitivity. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  16. Multi-approaches analysis reveals local adaptation in the emmer wheat (Triticum dicoccoides) at macro- but not micro-geographical scale.

    PubMed

    Volis, Sergei; Ormanbekova, Danara; Yermekbayev, Kanat; Song, Minshu; Shulgina, Irina

    2015-01-01

    Detecting local adaptation and its spatial scale is one of the most important questions of evolutionary biology. However, recognition of the effect of local selection can be challenging when there is considerable environmental variation across distances spanning the whole species range. We analyzed patterns of local adaptation in emmer wheat, Triticum dicoccoides, at two spatial scales, small (inter-population distance less than one km) and large (inter-population distance more than 50 km), using several approaches. Plants originating from four distinct habitats at two geographic scales (cold edge, arid edge and two topographically dissimilar core locations) were reciprocally transplanted and their success over time was measured as 1) lifetime fitness in the year of planting, and 2) population growth four years after planting. In addition, we analyzed molecular (SSR) and quantitative trait variation and calculated the QST/FST ratio. No home advantage was detected at the small spatial scale. At the large spatial scale, home advantage was detected for the core population and the cold edge population in the year of introduction by measuring lifetime plant performance. However, superior performance of the arid edge population in its own environment was evident only after several generations, by measuring experimental population growth rate through SSR genotyping, which allowed counting the number of plants and seeds per introduced genotype per site. These results highlight the importance of multi-generation surveys of population growth rate in local adaptation testing. Despite the predominant self-fertilization of T. dicoccoides and the associated high degree of structuring of genetic variation, the results of the QST/FST comparison were in general agreement with the pattern of local adaptation at the two spatial scales detected by reciprocal transplanting.

  17. Towards understanding temporal and spatial dynamics of seagrass landscapes using time-series remote sensing

    NASA Astrophysics Data System (ADS)

    Lyons, Mitchell B.; Roelfsema, Chris M.; Phinn, Stuart R.

    2013-03-01

    The spatial and temporal dynamics of seagrasses have been well studied at the leaf to patch scales; however, the link to large-spatial-extent landscape and population dynamics is still unresolved in seagrass ecology. Traditional remote sensing approaches have lacked the temporal resolution and consistency to appropriately address this issue. This study uses two high temporal resolution time-series of thematic seagrass cover maps to examine the spatial and temporal dynamics of seagrass at both inter- and intra-annual time scales, one of the first globally to do so at this scale. Previous work by the authors developed an object-based approach to map seagrass cover level distribution from a long-term archive of Landsat TM and ETM+ images on the Eastern Banks (≈200 km2), Moreton Bay, Australia. In this work a range of trend and time-series analysis methods are demonstrated for a time-series of 23 annual maps from 1988 to 2010 and a time-series of 16 monthly maps during 2008-2010. Significant new insight was presented regarding the inter- and intra-annual dynamics of seagrass persistence over time, seagrass cover level variability, seagrass cover level trajectory, and change in area of seagrass and cover levels over time. Overall we found that there was no significant decline in total seagrass area on the Eastern Banks, but there was a significant decline in seagrass cover level condition. A case study of two smaller communities within the Eastern Banks that experienced a decline in both overall seagrass area and condition is examined in detail, highlighting possible differences in environmental and process drivers. We demonstrate how trend and time-series analysis enabled seagrass distribution to be appropriately assessed in the context of its spatial and temporal history and provides the ability not only to quantify change, but also to describe the type of change. We also demonstrate the potential use of time-series analysis products to investigate seagrass growth and decline as well as the processes that drive it. This study demonstrates clear benefits over traditional seagrass mapping and monitoring approaches, and provides a proof of concept for the use of trend and time-series analysis of remotely sensed seagrass products to benefit current endeavours in seagrass ecology.

  18. Climate Information Responding to User Needs (CIRUN)

    NASA Astrophysics Data System (ADS)

    Busalacchi, A. J.

    2009-05-01

    For the past several decades many different US agencies have been involved in collecting Earth observations, e.g., NASA, NOAA, DoD, USGS, USDA. More recently, the US has led the international effort to design a Global Earth Observation System of Systems (GEOSS). Yet, there has been little substantive progress on the synthesis and integration of the various research and operational, space-based and in situ observations. Similarly, access to such a range of observations across the atmosphere, ocean, and land surface remains fragmented. With respect to prediction of the Earth System, the US has not developed a comprehensive strategy. For climate, the US (e.g., NOAA, NASA, DoE) has taken a two-track strategy. At the more immediate time scale, coupled ocean-atmosphere models of the physical climate system have built upon the tradition of daily numerical weather prediction in order to extend the forecast window to seasonal-to-interannual time scales. At the century time scale, nascent Earth System models, combining components of the physical climate system with biogeochemical cycles, are being used to provide future climate change projections in response to anticipated greenhouse gas forcings. Between these two approaches to prediction lies a key deficiency of interest to decision makers, especially as it pertains to adaptation, i.e., deterministic prediction of the Earth System at time scales from days to decades with spatial scales from global to regional. One of many obstacles to be overcome is the design of present-day observation and prediction products based on user needs. To date, most such products have evolved from the technology and research "push" rather than the user or stakeholder "pull". In the future, as planning proceeds for a national climate service, emphasis must be given to a more coordinated approach in which stakeholders' needs help design future Earth System observational and prediction products, and similarly, such products need to be tailored to provide decision support.

  19. Terpenes tell different tales at different scales: glimpses into the Chemical Ecology of conifer - bark beetle - microbial interactions.

    PubMed

    Raffa, Kenneth F

    2014-01-01

    Chemical signaling mediates nearly all aspects of species interactions. Our knowledge of these signals has progressed dramatically, and now includes good characterizations of the bioactivities, modes of action, biosynthesis, and genetic programming of numerous compounds affecting a wide range of species. A major challenge now is to integrate this information so as to better understand actual selective pressures under natural conditions, make meaningful predictions about how organisms and ecosystems will respond to a changing environment, and provide useful guidance to managers who must contend with difficult trade-offs among competing socioeconomic values. One approach is to place stronger emphasis on cross-scale interactions, an understanding of which can help us better connect pattern with process, and improve our ability to make mechanistically grounded predictions over large areas and time frames. The opportunity to achieve such progress has been heightened by the rapid development of new scientific and technological tools. There are significant difficulties, however: Attempts to extend arrays of lower-scale processes into higher scale functioning can generate overly diffuse patterns. Conversely, attempts to infer process from pattern can miss critically important lower-scale drivers in systems where their biological and statistical significance is negated after critical thresholds are breached. Chemical signaling in bark beetle - conifer interactions has been explored for several decades, including by the two pioneers after whom this award is named. The strong knowledge base developed by many researchers, the importance of bark beetles in ecosystem functioning, and the socioeconomic challenges they pose, establish these insects as an ideal model for studying chemical signaling within a cross-scale context. This report describes our recent work at three levels of scale: interactions of bacteria with host plant compounds and symbiotic fungi (tree level, biochemical time), relationships among inducible and constitutive defenses, population dynamics, and plastic host-selection behavior (stand level, ecological time), and climate-driven range expansion of a native eruptive species into semi-naïve and potentially naïve habitats (geographical level, evolutionary time). I approach this problem by focusing primarily on one chemical group, terpenes, by emphasizing the curvilinear and threshold-structured basis of most underlying relationships, and by focusing on the system's feedback structure, which can either buffer or amplify relationships across scales.

  20. Aerothermodynamic Design Sensitivities for a Reacting Gas Flow Solver on an Unstructured Mesh Using a Discrete Adjoint Formulation

    NASA Astrophysics Data System (ADS)

    Thompson, Kyle Bonner

    An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.

  1. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. At first, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified to consume over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BICGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, a number of compute nodes equal to the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), is used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analysis, where thousands of compute nodes can be efficiently utilized.

  2. A Comprehensive Analysis of Multiscale Field-Aligned Currents: Characteristics, Controlling Parameters, and Relationships

    NASA Astrophysics Data System (ADS)

    McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin

    2017-12-01

    We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (~10-150 km, <1° latitudinal width), mesoscale (~150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.

  3. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu

    2014-01-21

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
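
    A minimal sketch of the implicit-solvent ingredient: one collective solvent coordinate obeying an overdamped Langevin equation on a harmonic free-energy surface. The parameter values and the single relaxation time are our assumptions, and the paper couples such dynamics to surface hopping between electronic states, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(3)

        kBT = 0.592   # kcal/mol at ~298 K
        tau_L = 0.5   # solvent longitudinal relaxation time, ps
        f = 2.0       # force constant of the collective coordinate, kcal/mol
        dt, nsteps = 0.001, 50000

        # Overdamped Langevin: dz = -(z - z_min)/tau_L dt + sqrt(2 kBT/(f tau_L)) dW
        z, z_min = 0.0, 1.0
        traj = np.empty(nsteps)
        for i in range(nsteps):
            noise = np.sqrt(2.0 * kBT / (f * tau_L) * dt) * rng.standard_normal()
            z += -(z - z_min) / tau_L * dt + noise
            traj[i] = z

        print(f"<z> = {traj[10000:].mean():.2f}, var = {traj[10000:].var():.2f} "
              f"(expected kBT/f = {kBT / f:.2f})")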

  4. Multiscale modeling and general theory of non-equilibrium plasma-assisted ignition and combustion

    NASA Astrophysics Data System (ADS)

    Yang, Suo; Nagaraja, Sharath; Sun, Wenting; Yang, Vigor

    2017-11-01

    A self-consistent framework for modeling and simulations of plasma-assisted ignition and combustion is established. In this framework, a 'frozen electric field' modeling approach is applied to take advantage of the quasi-periodic behavior of the electrical characteristics and avoid recalculating the electric field for each pulse. The correlated dynamic adaptive chemistry (CO-DAC) method is employed to accelerate the calculation of large and stiff chemical mechanisms. The time step is dynamically updated during the simulation through a three-stage multi-time-scale modeling strategy, which utilizes the large separation of time scales in nanosecond pulsed plasma discharges. A general theory of plasma-assisted ignition and combustion is then proposed. Nanosecond pulsed plasma discharges for ignition and combustion can be divided into four stages. Stage I is the discharge pulse, with time scales of O(1-10 ns). In this stage, input energy is coupled into electron impact excitation and dissociation reactions to generate charged/excited species and radicals. Stage II is the afterglow during the gap between two adjacent pulses, with time scales of O(100 ns). In this stage, quenching of excited species dissociates O2 and fuel molecules, and provides fast gas heating. Stage III is the remaining gap between pulses, with time scales of O(1-100 µs). The radicals generated during Stages I and II significantly enhance exothermic reactions in this stage. The cumulative effect of multiple pulses is seen in Stage IV, with time scales of O(1-1000 ms), which includes preheated gas temperatures and a large pool of radicals and fuel fragments to trigger ignition. For flames, plasma could significantly enhance radical generation and gas heating in the pre-heat zone, thereby enhancing flame establishment.
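
    The essence of the three-stage time-step strategy can be sketched as a scheduler keyed to the phase within one discharge period (stage durations follow the scales quoted above; the actual solver logic is considerably more involved):

        def time_step_ns(t_in_period_ns, pulse_ns=10.0, afterglow_ns=100.0):
            """Pick a time step (ns) from the phase within one discharge period."""
            if t_in_period_ns < pulse_ns:
                return 0.01    # Stage I: resolve the nanosecond discharge pulse
            if t_in_period_ns < pulse_ns + afterglow_ns:
                return 1.0     # Stage II: afterglow quenching and fast gas heating
            return 1000.0      # Stage III: microsecond-scale chemistry between pulses

        print([time_step_ns(t) for t in (5.0, 50.0, 5000.0)])  # [0.01, 1.0, 1000.0]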

  5. An individual-based model of skipjack tuna (Katsuwonus pelamis) movement in the tropical Pacific ocean

    NASA Astrophysics Data System (ADS)

    Scutt Phillips, Joe; Sen Gupta, Alex; Senina, Inna; van Sebille, Erik; Lange, Michael; Lehodey, Patrick; Hampton, John; Nicol, Simon

    2018-05-01

    The distribution of marine species is often modeled using Eulerian approaches, in which changes to population density or abundance are calculated at fixed locations in space. Conversely, Lagrangian, or individual-based, models simulate the movement of individual particles moving in continuous space, with broader-scale patterns such as distribution being an emergent property of many, potentially adaptive, individuals. These models offer advantages in examining dynamics across spatiotemporal scales and making comparisons with observations from individual-scale data. Here, we introduce and describe such a model, the Individual-based Kinesis, Advection and Movement of Ocean ANimAls model (Ikamoana), which we use to replicate the movement processes of an existing Eulerian model for marine predators (the Spatial Ecosystem and Population Dynamics Model, SEAPODYM). Ikamoana simulates the movement of either individual or groups of animals by physical ocean currents, habitat-dependent stochastic movements (kinesis), and taxis movements representing active searching behaviours. Applying our model to Pacific skipjack tuna (Katsuwonus pelamis), we show that it accurately replicates the evolution of density distribution simulated by SEAPODYM with low time-mean error and a spatial correlation of density that exceeds 0.96 at all times. We demonstrate how the Lagrangian approach permits easy tracking of individuals' trajectories for examining connectivity between different regions, and show how the model can provide independent estimates of transfer rates between commonly used assessment regions. In particular, we find that retention rates in most assessment regions are considerably smaller (up to a factor of 2) than those estimated by this skipjack population's primary assessment model. Moreover, these rates are sensitive to ocean state (e.g. El Niño vs La Niña), and so assuming fixed transfer rates between regions may lead to spurious stock estimates. A novel feature of the Lagrangian approach is that individual schools can be tracked through time, and we demonstrate that movement between two assessment regions at broad temporal scales includes extended transits through other regions at finer scales. Finally, we discuss the utility of this modeling framework for the management of marine reserves, designing effective monitoring programmes, and exploring hypotheses regarding the behaviour of hard-to-observe oceanic animals.
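
    A schematic single-particle update combining the three movement terms named above (advection by currents, habitat-dependent kinesis, and taxis up a habitat gradient); the fields and coefficients are toy assumptions, not Ikamoana/SEAPODYM internals:

        import numpy as np

        rng = np.random.default_rng(4)

        def habitat(x, y):   # toy habitat index in [0, 1], peaked at (5, 5)
            return np.exp(-((x - 5.0) ** 2 + (y - 5.0) ** 2) / 10.0)

        def current(x, y):   # toy ocean current (u, v)
            return 0.1, 0.05

        def step(x, y, dt=1.0, d_max=0.5, v_taxis=0.3, eps=1e-3):
            h = habitat(x, y)
            u, v = current(x, y)
            sigma = d_max * (1.0 - h)  # kinesis: random motion shrinks in good habitat
            # Taxis: finite-difference climb of the habitat gradient
            gx = (habitat(x + eps, y) - habitat(x - eps, y)) / (2 * eps)
            gy = (habitat(x, y + eps) - habitat(x, y - eps)) / (2 * eps)
            x += (u + v_taxis * gx) * dt + sigma * rng.standard_normal()
            y += (v + v_taxis * gy) * dt + sigma * rng.standard_normal()
            return x, y

        x, y = 0.0, 0.0
        for _ in range(200):
            x, y = step(x, y)
        print(round(x, 2), round(y, 2))  # the particle drifts toward the habitat peak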

  6. Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)

    NASA Astrophysics Data System (ADS)

    Jerolmack, D. J.

    2010-12-01

    The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws - time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) the integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing, and (2) in systems exhibiting fractal behavior, an integral scale does not exist - e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics, and also help to formalize the applicability of geomorphic transport laws.
    [Figure: Flooding on an experimental alluvial fan. Intensity is related to the cumulative amount of time flow has visited an area of the fan over the experiment. Dark areas represent an emergent channel network resulting from stochastic migration of river channels.]

  7. The stability and change of etiological influences on depression, anxiety symptoms and their co-occurrence across adolescence and young adulthood.

    PubMed

    Waszczuk, M A; Zavos, H M S; Gregory, A M; Eley, T C

    2016-01-01

    Depression and anxiety persist within and across diagnostic boundaries. The manner in which common v. disorder-specific genetic and environmental influences operate across development to maintain internalizing disorders and their co-morbidity is unclear. This paper investigates the stability and change of etiological influences on depression, panic, generalized, separation and social anxiety symptoms, and their co-occurrence, across adolescence and young adulthood. A total of 2619 twins/siblings prospectively reported symptoms of depression and anxiety at mean ages 15, 17 and 20 years. Each symptom scale showed a similar pattern of moderate continuity across development, largely underpinned by genetic stability. New genetic influences contributing to change in the developmental course of the symptoms emerged at each time point. All symptom scales correlated moderately with one another over time. Genetic influences, both stable and time-specific, overlapped considerably between the scales. Non-shared environmental influences were largely time- and symptom-specific, but some contributed moderately to the stability of depression and anxiety symptom scales. These stable, longitudinal environmental influences were highly correlated between the symptoms. The results highlight both stable and dynamic etiology of depression and anxiety symptom scales. They provide preliminary evidence that stable as well as newly emerging genes contribute to the co-morbidity between depression and anxiety across adolescence and young adulthood. Conversely, environmental influences are largely time-specific and contribute to change in symptoms over time. The results inform molecular genetics research and transdiagnostic treatment and prevention approaches.

  8. Statistical physics approach to earthquake occurrence and forecasting

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio

    2016-04-01

    There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, which are able to account for universality and provide a unifying description of the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different levels of prediction. In this review we also briefly discuss how the statistical mechanics approach can be applied to non-tectonic earthquakes and to other natural stochastic processes, such as volcanic eruptions and solar flares.
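
    A hedged sketch of the branching picture described above, in the spirit of ETAS-type models (all parameter values are illustrative): background events arrive as a Poisson process, each event spawns a Poisson number of aftershocks that grows with magnitude, and aftershock delays follow an Omori-law decay.

        import numpy as np

        rng = np.random.default_rng(5)

        mu, b = 0.2, 1.0                       # background rate (events/day), GR b-value
        k, alpha, c, p = 0.05, 0.8, 0.01, 1.2  # productivity and Omori parameters
        m0, T = 3.0, 365.0

        def magnitudes(n):  # Gutenberg-Richter magnitudes above the cutoff m0
            return m0 + rng.exponential(1.0 / (b * np.log(10.0)), n)

        n_bg = rng.poisson(mu * T)
        events = list(zip(rng.uniform(0.0, T, n_bg), magnitudes(n_bg)))

        queue = list(events)
        while queue:        # branching cascade of aftershocks
            t, m = queue.pop()
            for _ in range(rng.poisson(k * 10 ** (alpha * (m - m0)))):
                u = rng.random()  # invert the Omori survival function for the delay
                dt = c * (u ** (-1.0 / (p - 1.0)) - 1.0)
                if t + dt < T:
                    child = (t + dt, magnitudes(1)[0])
                    events.append(child)
                    queue.append(child)

        print(f"{len(events)} events, largest M{max(m for _, m in events):.1f}")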

  9. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by effectively perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.
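
    The parallel strategy is straightforward to sketch: instead of marching N consecutive cycles, launch N single-cycle runs at once, each with a perturbed initial velocity field. A toy illustration with a stand-in "cycle" function (the real workload is an LES solver, not this surrogate):

        import numpy as np
        from multiprocessing import Pool

        def run_single_cycle(args):
            """Stand-in for one LES engine-cycle simulation."""
            seed, turb_intensity = args
            rng = np.random.default_rng(seed)
            # Perturb the initial velocity field in proportion to turbulence intensity
            u0 = 10.0 + turb_intensity * rng.standard_normal(1000)
            return u0.mean(), u0.std()  # surrogate per-cycle statistics

        if __name__ == "__main__":
            jobs = [(seed, 1.5) for seed in range(32)]  # 32 perturbed cycles at once
            with Pool() as pool:
                stats = pool.map(run_single_cycle, jobs)
            means = np.array([m for m, _ in stats])
            print(f"cycle-to-cycle variability (std of cycle means): {means.std():.3f}")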

  10. Critical scales to explain urban hydrological response: an application in Cranbrook, London

    NASA Astrophysics Data System (ADS)

    Cristiano, Elena; ten Veldhuis, Marie-Claire; Gaitan, Santiago; Ochoa Rodriguez, Susana; van de Giesen, Nick

    2018-04-01

    Rainfall variability in space and time, in relation to catchment characteristics and model complexity, plays an important role in explaining the sensitivity of hydrological response in urban areas. In this work we present a new approach to classifying rainfall variability in space and time, and we use this classification to investigate rainfall aggregation effects on urban hydrological response. Nine rainfall events, measured with a dual-polarimetric X-band radar at the CAESAR site (Cabauw Experimental Site for Atmospheric Research, NL), were aggregated in time and space to obtain different resolution combinations. The aim of this work was to investigate the influence that rainfall and catchment scales have on hydrological response in urban areas. Three dimensionless scaling factors were introduced to investigate the interactions between rainfall and catchment scale and rainfall input resolution in relation to model performance. Results showed that (1) rainfall classification based on cluster identification represents the storm core well, (2) aggregation effects are stronger for rainfall than for flow, (3) model complexity does not have a strong influence compared to catchment and rainfall scales for this case study, and (4) the scaling factors allow an adequate rainfall resolution to be selected to obtain a given level of accuracy in the calculation of hydrological response.
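
    The space-time coarsening step is simple to state. The following sketch is a toy stand-in for the aggregation used in the study (variable names and resolutions are hypothetical): it block-averages a radar rainfall field over coarser time steps and pixel blocks.

```python
import numpy as np

def aggregate(rain, t_fac, s_fac):
    """Block-average a rainfall field rain[t, y, x] over t_fac time steps
    and s_fac x s_fac pixels; dimensions must be divisible by the factors."""
    nt, ny, nx = rain.shape
    blocks = rain.reshape(nt // t_fac, t_fac,
                          ny // s_fac, s_fac,
                          nx // s_fac, s_fac)
    return blocks.mean(axis=(1, 3, 5))

# e.g. a 1-min, 100-m grid coarsened to 5-min, 500-m resolution
fine = np.random.gamma(2.0, 1.0, size=(60, 100, 100))   # synthetic field
coarse = aggregate(fine, t_fac=5, s_fac=5)              # shape (12, 20, 20)
```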

  11. The Effect of Inappropriate Calibration: Three Case Studies in Molecular Ecology

    PubMed Central

    Ho, Simon Y. W.; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-01-01

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events. PMID:18286172

  12. The effect of inappropriate calibration: three case studies in molecular ecology.

    PubMed

    Ho, Simon Y W; Saarma, Urmas; Barnett, Ross; Haile, James; Shapiro, Beth

    2008-02-20

    Time-scales estimated from sequence data play an important role in molecular ecology. They can be used to draw correlations between evolutionary and palaeoclimatic events, to measure the tempo of speciation, and to study the demographic history of an endangered species. In all of these studies, it is paramount to have accurate estimates of time-scales and substitution rates. Molecular ecological studies typically focus on intraspecific data that have evolved on genealogical scales, but often these studies inappropriately employ deep fossil calibrations or canonical substitution rates (e.g., 1% per million years for birds and mammals) for calibrating estimates of divergence times. These approaches can yield misleading estimates of molecular time-scales, with significant impacts on subsequent evolutionary and ecological inferences. We illustrate this calibration problem using three case studies: avian speciation in the late Pleistocene, the demographic history of bowhead whales, and the Pleistocene biogeography of brown bears. For each data set, we compare the date estimates that are obtained using internal and external calibration points. In all three cases, the conclusions are significantly altered by the application of revised, internally-calibrated substitution rates. Collectively, the results emphasise the importance of judicious selection of calibrations for analyses of recent evolutionary events.

  13. Assessing age stereotypes in the German population in 1996 and 2011: socio-demographic correlates and shift over time.

    PubMed

    Spangenberg, Lena; Zenger, Markus; Glaesmer, Heide; Brähler, Elmar; Strauss, Bernhard

    2018-03-01

    The present study aimed to extend knowledge regarding the dimensionality, socio-demographic correlates and shifts in age stereotypes over the past 15 years using a time-sequential design. In 1996 and 2011, we assessed age stereotypes in two independent samples of the German population aged ≥ 45 years (N = 970 in sample 1, N = 1545 in sample 2). Three scales with six items each were assessed: two scales cover negative age stereotypes (i.e., rigidity/isolation, burden), and one scale covers positive age stereotypes (wisdom/experience). The dimensionality of the scale, associations with socio-demographic variables, and whether the stereotypes have shifted were tested using confirmatory factor analyses, structural equation modeling and analyses of variance. Three dimensions were identified and replicated following both an exploratory and a confirmatory approach. Age stereotypes did shift between 1996 and 2011 in the burden dimension (i.e., becoming more negative). Our results further underpin the finding that age stereotypes are multifaceted and suggest that the dimensional structure itself does not change over time. Additionally, our data provide some evidence that societal age stereotypes partly change over time.

  14. Increasing the power of accelerated molecular dynamics methods and plans to exploit the coming exascale

    NASA Astrophysics Data System (ADS)

    Voter, Arthur

    Many important materials processes take place on time scales that far exceed the roughly one microsecond accessible to molecular dynamics simulation. Typically, this long-time evolution is characterized by a succession of thermally activated infrequent events involving defects in the material. In the accelerated molecular dynamics (AMD) methodology, known characteristics of infrequent-event systems are exploited to make reactive events take place more frequently, in a dynamically correct way. For certain processes, this approach has been remarkably successful, offering a view of complex dynamical evolution on time scales of microseconds, milliseconds, and sometimes beyond. We have recently made advances in all three of the basic AMD methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics (TAD)), exploiting both algorithmic advances and novel parallelization approaches. I will describe these advances, present some examples of our latest results, and discuss what should be possible when exascale computing arrives in roughly five years. Funded by the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, and by the Los Alamos Laboratory Directed Research and Development program.

  15. Force-Induced Rupture of a DNA Duplex: From Fundamentals to Force Sensors.

    PubMed

    Mosayebi, Majid; Louis, Ard A; Doye, Jonathan P K; Ouldridge, Thomas E

    2015-12-22

    The rupture of double-stranded DNA under stress is a key process in biophysics and nanotechnology. In this article, we consider the shear-induced rupture of short DNA duplexes, a system that has been given new importance by recently designed force sensors and nanotechnological devices. We argue that rupture must be understood as an activated process, where the duplex state is metastable and the strands will separate in a finite time that depends on the duplex length and the force applied. Thus, the critical shearing force required to rupture a duplex depends strongly on the time scale of observation. We use simple models of DNA to show that this approach naturally captures the observed dependence of the force required to rupture a duplex within a given time on duplex length. In particular, this critical force is zero for the shortest duplexes, before rising sharply and then plateauing in the long length limit. The prevailing approach, based on identifying when the presence of each additional base pair within the duplex is thermodynamically unfavorable rather than allowing for metastability, does not predict a time-scale-dependent critical force and does not naturally incorporate a critical force of zero for the shortest duplexes. We demonstrate that our findings have important consequences for the behavior of a new force-sensing nanodevice, which operates in a mixed mode that interpolates between shearing and unzipping. At a fixed time scale and duplex length, the critical force exhibits a sigmoidal dependence on the fraction of the duplex that is subject to shearing.
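
    The activated picture can be made concrete with a Bell-type rate model. This is an illustrative stand-in, not the coarse-grained DNA model used by the authors, and all parameter values are assumed: the rupture rate grows exponentially as force tilts the free-energy barrier, so the force needed to rupture within a given observation time follows directly, and is zero whenever the barrier can be crossed thermally within that time.

```python
import numpy as np

kB_T = 4.11e-21                # thermal energy at ~298 K [J]

def mean_rupture_time(force, barrier, x_ts, attempt_time=1e-9):
    """Bell-type mean rupture time: barrier [J] is lowered by force * x_ts,
    with x_ts [m] the distance to the transition state."""
    return attempt_time * np.exp((barrier - force * x_ts) / kB_T)

def critical_force(t_obs, barrier, x_ts, attempt_time=1e-9):
    """Force at which the mean rupture time equals the observation time.
    Clipping at zero mirrors the zero critical force of short duplexes."""
    f = (barrier - kB_T * np.log(t_obs / attempt_time)) / x_ts
    return max(f, 0.0)

# Longer duplexes -> larger barriers -> larger critical force at fixed t_obs
for barrier in (10 * kB_T, 25 * kB_T, 40 * kB_T):
    print(critical_force(t_obs=1.0, barrier=barrier, x_ts=1e-9))  # [N]
```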

  16. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm.

    PubMed

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function.
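
    To make the metaheuristic idea concrete, here is a deliberately stripped-down evolutionary item selector (mutation-only, with a toy fitness function; the cost functions, operators and tuning in the paper are richer, and every name below is hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(subset, scores):
    """Toy criterion: correlation of the short-scale sum score with the
    full-scale sum score (a stand-in for the paper's optimization targets)."""
    return np.corrcoef(scores[:, subset].sum(axis=1), scores.sum(axis=1))[0, 1]

def evolve_short_scale(scores, k, pop_size=40, generations=200):
    n_items = scores.shape[1]
    pop = [rng.choice(n_items, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, scores), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for p in parents:
            child = p.copy()                    # mutate: swap one item
            unused = np.setdiff1d(np.arange(n_items), child)
            child[rng.integers(k)] = rng.choice(unused)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda s: fitness(s, scores))

scores = rng.normal(size=(500, 30))             # 500 persons x 30 items
best_8_items = evolve_short_scale(scores, k=8)
```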

  17. Multiscale spatial and temporal estimation of the b-value

    NASA Astrophysics Data System (ADS)

    García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.

    2017-12-01

    The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of seismicity, which makes the estimate strongly dependent on the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals containing an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), consists of computing estimates of the b-value at multiple temporal and spatial scales and extracting, for a given spatio-temporal point, a statistical estimator of the b-value as well as an indication of the characteristic spatio-temporal scale. The approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties in both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at the global scale.
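
    The per-scale building block is the classical Aki-Utsu maximum-likelihood estimator. A minimal sketch of evaluating it over nested time windows follows; the actual MUST-B aggregation across scales is more sophisticated, and the window sizes and thresholds here are illustrative.

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc, with the
    standard dm/2 correction for magnitude binning. Returns (b, std err)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    return b, b / np.sqrt(len(m))        # Aki (1965) uncertainty

def b_multiscale(times, mags, t0, half_widths, mc, dm=0.1, n_min=50):
    """Evaluate b in nested time windows centred on t0, one per scale."""
    times, mags = np.asarray(times), np.asarray(mags)
    estimates = {}
    for w in half_widths:                # e.g. half-widths in days
        sel = np.abs(times - t0) <= w
        if sel.sum() >= n_min:           # require enough events per scale
            estimates[w] = b_value(mags[sel], mc, dm)
    return estimates
```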

  18. Meta-Heuristics in Short Scale Construction: Ant Colony Optimization and Genetic Algorithm

    PubMed Central

    Schroeders, Ulrich; Wilhelm, Oliver; Olaru, Gabriel

    2016-01-01

    The advent of large-scale assessment, but also the more frequent use of longitudinal and multivariate approaches to measurement in psychological, educational, and sociological research, caused an increased demand for psychometrically sound short scales. Shortening scales economizes on valuable administration time, but might result in inadequate measures because reducing an item set could: a) change the internal structure of the measure, b) result in poorer reliability and measurement precision, c) deliver measures that cannot effectively discriminate between persons on the intended ability spectrum, and d) reduce test-criterion relations. Different approaches to abbreviate measures fare differently with respect to the above-mentioned problems. Therefore, we compare the quality and efficiency of three item selection strategies to derive short scales from an existing long version: a Stepwise COnfirmatory Factor Analytical approach (SCOFA) that maximizes factor loadings and two metaheuristics, specifically an Ant Colony Optimization (ACO) with a tailored user-defined optimization function and a Genetic Algorithm (GA) with an unspecific cost-reduction function. SCOFA compiled short versions were highly reliable, but had poor validity. In contrast, both metaheuristics outperformed SCOFA and produced efficient and psychometrically sound short versions (unidimensional, reliable, sensitive, and valid). We discuss under which circumstances ACO and GA produce equivalent results and provide recommendations for conditions in which it is advisable to use a metaheuristic with an unspecific out-of-the-box optimization function. PMID:27893845

  19. From medium heterogeneity to flow and transport: A time-domain random walk approach

    NASA Astrophysics Data System (ADS)

    Hakoun, V.; Comolli, A.; Dentz, M.

    2017-12-01

    The prediction of flow and transport processes in heterogeneous porous media is based on a qualitative and quantitative understanding of the interplay between 1) the spatial variability of hydraulic conductivity, 2) groundwater flow and 3) solute transport. Using a stochastic modeling approach, we study this interplay through direct numerical simulations of Darcy flow and advective transport in heterogeneous media. First, we study flow in correlated hydraulic permeability fields and shed light on the relationship between the statistics of log-hydraulic conductivity, a medium attribute, and the flow statistics. Second, we determine relationships between Eulerian and Lagrangian velocity statistics, that is, between flow and transport attributes. We show how Lagrangian statistics, and thus transport behaviors such as late particle arrival times, are influenced by the medium heterogeneity on the one hand and the initial particle velocities on the other. We find that equidistantly sampled Lagrangian velocities can be described by a Markov process that evolves on the characteristic heterogeneity length scale. We employ a stochastic relaxation model for the equidistantly sampled particle velocities, parametrized by the velocity correlation length. This description results in a time-domain random walk model for the particle motion, whose spatial transitions are characterized by the velocity correlation length and whose temporal transitions are characterized by the particle velocities. This approach relates the statistical medium and flow properties to large-scale transport, and allows for conditioning on the initial particle velocities and thus on the medium properties in the injection region. The approach is tested against direct numerical simulations.
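
    A minimal version of such a time-domain random walk is easy to write down. The sketch below assumes an Ornstein-Uhlenbeck-type relaxation of log-velocity as a stand-in for the paper's stochastic relaxation model; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def tdrw_arrival_times(n_particles, n_steps, ell, v_init, theta=0.3):
    """Time-domain random walk: each particle advances one correlation
    length ell per step; its log-velocity relaxes toward the mean as a
    discrete Markov (OU-like) process with relaxation parameter theta."""
    logv = np.log(v_init)                          # initial velocities
    mu, sigma = logv.mean(), logv.std()
    t = np.zeros(n_particles)
    for _ in range(n_steps):
        t += ell / np.exp(logv)                    # temporal transition
        noise = rng.standard_normal(n_particles)
        logv = mu + (1 - theta) * (logv - mu) \
                  + sigma * np.sqrt(theta * (2 - theta)) * noise
    return t                                       # arrivals at n_steps * ell

v0 = rng.lognormal(0.0, 1.0, 10_000)               # conditioned injection
arrivals = tdrw_arrival_times(10_000, 200, ell=1.0, v_init=v0)
# The heavy late-arrival tail reflects persistence of low initial velocities.
```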

  20. The extent to which path-integral models account for evanescent (tunneling) and complex (near-field) waves

    NASA Astrophysics Data System (ADS)

    Ranfagni, Anedio; Mugnai, Daniela; Cacciari, Ilaria

    2018-05-01

    The usefulness of a stochastic approach in determining time scales in tunneling processes (mainly, but not only, in the microwave range) is reconsidered and compared with a different approach to these kinds of processes, based on Feynman's transition elements. The latter method is found to be particularly suitable for interpreting near-field situations, as shown by the experimental cases considered here.

  1. A Multi-Scale, Integrated Approach to Representing Watershed Systems

    NASA Astrophysics Data System (ADS)

    Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos

    2014-05-01

    Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to the fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport: tRIBS+VEGGIE-FEaST. The model aims to take advantage of the wealth of data now available on watershed topography, vegetation, soil, and land use, and to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-runoff partitioning process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables under future climate conditions and to explore the impact of climate information uncertainty.

  2. Scaling characteristics of mountainous river flow fluctuations determined using a shallow-water acoustic tomography system

    NASA Astrophysics Data System (ADS)

    Al Sawaf, Mohamad Basel; Kawanisi, Kiyosi; Kagami, Junya; Bahreinimotlagh, Masoud; Danial, Mochammad Meddy

    2017-10-01

    The aim of this study is to investigate the scaling exponent properties of mountainous river flow fluctuations by detrended fluctuation analysis (DFA). Streamflow data were collected continuously using the Fluvial Acoustic Tomography System (FATS), a novel system for measuring continuous streamflow at high frequency. The results revealed that river discharge fluctuations have two scaling regimes separated by a scaling break. The small-scale exponent detected by the FATS is estimated to be 1.02 ± 0.42% less than that estimated by the Rating Curve (RC) method. More importantly, the crossover times evaluated from the FATS are delayed by approximately 42 ± 21 hr (≈ 2-3 days) relative to their counterparts estimated by RC. Power spectral density analysis supports these findings. We found that the scaling characteristics of a river evaluated from flux data obtained by the RC approach might not be accurately detected, because this classical method assumes that the flow in the river is steady and depends on constructing a relationship between discharge and water level, whereas the discharge obtained by the FATS decomposes velocity and depth into two ratings according to the continuity equation. Overall, this work highlights the performance of FATS as a powerful and effective approach for continuous high-frequency streamflow measurements.
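
    For readers unfamiliar with DFA, a bare-bones order-1 implementation is sketched below. This is illustrative only; production analyses overlap windows, test multiple detrending orders, and estimate the crossover point formally.

```python
import numpy as np

def dfa(x, scales):
    """Order-1 detrended fluctuation analysis: returns F(s) per window
    size s. The scaling exponent is the slope of log F vs. log s, and a
    change in slope marks a scaling break like the one reported above."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for s in scales:
        segs = y[: (len(y) // s) * s].reshape(-1, s)
        t = np.arange(s)
        res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(res))))
    return np.array(F)

scales = np.unique(np.logspace(1, 3, 20).astype(int))
q = np.random.default_rng(3).standard_normal(20_000)   # stand-in discharge
alpha = np.polyfit(np.log(scales), np.log(dfa(q, scales)), 1)[0]
print(alpha)   # ~0.5 for white noise; >0.5 indicates persistence
```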

  3. Origin of the scaling laws of sediment transport

    NASA Astrophysics Data System (ADS)

    Ali, Sk Zeeshan; Dey, Subhasish

    2017-01-01

    In this paper, we discover the origin of the scaling laws of sediment transport under turbulent flow over a sediment bed, for the first time, from the perspective of the phenomenological theory of turbulence. The results reveal that for the incipient motion of sediment particles, the densimetric Froude number obeys the '(1 + σ)/4' scaling law with the relative roughness (ratio of particle diameter to approach flow depth), where σ is the spectral exponent of the turbulent energy spectrum. However, for the bedforms, the densimetric Froude number obeys a '(1 + σ)/6' scaling law with the relative roughness in the enstrophy inertial range and the energy inertial range. For the bedload flux, the bedload transport intensity obeys the '3/2' and '(1 + σ)/4' scaling laws with the transport stage parameter and the relative roughness, respectively. For the suspended load flux, the non-dimensional suspended sediment concentration obeys the '-Z' scaling law with the non-dimensional vertical distance within the wall shear layer, where Z is the Rouse number. For the scour in contracted streams, the non-dimensional scour depth obeys the '4/(3 - σ)', '-4/(3 - σ)' and '-(1 + σ)/(3 - σ)' scaling laws with the densimetric Froude number, the channel contraction ratio (ratio of contracted channel width to approach channel width) and the relative roughness, respectively.

  4. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of the General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created an active research field of its own in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level", meaning that the parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is enhanced model transparency, which can aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. All too often, boundary-layer parameterizations are still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach), and iii) process-level evaluation at climate time-scales. The advantages and disadvantages of each approach are identified and discussed, and some thoughts on possible future developments are given.

  5. Identifying the time scale of synchronous movement: a study on tropical snakes.

    PubMed

    Lindström, Tom; Phillips, Benjamin L; Brown, Gregory P; Shine, Richard

    2015-01-01

    Individual movement is critical to organismal fitness and also influences broader population processes such as demographic stochasticity and gene flow. Climatic change and habitat fragmentation render the drivers of individual movement especially critical to understand. Rates of movement of free-ranging animals through the landscape are influenced both by intrinsic attributes of an organism (e.g., size, body condition, age), and by external forces (e.g., weather, predation risk). Statistical modelling can clarify the relative importance of those processes, because externally-imposed pressures should generate synchronous displacements among individuals within a population, whereas intrinsic factors should generate consistency through time within each individual. External and intrinsic factors may vary in importance at different time scales. In this study we focused on daily displacement of an ambush-foraging snake from tropical Australia (the Northern Death Adder Acanthophis praelongus), based on a radiotelemetric study. We used a mixture of spectral representation and Bayesian inference to study synchrony in snake displacement by phase shift analysis. We further studied autocorrelation in fluctuations of displacement distances as "one over f noise". Displacement distances were positively autocorrelated, with all noise-colour parameters estimated to be greater than zero. We show how the methodology can reveal time scales of particular interest for synchrony, and found that for the analysed data synchrony was only present at time scales above approximately three weeks. We conclude that spectral representation combined with Bayesian inference is a promising approach for the analysis of movement data. Applying the framework to telemetry data of A. praelongus, we were able to identify a cut-off time scale above which we found support for synchrony, thus revealing the time scale at which global external drivers have a larger impact on movement behaviour. Our results suggest that for the considered study period, movement at shorter time scales was primarily driven by factors at the individual level; daily fluctuations in weather conditions had little effect on snake movement.
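
    The "one over f noise" characterization amounts to estimating a spectral exponent. A frequentist toy version is shown below; the study itself uses Bayesian inference on the spectral representation, so this simple log-log regression is only meant to convey the idea.

```python
import numpy as np

def noise_colour(x, dt=1.0):
    """Estimate beta in S(f) ~ 1/f**beta by linear regression of log power
    on log frequency; beta > 0 indicates positively autocorrelated
    (coloured) fluctuations, beta = 0 white noise."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=dt)[1:]      # drop zero frequency
    power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
    return -slope

daily_displacement = np.abs(np.random.default_rng(4).standard_normal(256))
print(noise_colour(daily_displacement))            # ~0 for this white noise
```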

  6. Downscaling modelling system for multi-scale air quality forecasting

    NASA Astrophysics Data System (ADS)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations generally considers only a small part of the urban area in a micro-meteorological model, yet urban heterogeneities outside the modelling domain affect micro-scale processes. It is therefore important to build a chain of models of different scales, with higher-resolution models nested into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models use parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. First, there is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions: for the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building-effects parameterisation. Modern methods of computational fluid dynamics allow environmental problems connected with the atmospheric transport of pollutants within the urban canopy to be solved in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. a k-ε linear eddy-viscosity model, a k-ε non-linear eddy-viscosity model and a Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with a corresponding mass-conserving interpolation. For the boundaries, a Dirichlet-type condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is replaced by the obstacle-resolved one in the nested model, the interpolation procedure increases the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, a perturbation approach can be applied: the main meteorological variables and chemical species are considered as a sum of two components, background (large-scale) values described by the coarse-resolution model and perturbation (micro-scale) features obtained from the nested fine-resolution model.

  7. Fractal and topological sustainable methods of overcoming expected uncertainty in the radiolocation of low-contrast targets and in the processing of weak multi-dimensional signals on the background of high-intensity noise: A new direction in the statistical decision theory

    NASA Astrophysics Data System (ADS)

    Potapov, A. A.

    2017-11-01

    The main purpose of this work is to interpret the main directions of radio physics, radio engineering and radiolocation in a "fractal" language, which opens new ways to and generalizations of promising future radio systems. We introduce a new kind of modern radiolocation: fractal-scaling or scale-invariant radiolocation. New topological signatures and methods for detecting low-contrast objects against a high-intensity noise background are presented. This leads to fundamental changes in the theoretical structure of radiolocation itself and in its mathematical apparatus. The fractal radio systems conception, sampling topology, the global fractal-scaling approach and the fractal paradigm underlie the scientific direction established by the author in Russia and worldwide.

  8. Metastable Distributions of Markov Chains with Rare Transitions

    NASA Astrophysics Data System (ADS)

    Freidlin, M.; Koralov, L.

    2017-06-01

    In this paper we consider Markov chains $X^\varepsilon_t$ with transition rates that depend on a small parameter $\varepsilon$. We are interested in the long-time behavior of $X^\varepsilon_t$ at various $\varepsilon$-dependent time scales $t = t(\varepsilon)$. The asymptotic behavior depends on how the point $(1/\varepsilon, t(\varepsilon))$ approaches infinity. We introduce a general notion of complete asymptotic regularity (a certain asymptotic relation between the ratios of transition rates), which ensures the existence of the metastable distribution for each initial point and a given time scale $t(\varepsilon)$. The technique of i-graphs allows one to describe the metastable distribution explicitly. The result may be viewed as a generalization of the ergodic theorem to the case of parameter-dependent Markov chains.

  9. Non-monotonicity and divergent time scale in Axelrod model dynamics

    NASA Astrophysics Data System (ADS)

    Vazquez, F.; Redner, S.

    2007-04-01

    We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume $q$ possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value $q_c$ between an active, diverse state for $q < q_c$ and a frozen state. For $q \lesssim q_c$, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as $(q - q_c)^{-1/2}$.
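
    A minimal simulation of this two-feature Axelrod model is given below, a sketch under stated assumptions: a fixed random interaction graph approximates the few-partner mean-field setting, and the update rule is the standard Axelrod one.

```python
import numpy as np

rng = np.random.default_rng(5)

def axelrod(n_agents=400, q=4, n_partners=4, steps=200_000):
    """Two-feature Axelrod model: at each step an agent copies one
    differing feature from a random partner with probability equal to
    the fraction of features they already share."""
    state = rng.integers(q, size=(n_agents, 2))
    partners = [rng.choice(np.delete(np.arange(n_agents), i),
                           n_partners, replace=False)
                for i in range(n_agents)]
    for _ in range(steps):
        i = rng.integers(n_agents)
        j = rng.choice(partners[i])
        shared = state[i] == state[j]
        if shared.any() and not shared.all() and rng.random() < shared.mean():
            f = rng.choice(np.flatnonzero(~shared))
            state[i, f] = state[j, f]
    return state

final = axelrod(q=8)
n_cultures = len(np.unique(final, axis=0))   # probe of surviving diversity
```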

  10. An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling

    NASA Astrophysics Data System (ADS)

    Bradley, C.; Massof, R. W.

    2016-11-01

    Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
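
    The ordinal flavour of the approach can be mimicked with off-the-shelf non-metric MDS. This is a rough sketch only: the paper builds its dissimilarities from triadic comparisons, whereas here they are synthesized directly, and scikit-learn's MDS options vary slightly across versions.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)

# Latent 1-D item measures and the ordinal dissimilarities a rater's
# triadic judgements ("is A more like B or like C?") would induce.
true = np.sort(rng.uniform(0.0, 10.0, 12))
diss = np.abs(true[:, None] - true[None, :])

# Non-metric MDS uses only the rank order of the dissimilarities, so it
# tolerates the absence of response variance that breaks Rasch analysis.
est = MDS(n_components=1, metric=False, dissimilarity='precomputed',
          random_state=0).fit_transform(diss).ravel()
# `est` recovers `true` up to shift, scale, and reflection;
# compare e.g. via Spearman rank correlation.
```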

  11. A stochastic model of particle dispersion in turbulent reacting gaseous environments

    NASA Astrophysics Data System (ADS)

    Sun, Guangyuan; Lignell, David; Hewson, John

    2012-11-01

    We are performing fundamental studies of dispersive transport and time-temperature histories of Lagrangian particles in turbulent reacting flows. The particle-flow statistics, including the full particle temperature PDF, are of interest. A challenge in modeling particle motion is the accurate prediction of fine-scale aerosol-fluid interactions. A computationally affordable stochastic modeling approach, one-dimensional turbulence (ODT), is a proven method that captures the full range of length and time scales and provides detailed statistics of fine-scale turbulent particle mixing and transport. Only limited results on particle transport in ODT, in non-reacting flow, have been reported. Here, we extend ODT to particle transport in reacting flow. Results for particle transport in three flow configurations are presented: channel flow, homogeneous isotropic turbulence, and jet flames. We investigate the functional dependence of the statistics of particle-flow interactions, including (1) a parametric study with varying temperatures, Reynolds numbers, and particle Stokes numbers; (2) particle temperature histories and PDFs; and (3) time scales and the sensitivity to initial and boundary conditions. Flow statistics are compared to both experimental measurements and DNS data.

  12. In Vivo Protein Dynamics on the Nanometer Length Scale and Nanosecond Time Scale

    DOE PAGES

    Anunciado, Divina B.; Nyugen, Vyncent P.; Hurst, Gregory B.; ...

    2017-04-07

    Selectively labeled GroEL protein was produced in living deuterated bacterial cells to enhance its neutron scattering signal above that of the intracellular milieu. Quasi-elastic neutron scattering shows that the in-cell diffusion coefficient of GroEL was (4.7 ± 0.3) × 10⁻¹² m²/s, a factor of 4 slower than its diffusion coefficient in buffer solution. Furthermore, for internal protein dynamics we see a relaxation time of (65 ± 6) ps, a factor of 2 slower compared to the protein in solution. Comparison to the literature suggests that the effective diffusivity of proteins depends on the length and time scale being probed. Retardation of in-cell diffusion compared to the buffer becomes more significant with increasing probe length scale, suggesting that the intracellular diffusion of biomolecules is nonuniform over the cellular volume. The approach outlined here enables the investigation of protein dynamics within living cells, opening up new lines of research using "in-cell neutron scattering" to study the dynamics of complex biomolecular systems.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated in more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in the simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  14. In Vivo Protein Dynamics on the Nanometer Length Scale and Nanosecond Time Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anunciado, Divina B.; Nyugen, Vyncent P.; Hurst, Gregory B.

    Selectively labeled GroEL protein was produced in living deuterated bacterial cells to enhance its neutron scattering signal above that of the intracellular milieu. Quasi-elastic neutron scattering shows that the in-cell diffusion coefficient of GroEL was (4.7 ± 0.3) × 10⁻¹² m²/s, a factor of 4 slower than its diffusion coefficient in buffer solution. Furthermore, for internal protein dynamics we see a relaxation time of (65 ± 6) ps, a factor of 2 slower compared to the protein in solution. Comparison to the literature suggests that the effective diffusivity of proteins depends on the length and time scale being probed. Retardation of in-cell diffusion compared to the buffer becomes more significant with increasing probe length scale, suggesting that the intracellular diffusion of biomolecules is nonuniform over the cellular volume. The approach outlined here enables the investigation of protein dynamics within living cells, opening up new lines of research using "in-cell neutron scattering" to study the dynamics of complex biomolecular systems.

  15. Joint scale-change models for recurrent events and failure time.

    PubMed

    Xu, Gongjun; Chiou, Sy Han; Huang, Chiung-Yu; Wang, Mei-Cheng; Yan, Jun

    2017-01-01

    Recurrent event data arise frequently in various fields such as biomedical sciences, public health, engineering, and social sciences. In many instances, the observation of the recurrent event process can be stopped by the occurrence of a correlated failure event, such as treatment failure and death. In this article, we propose a joint scale-change model for the recurrent event process and the failure time, where a shared frailty variable is used to model the association between the two types of outcomes. In contrast to the popular Cox-type joint modeling approaches, the regression parameters in the proposed joint scale-change model have marginal interpretations. The proposed approach is robust in the sense that no parametric assumption is imposed on the distribution of the unobserved frailty and that we do not need the strong Poisson-type assumption for the recurrent event process. We establish consistency and asymptotic normality of the proposed semiparametric estimators under suitable regularity conditions. To estimate the corresponding variances of the estimators, we develop a computationally efficient resampling-based procedure. Simulation studies and an analysis of hospitalization data from the Danish Psychiatric Central Register illustrate the performance of the proposed method.

  16. Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal

    NASA Astrophysics Data System (ADS)

    Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana

    2014-05-01

    The presence or absence of long-range correlations in the variations of indoor Radon concentrations has recently attracted considerable interest. As a radioactive gas naturally emitted from the ground in certain geological settings, understanding environmental factors controlling Radon concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposition. In this work, we re-analyze two high-resolution records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a multiplicity of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal some complex behavior with qualitatively different properties on different time-scales: white noise in the high-frequency part, indications of some long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, applying multi-resolution analysis based on the discrete wavelet transform allows separately studying contributions on different time scales and characterize their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability. Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time-scales resembling the properties of a long-range correlated process.

  17. Does remote sensing help translating local SGD investigation to large spatial scales?

    NASA Astrophysics Data System (ADS)

    Moosdorf, N.; Mallast, U.; Hennig, H.; Schubert, M.; Knoeller, K.; Neehaul, Y.

    2016-02-01

    Within the last 20 years, studies on submarine groundwater discharge (SGD) have revealed numerous processes, temporal behaviors and quantitative estimates, as well as best practices and localization methods. This plethora of information is valuable for understanding the magnitude and effects of SGD at the respective locations. Yet, since local conditions vary, translating local understanding, magnitudes and effects to a regional or global scale is not trivial. In contrast, modeling approaches (e.g. the 228Ra budget) tackling SGD on a global scale do provide quantitative global estimates but have not been related to local investigations. This gap between the two approaches, local and global, and the combination and/or translation of either one to the other, represents one of the major challenges the SGD community currently faces. But what if remote sensing could provide information to translate between the two, similar to the transfer functions used in many other disciplines, allowing extrapolation from in-situ investigated and quantified SGD (discrete information) to regional scales or beyond? Admittedly, this sketched future is ambitious, and we will certainly not be able to present a complete solution to the question raised. Nonetheless, we will show a remote-sensing-based approach that is already able to identify potential SGD sites independent of location or hydrogeological conditions. The core of the approach is multi-temporal thermal information on the water surface: SGD-influenced sites display a smaller thermal variation (thermal anomalies) than surrounding uninfluenced areas. Despite its apparent simplicity, the automated approach has helped to localize several sites that could be validated with proven in-situ methods. At the same time, it carries the risk of identifying false positives, which can only be avoided if we can 'calibrate' the thermal anomalies thus obtained against in-situ data. We will present the pros and cons of our approach, with the intention of contributing to the solution of translating SGD investigations to larger scales.

  18. Advances in Landslide Nowcasting: Evaluation of a Global and Regional Modeling Approach

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia Bach; Peters-Lidard, Christa; Adler, Robert; Hong, Yang; Kumar, Sujay; Lerner-Lam, Arthur

    2011-01-01

    The increasing availability of remotely sensed data offers a new opportunity to address landslide hazard assessment at larger spatial scales. A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that may experience landslide activity. This system combines a calculation of static landslide susceptibility with satellite-derived rainfall estimates and uses a threshold approach to generate a set of nowcasts that classify potentially hazardous areas. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale near-real-time landslide hazard assessment efforts, it requires several modifications before it can be fully realized as an operational tool. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and hazard at the regional scale. This case study calculates a regional susceptibility map using remotely sensed and in situ information and a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America. The susceptibility map is evaluated with a regional rainfall intensity-duration triggering threshold, and the results are compared with the global algorithm framework for the same event. Evaluation of this regional system suggests that this empirically based approach provides one plausible way to address some of the data and resolution issues identified in the global assessment. The presented methodology is straightforward to implement, improves upon the global approach, and allows results to be transferred between regions. The results also highlight several remaining challenges, including the empirical nature of the algorithm framework and the need for adequate information for algorithm validation. We conclude that integrating additional triggering factors such as soil moisture may help to improve algorithm accuracy. The regional algorithm scenario represents an important step forward in advancing regional and global-scale landslide hazard assessment.

  19. Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation

    PubMed Central

    Balaguer-Ballester, Emili; Clark, Nicholas R.; Coath, Martin; Krumbholz, Katrin; Denham, Susan L.

    2009-01-01

    Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing. PMID:19266015

  20. Testing optimal foraging theory in a penguin-krill system.

    PubMed

    Watanabe, Yuuki Y; Ito, Motohiro; Takahashi, Akinori

    2014-03-22

    Food is heterogeneously distributed in nature, and understanding how animals search for and exploit food patches is a fundamental challenge in ecology. The classic marginal value theorem (MVT) formulates optimal patch residence time in response to patch quality. The MVT was generally proved in controlled animal experiments; however, owing to the technical difficulties in recording foraging behaviour in the wild, it has been inadequately examined in natural predator-prey systems, especially those in the three-dimensional marine environment. Using animal-borne accelerometers and video cameras, we collected a rare dataset in which the behaviour of a marine predator (penguin) was recorded simultaneously with the capture timings of mobile, patchily distributed prey (krill). We provide qualitative support for the MVT by showing that (i) krill capture rate diminished with time in each dive, as assumed in the MVT, and (ii) dive duration (or patch residence time, controlled for dive depth) increased with short-term, dive-scale krill capture rate, but decreased with long-term, bout-scale krill capture rate, as predicted from the MVT. Our results demonstrate that a single environmental factor (i.e. patch quality) can have opposite effects on animal behaviour depending on the time scale, emphasizing the importance of multi-scale approaches in understanding complex foraging strategies.
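
    For reference, the MVT prediction being tested can be stated in one line (a textbook formulation, not reproduced from the paper): with cumulative energy gain $g(t)$ in a patch and travel time $\tau$ between patches, the optimal residence time $t^*$ maximizes the long-term gain rate.

```latex
\[
  R(t) = \frac{g(t)}{\tau + t}, \qquad
  R'(t^*) = 0 \;\Longleftrightarrow\; g'(t^*) = \frac{g(t^*)}{\tau + t^*}.
\]
% Leave the patch when the instantaneous capture rate g'(t) drops to the
% long-term average rate. A higher bout-scale (background) rate raises the
% right-hand side and shortens t^*, while a richer patch (larger g)
% lengthens it, matching the opposite effects reported above.
```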

  1. Real-time object detection and semantic segmentation for autonomous driving

    NASA Astrophysics Data System (ADS)

    Li, Baojun; Liu, Shun; Xu, Weichao; Qiu, Wei

    2018-02-01

    In this paper, we propose a Highly Coupled Network (HCNet) for joint object detection and semantic segmentation. Our method is faster and performs better than previous approaches, in which the decoder networks for the different tasks are independent. In addition, we present a multi-scale loss architecture to learn better representations for objects of different scales, at no extra cost in the inference phase. Experimental results show that our method achieves state-of-the-art results on the KITTI datasets. Moreover, it runs at 35 FPS on a GPU and is thus a practical solution to object detection and semantic segmentation for autonomous driving.

  2. Complexity analyses show two distinct types of nonlinear dynamics in short heart period variability recordings

    PubMed Central

    Porta, Alberto; Bari, Vlasta; Marchi, Andrea; De Maria, Beatrice; Cysarz, Dirk; Van Leeuwen, Peter; Takahashi, Anielle C. M.; Catai, Aparecida M.; Gnecchi-Ruscone, Tomaso

    2015-01-01

    Two diverse complexity metrics quantifying time irreversibility and local prediction, in connection with a surrogate data approach, were utilized to detect nonlinear dynamics in short heart period (HP) variability series recorded in fetuses, as a function of the gestational period, and in healthy humans, as a function of the magnitude of the orthostatic challenge. The metrics indicated the presence of two distinct types of nonlinear HP dynamics characterized by diverse ranges of time scales. These findings stress the need to render more specific the analysis of nonlinear components of HP dynamics by accounting for different temporal scales. PMID:25806002

  3. Accurate physical laws can permit new standard units: The two laws $\vec{F}=m\vec{a}$ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to $\vec{F}=m\vec{a}$ are: (1) as an exactly true definition of force $\vec{F}$ in terms of measured inertial mass $m$ and measured acceleration $\vec{a}$; (2) as an exactly true axiom relating measured values of $\vec{a}$, $\vec{F}$ and $m$; and (3) as an imperfect but accurately true physical law relating measured $\vec{a}$ to measured $\vec{F}$, with $m$ an experimentally determined, matter-dependent constant, in the spirit of the resistance $R$ in Ohm's law. In the third case, the natural units are those of $\vec{a}$ and $\vec{F}$, where $\vec{a}$ is normally specified using distance and time as standard units, and $\vec{F}$ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight $W$ is proportional to $m$—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
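
    A worked example of the unit bookkeeping may help (standard SI relations, not taken from the paper):

```latex
% With mass, length, and time as base units, force is the derived unit:
\[
  1\,\mathrm{N} = 1\,\mathrm{kg}\cdot\mathrm{m}\cdot\mathrm{s}^{-2}.
\]
% In the hourglass scenario, with force, mass, and distance as base units,
% solving a = F/m for the time unit instead gives
\[
  1\,\mathrm{s} = \sqrt{1\,\mathrm{kg}\cdot\mathrm{m}/\mathrm{N}}\,,
\]
% so the second becomes the derived unit, exactly as argued above.
```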

  4. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2014-04-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, another way out is developed: to face human agency squarely, and direct the modeling approach to the agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  5. A journey of a thousand miles begins with one small step - human agency, hydrological processes and time in socio-hydrology

    NASA Astrophysics Data System (ADS)

    Ertsen, M. W.; Murphy, J. T.; Purdue, L. E.; Zhu, T.

    2013-11-01

    When simulating social action in modeling efforts, as in socio-hydrology, an issue of obvious importance is how to ensure that social action by human agents is well-represented in the analysis and the model. Generally, human decision-making is either modeled on a yearly basis or lumped together as collective social structures. Both responses are problematic, as human decision-making is more complex and organizations are the result of human agency and cannot be used as explanatory forces. A way out of the dilemma of how to include human agency is to go to the largest societal and environmental clustering possible: society itself and climate, with time steps of years or decades. In the paper, the other way out is developed: to face human agency squarely, and direct the modeling approach to the human agency of individuals and couple this with the lowest appropriate hydrological level and time step. This approach is supported theoretically by the work of Bruno Latour, the French sociologist and philosopher. We discuss irrigation archaeology, as it is in this discipline that the issues of scale and explanatory force are well discussed. The issue is not just what scale to use: it is what scale matters. We argue that understanding the arrangements that permitted the management of irrigation over centuries requires modeling and understanding the small-scale, day-to-day operations and personal interactions upon which they were built. This effort, however, must be informed by the longer-term dynamics, as these provide the context within which human agency is acted out.

  6. Scale/Analytical Analyses of Freezing and Convective Melting with Internal Heat Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali S. Siahpush; John Crepeau; Piyush Sabharwall

    2013-07-01

    Using a scale/analytical analysis approach, we model phase change (melting) for pure materials with constant internal heat generation, for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. The analysis also considers a constant heat flux condition (in a cylindrical geometry). We show the time scales in which conduction and convection heat transfer dominate.

  7. Statistical analysis of time transfer data from Timation 2. [US Naval Observatory and Australia

    NASA Technical Reports Server (NTRS)

    Luck, J. M.; Morgan, P.

    1974-01-01

    Between July 1973 and January 1974, three time transfer experiments using the Timation 2 satellite were conducted to measure time differences between the U.S. Naval Observatory and Australia. Statistical tests showed that the results are unaffected by the satellite's position with respect to the sunrise/sunset line or by its closest approach azimuth at the Australian station. Further tests revealed that forward predictions of time scale differences, based on the measurements, can be made with high confidence.

  8. Time-Resolved Macromolecular Crystallography at Modern X-Ray Sources.

    PubMed

    Schmidt, Marius

    2017-01-01

    Time-resolved macromolecular crystallography unifies protein structure determination with chemical kinetics. With the advent of fourth-generation X-ray sources, the time resolution can be on the order of 10-40 fs, which opens the ultrafast time scale to structure determination. Fundamental motions and transitions associated with chemical reactions in proteins can now be observed. Moreover, new experimental approaches at synchrotrons allow for the straightforward investigation of all kinds of reactions in biological macromolecules. Here, recent developments in the field are reviewed.

  9. Distributed Traffic Control for Reduced Fuel Consumption and Travel Time in Transportation Networks

    DOT National Transportation Integrated Search

    2018-04-01

    Current technology in traffic control is limited to a centralized approach that has not paid appropriate attention to efficiency of fuel consumption and is subject to the scale of transportation networks. This project proposes a transformative approa...

  10. Wave Impact on a Wall: Comparison of Experiments with Similarity Solutions

    NASA Astrophysics Data System (ADS)

    Wang, A.; Duncan, J. H.; Lathrop, D. P.

    2014-11-01

    The impact of a steep water wave on a fixed partially submerged cube is studied with experiments and theory. The temporal evolution of the water surface profile upstream of the front face of the cube in its center plane is measured with a cinematic laser-induced fluorescence technique using frame rates up to 4,500 Hz. For a small range of cube positions, the surface profiles are found to form a nearly circular arc with upward curvature between the front face of the cube and a point just downstream of the wave crest. As the crest approaches the cube, the effective radius of this portion of the profile decreases rapidly. At the same time, the portion of the profile that is upstream of the crest approaches a straight line with a downward slope of about 15°. As the wave impact continues, the circular arc shrinks to zero radius with very high acceleration and a sudden transition to a high-speed vertical jet occurs. This flow singularity is modeled with a power-law scaling in time, which is used to create a time-independent system of equations of motion. The scaled governing equations are solved numerically and the similarly scaled measured free surface shapes are favorably compared with the solutions. The support of the Office of Naval Research is gratefully acknowledged.

  11. Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts

    PubMed Central

    Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk

    2014-01-01

    In this paper, we analyze a real-world OVRP problem for a production company. Considering real-world constraints, we classify our problem as a multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiproducts (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10–90 customers) considering real-world parameters. Although MIP is able to find optimal solutions for small problems (10 customers), the problem gets harder to solve as the number of customers increases, and MIP could not find optimal solutions for problems that contain more than 10 customers. Moreover, MIP fails to find any feasible solution for large-scale problems (50–90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA-based approach reaches successful solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s, for the problems that contain 10–50 customers. For large-scale problems (50–90 customers), GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, GA is preferable to MIP for reaching feasible solutions in short time periods. PMID:25045735
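
    A generic sketch of the kind of GA used for such routing problems (permutation chromosomes, order crossover, swap mutation); the operators, parameters, and the simplified open-route objective below are illustrative stand-ins, not the authors' implementation:

    ```python
    # Minimal permutation GA minimizing an open-route length (no depot return).
    import numpy as np

    rng = np.random.default_rng(9)
    pts = rng.random((30, 2)) * 100          # 30 customer locations (synthetic)

    def route_len(perm):
        return sum(np.linalg.norm(pts[perm[i]] - pts[perm[i + 1]])
                   for i in range(len(perm) - 1))   # open route: no return leg

    def order_crossover(a, b):
        i, j = sorted(rng.choice(len(a), 2, replace=False))
        child = [-1] * len(a)
        child[i:j] = a[i:j]
        fill = [g for g in b if g not in child]     # preserve b's relative order
        for k in range(len(a)):
            if child[k] == -1:
                child[k] = fill.pop(0)
        return child

    pop = [list(rng.permutation(len(pts))) for _ in range(60)]
    for gen in range(200):
        pop.sort(key=route_len)
        parents = pop[:30]                          # truncation selection
        children = [order_crossover(parents[rng.integers(30)],
                                    parents[rng.integers(30)]) for _ in range(30)]
        for c in children:                          # swap mutation
            if rng.random() < 0.3:
                i, j = rng.choice(len(c), 2, replace=False)
                c[i], c[j] = c[j], c[i]
        pop = parents + children
    print("best open-route length:", route_len(min(pop, key=route_len)))
    ```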

  12. Approaches to standardization of atmospheric pollution undergoing long-range and transboundary transport.

    PubMed

    Izrael, Y A; Nazarov, I M; Ryaboshapko, A G

    1982-12-01

    The authors consider possible ways of regulating three types of atmospheric pollutant emission: emission of substances causing pollution of the natural environment on the global scale (global pollutants); emission of substances causing pollution on a regional scale, most often covering the territories of several countries (international pollutants); and emission of substances causing negative effects in a relatively limited region, for example within the border area of two adjoining countries. The first class comprises substances (gaseous, as a rule) with long atmospheric lifetimes that can contaminate natural media on a global scale irrespective of the place of emission; these are subject to emission regulation at an international level and to the establishment of quotas for individual countries. They include carbon dioxide, freons, and krypton-85. Various approaches to determining permissible emissions and to establishing quotas are discussed in the paper. The second group includes substances of a limited, yet rather long, lifetime whose emission intensity makes a notable contribution to environmental pollution of a large region including territories of several countries. Here, what needs international regulation is not the atmospheric emission itself but pollutant transport over national boundaries (sulphur and nitrogen oxides, pesticides, heavy metals). The third group includes substances with relatively short lifetimes producing local effects. Emission regulation in such cases should be based upon bilateral agreements with due account of the countries' mutual interests.

  13. “It sounds like…”: A Natural Language Processing Approach to Detecting Counselor Reflections in Motivational Interviewing

    PubMed Central

    Can, Doğan; Marín, Rebeca A.; Georgiou, Panayiotis G.; Imel, Zac E.; Atkins, David C.; Narayanan, Shrikanth S.

    2016-01-01

    The dissemination and evaluation of evidence-based behavioral treatments for substance abuse problems rely on the evaluation of counselor interventions. In Motivational Interviewing (MI), a treatment that directs the therapist to utilize a particular linguistic style, proficiency is assessed via behavioral coding - a time-consuming, non-technological approach. Natural language processing techniques have the potential to scale up the evaluation of behavioral treatments like MI. We present a novel computational approach to assessing components of MI, focusing on one specific counselor behavior – reflections – believed to be a critical MI ingredient. Using 57 sessions from 3 MI clinical trials, we automatically detected counselor reflections in a Maximum Entropy Markov Modeling framework using the raw linguistic data derived from session transcripts. We achieved 93% recall, 90% specificity, and 73% precision. Results provide insight into the linguistic information used by coders to make ratings and demonstrate the feasibility of new computational approaches to scaling up the evaluation of behavioral treatments. PMID:26784286
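
    As a rough illustration of automated reflection detection (the study used a Maximum Entropy Markov Model over richer linguistic features), a minimal bag-of-words classifier over invented counselor utterances might look like:

    ```python
    # Simplified stand-in: logistic regression over word/bigram counts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    utterances = [
        "it sounds like you are feeling stuck",        # reflection
        "so you want things to change",                # reflection
        "what brings you in today",                    # not a reflection
        "tell me about your week",                     # not a reflection
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(utterances, labels)
    print(clf.predict(["it sounds like you have been under pressure"]))
    ```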

  14. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than those accessible to DLS. The penetrating power of X-rays enables probing dynamics with XPCS in a broad array of materials, including polymers, glasses, and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
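
    A minimal sketch of the inverse-transform idea, assuming a regularized non-negative least-squares fit of an exponential (Laplace) kernel to a synthetic correlation decay; the grids, regularization weight, and data are illustrative, not from the paper:

    ```python
    # CONTIN-style recovery of a relaxation-rate distribution via regularized NNLS.
    import numpy as np
    from scipy.optimize import nnls

    t = np.logspace(-3, 2, 120)                    # lag times (s), assumed
    # synthetic bimodal "measured" decay: relaxation times 0.05 s and 5 s
    g1 = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 5.0)

    gamma = np.logspace(-2, 3, 80)                 # trial relaxation rates (1/s)
    K = np.exp(-np.outer(t, gamma))                # kernel K[i, j] = exp(-t_i * gamma_j)

    lam = 0.1                                      # Tikhonov regularization weight
    A = np.vstack([K, lam * np.eye(len(gamma))])   # augmented rows penalize large weights
    b = np.concatenate([g1, np.zeros(len(gamma))])

    p, _ = nnls(A, b)                              # non-negative rate distribution
    for g, w in zip(gamma, p):
        if w > 0.05:                               # print the recovered modes
            print(f"rate {g:8.3f} 1/s  weight {w:.2f}")
    ```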

  15. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than those accessible to DLS. The penetrating power of X-rays enables probing dynamics with XPCS in a broad array of materials, including polymers, glasses, and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  16. The use of locally optimal trajectory management for base reaction control of robots in a microgravity environment

    NASA Technical Reports Server (NTRS)

    Lin, N. J.; Quinn, R. D.

    1991-01-01

    A locally-optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified functions are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.

  17. Large Scale Synthesis of Colloidal Si Nanocrystals and their Helium Plasma Processing into Spin-On, Carbon-Free Nanocrystalline Si Films.

    PubMed

    Mohapatra, Pratyasha; Mendivelso-Perez, Deyny; Bobbitt, Jonathan M; Shaw, Santosh; Yuan, Bin; Tian, Xinchun; Smith, Emily A; Cademartiri, Ludovico

    2018-05-30

    This paper describes a simple approach to the large-scale synthesis of colloidal Si nanocrystals and their processing by He plasma into spin-on, carbon-free nanocrystalline Si films. We further show that the reactive ion etching (RIE) rate in these films is 1.87 times faster than that of single-crystalline Si, consistent with a simple geometric argument that accounts for the nanoscale roughness caused by the nanoparticle shape.

  18. A Multi-Scale Structural Health Monitoring Approach for Damage Detection, Diagnosis and Prognosis in Aerospace Structures

    DTIC Science & Technology

    2012-01-20

    ...ultrasonic Lamb waves to plastic strain and fatigue life. Theory was developed and validated to predict second harmonic generation for specific modes... Fatigue and damage generation and progression are processes consisting of a series of interrelated events that span large scales of space and time... A set of experiments was completed that related the acoustic nonlinearity measured with Lamb waves to both plastic strain and fatigue life...

  19. An expert system-based approach to prediction of year-to-year climatic variations in the North Atlantic region

    NASA Astrophysics Data System (ADS)

    Rodionov, S. N.; Martin, J. H.

    1999-07-01

    A novel approach to climate forecasting on an interannual time scale is described. The approach is based on concepts and techniques from artificial intelligence and expert systems. The suitability of this approach to climate diagnostics and forecasting problems and its advantages compared with conventional forecasting techniques are discussed. The article highlights some practical aspects of the development of climatic expert systems (CESs) and describes an implementation of such a system for the North Atlantic (CESNA). Particular attention is paid to the content of CESNA's knowledge base and those conditions that make climatic forecasts one to several years in advance possible. A detailed evaluation of the quality of the experimental real-time forecasts made by CESNA for the winters of 1995-1996, 1996-1997 and 1997-1998 is presented.

  20. Large-Scale CTRW Analysis of Push-Pull Tracer Tests and Other Transport in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Berkowitz, B.

    2014-12-01

    Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among other things. Under advection, for example, many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with the CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique with Pareto-distributed waiting times in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type-curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization with predictive value.
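
    A minimal RP-CTRW-style sketch under the assumptions above: deterministic unit transitions between uniformly spaced planes, with all stochasticity in Pareto-distributed waiting times (parameter values invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 10000
    n_planes = 50          # number of lattice planes to traverse
    beta = 0.8             # Pareto tail exponent (0 < beta < 1 gives anomalous transport)
    t0 = 1.0               # waiting-time scale

    # waiting time per transition: t = t0 * U**(-1/beta) is Pareto with index beta
    waits = t0 * rng.random((n_particles, n_planes)) ** (-1.0 / beta)
    arrival = waits.sum(axis=1)       # first arrival time at the final plane

    print("median arrival:", np.median(arrival))
    print("mean arrival:  ", arrival.mean(), "(diverges as n -> inf for beta < 1)")
    ```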

  1. Structures and Intermittency in a Passive Scalar Model

    NASA Astrophysics Data System (ADS)

    Vergassola, M.; Mazzino, A.

    1997-09-01

    Perturbative expansions for intermittency scaling exponents in the Kraichnan passive scalar model [Phys. Rev. Lett. 72, 1016 (1994)] are investigated. A one-dimensional compressible model is considered for this purpose. High resolution Monte Carlo simulations using an Ito approach adapted to an advecting velocity field with a very short correlation time are performed and lead to clean scaling behavior for passive scalar structure functions. Perturbative predictions for the scaling exponents around the Gaussian limit of the model are derived as in the Kraichnan model. Their comparison with the simulations indicates that the scale-invariant perturbative scheme correctly captures the inertial range intermittency corrections associated with the intense localized structures observed in the dynamics.
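
    For concreteness, structure functions S_p(r) = <|θ(x+r) − θ(x)|^p> of a one-dimensional scalar series can be estimated directly; the synthetic field below is a random walk, so it shows the mechanics of the estimate rather than the model's intermittency:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    theta = np.cumsum(rng.standard_normal(2 ** 14))   # rough stand-in scalar field

    for p in (2, 4, 6):
        for r in (1, 4, 16, 64):
            # p-th order structure function at separation r
            S = np.mean(np.abs(theta[r:] - theta[:-r]) ** p)
            print(f"p={p} r={r:3d}  S_p(r)={S:10.2f}")
        print()
    ```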

  2. Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach

    NASA Astrophysics Data System (ADS)

    Comolli, Alessandro; Dentz, Marco

    2017-09-01

    We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for investigating the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and the first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and length scales may lead to very similar dispersion behaviors in terms of the spatial variance. Their mechanisms, however, are very different, which manifests in the distributions of particle positions and arrival times; these play a central role in predicting the fate of dissolved substances in heterogeneous natural and engineered porous materials. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
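
    A toy version of the coupled CTRW described above: each transition covers one heterogeneity length, and its duration is kinematically coupled to a random velocity (here lognormal, an assumption for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_particles, n_steps = 20000, 200
    ell = 0.5                                  # persistence length scale (m), assumed
    v = rng.lognormal(mean=0.0, sigma=2.0, size=(n_particles, n_steps))  # velocities

    times = (ell / v).sum(axis=1)              # coupled transition times t = ell / v
    x_total = ell * n_steps                    # purely advective, same distance for all

    # spread of arrival times at x_total reveals non-Fickian (anomalous) dispersion
    print("5th/50th/95th percentile arrival times:",
          np.percentile(times, [5, 50, 95]))
    ```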

  3. Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini

    2014-12-01

    The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing errors, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies included simulation results from the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the WVC, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
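
    A compact sketch of the proposed measure, assuming the à trous transform with the common B3-spline filter and computing a scale-wise Nash-Sutcliffe efficiency between observed and simulated detail series; the data and level count are synthetic:

    ```python
    import numpy as np

    def atrous(x, levels):
        """Return detail arrays w_1..w_levels plus the final smooth component."""
        h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline filter
        c, details = x.astype(float), []
        for j in range(levels):
            step = 2 ** j
            kernel = np.zeros(4 * step + 1)                 # filter with "holes"
            kernel[::step] = h
            c_next = np.convolve(np.pad(c, 2 * step, mode="reflect"), kernel,
                                 mode="same")[2 * step:-2 * step]
            details.append(c - c_next)                      # detail at scale 2^j
            c = c_next
        return details, c

    def nse(obs, sim):
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    t = np.arange(512)
    obs = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)
    sim = obs + 0.2 * np.random.default_rng(0).standard_normal(t.size)

    d_obs, _ = atrous(obs, 4)
    d_sim, _ = atrous(sim, 4)
    for j, (do, ds) in enumerate(zip(d_obs, d_sim), start=1):
        print(f"scale 2^{j}: multiscale NSE = {nse(do, ds):.3f}")
    ```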

  4. Accurate age estimation in small-scale societies

    PubMed Central

    Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.

    2017-01-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
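
    A simplified sketch (not the authors' code) of Gibbs sampling ages that respect both a youngest-to-oldest ranking and per-individual age ranges; the ranges below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # per-individual (low, high) age ranges, already sorted youngest to oldest
    ranges = [(10, 20), (14, 25), (18, 30), (25, 45), (30, 60)]
    n = len(ranges)
    ages = np.array([(lo + hi) / 2 for lo, hi in ranges], dtype=float)

    samples = []
    for sweep in range(5000):
        for i in range(n):
            # resample age i uniformly between its neighbours' current ages,
            # intersected with its own plausible range (keeps the rank order)
            lo = max(ranges[i][0], ages[i - 1] if i > 0 else 0.0)
            hi = min(ranges[i][1], ages[i + 1] if i < n - 1 else 120.0)
            ages[i] = rng.uniform(lo, hi)
        if sweep >= 1000:                 # discard burn-in
            samples.append(ages.copy())

    post = np.array(samples)
    for i in range(n):
        print(f"individual {i}: posterior age {post[:, i].mean():.1f} "
              f"+/- {post[:, i].std():.1f}")
    ```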

  5. Accurate age estimation in small-scale societies.

    PubMed

    Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G

    2017-08-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.

  6. A procedure for assessing future trends of subdaily precipitation values on point scale

    NASA Astrophysics Data System (ADS)

    Rianna, Guido; Villani, Veronica; Mercogliano, Paola; Vezzoli, Renata

    2015-04-01

    In many areas of Italy, urban flooding and floods in small mountain basins, induced by heavy precipitation on subdaily scales, represent remarkable hazards able to cause huge damage and casualties, often aggravated by very high population density. A proper assessment of how the frequency and magnitude of such events could change under climate change (CC) is crucial for future territorial planning (such as early warning systems). The current constraints of climate modeling, even using high-resolution RCMs, prevent an adequate representation of subdaily precipitation patterns (mainly their extreme values), while available observed datasets are often unsuitable for bias-correction (BC) techniques requiring long time series. In this work, a new procedure is proposed: at point scale, precipitation outputs on 24 and 48 hours are provided by a high-resolution (about 8 km) climate simulation performed with the RCM COSMO_CLM driven by the GCM CMCC_CM and bias-corrected by a quantile mapping approach. These are used in a monthly stochastic disaggregation approach combining a Random Parameter Bartlett-Lewis (RPBL) gamma model with an appropriate rainfall disaggregation technique. The latter implements empirical correction procedures, called adjusting procedures, to modify the model rainfall output so that it is consistent with the observed rainfall values on the daily time scale. To account for the great difficulty of minimizing the objective function required to retrieve the 7 RPBL parameters, the computations are repeated twenty times for each dataset. Moreover, adopting statistical properties on 24 and 48 hours to retrieve the RPBL parameters allows, following Bo et al. (1994), inferring statistical properties down to the hourly scale while retaining the information about possible changes in precipitation patterns due to CC. The entire simulation chain is tested on the Baiso weather station in Northern Italy; the station is representative of a basin of the Secchia River, a tributary of the Po River. For this station, hourly data are available for the 2003-2012 time span, while daily data and maximum yearly values down to the hourly scale are available since 1981. To evaluate the uncertainties of the stand-alone approach for retrieving hourly data, it is first tested using observed data for the 1981-2010 period as input; then, for the same time interval, the RPBL parameters are estimated using BC RCM precipitation data (although, as a control, the available hourly data cover only part of this span). The results show that the approach, in terms of mean and maximum values, returns satisfying results down to 6 hours, while at higher resolutions the errors become significant. Finally, to assess the possible effects of CC on subdaily precipitation patterns, the same simulation chain is used to produce hourly precipitation datasets for the thirty years 2071-2100 under concentration scenarios RCP 4.5 and RCP 8.5; the comparison with the control period shows that, in the wet season, the expected warming could reduce the mean duration of precipitation events but increase rainfall intensity, whereas in summer the strong reduction in precipitation could deeply affect hourly values as well.
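
    As a small illustration of the quantile-mapping bias correction step mentioned above (the RPBL disaggregation itself is more involved), an empirical mapping of model values onto the observed distribution might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    obs_hist = rng.gamma(shape=0.8, scale=8.0, size=3000)    # observed daily precip (mm)
    mod_hist = rng.gamma(shape=0.6, scale=12.0, size=3000)   # biased model, same period
    mod_fut = rng.gamma(shape=0.6, scale=14.0, size=3000)    # model projection

    q = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_hist, q)
    obs_q = np.quantile(obs_hist, q)

    # correct each future model value via its quantile in the historical model,
    # then read off the corresponding observed quantile
    corrected = np.interp(mod_fut, mod_q, obs_q)
    print("raw model mean:      ", mod_fut.mean())
    print("bias-corrected mean: ", corrected.mean())
    print("observed hist. mean: ", obs_hist.mean())
    ```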

  7. The role of soil weathering and hydrology in regulating chemical fluxes from catchments (Invited)

    NASA Astrophysics Data System (ADS)

    Maher, K.; Chamberlain, C. P.

    2010-12-01

    Catchment-scale chemical fluxes have been linked to a number of different parameters that describe the conditions at the Earth's surface, including runoff, temperature, rock type, vegetation, and the rate of tectonic uplift. However, many of the relationships linking chemical denudation to surface processes and conditions, while based on established theoretical principles, are largely empirical and derived solely from modern observations. Thus, an enhanced mechanistic basis for linking global solute fluxes to both surface processes and climate may improve our confidence in extrapolating modern solute fluxes to past and future conditions. One approach is to link observations from detailed soil-based studies with catchment-scale properties. For example, a number of recent studies of chemical weathering at the soil-profile scale have reinforced the importance of hydrologic processes in controlling chemical weathering rates. An analysis of data from granitic soils shows that weathering rates decrease with increasing fluid residence times and decreasing flow rates: over moderate fluid residence times, from 5 days to 10 years, transport-controlled weathering explains the orders-of-magnitude variation in weathering rates better than soil age does. However, the importance of transport-controlled weathering is difficult to discern at the catchment scale because of the range of flow rates and fluid residence times captured by a single discharge or solute flux measurement. To assess the importance of transport-controlled weathering for catchment-scale chemical fluxes, we present a model that links the chemical flux to the extent of reaction between the soil waters and the solids, or the fluid residence time. Different approaches for describing the distribution of fluid residence times within a catchment are then compared with the observed Si fluxes for a limited number of catchments. This model predicts high solute fluxes in regions with high runoff, high relief, and long flow paths, suggesting that the particular hydrologic setting of a landscape is the underlying control on chemical fluxes. As such, we reinterpret the large chemical fluxes observed in active mountain belts, like the Himalaya, as primarily controlled by the long reactive flow paths created by the steep terrain coupled with high amounts of precipitation.
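
    A back-of-the-envelope version of the transport-controlled picture: solute concentration relaxes toward equilibrium with fluid residence time, and the flux is concentration times runoff. All parameter values are assumed round numbers, not results from the study:

    ```python
    import numpy as np

    C_eq = 0.4            # equilibrium Si concentration (mmol/L), assumed
    k = 1.0 / 365.0       # effective net reaction rate (1/day), assumed
    runoff = 1.0          # specific runoff, held constant here (arbitrary units)

    for tau in [5, 50, 365, 3650]:                # fluid residence times (days)
        C = C_eq * (1.0 - np.exp(-k * tau))       # extent of reaction after time tau
        print(f"tau = {tau:5d} d: C = {C:.3f} mmol/L, flux ~ {runoff * C:.3f}")
    ```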

  8. Toward a space-time scale framework for the study of everyday life activity's adaptation to hazardous hydro-meteorological conditions: Learning from the June 15th, 2010 flash flood event in Draguignan (France)

    NASA Astrophysics Data System (ADS)

    Ruin, Isabelle; Boudevillain, Brice; Creutin, Jean-Dominique; Lutoff, Céline

    2013-04-01

    Western Mediterranean regions are favorable locations for heavy precipitating events. In recent years, many of them resulted in destructive flash floods with extensive damage and loss of life: Nîmes 1988, Vaison-la-Romaine 1992, Aude 1999 and Gard 2002 and 2005. Because of the suddenness of the rise in water levels and the limited forecasting predictability, flash floods often surprise people in the midst of their daily activity and force them to react in a very limited amount of time. In such fast-evolving events, impacts depend not just on compositional variables such as the magnitude of the flood event and the vulnerability of those affected, but also on contextual factors such as location and timing (night, rush hours, working hours...). Those contextual factors can alter the scale and social distribution of impacts and of vulnerability to them. In the case of flooding fatalities, for instance, the elderly are often said to be the most vulnerable, but when fatalities are mapped against basin size and response time, it has been shown that in fact young adults are most likely to be killed in flash flooding of small catchments, whereas the elderly are the most frequent victims of large-scale fluvial flooding. Further investigations in the Gard region have shown that this tendency could be explained by a difference of attitude across ages with respect to mobility related to daily life routines and constraints. According to a survey of intended behavior, professionals appear to be less prone to adapting their daily activities and mobility to rapidly changing environmental conditions than non-professionals. Nevertheless, even if this appears as a tendency in both the analysis of limited data on death circumstances and in intended-behavior surveys, behavioral verification is very much needed. Understanding how many and why people decide to travel in hazardous weather conditions, and how they adapt (or not) their activities and schedules in response to environmental perturbations, requires an integrated approach, sensitive to the spatial and temporal dynamics of geophysical hazards and responses to them. Such integrated approaches to the Coupled Human and Natural System have been more common in the environmental change arena than in risk studies. Nevertheless, examining interactions between routine activity-travel patterns and hydro-meteorological dynamics in the context of a flash flood event resulted in a space-time scale approach that brought new insights to vulnerability and risk studies. This scaling approach requires suitable data sets including information about the meteorological and local flooding dynamics, the perception of environmental cues, the changes in individuals' activity-travel patterns, and the social interactions at the place and time where the actions were performed. Even if these types of data are commonly collected in various disciplinary research contexts, they are seldom collected all together and in the context of post-disaster studies. This paper describes the methodological developments of our approach and applies our data collection method to the case of the June 15th, 2010 flash flood in the Draguignan area (Var, France). This flash flood event offers a typical example for studying the relation between flood dynamics and social response in the context of a sudden degradation of the environment.

  9. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches relies on remote sensing of time-variable gravity with NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water, and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices among these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing fairer use in management applications and better integration of GRACE-based measurements with observations from other sources.

  10. Molecular Imaging of Kerogen and Minerals in Shale Rocks across Micro- and Nano- Scales

    NASA Astrophysics Data System (ADS)

    Hao, Z.; Bechtel, H.; Sannibale, F.; Kneafsey, T. J.; Gilbert, B.; Nico, P. S.

    2016-12-01

    Fourier transform infrared (FTIR) spectroscopy is a reliable and non-destructive quantitative method to evaluate the mineralogy and kerogen content/maturity of shale rocks, although it is traditionally difficult to assess organic and mineralogical heterogeneity at micrometer and nanometer scales due to the diffraction limit of infrared light. However, it is precisely at these scales that the kerogen and mineral content of shale rocks, and their formation, determine the quality of a shale gas reserve, the gas flow mechanisms, and the gas production. Therefore, it is necessary to develop new approaches that can image across both micro- and nano-scales. In this presentation, we describe two new molecular imaging approaches that obtain kerogen and mineral information in shale rocks at unprecedented spatial resolution, and a cross-scale quantitative multivariate analysis method to provide rapid geochemical characterization of large samples. The two imaging approaches are enhanced in the near field by a Ge hemisphere (GE) and by a metallic scanning probe (SINS), respectively. The GE method is a modified microscopic attenuated total reflectance (ATR) method which rapidly captures a chemical image of the shale rock surface at 1 to 5 micrometer resolution with a large field of view of 600 x 600 micrometers, while SINS probes the surface at 20 nm resolution, providing a chemically "deconvoluted" map at the nano-pore level. The detailed geochemical distribution at the nanoscale is then used to build a machine learning model that generates a self-calibrated chemical distribution map at the micrometer scale with the GE images as input. A number of geochemical contents across these two important scales are observed and analyzed, including minerals (oxides, carbonates, sulphides), organics (carbohydrates, aromatics), and adsorbed gases. These approaches are self-calibrated, optics-friendly, and non-destructive, so they hold the potential to monitor shale gas flow in real time inside the micro- or nano-pore network, which is of great interest for optimizing shale gas extraction.

  11. Acoustic Reflex and House-Brackmann Rating Scale as Prognostic Indicators of Peripheral Facial Palsy in Neuroborreliosis.

    PubMed

    Sekelj, Alen; Đanić, Davorin

    2017-09-01

    Lyme borreliosis is a vector-borne infectious disease characterized by three disease stages. In areas endemic for borreliosis, every acute facial palsy indicates serologic testing and implies a specific approach to the disease. The aim of the study was to identify and confirm the value of the acoustic reflex and the House-Brackmann (HB) grading scale as prognostic indicators of facial palsy in neuroborreliosis. The study included 176 patients with acute facial palsy divided into three groups based on serologic testing: borreliosis, Bell's palsy, and facial palsy caused by herpes simplex virus type 1 (HSV-1). Study patients underwent baseline audiometry with tympanometry and acoustic reflex, while the current state of facial palsy was assessed by the HB scale. Subsequently, the same tests were obtained on three occasions, i.e. in weeks 3, 6 and 12 after presentation. The patients diagnosed with borreliosis, Bell's palsy and HSV-1 differed according to the time to acoustic reflex recovery, which took longest in patients with borreliosis. These patients had the highest percentage of suprastapedial lesions at all time points, and recovery was achieved later as compared with the other two diagnoses. The mean score on the HB scale declined with time, also at a slower rate in borreliosis patients. The prognosis of acoustic reflex and facial palsy recovery according to the HB scale was not associated with the length of elapsed time. The results obtained in the present study strongly confirm the role of the acoustic reflex and the HB grading scale as prognostic indicators of facial palsy in neuroborreliosis.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan

    This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success for the first time, although preliminary, in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials; this is the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In both cases much more remains to be accomplished.

  13. General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae)

    PubMed Central

    Camacho, Ernesto Robayo; Chong, Juang-Horng

    2015-01-01

    We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance. PMID:26823990
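
    A minimal sketch of the degree-day timing approach mentioned above; the base temperature, threshold, and weather data are placeholders rather than published values for any particular soft scale species:

    ```python
    def degree_days(tmin_c, tmax_c, base_c=10.0):
        """Simple average method for one day's degree-day accumulation."""
        return max(0.0, (tmin_c + tmax_c) / 2.0 - base_c)

    base, threshold = 10.0, 300.0        # assumed base temp (C) and DD threshold
    daily = [(8, 18), (10, 22), (12, 25), (14, 27), (15, 30)] * 20  # (tmin, tmax)

    total = 0.0
    for day, (tmin, tmax) in enumerate(daily, start=1):
        total += degree_days(tmin, tmax, base)
        if total >= threshold:           # crawler emergence threshold reached
            print(f"begin sampling for crawlers around day {day} "
                  f"({total:.0f} DD accumulated)")
            break
    ```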

  14. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
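
    For reference, the structural cross-gradient quantity that SCG constraints drive toward zero can be computed directly for two co-located 2-D model grids; the grids below are synthetic:

    ```python
    # Cross-gradient: out-of-plane component of grad(m1) x grad(m2).
    import numpy as np

    rng = np.random.default_rng(2)
    z, x = np.meshgrid(np.arange(40), np.arange(60), indexing="ij")
    m1 = np.exp(-((x - 30) ** 2 + (z - 20) ** 2) / 100.0)        # shared structure
    m2 = 2.0 * m1 + 0.01 * rng.standard_normal(m1.shape)         # structurally similar

    dm1_dz, dm1_dx = np.gradient(m1)
    dm2_dz, dm2_dx = np.gradient(m2)
    tau = dm1_dx * dm2_dz - dm1_dz * dm2_dx      # cross-gradient field

    # near-zero tau means aligned (or absent) gradients, i.e. structural similarity
    print("mean |cross-gradient|:", np.abs(tau).mean())
    ```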

  15. Structure from Motion vs. the Kinect: Comparisons of River Field Measurements at the 10^-2 to 10^2 Meter Scales

    NASA Astrophysics Data System (ADS)

    Fonstad, M. A.; Dietrich, J. T.

    2014-12-01

    At the very smallest spatial scales of fluvial field analysis, measurements made historically in situ are now often supplemented, or even replaced, by remote sensing methods. This is particularly true for topographic and particle size measurement. In the field, the scales of in situ observation usually range from millimeters up to hundreds of meters. Two recent approaches for remote mapping of river environments at the scales of historical in situ observations are (1) camera-based structure from motion (SfM), and (2) active patterned-light measurement with devices such as the Kinect. Even if only carried by hand, these two approaches can produce topographic datasets over three to four orders of magnitude of spatial scale. Which approach is most useful? Previous studies have demonstrated that both SfM and the Kinect are precise and accurate over in situ field measurement scales; we instead turn to alternate comparative metrics to help determine which tools might be best for our river measurement tasks. These metrics include (1) the ease of field use, (2) which general environments are or are not amenable to measurement, (3) robustness to changing environmental conditions, (4) ease of data processing, and (5) cost. We test these metrics in a variety of bar-scale fluvial field environments, including a large-river cobble bar, a sand-bedded river point bar, and a complex mountain stream bar. The structure from motion approach is inexpensive in field equipment, is viable over a wide range of environmental conditions, and is highly spatially scalable; it does, however, require some type of spatial referencing to make the data useful. The Kinect has the advantages of an almost real-time display of collected data (so problems can be detected quickly), of being fast and easy to use, and of collecting data in arbitrary but metric coordinates, so absolute referencing is not needed for many problems. Its disadvantages are that its light field is generally unable to penetrate water surfaces, that it becomes unusable in strong sunlight, and that it provides so much data as to be sometimes unwieldy in the data processing stage.

  16. A Fully Automated Approach to Spike Sorting.

    PubMed

    Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F

    2017-09-13

    Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible.
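
    As a deliberately simple stand-in for the clustering step (the paper's algorithm and quality metrics are more sophisticated), PCA features of synthetic spike waveforms can be grouped with k-means:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 32)
    templates = [np.sin(2 * np.pi * t) * a for a in (1.0, 2.5)]   # two fake units
    spikes = np.vstack([tpl + 0.2 * rng.standard_normal((200, 32))
                        for tpl in templates])                    # noisy waveforms

    features = PCA(n_components=3).fit_transform(spikes)          # reduce dimension
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print("cluster sizes:", np.bincount(labels))
    ```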

  17. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (the complicating constraints) that makes the solution of the model complex and time consuming is left out of step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results are compared with solutions determined by solving the complete model directly in one single step. In all examples, the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful where computation time is a critical factor for obtaining an optimized solution in due time.
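
    A toy illustration of the two-step idea, assuming an invented objective and a single complicating nonlinear constraint: step one solves the simpler model, and its solution warm-starts the complete model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    obj = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2

    # Step 1: simpler model -- bounds only, no nonlinear constraint
    step1 = minimize(obj, x0=np.zeros(2), bounds=[(0, 10), (0, 10)])

    # Step 2: add the complicating constraint x0*x1 <= 4, warm-started from step 1
    con = {"type": "ineq", "fun": lambda x: 4.0 - x[0] * x[1]}
    step2 = minimize(obj, x0=step1.x, bounds=[(0, 10), (0, 10)],
                     constraints=[con], method="SLSQP")
    print("step 1 solution:", step1.x, "-> step 2 solution:", step2.x)
    ```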

  18. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes the DSP architecture into account to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
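
    A small sketch of the quantization at the heart of floating-to-fixed-point conversion: values are stored in a signed Qm.n format, trading range against precision. The formats chosen below are illustrative:

    ```python
    def to_fixed(x, n_frac, total_bits=16):
        """Quantize x to a signed fixed-point integer with n_frac fractional bits."""
        scaled = round(x * (1 << n_frac))
        lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
        return max(lo, min(hi, scaled))           # saturate on overflow

    def to_float(q, n_frac):
        return q / (1 << n_frac)

    x = 3.14159
    for n_frac in (4, 8, 13):                     # Q11.4, Q7.8, Q2.13 in 16 bits
        q = to_fixed(x, n_frac)
        print(f"n_frac={n_frac:2d}: stored {q:6d}, "
              f"recovered {to_float(q, n_frac):.5f}")
    ```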

  19. Biogeochemistry from Gliders at the Hawaii Ocean Time-series

    NASA Astrophysics Data System (ADS)

    Nicholson, D. P.; Barone, B.; Karl, D. M.

    2016-02-01

    At the Hawaii Ocean Time-series (HOT), autonomous underwater gliders equipped with biogeochemical sensors observe the ocean for months at a time, sampling spatiotemporal scales missed by the ship-based programs. Over the last decade, glider data augmented by a foundation of time-series observations have shed light on biogeochemical dynamics occurring spatially at meso- and submesoscales and temporally on scales from diel to annual. We present insights gained from the synergy between glider observations, time-series measurements, and remote sensing in the subtropical North Pacific. We focus on diel variability observed in dissolved oxygen and bio-optics and on approaches to autonomously quantify net community production and gross primary production (GPP), as developed during the 2012 Hawaii Ocean Experiment - DYnamics of Light And Nutrients (HOE-DYLAN). Glider-based GPP measurements were extended to explore the relationship between GPP and mesoscale context over multiple years of Seaglider deployments.
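
    A schematic of the diel-oxygen bookkeeping alluded to above: respiration is estimated from the night-time oxygen decline and added back to the observed daytime change to approximate GPP. The series and mixed-layer treatment are deliberately simplified:

    ```python
    import numpy as np

    hours = np.arange(24)
    o2 = 210 + 2.0 * np.sin((hours - 6) / 24.0 * 2 * np.pi)  # mmol/m^3, fake diel cycle

    night = hours >= 18                           # use the evening decline only
    resp_rate = -np.diff(o2[night]).mean()        # O2 loss per hour -> respiration
    daytime_change = o2[18] - o2[6]               # net O2 gain over the lit period

    R = resp_rate * 24.0                          # daily community respiration
    GPP = daytime_change + resp_rate * 12.0       # add back daytime respiration
    print(f"R ~ {R:.2f}, GPP ~ {GPP:.2f}, NCP ~ {GPP - R:.2f} mmol O2 m^-3 d^-1")
    ```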

  20. Online Time Series Clustering for Demand Response: A Theory to Break the 'Curse of Dimensionality'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Ranjan; Chelmis, Charalampos; Aman, Saima

    The advent of smart meters and advanced communication infrastructures catalyzes numerous smart grid applications such as dynamic demand response, and paves the way to solve challenging research problems in sustainable energy consumption. The space of solution possibilities is restricted primarily by the huge amount of generated data requiring considerable computational resources and efficient algorithms. To overcome this Big Data challenge, data clustering techniques have been proposed. Current approaches however do not scale in the face of the “increasing dimensionality” problem, where a cluster point is represented by the entire customer consumption time series. To overcome this, we first rethink the way cluster points are created and designed, and then design an efficient online clustering technique for demand response (DR) in order to analyze high-volume, high-dimensional energy consumption time series data at scale, and on the fly. Our online algorithm is randomized in nature, and provides optimal performance guarantees in a computationally efficient manner. Unlike prior work, we (i) study the consumption properties of the whole population simultaneously rather than developing individual models for each customer separately, claiming it to be a ‘killer’ approach that breaks the “curse of dimensionality” in online time series clustering, and (ii) provide tight performance guarantees in theory to validate our approach. Our insights are driven by the field of sociology, where collective behavior often emerges as the result of individual patterns and lifestyles.
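
    A bare-bones online clustering sketch in this spirit: each arriving daily load profile updates its nearest centroid with a running-mean step, so the whole population is clustered on the fly. Features and parameters are placeholders, not the authors' algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    k, dim = 3, 24                                  # e.g., 24-hour daily load shapes
    centroids = rng.standard_normal((k, dim))
    counts = np.zeros(k)

    def observe(x):
        """Assign x to its nearest centroid and nudge that centroid toward x."""
        j = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
        return j

    for _ in range(10000):                          # simulated stream of profiles
        base = rng.integers(k)
        profile = np.roll(np.sin(np.linspace(0, 2 * np.pi, dim)), base * 8)
        observe(profile + 0.1 * rng.standard_normal(dim))

    print("cluster sizes:", counts.astype(int))
    ```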
