Sample records for integral time scale

  1. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, such as a bridge crane under seismic loading, multiple time scales coexist in the same problem, and multi-time-scale methods are appropriate. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. We also present a new explicit time integrator for contact/impact problems in which the contact constraints are enforced with a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce much lower-frequency phenomena and to optimize the CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: it generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. The two-time-scale explicit-implicit HATI is then applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.
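
The fine-explicit/coarse-implicit division of labor described above can be illustrated with a deliberately simple two-rate sketch (a generic toy, not the paper's HATI algorithm, and with invented parameters): a fast oscillator is advanced with many small explicit substeps per coarse step, while a stiff slow component takes one unconditionally stable implicit step.

```python
# Toy two-rate integration in the spirit of a heterogeneous asynchronous
# scheme (NOT the paper's HATI algorithm; all parameters are invented):
# the fast subsystem takes m small explicit (symplectic Euler) substeps
# per coarse step, while the slow, stiff subsystem takes one implicit
# (backward Euler) step of size dt.
def two_rate_step(y_fast, y_slow, dt, m, w_fast=50.0, lam_slow=1000.0):
    x, v = y_fast                    # fast harmonic oscillator
    h = dt / m
    for _ in range(m):               # explicit fine-scale substeps
        v -= h * w_fast**2 * x
        x += h * v
    # stiff decay y' = -lam*y: one unconditionally stable implicit step
    y_slow = y_slow / (1.0 + dt * lam_slow)
    return (x, v), y_slow

yf, ys = (1.0, 0.0), 1.0
for _ in range(100):
    yf, ys = two_rate_step(yf, ys, dt=1e-2, m=20)
# the implicit part decays stably even though dt >> 1/lam_slow, and the
# explicit part stays bounded because h resolves the fast oscillation
print(ys < 1e-3, abs(yf[0]) <= 1.5)
```

The coarse step dt is chosen far above the stability limit of the stiff component, which is the whole point of treating that part implicitly.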

  2. Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, Debojyoti; Constantinescu, Emil M.

    2016-06-23

    Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
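
A first-order stand-in for the partitioned integration described above (the paper uses high-order additive Runge-Kutta methods; this sketch uses IMEX Euler on a scalar model problem with hypothetical coefficients) shows how the stiff "acoustic" term can be treated implicitly while the "advective" term remains explicit, so the time step is set by the slow scale:

```python
# Minimal IMEX (implicit-explicit) Euler sketch of flux partitioning:
# dy/dt = -a_adv*y (slow, treated explicitly) - c_acoustic*y (fast,
# treated implicitly). Coefficients are invented for illustration.
def imex_euler(y0, t_end, dt, a_adv=1.0, c_acoustic=1e4):
    y, t = y0, 0.0
    while t < t_end - 1e-15:
        # explicit update of the slow term, implicit solve for the fast term
        y = (y - dt * a_adv * y) / (1.0 + dt * c_acoustic)
        t += dt
    return y

# dt is set by the advective scale, far above the acoustic stability limit
y = imex_euler(1.0, t_end=0.1, dt=1e-2)
print(0.0 < y < 1e-5)
```

An explicit method on the same problem would require dt on the order of 1/c_acoustic; here the fast mode is damped stably at the advective step size.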

  3. Exploring the History of Time in an Integrated System: the Ramifications for Water

    NASA Astrophysics Data System (ADS)

    Green, M. B.; Adams, L. E.; Allen, T. L.; Arrigo, J. S.; Bain, D. J.; Bray, E. N.; Duncan, J. M.; Hermans, C. M.; Pastore, C.; Schlosser, C. A.; Vorosmarty, C. J.; Witherell, B. B.; Wollheim, W. M.; Wreschnig, A. J.

    2009-12-01

    Characteristic time scales are useful and simple descriptors of geophysical and socio-economic system dynamics. Focusing on the integrative nature of the hydrologic cycle, new insights into system couplings can be gained by compiling characteristic time scales of the important processes driving these systems. There are many examples of changing characteristic time scales. Human life expectancy has increased over the recent history of medical advancement. The transport time of goods has decreased with the progression from horse to rail to car to plane. The transport time of information changed with the progression from letter to telegraph to telephone to networked computing. Soil residence time (pedogenesis to estuary deposition) has been influenced by changing agricultural technology, urbanization, and forest practices. Surface water residence times have varied as beaver dams have disappeared and been replaced with modern reservoirs, flood control works, and channelization. These dynamics raise the question of how such time scales interact with each other to form integrated Earth system dynamics. Here we explore the coupling of geophysical and socio-economic systems in the northeast United States over the period 1600 to 2010 by examining characteristic time scales. This visualization of many time scales serves as an exploratory analysis, producing new hypotheses about how the integrated system dynamics have evolved over the last 400 years. Specifically, exponential population growth, and the evolving strategies to maintain that population, appear fundamental to many of the time scales.

  4. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  5. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux (FIF) model has been extensively used to model intermittent observables, such as the velocity field, by defining them via a fractional integration of a conservative (i.e., strictly scale-invariant) flux, such as the turbulent energy flux. It corresponds to a well-defined model that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9 = 2.55... instead of the classically hypothesised 2D and 3D turbulent regimes for large and small spatial scales, respectively. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, and therefore much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; for example, it corresponds rather to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition opens the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge University Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.

  6. Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems

    DOEpatents

    Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.

    2002-01-01

    An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
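
The signal-averaging idea in the patent abstract can be sketched as follows (a hypothetical minimal illustration, not the patented algorithm; the window, noise level, and weight are invented): while an axle is on the scale, the noisy load-cell output is averaged over the on-scale window to estimate the axle weight.

```python
import numpy as np

# Hypothetical sketch of the signal-averaging technique: the load-cell
# output fluctuates around the axle weight while the axle is on the
# scale; averaging over the on-scale window recovers the weight.
def axle_weight(signal, on_scale_mask):
    return signal[on_scale_mask].mean()

rng = np.random.default_rng(0)
true_weight = 5000.0                      # kg, invented axle load
t = np.linspace(0.0, 1.0, 1000)           # 1 s of samples
on_scale = (t > 0.2) & (t < 0.8)          # axle present in this window
signal = np.where(on_scale, true_weight, 0.0) \
         + 50.0 * rng.standard_normal(t.size)   # measurement noise
w = axle_weight(signal, on_scale)
print(abs(w - true_weight) < 50.0)
```

In the patent's setting, the on-scale window itself is inferred from peripheral switches and axle-spacing/velocity measurements rather than assumed, as it is here.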

  7. HMC algorithm with multiple time scale integration and mass preconditioning

    NASA Astrophysics Data System (ADS)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
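
Multiple-time-scale integration in HMC rests on a nested leapfrog: the cheap, fast force is integrated with several substeps inside each step of the expensive, slow force. A generic sketch on a split harmonic force (not the paper's lattice-QCD setup; the force split and step sizes are invented) is:

```python
# Generic nested (multiple-time-scale) leapfrog: the slow force gets one
# half-kick on each side of n_sub velocity-Verlet substeps of the fast
# force. In HMC the slow force would be the expensive fermion force and
# the fast force the cheap gauge/preconditioner force.
def nested_leapfrog(q, p, dt, n_steps, n_sub, f_slow, f_fast):
    for _ in range(n_steps):
        p += 0.5 * dt * f_slow(q)          # slow half-kick
        h = dt / n_sub
        for _ in range(n_sub):             # fast inner leapfrog
            p += 0.5 * h * f_fast(q)
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)          # slow half-kick
    return q, p

# harmonic test: split the spring force into a stiff and a soft part
f_fast = lambda q: -100.0 * q
f_slow = lambda q: -1.0 * q
q, p = nested_leapfrog(1.0, 0.0, dt=0.05, n_steps=200, n_sub=10,
                       f_slow=f_slow, f_fast=f_fast)
# energy of the combined oscillator (omega^2 = 101) stays near its start,
# as expected for a symplectic integrator
E = 0.5 * p**2 + 0.5 * 101.0 * q**2
print(abs(E - 0.5 * 101.0) < 1.0)
```

The payoff is that the expensive force is evaluated only once per coarse step while the step-size restriction of the stiff part applies only to the cheap inner force.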

  8. Lagrangian velocity and acceleration correlations of large inertial particles in a closed turbulent flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machicoane, Nathanaël; Volk, Romain

    We investigate the response of large inertial particles to turbulent fluctuations in an inhomogeneous and anisotropic flow. We conduct a Lagrangian study using particles both heavier and lighter than the surrounding fluid, whose diameters are comparable to the flow integral scale. Both velocity and acceleration correlation functions are analyzed to compute the Lagrangian integral time and the acceleration time scale of such particles. Knowledge of how size and density affect these time scales is crucial for understanding particle dynamics and may permit modeling with two-time stochastic processes (for instance, Sawford's). As particles are tracked over long times in the quasi-totality of a closed flow, the mean flow influences their behaviour and also biases the velocity time statistics, in particular the velocity correlation functions. By using a method that allows for the computation of turbulent velocity trajectories, we can obtain unbiased Lagrangian integral times. This is particularly useful for accessing the scale separation for such particles and for comparing it to the case of fluid particles in a similar configuration.
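
A common way to estimate a Lagrangian integral time, consistent with the correlation-function analysis described above (the estimator below is a generic textbook one, not necessarily the authors'), is to integrate the normalized velocity autocorrelation function up to its first zero crossing. A synthetic Ornstein-Uhlenbeck velocity signal with known correlation time serves as test data:

```python
import numpy as np

# Generic integral-time estimator: integrate the normalized velocity
# autocorrelation function (ACF) up to its first zero crossing.
def integral_time(v, dt, max_lag):
    v = v - v.mean()
    n = v.size
    acf = np.array([np.dot(v[:n - k], v[k:]) / ((n - k) * v.var())
                    for k in range(max_lag)])
    cut = np.argmax(acf <= 0) if np.any(acf <= 0) else acf.size
    return acf[:cut].sum() * dt            # rectangle-rule integration

# synthetic Ornstein-Uhlenbeck velocity with correlation time T = 1.0
rng = np.random.default_rng(1)
T, dt, n = 1.0, 0.01, 200_000
xi = rng.standard_normal(n)
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):                      # dv = -v/T dt + sqrt(2/T) dW
    v[i] = v[i - 1] * (1 - dt / T) + np.sqrt(2 * dt / T) * xi[i]
tau = integral_time(v, dt, max_lag=1000)
print(abs(tau - T) < 0.3)
```

For an exponentially correlated signal the estimator recovers the imposed correlation time; the paper's contribution is precisely the removal of mean-flow bias from such correlation functions, which this toy signal does not contain.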

  9. An examination of the psychometric properties of the community integration questionnaire (CIQ) in spinal cord injury.

    PubMed

    Kratz, Anna L; Chadd, Edmund; Jensen, Mark P; Kehn, Matthew; Kroll, Thilo

    2015-07-01

    To examine the psychometric properties of the Community Integration Questionnaire (CIQ) in large samples of individuals with spinal cord injury (SCI). Longitudinal 12-month survey study. Nation-wide, community dwelling. Adults with SCI: 627 at Time 1, 494 at Time 2. Not applicable. The CIQ is a 15-item measure developed to measure three domains of community integration in individuals with traumatic brain injury: home integration, social integration, and productive activity. SCI consumer input suggested the need for two additional items assessing socializing at home and internet/email activity. Exploratory factor analyses at Time 1 indicated three factors. Time 2 confirmatory factor analysis did not show a good fit of the 3-factor model. CIQ scores were normally distributed and only the Productive subscale demonstrated problems with high (25%) ceiling effects. Internal reliability was acceptable for the Total and Home scales, but low for the Social and Productive activity scales. Validity of the CIQ is suggested by significant differences by sex, age, and wheelchair use. The factor structure of the CIQ was not stable over time. The CIQ may be most useful for assessing home integration, as this is the subscale with the most scale stability and internal reliability. The CIQ may be improved for use in SCI by including items that reflect higher levels of productive functioning, integration across the life span, and home- and internet-based social functioning.

  10. The theory of n-scales

    NASA Astrophysics Data System (ADS)

    Dündar, Furkan Semih

    2018-01-01

    We provide a theory of n-scales, previously called n-dimensional time scales. In previous approaches to the theory of time scales, multi-dimensional scales were taken as the product space of two time scales [1, 2]. n-scales make the mathematical structure more flexible and appropriate to real-world applications in physics and related fields. Here we define an n-scale as an arbitrary closed subset of ℝⁿ. Modified forward and backward jump operators, Δ-derivatives and Δ-integrals on n-scales are defined.
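
For orientation, the basic objects of ordinary one-dimensional time-scale calculus that the paper generalizes to ℝⁿ can be sketched for a finite scale (a minimal illustration; the paper's modified operators on n-scales are not reproduced here):

```python
# One-dimensional time-scale calculus on a finite scale T: the forward
# jump operator sigma and the Delta-derivative at a right-scattered
# point. (The paper generalizes these to closed subsets of R^n.)
def sigma(T, t):
    """Forward jump operator: the next point of the scale after t."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def delta_derivative(f, T, t):
    """Delta-derivative at a right-scattered point t of scale T."""
    s = sigma(T, t)
    if s == t:
        raise ValueError("t is right-dense or the maximum of T")
    return (f(s) - f(t)) / (s - t)

T = [0.0, 0.5, 1.5, 2.0]            # a closed subset of R (finite scale)
f = lambda t: t**2
# on a scale, the Delta-derivative of t^2 is t + sigma(t), not 2t
print(delta_derivative(f, T, 0.5))  # (1.5^2 - 0.5^2)/1.0 = 2.0
```

The example shows the characteristic feature of the calculus: derivatives on a scale interpolate between difference quotients (on isolated points) and ordinary derivatives (on dense parts).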

  11. Thermalization and light cones in a model with weak integrability breaking

    DOE PAGES

    Bertini, Bruno; Essler, Fabian H. L.; Groha, Stefan; ...

    2016-12-09

    Here, we employ equation-of-motion techniques to study the nonequilibrium dynamics in a lattice model of weakly interacting spinless fermions. Our model provides a simple setting for analyzing the effects of weak integrability-breaking perturbations on the time evolution after a quantum quench. We establish the accuracy of the method by comparing results at short and intermediate times to time-dependent density matrix renormalization group computations. For sufficiently weak integrability-breaking interactions we always observe prethermalization plateaus, where local observables relax to nonthermal values at intermediate time scales. At later times a crossover towards thermal behavior sets in. We determine the associated time scale, which depends on the initial state, the band structure of the noninteracting theory, and the strength of the integrability-breaking perturbation. Our method allows us to analyze in some detail the spreading of correlations and in particular the structure of the associated light cones in our model. We find that the interior and exterior of the light cone are separated by an intermediate region, the temporal width of which appears to scale with a universal power law t^(1/3).

  12. A full-Bayesian approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod experimental site

    USGS Publications Warehouse

    Woodbury, Allan D.; Rubin, Yoram

    2000-01-01

    A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented, based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full-Bayesian hydrogeological parameter inference. In the full-Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate expected values for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct field determinations, the importance of the findings is in the reduction of the confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel distance grows. We also test the hypothesis that there is no dependence of the integral scale λ on the scale of the experiment at Cape Cod. We adopt two strategies. The first is to use subsets of the full data set and to see whether the resulting parameter fits differ as we use data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.

  13. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach, which uses volumetric radiative properties in the equivalent participating medium, and to the direct discrete-scale approach, which employs the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in good agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of struts ranging from very absorbing to very reflective and from very rough to very smooth. This new approach reduces computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent features and, at the same time, the equivalent features in a single integrated simulation. This new approach thus promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).
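
The Monte Carlo ray-tracing procedure mentioned above can be illustrated in a far simpler setting than the foam geometry (a purely absorbing homogeneous slab with invented parameters): rays travel an exponentially distributed free path, and the absorbed fraction is estimated by counting rays absorbed within the slab.

```python
import numpy as np

# Toy Monte Carlo estimate of the absorbed fraction in a purely
# absorbing homogeneous slab (much simpler than the paper's foam
# geometry; kappa and thickness are invented): each ray enters normally
# and is absorbed after an exponentially distributed free path.
def absorbed_fraction(kappa, thickness, n_rays, seed=7):
    rng = np.random.default_rng(seed)
    path = rng.exponential(1.0 / kappa, size=n_rays)
    return np.mean(path < thickness)

# the analytic answer for this slab is 1 - exp(-kappa * L)
est = absorbed_fraction(kappa=2.0, thickness=1.0, n_rays=200_000)
print(abs(est - (1.0 - np.exp(-2.0))) < 0.01)
```

The real computation replaces the analytic free path with ray-surface intersections against the strut geometry (or sampled volumetric properties in the scale-coupled zone), but the sampling logic is the same.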

  14. Real-time adaptive ramp metering : phase I, MILOS proof of concept (multi-objective, integrated, large-scale, optimized system).

    DOT National Transportation Integrated Search

    2006-12-01

    Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...

  15. Random cascade model in the limit of infinite integral scale as the exponential of a nonstationary 1/f noise: Application to volatility fluctuations in stock markets

    NASA Astrophysics Data System (ADS)

    Muzy, Jean-François; Baïle, Rachel; Bacry, Emmanuel

    2013-04-01

    In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a nonstationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to those of continuous cascade models. The latter models are indeed well known to faithfully reproduce the main stylized facts of financial time series. However, they involve a large-scale parameter (the so-called "integral scale", where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general strongly correlated with the overall length of the sample. This feature is precisely predicted by our model, which, as illustrated by various examples from daily stock index data, quantitatively reproduces the empirical observations.

  16. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next-generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT-scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT-scale single-conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached, the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.

  17. Conflict-driven adaptive control is enhanced by integral negative emotion on a short time scale.

    PubMed

    Yang, Qian; Pourtois, Gilles

    2018-02-05

    Negative emotion influences cognitive control, and more specifically conflict adaptation. However, discrepant results have often been reported in the literature. In this study, we broke down negative emotion into integral and incidental components using a modern motivation-based framework, and assessed whether the former could change conflict adaptation. In the first experiment, we manipulated the duration of the inter-trial-interval (ITI) to assess the actual time-scale of this effect. Integral negative emotion was induced by using loss-related feedback contingent on task performance, and measured at the subjective and physiological levels. Results showed that conflict-driven adaptive control was enhanced when integral negative emotion was elicited, compared to a control condition without changes in defensive motivation. Importantly, this effect was only found when a short, as opposed to long ITI was used, suggesting that it had a short time scale. In the second experiment, we controlled for effects of feature repetition and contingency learning, and replicated an enhanced conflict adaptation effect when integral negative emotion was elicited and a short ITI was used. We interpret these new results against a standard cognitive control framework assuming that integral negative emotion amplifies specific control signals transiently, and in turn enhances conflict adaptation.

  18. Impact of basin scale and time-weighted mercury metrics on intra-/inter-basin mercury comparisons

    Treesearch

    Paul Bradley; Mark E. Brigham

    2016-01-01

    Understanding anthropogenic and environmental controls on fluvial mercury (Hg) bioaccumulation over global and national gradients can be challenging due to the need to integrate discrete-sample results from numerous small-scale investigations. Two fundamental issues for such integrative Hg assessments are the wide range of basin scales for included studies and how well...

  19. Size Scales for Thermal Inhomogeneities in Mars' Atmosphere Surface Layer: Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Mihalov, John D.; Haberle, Robert M.; Seiff, Alvin; Murphy, James R.; Schofield, John T.; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    Atmospheric temperature measurements at three heights with thin-wire thermocouples on the 1.1 m Mars Pathfinder meteorology mast allow estimates of the integral scale of the atmospheric thermal turbulence during an 83-sol period that begins in the summer. The integral scale is a measure, for regions of perturbations in turbulent media, that roughly characterizes locations where the perturbations are correlated. Excluding some time intervals with violent excursions of the mean temperatures, integral scale values are found that increase relatively rapidly from a few tenths of a meter or less near dawn to several meters by mid-morning. During mid-morning, the diurnal and shorter-time-scale wind direction variations often place the meteorology mast in the thermal wake of the Lander.

  20. Understanding Pitch Perception as a Hierarchical Process with Top-Down Modulation

    PubMed Central

    Balaguer-Ballester, Emili; Clark, Nicholas R.; Coath, Martin; Krumbholz, Katrin; Denham, Susan L.

    2009-01-01

    Pitch is one of the most important features of natural sounds, underlying the perception of melody in music and prosody in speech. However, the temporal dynamics of pitch processing are still poorly understood. Previous studies suggest that the auditory system uses a wide range of time scales to integrate pitch-related information and that the effective integration time is both task- and stimulus-dependent. None of the existing models of pitch processing can account for such task- and stimulus-dependent variations in processing time scales. This study presents an idealized neurocomputational model, which provides a unified account of the multiple time scales observed in pitch perception. The model is evaluated using a range of perceptual studies, which have not previously been accounted for by a single model, and new results from a neurophysiological experiment. In contrast to other approaches, the current model contains a hierarchy of integration stages and uses feedback to adapt the effective time scales of processing at each stage in response to changes in the input stimulus. The model has features in common with a hierarchical generative process and suggests a key role for efferent connections from central to sub-cortical areas in controlling the temporal dynamics of pitch processing. PMID:19266015

  1. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood-risk planning. Integrated surface water-groundwater modeling procedures can elucidate all the hydrologic processes taking part during a flood event to provide accurate flood outputs. Even though the advantages of using integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed down into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing the performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE = 0.87) is similar to that of the 2D integrated model (NSE = 0.88), but the computational time is reduced by half. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.

  2. Molecular dynamics at low time resolution.

    PubMed

    Faccioli, P

    2010-10-28

    The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of a picosecond to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equations of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics which occurs at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a time-dependent, calculable correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power; hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point particle in a one-dimensional toy model and the denaturation of a protein.
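
As a baseline for the discretization issue discussed above, here is a plain Euler-Maruyama integration of an overdamped Langevin particle in a harmonic well (the paper's renormalization-group correction to the diffusion coefficient is not implemented; all parameters are invented):

```python
import numpy as np

# Euler-Maruyama discretization of the overdamped Langevin equation
# dx = -k*x dt + sqrt(2*D) dW in a harmonic well. The discrete chain's
# stationary variance deviates from D/k by O(dt) -- the kind of
# discretization error the paper's effective equation corrects for.
def langevin(n_steps, dt, k=1.0, D=1.0, x0=2.0, seed=3):
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        x[i] = x[i - 1] - k * x[i - 1] * dt + np.sqrt(2.0 * D * dt) * xi[i]
    return x

x = langevin(n_steps=200_000, dt=0.01)
# after a burn-in, the sampled variance approaches D/k = 1 (+O(dt))
print(abs(x[1000:].var() - 1.0) < 0.1)
```

Doubling dt enlarges the O(dt) bias in the sampled statistics; the paper's point is that a time-dependent correction to D can absorb such errors, allowing larger Δt at the same long-time accuracy.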

  3. Cross-scale interactions, legacies, and spatial connectivity: integrating time and space to predict post-disturbance response across scales

    USDA-ARS?s Scientific Manuscript database

    Emergent properties and cross-scale interactions are important in driving landscape-scale dynamics during a disturbance event, such as wildfire. We used these concepts related to changing pattern-process relationships across scales to explain ecological responses following disturbance that resulted ...

  4. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
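
The segregation idea can be sketched with a generic 2x2 block (Schur-complement) preconditioner (not the paper's multifluid-plasma operators; in practice the Schur complement is only approximated and the subsolves use multilevel methods):

```python
import numpy as np

# Generic block-triangular preconditioner for M = [[A, B], [C, D]]:
# segregate by block, solve with A, then with the Schur complement
# S = D - C A^{-1} B. Here the solves are exact for illustration; real
# implementations approximate S and use multilevel subsolves.
def block_prec_solve(A, B, C, D, r1, r2):
    S = D - C @ np.linalg.solve(A, B)
    y1 = np.linalg.solve(A, r1)            # segregated subsolve on block A
    y2 = np.linalg.solve(S, r2 - C @ y1)   # subsolve on the Schur complement
    return y1, y2

rng = np.random.default_rng(5)
n = 4                                      # toy block size
A = np.eye(n) * 3.0 + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D = np.eye(n) * 3.0
r1, r2 = rng.standard_normal(n), rng.standard_normal(n)
y1, y2 = block_prec_solve(A, B, C, D, r1, r2)
# the application satisfies A y1 = r1 and C y1 + S y2 = r2 exactly
S = D - C @ np.linalg.solve(A, B)
print(np.allclose(A @ y1, r1), np.allclose(C @ y1 + S @ y2, r2))
```

Used inside a Krylov method, this block-triangular application clusters the eigenvalues of the preconditioned system, which is what makes the segregated approach competitive against monolithic solvers for mixed discretizations.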

  5. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates the simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and high-cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as guidance for the design of composite materials and their manufacturing process.

  6. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    PubMed

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABAA receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses but the function of this coupling is largely unknown. 
Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we propose that this coupling enhances the integration time constant, and hence the memory trace, of the circuit. Copyright © 2017 the American Physiological Society.

  7. Spectral analysis of temporal non-stationary rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2018-04-01

    This study treats the catchment as a black-box system, considering the rainfall input and runoff output as stochastic processes. The temporal rainfall-runoff relationship at the catchment scale is described by a convolution integral on a continuous time scale. Using the Fourier-Stieltjes representation approach, a frequency-domain solution to the convolution integral is developed for the spectral analysis of runoff processes generated by temporally non-stationary rainfall events. It is shown that the characteristic time scale of the rainfall process increases the runoff discharge variability, while the catchment mean travel time constant acts to reduce the variability of the runoff discharge. Similar to the behavior of groundwater aquifers, catchments act as a low-pass filter in the frequency domain for the rainfall input signal.
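    A discrete analogue of the convolution-integral relationship can be sketched as follows, assuming a hypothetical linear-reservoir transfer function with mean travel time constant T; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1.0            # time step (e.g. hours)
T = 10.0            # assumed catchment mean travel time constant
t = np.arange(0, 100, dt)

# Linear-reservoir transfer function h(t) ~ exp(-t/T), normalized so the
# discrete convolution conserves total volume.
h = np.exp(-t / T)
h /= h.sum()

rain = rng.exponential(1.0, size=t.size)      # stochastic rainfall input
runoff = np.convolve(rain, h)[: t.size]       # discrete convolution integral
```

    The smoothed runoff series has a smaller variance than the rainfall input, which is the low-pass filtering behavior the abstract describes.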

  8. Perspectives on integrated modeling of transport processes in semiconductor crystal growth

    NASA Technical Reports Server (NTRS)

    Brown, Robert A.

    1992-01-01

    The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.

  9. Turbulent transport with intermittency: Expectation of a scalar concentration.

    PubMed

    Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D

    2016-04-01

    Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively for very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations of the displacement distributions from the Rayleigh form are found. The probability of long distance transport is reduced over inertial range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long distance transport is, on the other hand, enhanced beyond that of the random walk for both times shorter than the Lagrangian integral time and times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave number modes (the mean and the lowest harmonic) and an eddy based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.
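    The ballistic-to-diffusive crossover described above can be illustrated with the classical Taylor-type closed form for a parcel whose velocity is exponentially correlated over the Lagrangian integral time. This is a sketch under that correlation assumption; u2 and T_L are assumed symbols for the velocity variance and integral time.

```python
import math

def msd(t, u2, T_L):
    """Mean squared displacement of a parcel whose velocity has variance u2
    and an exponentially decaying Lagrangian correlation with integral time T_L:
        <x(t)^2> = 2 u2 T_L^2 (t/T_L - 1 + exp(-t/T_L)).
    Ballistic (~u2 t^2) for t << T_L; diffusive (~2 u2 T_L t) for t >> T_L."""
    s = t / T_L
    return 2.0 * u2 * T_L ** 2 * (s - 1.0 + math.exp(-s))

u2, T_L = 1.0, 1.0
short, long_ = 1e-3, 1e3
```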

  10. Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal

    DTIC Science & Technology

    1991-11-01

    Report front matter (table-of-contents and figure-list extraction residue): statement of the problem and current technology; time-integrating correlator; representations of the DNA bases; DNA analysis strategy. Recoverable captions: short representations of the DNA bases, where each base is represented by a 7-bit-long pseudorandom sequence; the flow of data in a DNA analysis system; results shown on a logarithmic scale and a linear scale.

  11. Multi-Spatiotemporal Patterns of Residential Burglary Crimes in Chicago: 2006-2016

    NASA Astrophysics Data System (ADS)

    Luo, J.

    2017-10-01

    This research attempts to explore the patterns of burglary crimes at multiple spatiotemporal scales in Chicago between 2006 and 2016. Two spatial scales are investigated: census block and police beat area. At each spatial scale, three temporal scales are integrated to make spatiotemporal slices: an hourly scale with a two-hour time step from 12:00am to the end of the day; a daily scale with a one-day step from Sunday to Saturday within a week; and a monthly scale with a one-month step from January to December. A total of six types of spatiotemporal slices will be created as the base for the analysis. Burglary crimes are aggregated to the spatiotemporal slices based on where and when they occurred. For each type of spatiotemporal slice with burglary occurrences integrated, a spatiotemporal neighborhood will be defined and managed in a spatiotemporal matrix. Hot-spot analysis will identify spatiotemporal clusters of each type of slice. Spatiotemporal trend analysis is conducted to indicate how the clusters shift in space and time. The analysis results will provide helpful information for better targeted policing and crime prevention policy, such as police patrol scheduling with respect to the times and places covered.
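    The hourly-scale slicing can be sketched as a simple aggregation keyed on (areal unit, two-hour bin); the record values and unit identifiers below are hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical burglary records: (areal unit id, timestamp of occurrence).
events = [
    ("beat_0111", datetime(2016, 7, 4, 1, 30)),
    ("beat_0111", datetime(2016, 7, 4, 23, 10)),
    ("beat_0234", datetime(2016, 7, 5, 1, 45)),
]

def two_hour_slice(ts):
    """Two-hour time step from 12:00am: bin 0 covers 00:00-01:59, ...,
    bin 11 covers 22:00-23:59."""
    return ts.hour // 2

# Aggregate occurrences into spatiotemporal slices.
counts = Counter((unit, two_hour_slice(ts)) for unit, ts in events)
```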

  12. Sound radiation from a subsonic rotor subjected to turbulence

    NASA Technical Reports Server (NTRS)

    Sevik, M.

    1974-01-01

    The broadband sound radiated by a subsonic rotor subjected to turbulence in the approach stream has been analyzed. The power spectral density of the sound intensity has been found to depend on a characteristic time scale, namely the integral scale of the turbulence divided by the axial flow velocity, as well as on several length-scale ratios. These consist of the ratios of the integral scale to the acoustic wavelength, rotor radius, and blade chord. Due to the simplified model chosen, only a limited number of cascade parameters appear. Limited comparisons with experimental data indicate good agreement with predicted values.

  13. Double Resonances and Spectral Scaling in the Weak Turbulence Theory of Rotating and Stratified Turbulence

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert

    1999-01-01

    In rotating turbulence, stably stratified turbulence, and in rotating stratified turbulence, heuristic arguments concerning the turbulent time scale suggest that the inertial range energy spectrum scales as k^(-2). From the viewpoint of weak turbulence theory, there are three possibilities which might invalidate these arguments: four-wave interactions could dominate three-wave interactions leading to a modified inertial range energy balance, double resonances could alter the time scale, and the energy flux integral might not converge. It is shown that although double resonances exist in all of these problems, they do not influence overall energy transfer. However, the resonance conditions cause the flux integral for rotating turbulence to diverge logarithmically when evaluated for a k^(-2) energy spectrum; therefore, this spectrum requires logarithmic corrections. Finally, the role of four-wave interactions is briefly discussed.

  14. Relativistic wide-angle galaxy bispectrum on the light cone

    NASA Astrophysics Data System (ADS)

    Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Liguori, Michele; Matarrese, Sabino; Verde, Licia

    2018-01-01

    Given the important role that the galaxy bispectrum has recently acquired in cosmology and the scale and precision of forthcoming galaxy clustering observations, it is timely to derive the full expression of the large-scale bispectrum going beyond approximated treatments which neglect integrated terms or higher-order bias terms or use the Limber approximation. On cosmological scales, relativistic effects that arise from observing the past light cone alter the observed galaxy number counts, therefore leaving their imprints on N-point correlators at all orders. In this paper we compute for the first time the bispectrum including all general relativistic, local and integrated, effects at second order, the tracers' bias at second order, geometric effects as well as the primordial non-Gaussianity contribution. This is timely considering that future surveys will probe scales comparable to the horizon where approximations widely used currently may not hold; neglecting these effects may introduce biases in estimation of cosmological parameters as well as primordial non-Gaussianity.

  15. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    PubMed

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, considering the need to upgrade MSW management in many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
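    As a sketch of the grey-system half of such a hybrid (the SARIMA component is omitted), here is a minimal GM(1,1) fit-and-forecast on illustrative data; the function and variable names are assumptions, not the paper's implementation.

```python
import math

def gm11_forecast(x0, k_ahead):
    """Fit a GM(1,1) grey model to the series x0 and forecast k_ahead steps.
    x1 is the accumulated (cumulative-sum) series; the development coefficient
    a and grey input b are fit by least squares to x0[k] + a*z[k] = b, where
    z is the mean of consecutive x1 values."""
    n = len(x0)
    x1 = [sum(x0[: i + 1]) for i in range(n)]
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]
    # Least squares for [a, b] over rows (-z[k], 1) -> x0[k].
    m = n - 1
    szz = sum(v * v for v in z)
    sz = sum(z)
    sy = sum(x0[1:])
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + k_ahead)]

# Illustrative near-exponential series (e.g. annual tonnage growing ~10%/year).
x0 = [100 * 1.1 ** k for k in range(6)]
pred = gm11_forecast(x0, 1)[0]
```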

  16. Energy Systems Integration Facility Overview

    ScienceCinema

    Arvizu, Dan; Chistensen, Dana; Hannegan, Bryan; Garret, Bobi; Kroposki, Ben; Symko-Davies, Martha; Post, David; Hammond, Steve; Kutscher, Chuck; Wipke, Keith

    2018-01-16

    The U.S. Department of Energy's Energy Systems Integration Facility (ESIF), located at the National Renewable Energy Laboratory, is the right tool, at the right time: a first-of-its-kind facility that addresses the challenges of large-scale integration of clean energy technologies into the energy systems that power the nation.

  17. Integrated Doppler Correction to TWSTFT Using Round-Trip Measurement

    DTIC Science & Technology

    2010-11-01

    42nd Annual Precise Time and Time Interval (PTTI) Meeting: "Integrated Doppler Correction to TWSTFT Using Round-Trip Measurement" (Yi ...). It is necessary to correct the diurnal variation in Two-Way Satellite Time and Frequency Transfer (TWSTFT) data for comparing the time-scale difference. We focus on the up-/downlink delay difference caused by satellite motion. In this paper, we propose to correct the TWSTFT data by using round-trip delay measurement.

  18. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance has been evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.

  19. Inferring multi-scale neural mechanisms with brain network modelling

    PubMed Central

    Schirner, Michael; McIntosh, Anthony Randal; Jirsa, Viktor; Deco, Gustavo

    2018-01-01

    The neurophysiological processes underlying non-invasive brain activity measurements are incompletely understood. Here, we developed a connectome-based brain network model that integrates individual structural and functional data with neural population dynamics to support multi-scale neurophysiological inference. Simulated populations were linked by structural connectivity and, as a novelty, driven by electroencephalography (EEG) source activity. Simulations not only predicted subjects' individual resting-state functional magnetic resonance imaging (fMRI) time series and spatial network topologies over 20 minutes of activity, but more importantly, they also revealed precise neurophysiological mechanisms that underlie and link six empirical observations from different scales and modalities: (1) resting-state fMRI oscillations, (2) functional connectivity networks, (3) excitation-inhibition balance, (4, 5) inverse relationships between α-rhythms, spike-firing and fMRI on short and long time scales, and (6) fMRI power-law scaling. These findings underscore the potential of this new modelling framework for general inference and integration of neurophysiological knowledge to complement empirical studies. PMID:29308767

  20. Development of a survey instrument to measure patient experience of integrated care.

    PubMed

    Walker, Kara Odom; Stewart, Anita L; Grumbach, Kevin

    2016-06-01

    Healthcare systems are working to move towards more integrated, patient-centered care. This study describes the development and testing of a multidimensional self-report measure of patients' experiences of integrated care. Random-digit-dial telephone survey in 2012 of 317 adults aged 40 years or older in the San Francisco region who had used healthcare at least twice in the past 12 months. One-time cross-sectional survey; psychometric evaluation to confirm dimensions and create multi-item scales. Survey data were analyzed using VARCLUS and confirmatory factor analysis and internal consistency reliability testing. Scales measuring five domains were confirmed: coordination within and between care teams, navigation (arranging appointments and visits), communication between specialist and primary care doctor, and communication between primary care doctor and specialist. Four of these demonstrated excellent internal consistency reliability. Mean scale scores indicated low levels of integration. These scales measuring integrated care capture meaningful domains of patients' experiences of health care. The low levels of care integration reported by patients in the study sample suggest that these types of measures should be considered in ongoing evaluations of health system performance and improvement. Further research should examine whether differences in patient experience of integrated care are associated with differences in the processes and outcomes of care received.

  1. An integrated gait rehabilitation training based on Functional Electrical Stimulation cycling and overground robotic exoskeleton in complete spinal cord injury patients: Preliminary results.

    PubMed

    Mazzoleni, S; Battini, E; Rustici, A; Stampacchia, G

    2017-07-01

    The aim of this study is to investigate the effects of an integrated gait rehabilitation training based on Functional Electrical Stimulation (FES)-cycling and an overground robotic exoskeleton on spasticity and patient-robot interaction in a group of seven complete spinal cord injury patients. They underwent a robot-assisted rehabilitation training based on two phases: n=20 sessions of FES-cycling followed by n=20 sessions of robot-assisted gait training based on an overground robotic exoskeleton. The following clinical outcome measures were used: Modified Ashworth Scale (MAS), Numerical Rating Scale (NRS) on spasticity, Penn Spasm Frequency Scale (PSFS), Spinal Cord Independence Measure Scale (SCIM), NRS on pain and International Spinal Cord Injury Pain Data Set (ISCI). Clinical outcome measures were assessed before (T0) and after (T1) the FES-cycling training, and after (T2) the powered overground gait training. The ability to walk when using the exoskeleton was assessed by means of the 10 Meter Walk Test (10MWT), 6 Minute Walk Test (6MWT), Timed Up and Go test (TUG), standing time, walking time and number of steps. Statistically significant changes were found in the MAS score, NRS-spasticity, 6MWT, TUG, standing time and number of steps. The preliminary results of this study show that an integrated gait rehabilitation training based on FES-cycling and an overground robotic exoskeleton in complete SCI patients can provide a significant reduction of spasticity and improvements in terms of patient-robot interaction.

  2. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes benefiting from this approach by reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  3. An integrative neuroscience model of "significance" processing.

    PubMed

    Williams, Leanne M

    2006-03-01

    The Gordon [37-40] framework of Integrative Neuroscience is used to develop a continuum model for understanding the central role of motivationally-determined "significance" in organizing human information processing. Significance is defined as the property which gives a stimulus relevance to our core motivation to minimize danger and maximize pleasure. Within this framework, the areas of cognition and emotion, theories of motivational arousal and orienting, and the current understanding of neural systems are brought together. The basis of integration is a temporal continuum in which significance processing extends from the most rapid millisecond time scale of automatic, nonconscious mechanisms to the time scale of seconds, in which memory is shaped, to the controlled and conscious mechanisms unfolding over minutes. Over this continuum, significant stimuli are associated with a spectrum of defensive (or consumptive) behaviors through to volitional regulatory behaviors for danger (versus pleasure) and associated brainstem, limbic, medial forebrain bundle and prefrontal circuits, all of which reflect a balance of excitatory (predominant at rapid time scales) to inhibitory mechanisms. Across the lifespan, the negative and positive outcomes of significance processing, coupled with constitutional and genetic factors, will contribute to plasticity, shaping individual adaptations and maladaptions in the balance of excitatory-inhibitory mechanisms.

  4. Generating and controlling homogeneous air turbulence using random jet arrays

    NASA Astrophysics Data System (ADS)

    Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo

    2016-12-01

    The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral to the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing time duration, both the velocity fluctuations and the integral length scales are augmented and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.
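    The integral length scale mentioned above is commonly estimated by integrating the two-point correlation coefficient. A minimal sketch follows, assuming a model exponential correlation; in practice R(r) would come from the PIV two-point statistics, and the true scale L0 here is an illustrative value.

```python
import numpy as np

L0 = 0.05    # assumed true integral length scale (m)
dr = 1e-4
r = np.arange(0.0, 10 * L0, dr)

# Model two-point velocity correlation coefficient; in a real experiment this
# curve is measured from two-point PIV statistics.
R = np.exp(-r / L0)

# Integral length scale L = integral of R(r) dr from 0 to infinity,
# approximated here by the trapezoidal rule over the resolved range.
L = dr * (0.5 * R[0] + R[1:-1].sum() + 0.5 * R[-1])
```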

  5. Wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible biomedical platform.

    PubMed

    Maeng, Jimin; Meng, Chuizhou; Irazoqui, Pedro P

    2015-02-01

    We present wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible parylene platform, as progress toward sustainably powering biomedical microsystems suitable for implantable and wearable applications. All-solid-state, low-profile (<30 μm), and high-density (up to ~500 μF/mm(2)) micro-supercapacitors are formed on an ultrathin (~20 μm) freestanding parylene film by a wafer-scale parylene packaging process in combination with a polyaniline (PANI) nanowire growth technique assisted by surface plasma treatment. These micro-supercapacitors are highly flexible and shown to be resilient toward flexural stress. Further, direct integration of micro-supercapacitors into a radio frequency (RF) rectifying circuit is achieved on a single parylene platform, yielding a complete RF energy harvesting microsystem. The system discharging rate is shown to improve by ~17 times in the presence of the integrated micro-supercapacitors. This result suggests that the integrated micro-supercapacitor technology described herein is a promising strategy for sustainably powering biomedical microsystems dedicated to implantable and wearable applications.

  6. Scaling prospects in mechanical energy harvesting with piezo nanowires

    NASA Astrophysics Data System (ADS)

    Ardila, Gustavo; Hinchet, Ronan; Mouis, Mireille; Montès, Laurent

    2013-07-01

    The combination of 3D processing technologies, low power circuits and new materials integration makes it conceivable to build autonomous integrated systems, which would harvest their energy from the environment. In this paper, we focus on mechanical energy harvesting and discuss its scaling prospects toward the use of piezoelectric nanostructures, able to be integrated in a CMOS environment. It is shown that direct scaling of present MEMS-based methodologies would be beneficial for high-frequency applications only. For the range of applications which is presently foreseen, a different approach is needed, based on energy harvesting from direct real-time deformation instead of energy harvesting from vibration modes at or close to resonance. We discuss the prospects of such an approach based on simple scaling rules. Contribution to the Topical Issue "International Semiconductor Conference Dresden-Grenoble - ISCDG 2012", edited by Gérard Ghibaudo, Francis Balestra and Simon Deleonibus.

  7. The revolution in data gathering systems

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    Data acquisition systems used in NASA's wind tunnels from the 1950's through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology which transformed the central computer system, and finally the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro and minicomputers.

  8. Interferometric Imaging of Geostationary Satellites: Signal-to-Noise Considerations

    DTIC Science & Technology

    2011-09-01

    instrument a minute time-scale snapshot imager. Snapshot imaging is important because it allows for resolving short time-scale changes of the satellite ... curves of fringe amplitude standard deviation as a function of satellite V-magnitude, giving the corresponding integration time. From this figure we can ... combiner (in R-band). We conclude that it is possible to track fringes on typical highly resolved satellites to a magnitude of V = 14.5. This range

  9. Time-symmetric integration in astrophysics

    NASA Astrophysics Data System (ADS)

    Hernandez, David M.; Bertschinger, Edmund

    2018-04-01

    Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
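    The time-symmetry property discussed above can be sketched with the trapezoidal rule on the pendulum: stepping forward and then backward with the negated step recovers the initial state to solver tolerance. This is a minimal sketch; the fixed-point solver and step size are illustrative choices, not the paper's implementation.

```python
import math

def pendulum(state):
    """Pendulum vector field: d(theta)/dt = omega, d(omega)/dt = -sin(theta)."""
    theta, omega = state
    return (omega, -math.sin(theta))

def trapezoidal_step(f, y, dt, iters=60):
    """One implicit trapezoidal step y' = y + dt/2 (f(y) + f(y')),
    solved by fixed-point iteration (a contraction for small dt)."""
    y_new = y
    fy = f(y)
    for _ in range(iters):
        fy_new = f(y_new)
        y_new = tuple(yi + 0.5 * dt * (a + b) for yi, a, b in zip(y, fy, fy_new))
    return y_new

y0 = (1.0, 0.0)
y1 = trapezoidal_step(pendulum, y0, 0.1)
# Time-symmetry: stepping back with -dt should return the initial state.
y_back = trapezoidal_step(pendulum, y1, -0.1)
```

    Time-reversibility of a single step, as verified here, does not by itself guarantee bounded long-term energy error, which is the point the abstract makes.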

  10. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. 
Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
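    The coarse projective integration scheme described above can be illustrated on a toy problem. In this sketch the "microscopic simulator" is simply a fine-step Euler integration of dy/dt = -y standing in for a cellular automaton; the step sizes and the 5-inner/20-leap split are illustrative assumptions, not values from the paper.

```python
import math

def micro_step(y, dt):
    # stand-in microscopic simulator: one fine explicit-Euler step of dy/dt = -y
    return y + dt * (-y)

def projective_integration(y0, dt, inner, leap, t_end):
    # run `inner` fine steps, then linearly extrapolate (project) over `leap` fine steps
    y, t = y0, 0.0
    while t < t_end:
        for _ in range(inner):
            y_prev = y
            y = micro_step(y, dt)
            t += dt
        slope = (y - y_prev) / dt  # coarse time derivative from the last two states
        y += leap * dt * slope     # projective (forward-Euler) leap
        t += leap * dt
    return y, t

y, t = projective_integration(1.0, dt=0.001, inner=5, leap=20, t_end=2.0)
exact = math.exp(-t)
print(abs(y - exact))
```

    Each outer cycle advances 25 fine steps of time while paying for only 5 microscopic evaluations, the 5x saving that grows as the leap-to-burst ratio increases.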

  11. Biological production models as elements of coupled, atmosphere-ocean models for climate research

    NASA Technical Reports Server (NTRS)

    Platt, Trevor; Sathyendranath, Shubha

    1991-01-01

    Process models of phytoplankton production are discussed with respect to their suitability for incorporation into global-scale numerical ocean circulation models. Exact solutions are given for integrals over the mixed layer and the day of analytic, wavelength-independent models of primary production. Within this class of model, the bias incurred by using a triangular approximation (rather than a sinusoidal one) to the variation of surface irradiance through the day is computed. Efficient computation algorithms are given for the nonspectral models. More exact calculations require a spectrally sensitive treatment. Such models exist but must be integrated numerically over depth and time. For these integrations, resolution in wavelength, depth, and time are considered and recommendations made for efficient computation. The extrapolation of the one-(spatial)-dimension treatment to large horizontal scale is discussed.

  12. Metrology with Weak Value Amplification and Related Topics

    DTIC Science & Technology

    2013-10-12

    sensitivity depend crucially on the relative time scales involved, which include: [FIG. 1: simple experimental layout with PBS, PC, HWP, SBC, piezo, pulsed laser, and 50:50 split detector] ...reasons why this may be impossible or inadvisable given a laboratory set-up. There may be a minimum quiet time between laser pulses, for example, or...measurements is a full 100 ms, our filtering limits the laser noise to time scales of about 30 ms. For analysis, we take this as our integration time in

  13. Parametric study of statistical bias in laser Doppler velocimetry

    NASA Technical Reports Server (NTRS)

    Gould, Richard D.; Stevenson, Warren H.; Thompson, H. Doyle

    1989-01-01

    Analytical studies have often assumed that LDV velocity bias depends on turbulence intensity in conjunction with one or more characteristic time scales, such as the time between validated signals, the time between data samples, and the integral turbulence time-scale. These parameters are presently varied independently, in an effort to quantify the biasing effect. Neither of the post facto correction methods employed is entirely accurate. The mean velocity bias error is found to be nearly independent of data validation rate.

  14. ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications

    NASA Astrophysics Data System (ADS)

    Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.

    2003-12-01

    The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well off shore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added or removed from the system; provides for real-time processing and monitoring of data streams--detecting events, and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools that must be developed do not yet exist, although limited prototype systems are available.
This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI- Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network) and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.

  15. Community integration following multidisciplinary rehabilitation for traumatic brain injury.

    PubMed

    Goranson, Tamara E; Graves, Roger E; Allison, Deborah; La Freniere, Ron

    2003-09-01

    To determine the extent to which participation in a multidisciplinary rehabilitation programme and patient characteristics predict improvement in community integration following mild-to-moderate traumatic brain injury (TBI). A non-randomized case-control study was conducted employing a pre-test-post-test multiple regression design. Archival data for 42 patients with mild-to-moderate TBI who completed the Community Integration Questionnaire (CIQ) at intake and again 6-18 months later were analysed. Half the sample participated in an intensive outpatient rehabilitation programme that provided multi-modal interventions, while the other half received no rehabilitation. The two groups were matched on age, education and time since injury. On the CIQ Home Integration scale, participation in rehabilitation and female gender predicted better outcome. On the Productivity scale, patients with a lower age at injury had better outcome. Outcome on both of these scales, as well as on the Social Integration scale, was predicted by the baseline pre-test score (initial severity). Overall, multidisciplinary rehabilitation appeared to increase personal independence. It is also concluded that: (1) multivariate analysis can reveal the relative importance of multiple predictors of outcome; (2) different predictors may predict different aspects of outcome; and (3) more sensitive and specific outcome measures are needed.

  16. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique of finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. 
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
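    The first (running-integral) method mentioned above can be sketched against an analytic toy radial distribution function. The hard-core g(r) below and the cutoff are illustrative assumptions, not a system from the paper; for this g(r) the Kirkwood-Buff integral has the closed form -4πσ³/3, which the running integral should approach.

```python
import math

def kb_running_integral(g, r_max, n=10000):
    # G(R) = 4*pi * integral_0^R (g(r) - 1) r^2 dr, via the trapezoidal rule
    dr = r_max / n
    total = 0.0
    prev = 0.0  # integrand vanishes at r = 0 because of the r^2 factor
    for i in range(1, n + 1):
        r = i * dr
        cur = (g(r) - 1.0) * r * r
        total += 0.5 * (prev + cur) * dr
        prev = cur
    return 4.0 * math.pi * total

sigma = 1.0
g_hard_core = lambda r: 0.0 if r < sigma else 1.0  # toy hard-core RDF
G = kb_running_integral(g_hard_core, r_max=5.0)
print(G, -4.0 * math.pi * sigma**3 / 3.0)
```

    With a simulated RDF, the integrand (g(r) - 1) decays slowly and noisily at large r, which is exactly why the paper's fluctuation-based alternative is attractive.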

  17. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. 
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620

  18. A long time span relativistic precession model of the Earth

    NASA Astrophysics Data System (ADS)

    Tang, Kai; Soffel, Michael H.; Tao, Jin-He; Han, Wen-Biao; Tang, Zheng-Hong

    2015-04-01

    A numerical solution to the Earth's precession in a relativistic framework for a long time span is presented here. We obtain the motion of the solar system in the Barycentric Celestial Reference System by numerical integration with a symplectic integrator. Special Newtonian corrections accounting for tidal dissipation are included in the force model. The part representing Earth's rotation is calculated in the Geocentric Celestial Reference System by integrating the post-Newtonian equations of motion published by Klioner et al. All the main relativistic effects are included following Klioner et al. In particular, we consider several relativistic reference systems with corresponding time scales, scaled constants and parameters. Approximate expressions for Earth's precession in the interval ±1 Myr around J2000.0 are provided. In the interval ±2000 years around J2000.0, the difference compared to the P03 precession theory is only several arcseconds and the results are consistent with other long-term precession theories. Supported by the National Natural Science Foundation of China.

  19. Integrated Modeling of Time Evolving 3D Kinetic MHD Equilibria and NTV Torque

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Grierson, B. A.; Haskey, S. R.; Nazikian, R.; Cui, L.; Smith, S. P.; Meneghini, O.

    2016-10-01

    New analysis tools and integrated modeling of plasma dynamics developed in the OMFIT framework are used to study kinetic MHD equilibria evolution on the transport time scale. The experimentally observed profile dynamics following the application of 3D error fields are described using a new OMFITprofiles workflow that directly addresses the need for rapid and comprehensive analysis of dynamic equilibria for next-step theory validation. The workflow treats all diagnostic data as fundamentally time dependent, provides physics-based manipulations such as ELM phase data selection, and is consistent across multiple machines - including DIII-D and NSTX-U. The seamless integration of tokamak data and simulation is demonstrated by using the self-consistent kinetic EFIT equilibria and profiles as input into 2D particle, momentum and energy transport calculations using TRANSP as well as 3D kinetic MHD equilibrium stability and neoclassical transport modeling using General Perturbed Equilibrium Code (GPEC). The result is a smooth kinetic stability and NTV torque evolution over transport time scales. Work supported by DE-AC02-09CH11466.

  20. Catalytic ignition model in a monolithic reactor with in-depth reaction

    NASA Technical Reports Server (NTRS)

    Tien, Ta-Ching; Tien, James S.

    1990-01-01

    Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.

  1. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
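    The multiple time step idea mentioned in this abstract can be illustrated with a minimal two-level, r-RESPA-style integrator: a stiff "fast" force is integrated with a small inner step nested inside velocity-Verlet half-kicks of a soft "slow" force at a larger outer step. The force constants and step sizes below are illustrative assumptions, not the force field of the paper.

```python
import math

K_FAST, K_SLOW, M = 100.0, 1.0, 1.0  # toy stiff and soft spring constants, mass

def f_fast(x):
    return -K_FAST * x

def f_slow(x):
    return -K_SLOW * x

def respa_step(x, v, dt_outer, n_inner):
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / M      # opening half-kick from the slow force
    for _ in range(n_inner):                 # velocity-Verlet on the fast force
        v += 0.5 * dt * f_fast(x) / M
        x += dt * v
        v += 0.5 * dt * f_fast(x) / M
    v += 0.5 * dt_outer * f_slow(x) / M      # closing slow half-kick
    return x, v

def energy(x, v):
    return 0.5 * M * v * v + 0.5 * (K_FAST + K_SLOW) * x * x

x, v = 0.1, 0.0
H0 = energy(x, v)
max_dev = 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)
    max_dev = max(max_dev, abs(energy(x, v) - H0))
print(max_dev / H0)
```

    Because each level is itself a symplectic update, the composed scheme keeps the energy error bounded while evaluating the slow force ten times less often than the fast one.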

  2. Examination of Cross-Scale Coupling During Auroral Events using RENU2 and ISINGLASS Sounding Rocket Data.

    NASA Astrophysics Data System (ADS)

    Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.

    2017-12-01

    The RENU2 sounding rocket (launched from Andoya rocket range on December 13th, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine scale structures within aurorae. During RENU2, small (sub-km) scale drivers persist for long (10s of minutes) time scales and result in large scale ionospheric (thermal electron) and thermospheric response (neutral upwelling). ISINGLASS observations show small scale drivers, but with short (minute) time scales, with ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric response to small scale drivers over different integration times.

  3. Multirate Particle-in-Cell Time Integration Techniques of Vlasov-Maxwell Equations for Collisionless Kinetic Plasma Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guangye; Chacon, Luis; Knoll, Dana Alan

    2015-07-31

    A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt ≫ 1 and Δx ≫ λ_D), and requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large scale simulations. The paper is organized as follows: Vlasov-Maxwell particle-in-cell (PIC) methods for plasmas; explicit, semi-implicit, and implicit time integrations; implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, with discrete conservation properties (energy, charge, canonical momentum, etc.)); some numerical examples; and a summary.

  4. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, also presents such problems. The computational cost limits applications of very-high-resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed with one CPU in a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.

  5. Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.

    PubMed

    Nottale, Laurent; Auffray, Charles

    2008-05-01

    In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. 
Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices, and discuss their potential applications for the analysis, engineering and management of physical and biological systems and properties, and the consequences for the organization of transdisciplinary research and the scientific curriculum in the context of the SYSTEMOSCOPE Consortium research and development agenda.

  6. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
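    The central quantity in the abstract above, the integral time scale, can be estimated from a velocity time series by integrating its autocorrelation function up to the first zero crossing. Below is a self-contained sketch on a synthetic AR(1) "velocity" record with a known integral time scale of about 1.0 time unit; the estimator and all parameters are illustrative, not the authors' model.

```python
import math
import random

random.seed(0)

# synthetic velocity record: AR(1) process with integral time scale T ~ 1.0
dt, T, n = 0.1, 1.0, 50_000
a = math.exp(-dt / T)
noise_sd = math.sqrt(1.0 - a * a)  # keeps the process variance near 1
u = [0.0] * n
for i in range(1, n):
    u[i] = a * u[i - 1] + noise_sd * random.gauss(0.0, 1.0)

mean = sum(u) / n
var = sum((x - mean) ** 2 for x in u) / n

def autocorr(lag):
    s = sum((u[i] - mean) * (u[i + lag] - mean) for i in range(n - lag))
    return s / ((n - lag) * var)

# integral time scale: trapezoid-style sum of the autocorrelation
# up to its first zero crossing
T_int = 0.5 * dt
lag = 1
while True:
    rho = autocorr(lag)
    if rho <= 0.0:
        break
    T_int += rho * dt
    lag += 1
print(T_int)
```

    As the abstract notes, when the effective sampling interval is comparable to (or larger than) this time scale, successive samples are nearly uncorrelated and the simple uncorrelated-field variance formula applies.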

  7. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
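    The scaling step described above, attenuating a zero-absorption reflectance curve with a layer-weighted Beer-Lambert factor, can be sketched as follows. The function name, layer fractions, optical coefficients, and speed value are illustrative assumptions, not values or code from the paper.

```python
import math

def scale_reflectance(times, r0, layer_fractions, mu_a, v):
    # times: photon arrival times (ns); r0: zero-absorption reflectance curve
    # layer_fractions[i]: fraction of travel time spent in layer i (from the
    # average classical photon path); mu_a[i]: absorption coefficient of layer i
    # (1/mm); v: speed of light in tissue (mm/ns)
    scaled = []
    for t, r in zip(times, r0):
        path = v * t  # total path length at arrival time t
        attenuation = math.exp(
            -sum(mu * f * path for mu, f in zip(mu_a, layer_fractions))
        )
        scaled.append(r * attenuation)
    return scaled

times = [0.1 * k for k in range(1, 6)]  # ns
r0 = [1.0, 0.8, 0.5, 0.3, 0.2]          # arbitrary zero-absorption curve
out = scale_reflectance(times, r0, layer_fractions=[0.6, 0.4],
                        mu_a=[0.01, 0.02], v=220.0)
print(out)
```

    The appeal of this form is visible in the code: nothing about individual photon paths or collision counts is needed, only the precomputed time fractions per layer.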

  8. Spin diffusion from an inhomogeneous quench in an integrable system.

    PubMed

    Ljubotina, Marko; Žnidarič, Marko; Prosen, Tomaž

    2017-07-13

    Generalized hydrodynamics predicts universal ballistic transport in integrable lattice systems when prepared in generic inhomogeneous initial states. However, the ballistic contribution to transport can vanish in systems with additional discrete symmetries. Here we perform large scale numerical simulations of spin dynamics in the anisotropic Heisenberg XXZ spin 1/2 chain starting from an inhomogeneous mixed initial state which is symmetric with respect to a combination of spin reversal and spatial reflection. In the isotropic and easy-axis regimes we find non-ballistic spin transport which we analyse in detail in terms of scaling exponents of the transported magnetization and scaling profiles of the spin density. While in the easy-axis regime we find accurate evidence of normal diffusion, the spin transport in the isotropic case is clearly super-diffusive, with the scaling exponent very close to 2/3, but with universal scaling dynamics which obeys the diffusion equation in nonlinearly scaled time.

  9. Effects of a prevention program for internet addiction among middle school students in South Korea.

    PubMed

    Yang, Sun-Yi; Kim, Hee-Soon

    2018-05-01

    This study explored the effects of a self-regulatory efficacy improvement program on self-control, self-efficacy, internet addiction, and time spent on the internet among middle school students in South Korea. The program was led by school nurses and integrated self-efficacy and self-regulation promotion strategies based on Bandura's social cognitive theory. A quasi-experimental, nonequivalent, control group, pre-posttest design was used. The participants were 79 middle school students. Measurements included the Self-Control Scale, Self-Efficacy Scale, Internet Addiction Proneness Scale, and an assessment of internet addiction. Self-control and self-efficacy significantly increased and internet addiction and time spent on the internet significantly decreased in the intervention group compared with the control group. A program led by school nurses that integrated and applied self-efficacy and self-regulation intervention strategies proved effective for prevention of students' internet addiction. © 2018 Wiley Periodicals, Inc.

  10. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

    HELICS is designed to support very-large-scale (100,000+ federates) cosimulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross platform operating system support, the integration of both event driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.

  11. Grid Research | Grid Modernization | NREL

    Science.gov Websites

    NREL addresses the challenges of today's electric grid through research in integrated devices and systems (developing and evaluating grid technologies) and in controls (developing methods for real-time operations and control of power systems at any scale).

  12. Nutritional Systems Biology Modeling: From Molecular Mechanisms to Physiology

    PubMed Central

    de Graaf, Albert A.; Freidig, Andreas P.; De Roos, Baukje; Jamshidi, Neema; Heinemann, Matthias; Rullmann, Johan A.C.; Hall, Kevin D.; Adiels, Martin; van Ommen, Ben

    2009-01-01

    The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales. PMID:19956660

  13. Time scale bias in erosion rates of glaciated landscapes

    PubMed Central

    Ganti, Vamsi; von Hagke, Christoph; Scherler, Dirk; Lamb, Michael P.; Fischer, Woodward W.; Avouac, Jean-Philippe

    2016-01-01

    Deciphering erosion rates over geologic time is fundamental for understanding the interplay between climate, tectonic, and erosional processes. Existing techniques integrate erosion over different time scales, and direct comparison of such rates is routinely done in earth science. On the basis of a global compilation, we show that erosion rate estimates in glaciated landscapes may be affected by a systematic averaging bias that produces higher estimated erosion rates toward the present, which do not reflect straightforward changes in erosion rates through time. This trend can result from a heavy-tailed distribution of erosional hiatuses (that is, time periods where no or relatively slow erosion occurs). We argue that such a distribution can result from the intermittency of erosional processes in glaciated landscapes that are tightly coupled to climate variability from decadal to millennial time scales. In contrast, we find no evidence for a time scale bias in spatially averaged erosion rates of landscapes dominated by river incision. We discuss the implications of our findings in the context of the proposed coupling between climate and tectonics, and interpreting erosion rate estimates with different averaging time scales through geologic time. PMID:27713925
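The averaging bias described here can be reproduced in a few lines: if erosion arrives as pulses separated by heavy-tailed hiatuses, the apparent rate (eroded depth divided by averaging window) decays as the window grows. All parameters below are illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit-depth erosion pulses separated by heavy-tailed (Pareto) hiatuses.
alpha = 0.8                                    # tail exponent < 1: strong bias
hiatuses = rng.pareto(alpha, 200_000) + 1.0    # hiatus before each pulse
event_times = np.cumsum(hiatuses)
ages = np.sort(event_times[-1] - event_times)  # pulse ages, youngest first

def apparent_rate(window):
    """Eroded depth per unit time, averaged over `window` before the present."""
    return np.searchsorted(ages, window) / window

windows = [1e2, 1e3, 1e4, 1e5]
rates = [apparent_rate(w) for w in windows]    # decays with the window length
```

By contrast, light-tailed (e.g., exponential) hiatuses give an apparent rate that is independent of the averaging window, consistent with the absence of bias the authors report for river-incision landscapes.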

  14. Time scale bias in erosion rates of glaciated landscapes.

    PubMed

    Ganti, Vamsi; von Hagke, Christoph; Scherler, Dirk; Lamb, Michael P; Fischer, Woodward W; Avouac, Jean-Philippe

    2016-10-01

    Deciphering erosion rates over geologic time is fundamental for understanding the interplay between climate, tectonic, and erosional processes. Existing techniques integrate erosion over different time scales, and direct comparison of such rates is routinely done in earth science. On the basis of a global compilation, we show that erosion rate estimates in glaciated landscapes may be affected by a systematic averaging bias that produces higher estimated erosion rates toward the present, which do not reflect straightforward changes in erosion rates through time. This trend can result from a heavy-tailed distribution of erosional hiatuses (that is, time periods where no or relatively slow erosion occurs). We argue that such a distribution can result from the intermittency of erosional processes in glaciated landscapes that are tightly coupled to climate variability from decadal to millennial time scales. In contrast, we find no evidence for a time scale bias in spatially averaged erosion rates of landscapes dominated by river incision. We discuss the implications of our findings in the context of the proposed coupling between climate and tectonics, and interpreting erosion rate estimates with different averaging time scales through geologic time.

  15. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    NASA Astrophysics Data System (ADS)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects must be taken into consideration when making schedule plans for a regional power grid. In this paper, a systematic multi-time scale solution for regional power grid operation is proposed, considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission. In terms of time scales, the solution covers month, week, day-ahead, within-day, and day-behind scheduling, and the system also handles multiple generator types, including thermal units, hydro-plants, wind turbines, and pumped storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verify the effectiveness of the system.

  16. Ultrastable assembly and integration technology for ground- and space-based optical systems.

    PubMed

    Ressel, Simon; Gohlke, Martin; Rauen, Dominik; Schuldt, Thilo; Kronast, Wolfgang; Mescheder, Ulrich; Johann, Ulrich; Weise, Dennis; Braxmaier, Claus

    2010-08-01

    Optical metrology systems crucially rely on the dimensional stability of the optical path between their individual optical components. We present in this paper a novel adhesive bonding technology for the setup of quasi-monolithic systems and compare selected characteristics to the well-established state-of-the-art technique of hydroxide-catalysis bonding. It is demonstrated that, within the measurement resolution of our ultraprecise custom heterodyne interferometer, both techniques achieve equivalent passive path-length and tilt stability for frequencies between 0.1 mHz and 1 Hz. Furthermore, the robustness of the adhesive bonds against mechanical and thermal inputs has been tested, making this new bonding technique a potential option for interferometric applications in future space missions in particular. The integration process itself is eased by long time scales for alignment, as well as short curing times.

  17. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  18. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport, coupled to models of hydrological and overland flow processes, has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and to differentiate bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM, the Triangulated irregular network-based Real-time Integrated Basin Simulator-Overland Flow Model. The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations, including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed, based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified against analytical solutions and empirical data for two benchmark cases. Sensitivity tests with respect to grid resolution and the number of particle sizes used have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed and the driving role of topography in erosion processes discussed. The integrated formulation of the model is expected to reduce uncertainties associated with typical parameterizations of flow and erosion processes, and thus holds promise for more credible modeling of earth-surface processes.

  19. Foster home integration as a temporal indicator of relational well-being.

    PubMed

    Waid, Jeffrey; Kothari, Brianne H; McBeath, Bowen M; Bank, Lew

    2017-12-01

    This study sought to identify factors that contribute to the relational well-being of youth in substitute care. Using data from the [BLIND] study, youth responded to a 9-item measure of positive home integration, a scale designed to assess the relational experiences of youth to their caregivers and their integration into the foster home. Data were collected from youth in six month intervals, for an 18-month period of time. Latent growth curve modeling procedures were employed to determine if child, family, and case characteristics influenced youth's home integration trajectories. Results suggest stability in youth reports of home integration over time; however, children who were older at the time of study enrollment and youth who experienced placement changes during the period of observation experienced decreased home integration during the 18-month period. Results suggest youth's perspectives of home integration may in part be a function of the child's developmental stage and their experiences with foster care placement instability. Implications for practice and future research are discussed.

  20. Scaled Runge-Kutta algorithms for handling dense output

    NASA Technical Reports Server (NTRS)

    Horn, M. K.

    1981-01-01

    Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
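The dense-output idea can be illustrated with a cubic Hermite interpolant built from derivative evaluations an RK step already computes. This captures the spirit of the approach (solution anywhere inside the step without shrinking the step to hit output points), though the generic Hermite coefficients below are an assumption, not Horn's scaled formulas, which are designed to preserve the defining method's order:

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step; also returns the end-point derivative."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    y1 = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return y1, k1, f(t + h, y1)

def dense_output(y0, y1, f0, f1, h, theta):
    """Cubic Hermite interpolant at t + theta*h, for 0 <= theta <= 1."""
    return ((1 + 2*theta) * (1 - theta)**2 * y0
            + theta**2 * (3 - 2*theta) * y1
            + h * theta * (1 - theta)**2 * f0
            - h * theta**2 * (1 - theta) * f1)

# y' = -y, y(0) = 1; take one step of h = 0.1 and interpolate to its middle.
f = lambda t, y: -y
y1, f0, f1 = rk4_step(f, 0.0, 1.0, 0.1)
mid = dense_output(1.0, y1, f0, f1, 0.1, 0.5)   # approximates exp(-0.05)
```

Hermite interpolation is only third-order accurate inside the step; the paper's scaled coefficients exist precisely to recover the accuracy of the defining third- through fifth-order formulas.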

  1. Two-machine flow shop scheduling integrated with preventive maintenance planning

    NASA Astrophysics Data System (ADS)

    Wang, Shijin; Liu, Ming

    2016-02-01

    This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine following a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results for several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and also show that the proposed GA-based heuristics are efficient for the integrated problem.
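The total-enumeration experiment for small instances can be sketched under a common simplification: a minimal-repair Weibull failure model with fixed repair and PM durations. These modeling choices and all numbers are illustrative assumptions, not the paper's exact formulation:

```python
import itertools

BETA, ETA = 2.0, 20.0        # Weibull shape and scale (illustrative)
T_REP, T_PM = 3.0, 2.0       # repair time per failure, PM duration

def effective_time(p, age, do_pm):
    """Expected processing time of a job of length p at machine age `age`."""
    a = 0.0 if do_pm else age                        # PM resets the age
    exp_failures = ((a + p) / ETA)**BETA - (a / ETA)**BETA
    return p + T_REP * exp_failures + (T_PM if do_pm else 0.0), a + p

def expected_makespan(seq, pm1, pm2, p1, p2):
    """Two-machine flow-shop recursion using expected effective times."""
    c1 = c2 = age1 = age2 = 0.0
    for pos, j in enumerate(seq):
        d1, age1 = effective_time(p1[j], age1, pm1[pos])
        c1 += d1
        d2, age2 = effective_time(p2[j], age2, pm2[pos])
        c2 = max(c1, c2) + d2
    return c2

# Total enumeration over sequences and PM decisions (3 jobs -> 6*8*8 cases).
p1, p2 = [4.0, 6.0, 5.0], [5.0, 4.0, 6.0]
best = min(
    (expected_makespan(seq, pm1, pm2, p1, p2), seq, pm1, pm2)
    for seq in itertools.permutations(range(3))
    for pm1 in itertools.product([False, True], repeat=3)
    for pm2 in itertools.product([False, True], repeat=3)
)
```

Enumeration is only viable at this toy scale; the paper's GA-based heuristics exist precisely because the search space grows as n! * 2^(2n).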

  2. TEMPORAL VARIABILITY MEASUREMENT OF SPECIFIC VOLATILE ORGANIC COMPOUNDS

    EPA Science Inventory

    Methodology was developed to determine unambiguously trace levels of volatile organic compounds as they vary in concentration over a variety of time scales. This capability is important because volatile organic compounds (VOCs) are usually measured by time-integrative techniques th...

  3. Pair plasma relaxation time scales.

    PubMed

    Aksenov, A G; Ruffini, R; Vereshchagin, G V

    2010-04-01

    By numerically solving the relativistic Boltzmann equations, we compute the time scale for relaxation to thermal equilibrium for an optically thick electron-positron plasma with baryon loading. We focus on the time scales of electromagnetic interactions. The collisional integrals are obtained directly from the corresponding QED matrix elements. Thermalization time scales are computed for a wide range of values of both the total-energy density (over 10 orders of magnitude) and of the baryonic loading parameter (over 6 orders of magnitude). This also allows us to study such interesting limiting cases as the almost purely electron-positron plasma or electron-proton plasma as well as intermediate cases. These results appear to be important both for laboratory experiments aimed at generating optically thick pair plasmas as well as for astrophysical models in which electron-positron pair plasmas play a relevant role.

  4. Integrated landscape/hydrologic modeling tool for semiarid watersheds

    Treesearch

    Mariano Hernandez; Scott N. Miller

    2000-01-01

    An integrated hydrologic modeling/watershed assessment tool is being developed to aid in determining the susceptibility of semiarid landscapes to natural and human-induced changes across a range of scales. Watershed processes are by definition spatially distributed and are highly variable through time, and this approach is designed to account for their spatial and...

  5. A design for integration.

    PubMed

    Fenna, D

    1977-09-01

    For nearly two decades, the development of computerized information systems has struggled for acceptable compromises between the unattainable "total system" and the unacceptable separate applications. Integration of related applications is essential if the computer is to be exploited fully, yet relative simplicity is necessary for systems to be implemented in a reasonable time-scale. This paper discusses a system being progressively developed from minimal beginnings but which, from the outset, had a highly flexible and fully integrated system basis. The system is for batch processing, but can accommodate on-line data input; it is similar in its approach to many transaction-processing real-time systems.

  6. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.

    2004-01-01

    Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-time-scale continuous state systems is presented which applies to these systems, and it is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
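A serial multi-rate scheme of this kind can be sketched with explicit Euler: the slow subsystem takes one macro-step, and the fast subsystem takes m substeps against a linear interpolation of the slow state across the macro-step. The test system and the first-order interface treatment are illustrative assumptions, not the authors' algorithm:

```python
def multirate_step(f_slow, f_fast, ys, yf, t, H, m):
    """One macro-step H for the slow state, m substeps H/m for the fast state."""
    ys_new = ys + H * f_slow(t, ys, yf)      # slow subsystem: one Euler step
    h = H / m
    yfk = yf
    for k in range(m):
        tau = k / m                           # position inside the macro-step
        ys_interp = (1 - tau) * ys + tau * ys_new   # interpolated slow state
        yfk = yfk + h * f_fast(t + k*h, ys_interp, yfk)
    return ys_new, yfk

# Two-time-scale test system (illustrative, not from the paper):
#   slow:  ys' = -0.1*ys + 0.05*yf      fast:  yf' = -50*(yf - ys)
f_slow = lambda t, ys, yf: -0.1*ys + 0.05*yf
f_fast = lambda t, ys, yf: -50.0*(yf - ys)

ys, yf, t, H, m = 1.0, 0.0, 0.0, 0.01, 10
for _ in range(100):                          # integrate to t = 1.0
    ys, yf = multirate_step(f_slow, f_fast, ys, yf, t, H, m)
    t += H
```

Taking a single rate instead would force the slow subsystem onto the fast step H/m; the multi-rate split keeps the fast dynamics stable while the slow subsystem is evaluated only once per macro-step.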

  7. Velocity space scattering coefficients with applications in antihydrogen recombination studies

    NASA Astrophysics Data System (ADS)

    Chang, Yongbin; Ordonez, C. A.

    2000-12-01

    An approach for calculating velocity space friction and diffusion coefficients with Maxwellian field particles is developed based on a kernel function derived in a previous paper [Y. Chang and C. A. Ordonez, Phys. Plasmas 6, 2947 (1999)]. The original fivefold integral expressions for the coefficients are reduced to onefold integrals, which can be used for any value of the Coulomb logarithm. The onefold integrals can be further reduced to standard analytical expressions by using a weak coupling approximation. The integral expression for the friction coefficient is used to predict a time scale that describes the rate at which a reflecting antiproton beam slows down within a positron plasma, while both species are simultaneously confined by a nested Penning trap. The time scale is used to consider the possibility of achieving antihydrogen recombination within the trap. The friction and diffusion coefficients are then used to derive an expression for calculating the energy transfer rate between antiprotons and positrons. The expression is employed to illustrate achieving antihydrogen recombination while taking into account positron heating by the antiprotons. The effect of the presence of an electric field on recombination is discussed.

  8. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology.

    PubMed

    Price, S A; Schmitz, L

    2016-04-05

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. © 2016 The Author(s).

  9. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology

    PubMed Central

    Schmitz, L.

    2016-01-01

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. PMID:26977068

  10. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
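The integrating-factor ingredient can be illustrated for a single conductance-based neuron: freezing the conductances over one step turns the linear voltage equation into an exact exponential update that remains stable for large steps. This is a minimal sketch with illustrative parameters, without the spike-spike corrections or clustering of the full method:

```python
import numpy as np

# Leaky integrate-and-fire neuron with an excitatory conductance:
#   dv/dt = -gL*(v - EL) - gE(t)*(v - EE)
# Writing v' + a*v = b with a = gL + gE and b = gL*EL + gE*EE, and freezing
# a, b over one step, gives an exact exponential update that stays stable
# even in stiff, high-conductance states.
gL, EL, EE = 0.05, -65.0, 0.0        # leak conductance, reversal potentials

def if_step(v, gE, dt):
    a = gL + gE                       # total conductance
    v_inf = (gL*EL + gE*EE) / a       # steady state for frozen conductances
    return v_inf + (v - v_inf) * np.exp(-a * dt)

rng = np.random.default_rng(1)
v = EL
dt = 1.0                              # large time-step, still stable here
for _ in range(100):
    gE = rng.uniform(0.5, 1.0)        # strongly fluctuating excitation
    v = if_step(v, gE, dt)
```

The exponential factor is what the integrating factor buys: the update can never overshoot the frozen steady state, whereas an explicit scheme with the same step would be at the edge of its stability limit for these conductances.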

  11. Global scale diagnoses of FGGE data

    NASA Technical Reports Server (NTRS)

    Paegle, J.

    1985-01-01

    Descriptive global scale diagnoses of the First Global Atmospheric Research Experiment SOP-1 analyses were made and compared against controlled, real data integrations of the Goddard Laboratory of Atmospheric Science (GLAS) general circulation model (GCM) as well as other data sets. The effects of critical latitudes were studied; the influence of tropical wind data and latent heating upon the GLAS GCM was diagnosed; planetary wave structure on various time scales from the diurnal to the monthly was studied; and the GLAS analyses were compared with other analyses. Short term controlled GLAS GCM integrations show that: (1) the inclusion of tropical wind data in real data integrations has an important influence in the mid-latitude prediction in both hemispheres; and (2) the tropical divergent wind reacts almost immediately to alteration of the tropical latent heating. The presence or absence of zonally averaged easterlies depends strongly upon the presence of tropical latent heating.

  12. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion gallons/year in the southeastern U.S., and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  13. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. 
Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  14. Hypoxia-elicited impairment of cell wall integrity, glycosylation precursor synthesis, and growth in scaled-up high-cell density fed-batch cultures of Saccharomyces cerevisiae.

    PubMed

    Aon, Juan C; Sun, Jianxin; Leighton, Julie M; Appelbaum, Edward R

    2016-08-15

    In this study we examine the integrity of the cell wall during scale-up of a yeast fermentation process from laboratory scale (10 L) to industrial scale (10,000 L). In a previous study we observed a clear difference between these scales in the volume fraction occupied by yeast cells, as revealed by wet cell weight (WCW) measurements. That study also included metabolite analysis which suggested hypoxia during scale-up. Here we hypothesize that hypoxia weakens the yeast cell wall during scale-up, leading to changes in cell permeability and/or cell mechanical resistance, which in turn may lead to the observed difference in WCW. We tested cell wall integrity by probing the cell wall's sensitivity to Zymolyase. Exometabolomics data also showed changes in the supply of precursors for the glycosylation pathway. The results show a more sensitive cell wall later in the production process at industrial scale, while the sensitivity at early time points was similar at both scales. Significantly lower levels of Man6P and progressively higher GDP-mannose indicated partially impaired incorporation of this sugar nucleotide during co- or post-translational protein glycosylation at the 10,000 L compared to the 10 L scale. This impairment in glycosylation would be expected to affect cell wall integrity. Although cell viability in samples obtained at both scales was similar, cells harvested from 10 L bioreactors were able to re-initiate growth faster in fresh shake flask media than those harvested from the industrial scale. These results help explain the WCW differences observed at the two scales by hypoxia-triggered weakening of the yeast cell wall during scale-up.

  15. Nonlinear stratospheric variability: multifractal de-trended fluctuation analysis and singularity spectra

    PubMed Central

    Domeisen, Daniela I. V.

    2016-01-01

    When the stratosphere is characterized as a turbulent system, its temporal fluctuations often show different correlations on different time scales, as well as intermittent behaviour that cannot be captured by a single scaling exponent. In this study, the different scaling laws in long-term stratospheric variability are studied using multifractal de-trended fluctuation analysis (MF-DFA). The analysis compares four re-analysis products and different realizations of an idealized numerical model, isolating the role of topographic forcing and seasonal variability, as well as the absence of climate teleconnections and small-scale forcing. The Northern Hemisphere (NH) shows a transition in scaling exponents from time scales shorter than about 1 year, for which the variability is multifractal and scales in time with a power law corresponding to a red spectrum, to longer time scales, for which the variability is monofractal and scales in time with a power law corresponding to white noise. Southern Hemisphere (SH) variability also shows a transition at annual scales. The SH also shows a narrower dynamical range in multifractality than the NH, as seen in the generalized Hurst exponent and in the singularity spectra. The numerical integrations show that the models are able to reproduce the low-frequency variability but are not able to fully capture the shorter-term variability of the stratosphere. PMID:27493560
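
    The scaling-exponent estimation underlying this record can be illustrated with a minimal monofractal DFA sketch (the paper's MF-DFA additionally varies a moment order q; the parameter choices below are illustrative, not from the paper). For white noise, the DFA-1 exponent should come out near 0.5, matching the white-noise regime described in the abstract:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis (DFA): fluctuation function F(s)
    for each window size s. This is the monofractal q = 2 special case
    of the MF-DFA used in the paper."""
    y = np.cumsum(x - np.mean(x))           # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                      # non-overlapping windows of size s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # residuals after removing a degree-`order` polynomial trend per window
        res = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.concatenate(res) ** 2)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(8192)                # white noise: expected exponent ~0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
print(f"DFA exponent alpha = {alpha:.2f}")
```

    A multifractal signal would show a spread of exponents as q varies; here a single slope suffices.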

  16. SSI/MSI/LSI/VLSI/ULSI.

    ERIC Educational Resources Information Center

    Alexander, George

    1984-01-01

    Discusses small-scale integrated (SSI), medium-scale integrated (MSI), large-scale integrated (LSI), very large-scale integrated (VLSI), and ultra large-scale integrated (ULSI) chips. The development and properties of these chips, uses of gallium arsenide, Josephson devices (two superconducting strips sandwiching a thin insulator), and future…

  17. Longitudinal Cross-Gender Factorial Invariance of the Academic Motivation Scale

    ERIC Educational Resources Information Center

    Grouzet, Frederick M. E.; Otis, Nancy; Pelletier, Luc G.

    2006-01-01

    This study examined the measurement and latent construct invariance of the Academic Motivation Scale (Vallerand, Blais, Brier, & Pelletier, 1989; Vallerand et al., 1992, 1993) across both gender and time. An integrative analytical strategy was used to assess in one set of nested models both longitudinal and cross-gender invariance, and…

  18. COMPARABILITY OF A REGIONAL AND STATE SURVEY: EFFECTS ON FISH IBI ASSESSMENT FOR WEST VIRGINIA, U.S.A.

    EPA Science Inventory

    The comparability of different survey designs needs to be established to facilitate integration of data across scales and interpretation of trends over time. Probability-based survey designs are now being investigated to allow condition to be assessed at the watershed scale, an...

  19. Scalable Preconditioners for Structure Preserving Discretizations of Maxwell Equations in First Order Form

    DOE PAGES

    Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.

    2018-05-01

    Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell-type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.

  1. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

    Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurate and efficient numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  2. Computational modes and the Machenhauer N.L.N.M.I. of the GLAS 4th order model. [NonLinear Normal Mode Initialization in numerical weather forecasting

    NASA Technical Reports Server (NTRS)

    Navon, I. M.; Bloom, S.; Takacs, L. L.

    1985-01-01

    An attempt was made to use the GLAS global 4th order shallow water equations to perform a Machenhauer nonlinear normal mode initialization (NLNMI) for the external vertical mode. A new algorithm was defined for identifying and filtering out computational modes which affect the convergence of the Machenhauer iterative procedure. The computational modes and zonal waves were linearly initialized and gravitational modes were nonlinearly initialized. The Machenhauer NLNMI was insensitive to the absence of high zonal wave numbers. The effects of the Machenhauer scheme were evaluated by performing 24 hr integrations with nondissipative and dissipative explicit time integration models. The NLNMI was found to be inferior to the Rasch (1984) pseudo-secant technique for obtaining convergence when the time scales of nonlinear forcing were much smaller than the time scales expected from the natural frequency of the mode.

  3. Observation of Prethermalization in Long-Range Interacting Spin Chains (Open Access, Author’s Manuscript)

    DTIC Science & Technology

    2016-08-03

    instance, quantum systems that are near-integrable usually fail to thermalize in an experimentally realistic time scale and, instead, relax to quasi ...However, it is possible to observe quasi -stationary states, often called prethermal, that emerge within an experimentally accessible time scale. Previous...generalized Gibbs ensemble (GGE) [10–13]. Here we experimentally study the relaxation dynamics of a chain of up to 22 spins evolving under a long-range

  4. Integrative monitoring of water storage variations at the landscape-scale with an iGrav superconducting gravimeter in a field enclosure

    NASA Astrophysics Data System (ADS)

    Guntner, A.; Reich, M.; Mikolaj, M.; Creutzfeldt, B.; Schroeder, S.; Wziontek, H.

    2017-12-01

    In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and related storage dynamics beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. We present the first outdoor deployment of an iGrav superconducting gravimeter (SG) in a minimized field enclosure on a wet-temperate grassland site for integrative monitoring of water storage changes. It is shown that the system performs with similar precision to SGs that have hitherto been deployed in observatory buildings, but with higher sensitivity to hydrological variations in the surroundings of the instrument. Gravity variations observed by the field setup are almost independent of the depth below the terrain surface at which water storage changes occur, and thus the field SG system directly observes the total water storage change in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily time scales. With about 99% and 85% of the gravity signal originating within a radius of 4000 m and 200 m around the instrument, respectively, the setup paves the way towards gravimetry as a continuous hydrological field monitoring technique for water storage dynamics at the landscape scale.

  5. Characteristic Time Scales of Characteristic Magmatic Processes and Systems

    NASA Astrophysics Data System (ADS)

    Marsh, B. D.

    2004-05-01

    Every specific magmatic process, regardless of spatial scale, has an associated characteristic time scale. Time scales associated with crystals alone are rates of growth, dissolution, settling, aggregation, annealing, and nucleation, among others. At the other extreme are the time scales associated with the dynamics of the entire magmatic system. These can be separated into two groups: those associated with system genetics (e.g., the production and transport of magma, establishment of the magmatic system) and those due to physical characteristics of the established system (e.g., wall rock failure, solidification front propagation and instability, porous flow). The detailed geometry of a specific magmatic system is particularly important to appreciate; although generic systems are useful, care must be taken to make model systems as absolutely realistic as possible. Fuzzy models produce fuzzy science. Knowledge of specific time scales is not necessarily useful or meaningful unless the hierarchical context of the time scales for a realistic magmatic system is appreciated. The age of a specific phenocryst or ensemble of phenocrysts, as determined from isotopic or CSD studies, is not meaningful unless something can be ascertained of the provenance of the crystals. For example, crystal size multiplied by growth rate gives a meaningful crystal age only if it is from a part of the system that has experienced semi-monotonic cooling prior to chilling; crystals entrained from a long-standing cumulate bed that were mechanically sorted in ascending magma may not reveal this history. Ragged old crystals rolling about in the system for untold numbers of flushing times record specious process times, telling more about the noise in the system than the life of typical, first generation crystallization processes. The most helpful process-related time scales are those that are known well and that bound or define the temporal style of the system. 
Perhaps the most valuable of these times comes from the observed durations and rates of volcanism. There can be little doubt that the temporal styles of volcanism are the same as those of magmatism in general. Volcano repose times, periodicity, eruptive fluxes, acoustic emission structures, lava volumes, longevity, etc. must also be characteristic of pluton-dominated systems. We must therefore give up some classical concepts (e.g., instantaneous injection of crystal-free magma as an initial condition) for any plutonic/chambered system and move towards an integrated concept of magmatism. Among the host of process-related time scales, probably the three most fundamental of any magmatic system are (1) the time scale associated with crystal nucleation (J) and growth (G), t_x = C_1 (G^3 J)^(-1/4) (Zieg & Marsh, J. Pet. '02), along with the associated scales for mean crystal size (L) and population (N), (2) the time scale associated with conductive cooling controlled by a local length scale d, t_c = C_2 d^2/K (K is thermal diffusivity), and (3) the time scale associated with intra-crystal diffusion, t_d = C_3 L^2/D (D is chemical diffusivity). It is the subtle, clever, and insightful application of time scales, dovetailed with realistic system geometry and attention paid to the analogous time scales of volcanism, that promises to reveal the true dynamic integration of magmatic systems.
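
    The three characteristic time scales in this record can be evaluated directly from the stated formulas; a minimal sketch, in which the prefactors C1 = C2 = C3 = 1 and all parameter values are order-of-magnitude assumptions for illustration, not values from the abstract:

```python
# Characteristic magmatic time scales from the abstract's three formulas.
# All numerical inputs below are illustrative assumptions.

def crystallization_time(G, J, C1=1.0):
    """t_x = C1 * (G^3 * J)^(-1/4): nucleation rate J, growth rate G."""
    return C1 * (G**3 * J) ** -0.25

def conductive_cooling_time(d, kappa, C2=1.0):
    """t_c = C2 * d^2 / kappa: conduction over a local length scale d."""
    return C2 * d**2 / kappa

def intracrystal_diffusion_time(L, D, C3=1.0):
    """t_d = C3 * L^2 / D: chemical diffusion within a crystal of size L."""
    return C3 * L**2 / D

# Assumed order-of-magnitude inputs: G [m/s], J [1/(m^3 s)], body half-
# thickness d [m], crystal size L [m], kappa [m^2/s], D [m^2/s].
t_x = crystallization_time(G=1e-10, J=1e-4)
t_c = conductive_cooling_time(d=100.0, kappa=1e-6)
t_d = intracrystal_diffusion_time(L=1e-3, D=1e-16)

SECONDS_PER_YEAR = 3.15e7
for name, t in [("t_x", t_x), ("t_c", t_c), ("t_d", t_d)]:
    print(f"{name} = {t / SECONDS_PER_YEAR:.2e} years")
```

    Comparing the three values for a given system indicates which process controls its temporal style, in the spirit of the hierarchy the abstract describes.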

  6. Multigrid methods with space–time concurrency

    DOE PAGES

    Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...

    2017-10-06

    Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.

  8. An innovative large scale integration of silicon nanowire-based field effect transistors

    NASA Astrophysics Data System (ADS)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors that offer label-free, portable and rapid detection. Nevertheless, their large-scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long-channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs constituted by randomly oriented silicon nanowires are also studied. Compatible integration with back-end CMOS readout and promising electrical performance open new opportunities for sensing applications.

  9. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.

  10. Simple photometer circuits using modular electronic components

    NASA Technical Reports Server (NTRS)

    Wampler, J. E.

    1975-01-01

    Operational and peak holding amplifiers are discussed as useful circuits for bioluminescence assays. Circuit diagrams are provided. While analog methods can give a good integration on short time scales, digital methods were found best for long term integration in bioluminescence assays. Power supplies, a general photometer circuit with ratio capability, and variations in the basic photometer design are also considered.

  11. On generic obstructions to recovering correct statistics from climate simulations: Homogenization for deterministic maps and multiplicative noise

    NASA Astrophysics Data System (ADS)

    Gottwald, Georg; Melbourne, Ian

    2013-04-01

    Whereas diffusion limits of stochastic multi-scale systems have a long and successful history, the case of constructing stochastic parametrizations of chaotic deterministic systems has been much less studied. We present rigorous results on the convergence of a chaotic slow-fast system to a stochastic differential equation with multiplicative noise. Furthermore we present rigorous results for chaotic slow-fast maps, occurring as numerical discretizations of continuous time systems. This raises the issue of how to interpret certain stochastic integrals; surprisingly, in the case of maps the resulting integrals of the stochastic limit system are generically neither of Stratonovich nor of Ito type. It is shown that the limit system of a numerical discretisation is different from the associated continuous time system. This has important consequences when interpreting the statistics of long time simulations of multi-scale systems - they may be very different from those of the original continuous time system which we set out to study.

  12. Integrated tokamak modeling: when physics informs engineering and research planning

    NASA Astrophysics Data System (ADS)

    Poli, Francesca

    2017-10-01

    Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation and planning for both present and future devices. This tutorial will guide the reader through the building blocks of an ``integrated'' tokamak simulation, such as magnetic flux diffusion, thermal, momentum and particle transport, external heating and current drive sources, and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive ``ITER baseline'', it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH1146.

  13. Wheels within Wheels: Hamiltonian Dynamics as a Hierarchy of Action Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, Rory J.; Bellan, Paul M.

    2010-09-17

    In systems where one coordinate undergoes periodic oscillation, the net displacement in any other coordinate over a single period is shown to be given by differentiation of the action integral associated with the oscillating coordinate. This result is then used to demonstrate that the action integral acts as a Hamiltonian for the slow coordinates, provided that time is scaled to the 'tick time' of the oscillating coordinate. Numerous examples, including charged particle drifts and relativistic motion, are supplied to illustrate the varied application of these results.
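
    The result summarized here can be sketched with the standard action-angle argument; the conventions below (fast pair (q, p), slow pair (Q, P), action evaluated at fixed energy E) are assumed for illustration and need not match the paper's notation:

```latex
% Action integral over one fast period:
%   J(E, P, Q) = \oint p \, dq .
% Writing p = p(q, E, P, Q) and using Hamilton's equations,
%   \partial p/\partial P = -\dot{Q}/\dot{q},
% so differentiating under the integral gives the net slow displacement:
\left.\frac{\partial J}{\partial P}\right|_{E}
  = \oint \frac{\partial p}{\partial P}\, dq
  = -\oint \dot{Q}\, dt
  = -\,\Delta Q ,
\qquad
\left.\frac{\partial J}{\partial E}\right|_{P,Q}
  = \oint \frac{dq}{\dot{q}} = T .
```

    The first relation gives the per-period drift of the slow coordinate by differentiating the action; the second recovers the fast period T, the 'tick time' to which the slow dynamics is scaled.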

  14. Ten-channel InP-based large-scale photonic integrated transmitter fabricated by SAG technology

    NASA Astrophysics Data System (ADS)

    Zhang, Can; Zhu, Hongliang; Liang, Song; Cui, Xiao; Wang, Huitao; Zhao, Lingjuan; Wang, Wei

    2014-12-01

    A 10-channel InP-based large-scale photonic integrated transmitter was fabricated by selective area growth (SAG) technology combined with butt-joint regrowth (BJR) technology. The SAG technology was utilized to fabricate the electroabsorption modulated distributed feedback (DFB) laser (EML) arrays at the same time. A coplanar electrode design for the electroabsorption modulator (EAM) was used for flip-chip bonding packaging. The lasing wavelength of each DFB laser can be tuned by an integrated micro-heater to match the ITU grid, which requires only one electrode pad. The average output power of each channel is 250 μW at an injection current of 200 mA. The static extinction ratios of the EAMs for the 10 channels tested range from 15 to 27 dB at a reverse bias of 6 V. The 3 dB bandwidth of the chip for each channel is around 14 GHz. The novel design and simple fabrication process show enormous potential for reducing the cost of large-scale photonic integrated circuit (LS-PIC) transmitters with high chip yields.

  15. Proportionality between Doppler noise and integrated signal path electron density validated by differenced S-X range

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon, which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of the observed data sample interval (tau), where the time scale of the observations is 15 tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, L must be proportional to the closest approach distance a, and must be a strong function of the observed data sample interval, and hence of the time scale of the observations.

  16. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

    Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of million years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. 
The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.

  17. Topographic mapping of a hierarchy of temporal receptive windows using a narrated story

    PubMed Central

    Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.

    2011-01-01

    Real life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRF), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real life story scrambled at the time scales of words, sentences and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy we found parietal and frontal areas which responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912

  18. Scale-dependent Ocean Wave Turbulence

    NASA Technical Reports Server (NTRS)

    Glazman, R. E.

    1995-01-01

    Wave turbulence is a common feature of nonlinear wave motions observed when external forcing acts over a long period of time, resulting in developed spectral cascades of energy, momentum, and other conserved integrals. In the ocean, wave turbulence occurs on various scales, from capillary ripples, through baroclinic inertia-gravity waves, to Rossby waves. Oceanic wave motions are discussed.

  19. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation, such as wind and solar power, complicate power-system dispatch. In this paper, a multi-time-scale robust scheduling method is proposed to address this problem. It reduces the impact of clean-energy prediction error on the power grid by scheduling on multiple time scales (day-ahead, intraday, and real-time) and coordinating the dispatched output of various power sources such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the resulting schedule. By pricing wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  20. Thermal performance curves, phenotypic plasticity, and the time scales of temperature exposure.

    PubMed

    Schulte, Patricia M; Healy, Timothy M; Fangue, Nann A

    2011-11-01

    Thermal performance curves (TPCs) describe the effects of temperature on biological rate processes. Here, we use examples from our work on common killifish (Fundulus heteroclitus) to illustrate some important conceptual issues relating to TPCs in the context of using these curves to predict the responses of organisms to climate change. Phenotypic plasticity has the capacity to alter the shape and position of the TPCs for acute exposures, but these changes can be obscured when rate processes are measured only following chronic exposures. For example, the acute TPC for mitochondrial respiration in killifish is exponential in shape, but this shape changes with acclimation. If respiration rate is measured only at the acclimation temperature, the TPC is linear, concealing the underlying mechanistic complexity at an acute time scale. These issues are particularly problematic when attempting to use TPCs to predict the responses of organisms to temperature change in natural environments. Many TPCs are generated using laboratory exposures to constant temperatures, but temperature fluctuates in the natural environment, and both the mechanisms influencing performance and the responses of the performance traits may be quite different at acute and chronic time scales. Unfortunately, our current understanding of the mechanisms underlying the responses of organisms to temperature change is incomplete, particularly with respect to integrating from processes occurring at the level of single proteins up to whole-organism functions across different time scales, which is a challenge for the development of strongly grounded mechanistic models of responses to global climate change. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.

  1. Marc Pomeroy | NREL

    Science.gov Websites

    Real-time sampling of on-stream components of thermochemical biomass conversion. Affiliated research: Integration, Scale-Up, and Piloting. Areas of expertise: analytical sampling of hot gas and vapor-phase products.

  2. An integrated assessment of location-dependent scaling for microalgae biofuel production facilities

    DOE PAGES

    Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.

  3. Orbital time scale and new C-isotope record for Cenomanian-Turonian boundary stratotype

    NASA Astrophysics Data System (ADS)

    Sageman, Bradley B.; Meyers, Stephen R.; Arthur, Michael A.

    2006-02-01

    Previous time scales for the Cenomanian-Turonian boundary (CTB) interval containing Oceanic Anoxic Event II (OAE II) vary by a factor of three. In this paper we present a new orbital time scale for the CTB stratotype established independently of radiometric, biostratigraphic, or geochemical data sets, update revisions of CTB biostratigraphic zonation, and provide a new detailed carbon isotopic record for the CTB study interval. The orbital time scale allows an independent assessment of basal biozone ages relative to the new CTB date of 93.55 Ma (GTS04). The δ13Corg data document the abrupt onset of OAE II, significant variability in δ13Corg values, and values enriched to almost -22‰. These new data underscore the difficulty in defining OAE II termination. Using the new isotope curve and time scale, estimates of OAE II duration can be determined and exported to other sites based on integration of well-established chemostratigraphic and biostratigraphic datums. The new data will allow more accurate calculations of biogeochemical and paleobiologic rates across the CTB.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrnstein, Aaron R.

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO 2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No dramatic or persistent signs of error growth in the passive tracer outgassing or the ocean circulation are observed to result from AMR.
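
    The leapfrog/forward-backward combination mentioned above is a staple of ocean modeling. As a minimal sketch (not the LLNL/SAMRAI implementation; the test oscillator and filter coefficient are illustrative), leapfrog time stepping with a Robert-Asselin filter can be written as:

```python
import numpy as np

def leapfrog(f, y0, dt, nsteps, alpha=0.01):
    """Leapfrog stepping y_{n+1} = y_{n-1} + 2*dt*f(y_n), with a
    Robert-Asselin filter damping the spurious computational mode."""
    y_prev = np.asarray(y0, dtype=float)
    y_curr = y_prev + dt * f(y_prev)              # Euler start-up step
    for _ in range(nsteps - 1):
        y_next = y_prev + 2.0 * dt * f(y_curr)
        # filter the centre time level before shifting the window
        y_filt = y_curr + alpha * (y_next - 2.0 * y_curr + y_prev)
        y_prev, y_curr = y_filt, y_next
    return y_curr

# Linear oscillator u' = v, v' = -omega^2 u, a stand-in for wave dynamics
omega = 2.0
f = lambda y: np.array([y[1], -omega**2 * y[0]])
y_end = leapfrog(f, [1.0, 0.0], dt=0.01, nsteps=2000)
```

    For purely oscillatory dynamics leapfrog conserves amplitude exactly when the time step resolves the frequency, and the filter trades a small amount of damping for suppression of the computational mode.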

  5. On-chip detection of non-classical light by scalable integration of single-photon detectors

    PubMed Central

    Najafi, Faraz; Mower, Jacob; Harris, Nicholas C.; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K.; Englund, Dirk

    2015-01-01

    Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit—required for scalable quantum photonic circuits—has been limited to <0.2%. Here we introduce a micrometer-scale flip-chip process that enables scalable integration of SNSPDs on a range of photonic circuits. Ten low-jitter detectors are integrated on one circuit with 100% device yield. With an average system detection efficiency beyond 10%, and estimated on-chip detection efficiency of 14–52% for four detectors operated simultaneously, we demonstrate, to the best of our knowledge, the first on-chip photon correlation measurements of non-classical light. PMID:25575346

  6. Estimation of surface heat and moisture fluxes over a prairie grassland. II - Two-dimensional time filtering and site variability

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Smith, Eric A.

    1992-01-01

    The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areal averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input with an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areal integrated fluxes.
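
    The cutoff described above (discarding periods shorter than six hours) can be caricatured with a one-dimensional spectral low-pass filter; the paper's actual procedure is a two-dimensional cross-time transform, and the hourly series below is synthetic:

```python
import numpy as np

def lowpass_periods(x, dt_hours, cutoff_hours=6.0):
    """Zero out Fourier components whose period is shorter than cutoff."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt_hours)   # cycles per hour
    X[freqs > 1.0 / cutoff_hours] = 0.0           # periods < cutoff removed
    return np.fft.irfft(X, n=len(x))

# Hourly series: a diurnal cycle plus fast "noise" with a 3-hour period
t = np.arange(240.0)                        # ten days of hourly samples
signal = np.sin(2 * np.pi * t / 24.0)       # diurnal component (kept)
noise = 0.5 * np.sin(2 * np.pi * t / 3.0)   # subdiurnal, period < 6 h
filtered = lowpass_periods(signal + noise, dt_hours=1.0)
```
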

  7. Technical integration of hippocampus, Basal Ganglia and physical models for spatial navigation.

    PubMed

    Fox, Charles; Humphries, Mark; Mitchinson, Ben; Kiss, Tamas; Somogyvari, Zoltan; Prescott, Tony

    2009-01-01

    Computational neuroscience is increasingly moving beyond modeling individual neurons or neural systems to consider the integration of multiple models, often constructed by different research groups. We report on our preliminary technical integration of recent hippocampal formation, basal ganglia and physical environment models, together with visualisation tools, as a case study in the use of Python across the modelling tool-chain. We do not present new modeling results here. The architecture incorporates leaky-integrator and rate-coded neurons, a 3D environment with collision detection and tactile sensors, 3D graphics and 2D plots. We found Python to be a flexible platform, offering a significant reduction in development time, without a corresponding significant increase in execution time. We illustrate this by implementing a part of the model in various alternative languages and coding styles, and comparing their execution times. For very large-scale system integration, communication with other languages and parallel execution may be required, which we demonstrate using the BRAHMS framework's Python bindings.

  8. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in advanced manufacturing, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method is introduced for establishing the end coordinate system. Based on this method, a virtual robot body is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment is implemented to verify the proposed algorithms. First, the hand-eye transformation matrix is solved. Then a car-body rear is measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear is reconstructed successfully.

  9. The Telecommunications and Data Acquisition Report

    NASA Technical Reports Server (NTRS)

    Posner, E. C. (Editor)

    1989-01-01

    Deep Space Network advanced systems, very large scale integration architecture for decoders, radar interface and control units, microwave time delays, microwave antenna holography, and a radio frequency interference survey are among the topics discussed.

  10. A scale-invariant internal representation of time.

    PubMed

    Shankar, Karthik H; Howard, Marc W

    2012-01-01

    We propose a principled way to construct an internal representation of the temporal stimulus history leading up to the present moment. A set of leaky integrators performs a Laplace transform on the stimulus function, and a linear operator approximates the inversion of the Laplace transform. The result is a representation of stimulus history that retains information about the temporal sequence of stimuli. This procedure naturally represents more recent stimuli more accurately than less recent stimuli; the decrement in accuracy is precisely scale invariant. This procedure also yields time cells that fire at specific latencies following the stimulus with a scale-invariant temporal spread. Combined with a simple associative memory, this representation gives rise to a moment-to-moment prediction that is also scale invariant in time. We propose that this scale-invariant representation of temporal stimulus history could serve as an underlying representation accessible to higher-level behavioral and cognitive mechanisms. In order to illustrate the potential utility of this scale-invariant representation in a variety of fields, we sketch applications using minimal performance functions to problems in classical conditioning, interval timing, scale-invariant learning in autoshaping, and the persistence of the recency effect in episodic memory across timescales.
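
    The first stage of this construction, a bank of leaky integrators whose states form the Laplace transform of the stimulus history, can be sketched as follows. The decay rates, step size, and impulse stimulus are illustrative choices, and the approximate inverse-transform operator (the linear stage) is omitted:

```python
import numpy as np

def run_integrators(stimulus, dt, rates):
    """Bank of leaky integrators dF/dt = -s*F + f(t), one unit per rate s.
    After the run, F[i] approximates the Laplace transform of the stimulus
    history evaluated at s = rates[i]."""
    F = np.zeros_like(rates, dtype=float)
    for f_t in stimulus:
        F += dt * (-rates * F + f_t)        # forward-Euler update
    return F

dt = 0.001
rates = np.array([0.5, 1.0, 2.0, 4.0])      # one decay rate per unit
stimulus = np.zeros(1000)
stimulus[0] = 1.0 / dt                      # unit impulse at t = 0
F = run_integrators(stimulus, dt, rates)    # ~ exp(-s * 1.0) per unit
```

    For a unit impulse one second in the past, each unit should hold approximately exp(-s), which is exactly the scale-free exponential basis the inversion stage then converts into a blurred timeline of the past.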

  11. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    NASA Astrophysics Data System (ADS)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
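
    The MRAB schemes discussed in the talk are higher-order relatives of the following first-order multi-rate sketch, in which the fast component takes m substeps of size H/m per slow macro-step of size H (the coupled test system and rates here are made up for illustration):

```python
import numpy as np

lam_f, lam_s, eps = -1000.0, -1.0, 0.5   # fast rate, slow rate, coupling

def f_fast(yf, ys): return lam_f * yf + eps * ys
def f_slow(yf, ys): return lam_s * ys + eps * yf

def multirate_euler(yf, ys, H, m, nsteps):
    """One coarse step H for the slow variable per outer iteration and
    m fine substeps of size H/m for the fast variable, with the slow
    state frozen during the substeps."""
    h = H / m
    for _ in range(nsteps):
        ys_new = ys + H * f_slow(yf, ys)   # slow update, coarse step
        for _ in range(m):                  # fast updates, fine steps
            yf = yf + h * f_fast(yf, ys)
        ys = ys_new
    return yf, ys

# Integrate to t = 1: the slow step H would be unstable for the fast mode
yf, ys = multirate_euler(1.0, 1.0, H=0.01, m=20, nsteps=100)
```

    The payoff mirrors the talk: the expensive fine step is confined to the fast component, while the slow component advances with the coarse step.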

  12. Slow dynamics in translation-invariant quantum lattice models

    NASA Astrophysics Data System (ADS)

    Michailidis, Alexios A.; Žnidarič, Marko; Medvedyeva, Mariya; Abanin, Dmitry A.; Prosen, Tomaž; Papić, Z.

    2018-03-01

    Many-body quantum systems typically display fast dynamics and ballistic spreading of information. Here we address the open problem of how slow the dynamics can be after a generic breaking of integrability by local interactions. We develop a method based on degenerate perturbation theory that reveals slow dynamical regimes and delocalization processes in general translation invariant models, along with accurate estimates of their delocalization time scales. Our results shed light on the fundamental questions of the robustness of quantum integrable systems and the possibility of many-body localization without disorder. As an example, we construct a large class of one-dimensional lattice models where, despite the absence of asymptotic localization, the transient dynamics is exceptionally slow, i.e., the dynamics is indistinguishable from that of many-body localized systems for the system sizes and time scales accessible in experiments and numerical simulations.

  13. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    PubMed Central

    Nadarajah, Nandakumaran; Wang, Kan; Choudhury, Mazher

    2018-01-01

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network. PMID:29614040

  14. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.

    PubMed

    Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G

    2018-04-03

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.

  15. Alternans Arrhythmias: From Cell to Heart

    PubMed Central

    Weiss, James N.; Nivala, Michael; Garfinkel, Alan; Qu, Zhilin

    2010-01-01

    The goal of systems biology is to relate events at the molecular level to more integrated scales, from organelle to cell, tissue, and living organism. Here we review how normal and abnormal excitation-contraction (EC) coupling properties emerge from the protein scale, where behaviors are dominated by randomness, to the cell and tissue scales, where the heart has to beat with reliable regularity for a lifetime. Beginning with the fundamental unit of EC coupling, the couplon, where L-type Ca channels in the sarcolemmal membrane adjoin ryanodine receptors in the sarcoplasmic reticulum membrane, we show how a network of couplons with three basic properties (random activation, refractoriness, and recruitment) produces the classical physiological properties of EC coupling and, under pathophysiological conditions, leads to Ca alternans and Ca waves. Moving to the tissue scale, we discuss how cellular Ca alternans and Ca waves promote both reentrant and focal arrhythmias in the heart. Throughout, we emphasize the qualitatively novel properties which emerge at each new scale of integration. PMID:21212392
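
    At the cellular scale, the onset of alternans can be illustrated with the textbook action-potential-duration (APD) restitution map, a deliberate simplification rather than the couplon network model reviewed above; beat-to-beat alternation appears once the restitution slope at the fixed point exceeds one. Parameter values below are illustrative:

```python
import numpy as np

def iterate_apd(bcl, n=200, apd0=150.0, a=250.0, b=120.0, tau=40.0):
    """APD restitution map: apd_{n+1} = a - b*exp(-di_n/tau), with
    diastolic interval di_n = bcl - apd_n (bcl = pacing cycle length).
    Alternans appears when the slope (b/tau)*exp(-di/tau) exceeds 1."""
    apd, seq = apd0, []
    for _ in range(n):
        di = max(bcl - apd, 1.0)           # floor: no negative intervals
        apd = a - b * np.exp(-di / tau)
        seq.append(apd)
    return np.array(seq)

slow = iterate_apd(bcl=400.0)   # slow pacing: stable 1:1 rhythm
fast = iterate_apd(bcl=250.0)   # fast pacing: period-2 alternans
```
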

  16. Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.

    PubMed

    Wirtssohn, Sarah; Ronacher, Bernhard

    2015-04-01

    Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
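
    The leaky energy integration seen in the receptor neurons can be caricatured by a linear exponential-decay integrator driven by click pairs; the 2 ms time constant and unit click amplitudes below are illustrative only:

```python
import numpy as np

def peak_response(gap_ms, tau_ms=2.0, dt=0.01):
    """Peak output of a leaky integrator driven by two unit clicks
    separated by gap_ms; summation occurs only for gaps << tau_ms."""
    t = np.arange(0.0, gap_ms + 10.0 * tau_ms, dt)
    r1 = np.exp(-t / tau_ms)                      # response to click 1
    r2 = np.where(t >= gap_ms,                    # delayed response to click 2
                  np.exp(-(t - gap_ms) / tau_ms), 0.0)
    return float((r1 + r2).max())
```

    Clicks closer together than the integration window sum nearly linearly (peak near 2), while widely separated clicks evoke two independent responses (peak near 1), matching the sub-2-ms windows reported for the receptor neurons.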

  17. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-09-11

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the error of clock coasting increases over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, and the integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
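
    The GNSS-corrected-CSAC direction can be sketched with a standard two-state (bias, drift) clock Kalman filter; the dynamics, noise levels, and simulated clock below are illustrative assumptions, not the paper's tuned model:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # clock state: [bias, drift]
H = np.array([[1.0, 0.0]])              # GNSS time observes the bias only
Q = np.diag([1e-6, 1e-8])               # process noise (assumed values)
R = np.array([[1e-2]])                  # GNSS measurement noise (assumed)

x, P = np.zeros(2), np.eye(2)           # filter state and covariance
true_bias, true_drift = 0.0, 1e-3       # simulated CSAC clock error

for _ in range(500):
    true_bias += true_drift * dt        # clock coasting accumulates error
    z = true_bias + rng.normal(0.0, 0.1)  # stable-but-noisy GNSS time
    x = F @ x                           # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)             # update with GNSS correction
    P = (np.eye(2) - K @ H) @ P
```

    The filter averages many noisy GNSS time measurements into a smooth estimate of the clock's bias and drift, which is what bounds the growth of clock-coasting error between GNSS fixes.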

  18. Deep Coupled Integration of CSAC and GNSS for Robust PNT

    PubMed Central

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically obstructed environments such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. “Clock coasting” of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, the error of clock coasting increases over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, and the integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542

  19. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real-Time Systems, Intelligent Systems ... the methodology for a stand-alone real-time system. Then it will scale up to distributed real-time systems. For both systems, step-wise simulation ... MODEL CONTINUITY: Intelligent real-time systems monitor, respond to, or control an external environment. This environment is connected to the digital

  20. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
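
    The stability trade-off that motivates mixing the two schemes can be seen on a minimal semi-discrete conduction problem: forward (explicit) Euler is cheap but limited by a critical time step, while backward (implicit) Euler is unconditionally stable, and mixed methods apply one or the other per element or node. The finite-difference operator below is a generic sketch, not the paper's finite-element formulation:

```python
import numpy as np

n, alpha, dx = 20, 1.0, 0.05
# Semi-discrete 1D conduction du/dt = A u (standard second-difference operator)
A = (alpha / dx**2) * (np.diag(-2.0 * np.ones(n))
                       + np.diag(np.ones(n - 1), 1)
                       + np.diag(np.ones(n - 1), -1))

def step_explicit(u, dt):   # forward Euler: cheap, conditionally stable
    return u + dt * (A @ u)

def step_implicit(u, dt):   # backward Euler: a solve, unconditionally stable
    return np.linalg.solve(np.eye(n) - dt * A, u)

# Critical explicit step from the largest eigenvalue magnitude of A
dt_crit = 2.0 / np.abs(np.linalg.eigvalsh(A)).max()
```

    Running the explicit step just below dt_crit decays the solution, just above it diverges, while the implicit step remains stable at many times dt_crit, which is why the implicit scheme is reserved for the stiff (thermally fine-meshed) parts of the structure.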

  1. Mixed time integration methods for transient thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1983-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.

  2. Synaptic Scaling Enables Dynamically Distinct Short- and Long-Term Memory Formation

    PubMed Central

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Tsodyks, Misha; Wörgötter, Florentin

    2013-01-01

    Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling – a slow process usually associated with the maintenance of activity homeostasis – combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox whereby memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes. PMID:24204240

  3. Synaptic scaling enables dynamically distinct short- and long-term memory formation.

    PubMed

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Tsodyks, Misha; Wörgötter, Florentin

    2013-10-01

    Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling - a slow process usually associated with the maintenance of activity homeostasis - combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.
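    The scaling half of the mechanism can be sketched in isolation (an illustrative toy, not the paper's combined plasticity-scaling model): weights are adjusted multiplicatively, on a slow time scale, so the neuron's output rate relaxes toward a homeostatic target while relative weights are preserved.

```python
import numpy as np

# Hedged sketch of synaptic scaling alone: multiplicative updates pull
# the output rate to a homeostatic target without reordering synapses.
rng = np.random.default_rng(0)
w = rng.uniform(0.2, 2.0, size=5)   # synaptic weights
w0 = w.copy()
x = np.ones(5)                      # steady presynaptic rates
target, gamma = 1.0, 0.05           # homeostatic target rate, slow gain

for _ in range(500):
    rate = float(w @ x)
    w *= 1.0 + gamma * (target - rate) / rate   # multiplicative scaling

print(round(float(w @ x), 3))            # output rate pulled to the target
print(np.allclose(w / w0, (w / w0)[0]))  # True: relative weights preserved
```

In the paper's model, fast Hebbian plasticity rides on top of this slow homeostatic drift, and their interaction is what separates short- from long-term storage.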

  4. Coupled Integration of CSAC, MIMU, and GNSS for Improved PNT Performance

    PubMed Central

    Ma, Lin; You, Zheng; Liu, Tianyi; Shi, Shuai

    2016-01-01

    Positioning, navigation, and timing (PNT) is a strategic key technology widely used in military and civilian applications. Global navigation satellite systems (GNSS) are the most important PNT techniques. However, the vulnerability of GNSS threatens PNT service quality, and integration with other information sources is necessary. A chip scale atomic clock (CSAC) provides high-precision frequency and high-accuracy time information over short periods. A micro inertial measurement unit (MIMU) provides a strap-down inertial navigation system (SINS) with rich navigation information, good real-time performance, and anti-jamming capability, but suffers from error accumulation. This study explores the coupled integration of CSAC, MIMU, and GNSS to enhance PNT performance. The coupled integration architecture is designed to degrade gracefully when any subsystem fails. A mathematical model for a precise time-aided navigation filter is derived rigorously. The CSAC aids positioning by weighted linear optimization when the number of visible satellites is four or more. By contrast, the CSAC converts the GNSS observations to range measurements by “clock coasting” when the number of visible satellites is less than four, thereby constraining the error divergence of micro inertial navigation and improving the availability of GNSS signals and the positioning accuracy of the integration. Field vehicle experiments, both in an open-sky area and in a harsh environment, show that the integration can improve the positioning probability and accuracy. PMID:27187399

  5. Coupled Integration of CSAC, MIMU, and GNSS for Improved PNT Performance.

    PubMed

    Ma, Lin; You, Zheng; Liu, Tianyi; Shi, Shuai

    2016-05-12

    Positioning, navigation, and timing (PNT) is a strategic key technology widely used in military and civilian applications. Global navigation satellite systems (GNSS) are the most important PNT techniques. However, the vulnerability of GNSS threatens PNT service quality, and integration with other information sources is necessary. A chip scale atomic clock (CSAC) provides high-precision frequency and high-accuracy time information over short periods. A micro inertial measurement unit (MIMU) provides a strap-down inertial navigation system (SINS) with rich navigation information, good real-time performance, and anti-jamming capability, but suffers from error accumulation. This study explores the coupled integration of CSAC, MIMU, and GNSS to enhance PNT performance. The coupled integration architecture is designed to degrade gracefully when any subsystem fails. A mathematical model for a precise time-aided navigation filter is derived rigorously. The CSAC aids positioning by weighted linear optimization when the number of visible satellites is four or more. By contrast, the CSAC converts the GNSS observations to range measurements by "clock coasting" when the number of visible satellites is less than four, thereby constraining the error divergence of micro inertial navigation and improving the availability of GNSS signals and the positioning accuracy of the integration. Field vehicle experiments, both in an open-sky area and in a harsh environment, show that the integration can improve the positioning probability and accuracy.
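    The "clock coasting" idea can be sketched as follows (geometry, names, and numbers are invented for illustration, not taken from the paper): once a stable CSAC pins the receiver clock bias b, each pseudorange reduces to a plain range, so three satellites suffice for a 3D fix.

```python
import numpy as np

# Hypothetical illustration of clock coasting: with the receiver clock
# bias b held by the CSAC, pseudoranges rho = |sat - pos| + b reduce to
# ranges, and position is recovered from three satellites by Gauss-Newton.
sats = np.array([[20e6, 0.0, 5e6],
                 [0.0, 20e6, 5e6],
                 [-15e6, -15e6, 8e6]])    # satellite positions (m), invented
truth = np.array([1000.0, 2000.0, 500.0])
b = 30.0                                  # clock bias (m), known from CSAC
rho = np.linalg.norm(sats - truth, axis=1) + b   # simulated pseudoranges

pos = np.zeros(3)                         # initial guess
for _ in range(10):
    diff = pos - sats
    r = np.linalg.norm(diff, axis=1)
    H = diff / r[:, None]                 # unit line-of-sight Jacobian
    residual = r - (rho - b)              # range residual, bias removed
    pos -= np.linalg.lstsq(H, residual, rcond=None)[0]

print(np.allclose(pos, truth, atol=1e-3))   # True
```

Without the CSAC, b would be a fourth unknown and a fourth satellite would be required.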

  6. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual-spin satellite, is designed in terms of the slow equation.

  7. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    DOE PAGES

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; ...

    2017-01-25

    Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. Furthermore, the algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
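    For background, micro/meso-scale stochastic kinetics of the kind SCSP targets are commonly simulated with Gillespie's stochastic simulation algorithm; a minimal sketch for a single decay reaction (illustrative only, not the paper's SCSP algorithm):

```python
import numpy as np

# Plain Gillespie SSA for the decay reaction A -> 0 with rate constant c.
# Stiffness arises when such a reaction fires much faster than others in
# the network; this toy shows only the exact stochastic simulation step.
rng = np.random.default_rng(3)

def ssa_decay(n0, c, t_end):
    n, t = n0, 0.0
    while n > 0:
        a = c * n                        # propensity of A -> 0
        t += rng.exponential(1.0 / a)    # waiting time to the next firing
        if t > t_end:
            break
        n -= 1
    return n

# The ensemble mean should track the deterministic solution n0*exp(-c*t).
samples = [ssa_decay(200, 1.0, 1.0) for _ in range(2000)]
print(abs(np.mean(samples) - 200 * np.exp(-1.0)) < 5)   # True
```

A stiff fast reaction forces the SSA to take very many tiny steps, which is the cost that reduction frameworks like SCSP aim to avoid.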

  8. George E. Pake Prize: A Few Challenges in the Evolution of Semiconductor Device/Manufacturing Technology

    NASA Astrophysics Data System (ADS)

    Doering, Robert

    In the early 1980s, the semiconductor industry faced the related challenges of "scaling through the one-micron barrier" and converting single-level-metal NMOS integrated circuits to multi-level-metal CMOS. Multiple advances in lithography technology and device materials/process integration led the way toward the deep-sub-micron transistors and interconnects that characterize today's electronic chips. In the 1990s, CMOS scaling advanced at an accelerated pace, enabled by rapid advances in many aspects of optical lithography. However, the industry also needed to continue the progress in manufacturing on ever-larger silicon wafers to maintain economy-of-scale trends. Simultaneously, the increasing complexity and absolute-precision requirements of manufacturing compounded the necessity for new processes, tools, and control methodologies. This talk presents a personal perspective on some of the approaches that addressed the aforementioned challenges. In particular, early work on integrating silicides, lightly-doped-drain FETs, shallow recessed isolation, and double-level metal will be discussed. In addition, some pioneering efforts in deep-UV lithography and single-wafer processing will be covered. The latter will be mainly based on results from the MMST Program - a $100M+, 5-year R&D effort, funded by DARPA, the U.S. Air Force, and Texas Instruments, that developed a wide range of new technologies for advanced semiconductor manufacturing. The major highlight of the program was the demonstration of sub-3-day cycle time for manufacturing 350-nm CMOS integrated circuits in 1993. This was principally enabled by the development of: (1) 100% single-wafer processing, including rapid-thermal processing (RTP), and (2) computer-integrated manufacturing (CIM), including real-time, in-situ process control.

  9. Optimal Multi-scale Demand-side Management for Continuous Power-Intensive Processes

    NASA Astrophysics Data System (ADS)

    Mitra, Sumit

    With the advent of deregulation in electricity markets and an increasing share of intermittent power generation sources, the profitability of industrial consumers that operate power-intensive processes has become directly linked to the variability in energy prices. Thus, for industrial consumers that are able to adjust to the fluctuations, time-sensitive electricity prices (as part of so-called Demand-Side Management (DSM) in the smart grid) offer potential economic incentives. In this thesis, we introduce optimization models and decomposition strategies for the multi-scale Demand-Side Management of continuous power-intensive processes. On an operational level, we derive a mode formulation for scheduling under time-sensitive electricity prices. The formulation is applied to air separation plants and cement plants to minimize operating cost. We also describe how a mode formulation can be used for industrial combined heat and power plants that are co-located at integrated chemical sites to increase operating profit by adjusting their steam and electricity production according to their inherent flexibility. Furthermore, a robust optimization formulation is developed to address the uncertainty in electricity prices by accounting for correlations and multiple ranges in the realization of the random variables. On a strategic level, we introduce a multi-scale model that provides an understanding of the value of flexibility of the current plant configuration and the value of additional flexibility in terms of retrofits for Demand-Side Management under product demand uncertainty. The integration of multiple time scales leads to large-scale two-stage stochastic programming problems, for which we need to apply decomposition strategies in order to obtain a good solution within a reasonable amount of time.
Hence, we describe two decomposition schemes that can be applied to solve two-stage stochastic programming problems: First, a hybrid bi-level decomposition scheme with novel Lagrangean-type and subset-type cuts to strengthen the relaxation. Second, an enhanced cross-decomposition scheme that integrates Benders decomposition and Lagrangean decomposition on a scenario basis. To demonstrate the effectiveness of our developed methodology, we provide several industrial case studies throughout the thesis.

  10. Developing Flexible, Integrated Hydrologic Modeling Systems for Multiscale Analysis in the Midwest and Great Lakes Region

    NASA Astrophysics Data System (ADS)

    Hamlet, A. F.; Chiu, C. M.; Sharma, A.; Byun, K.; Hanson, Z.

    2016-12-01

    Physically based hydrologic models of surface and groundwater resources that can be flexibly and efficiently applied to support water resources policy/planning/management decisions at a wide range of spatial and temporal scales are greatly needed in the Midwest, where stakeholder access to such tools is currently a fundamental barrier to basic climate change assessment and adaptation efforts, and also to the co-production of useful products to support detailed decision making. Based on earlier pilot studies in the Pacific Northwest Region, we are currently assembling a suite of end-to-end tools and resources to support various kinds of water resources planning and management applications across the region. One of the key aspects of these integrated tools is that the user community can access gridded products at any point along the end-to-end chain of models, looking backwards in time about 100 years (1915-2015), and forwards in time about 85 years using CMIP5 climate model projections. The integrated model is composed of historical and projected future meteorological data based on station observations and statistically and dynamically downscaled climate model output, respectively. These gridded meteorological data sets serve as forcing data for the macro-scale VIC hydrologic model implemented over the Midwest at 1/16 degree resolution. High-resolution climate model (4km WRF) output provides inputs for the analyses of urban impacts, hydrologic extremes, agricultural impacts, and impacts to the Great Lakes. Groundwater recharge estimated by the surface water model provides input data for fine-scale and macro-scale groundwater models needed for specific applications. 
To highlight the multi-scale use of the integrated models in support of co-production of scientific information for decision making, we briefly describe three current case studies addressing different spatial scales of analysis: 1) Effects of climate change on the water balance of the Great Lakes, 2) Future hydropower resources in the St. Joseph River basin, 3) Effects of climate change on carbon cycling in small lakes in the Northern Highland Lakes District.

  11. Oak Ridge Bio-surveillance Toolkit (ORBiT): Integrating Big-Data Analytics with Visual Analysis for Public Health Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Pullum, Laura L; Steed, Chad A

    In this position paper, we describe the design and implementation of the Oak Ridge Bio-surveillance Toolkit (ORBiT): a collection of novel statistical and machine learning tools implemented for (1) integrating heterogeneous traditional (e.g. emergency room visits, prescription sales data, etc.) and non-traditional (social media such as Twitter and Instagram) data sources, (2) analyzing large-scale datasets and (3) presenting the results from the analytics as a visual interface for the end-user to interact with and provide feedback. We present examples of how ORBiT can be used to summarize extremely large-scale datasets effectively and how user interactions can translate into the data analytics process for bio-surveillance. We also present a strategy to estimate parameters relevant to disease spread models from near real time data feeds and show how these estimates can be integrated with disease spread models for large-scale populations. We conclude with a perspective on how integrating data and visual analytics could lead to better forecasting and prediction of disease spread as well as improved awareness of disease-susceptible regions.

  12. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes requires less computation, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes, (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables, and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes which are dominant in different parts of the basin differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. 
Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient and accurate integration of the process model with the data model. The flexibility of this framework leads to implementation of multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with the least computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a network of a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.

  13. Quantifying Ecological Memory of Plant and Ecosystem Processes in Variable Environments

    NASA Astrophysics Data System (ADS)

    Ogle, K.; Barron-Gafford, G. A.; Bentley, L.; Cable, J.; Lucas, R.; Huxman, T. E.; Loik, M. E.; Smith, S. D.; Tissue, D.

    2010-12-01

    Precipitation, soil water, and other factors affect plant and ecosystem processes at multiple time scales. A common assumption is that water availability at a given time directly affects processes at that time. Recent work, especially in pulse-driven, semiarid systems, shows that antecedent water availability, averaged over several days to a couple weeks, can be just as or more important than current water status. Precipitation patterns of previous seasons or past years can also impact plant and ecosystem functioning in many systems. However, we lack an analytical framework for quantifying the importance of and time-scale over which past conditions affect current processes. This study explores the ecological memory of a variety of plant and ecosystem processes. We use memory as a metaphor to describe the time-scale over which antecedent conditions affect the current process. Existing approaches for incorporating antecedent effects arbitrarily select the antecedent integration period (e.g., the past 2 weeks) and the relative importance of past conditions (e.g., assign equal or linearly decreasing weights to past events). In contrast, we utilize a hierarchical Bayesian approach to integrate field data with process-based models, yielding posterior distributions for model parameters, including the duration of the ecological memory (integration period) and the relative importance of past events (weights) to this memory. We apply our approach to data spanning diverse temporal scales and four semiarid sites in the western US: leaf-level stomatal conductance (gs, sub-hourly scale), soil respiration (Rs, hourly to daily scale), and net primary productivity (NPP) and tree-ring widths (annual scale). For gs, antecedent factors (daily rainfall and temperature, hourly vapor pressure deficit) and current soil water explained up to 72% of the variation in gs in the Chihuahuan Desert, with a memory of 10 hours for a grass and 4 days for a shrub. 
Antecedent factors (past soil water, temperature, photosynthesis rates) explained 73-80% of the variation in sub-daily and daily Rs. Rs beneath shrubs had a moisture and temperature memory of a few weeks, while Rs in open space and beneath grasses had a memory of 6 weeks. For pinyon pine ring widths, the current and previous year accounted for 85% of the precipitation memory; for the current year, precipitation received between February and June was most important. A similar result emerged for NPP in the short grass steppe. In both sites, tree growth and NPP had a memory of 3 years such that precipitation received >3 years ago had little influence. Understanding ecosystem dynamics requires knowledge of the temporal scales over which environmental factors influence ecological processes, and our approach to quantifying ecological memory provides a means to identify underlying mechanisms.
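    The core construct, an antecedent covariate built from weighted past conditions, can be sketched as follows (data and the exponential weight form are invented for illustration; in the study both the memory length and the weights are estimated in the hierarchical Bayesian fit):

```python
import numpy as np

# Hypothetical antecedent index: a normalized weighted average of the
# `mem` days before day t, with exponentially decaying importance weights.
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=100)     # synthetic daily rainfall

def antecedent(series, t, mem, decay):
    past = series[t - mem:t][::-1]       # most recent day first
    w = decay ** np.arange(mem)
    w /= w.sum()                         # weights sum to one
    return float(past @ w)

# Short memory emphasizes recent rain; long memory smooths over pulses.
a_short = antecedent(rain, 60, mem=4, decay=0.5)
a_long = antecedent(rain, 60, mem=30, decay=0.95)
print(a_short != a_long)   # True: different memories, different covariates
```

Fitting `mem` and the weight vector to data, rather than fixing them a priori, is what distinguishes the paper's approach from conventional antecedent indices.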

  14. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    NASA Astrophysics Data System (ADS)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is especially true at small scales, such as in applications of droplet-based microfluidics, where the interface dynamics become increasingly important. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops even under deformation, specialized quadrature methods for the singular and nearly singular integrals that appear in the formulation are invoked, and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  15. Kanaka Maoli and Kamaʻāina Seascapes - Knowing Our Ocean Through Times of Change

    NASA Astrophysics Data System (ADS)

    Puniwai, N.

    2017-12-01

    In Hawaiʻi our oceans define us; we come from the ocean. Our oceans change, and we change with them, as we always have. By learning from people who are dependent on their environment, we learn how to observe and how to adapt. Through the lens of climate change, we interviewed respected ocean observers and surfers to learn about changes they have witnessed over time and the spatial scales and ocean conditions important to them. We looked at our ancient and historical texts to see what processes they recorded and the language they used to describe their observations, interactions and relationships to these places. Yet we also integrate what our mechanical data sensors have recorded over recent time. By expanding our time scales of reference, knowledge sources, and collaborators, these methods teach us how our ancestors adapted and how climate change may impact our subsistence, recreation, and interactions with the environment. Managing complex seascapes requires the integration of multiple ways of knowing, strengthening our understanding of seascapes and their resiliency in this changing environment.

  16. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and long solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  17. Integration of Biological and Physical Sciences to Advance Ecological Understanding of Aquatic Ecosystems

    NASA Astrophysics Data System (ADS)

    Luce, C. H.; Buffington, J. M.; Rieman, B. E.; Dunham, J. B.; McKean, J. A.; Thurow, R. F.; Gutierrez-Teira, B.; Rosenberger, A. E.

    2005-05-01

    Conservation and restoration of freshwater stream and river habitats are important goals for land management and natural resources research. Several examples of research have emerged showing that many species are adapted to temporary habitat disruptions, but that these adaptations are sensitive to the spatial grain and extent of disturbance as well as to its duration. When viewed from this perspective, questions of timing, spatial pattern, and relevant scales emerge as critical issues. In contrast, much regulation, management, and research remains tied to pollutant loading paradigms that are insensitive to either time or space scales. It is becoming clear that research is needed to examine questions and hypotheses about how physical processes affect ecological processes. Two overarching questions concisely frame the scientific issues: 1) How do we quantify physical watershed processes in a way that is meaningful to biological and ecological processes, and 2) how does the answer to that question vary with changing spatial and temporal scales? A joint understanding of scaling characteristics of physical process and the plasticity of aquatic species will be needed to accomplish this research; hence a strong need exists for integrative and collaborative development. Considering conservation biology problems in this fashion can lead to creative and non-obvious solutions because the integrated system has important non-linearities and feedbacks related to a biological system that has responded to substantial natural variability in the past. We propose that research beginning with ecological theories and principles followed with a structured examination of each physical process as related to the specific ecological theories is a strong approach to developing the necessary science, and such an approach may form a basis for development of scaling theories of hydrologic and geomorphic process. We illustrate the approach with several examples.

  18. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time-dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find broader use in long-term time integration of nonlinear multi-physics systems: implicit time integration for efficient long-term integration of stiff multi-physics systems; local control theory based step size control to minimize the required global number of time steps while controlling accuracy; dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs; Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution; and optimal multilevel preconditioner components that provide level-independent solver convergence.
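    The step-size control idea can be sketched in miniature (a scalar stand-in, not the paper's control-theory-based controller): estimate the local error by step doubling and grow or shrink the step against a tolerance.

```python
import math

# Hedged sketch: backward Euler on y' = -k*y with elementary step-size
# control. One full step is compared with two half steps to estimate the
# local error; accepted steps grow dt, rejected steps shrink it.
k, tol = 5.0, 1e-5
y, t, dt, t_end = 1.0, 0.0, 1e-3, 1.0
steps = 0

def be_step(y, dt):                       # one backward-Euler step
    return y / (1.0 + k * dt)

while t < t_end:
    dt = min(dt, t_end - t)
    y1 = be_step(y, dt)                       # one full step
    y2 = be_step(be_step(y, dt / 2), dt / 2)  # two half steps
    err = abs(y2 - y1)                        # local error estimate
    if err <= tol:
        y, t, steps = y2, t + dt, steps + 1   # accept
        dt *= min(2.0, 0.9 * math.sqrt(tol / max(err, 1e-16)))
    else:
        dt *= 0.5                             # reject, retry with smaller dt
print(abs(y - math.exp(-k * t_end)) < 1e-2)   # True
```

The controller takes small steps only while the solution changes quickly, which is the same goal as the paper's local-control-theory step selection, applied there to a full nonlinear multi-physics system.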

  19. Prediction of Autism at 3 Years from Behavioural and Developmental Measures in High-Risk Infants: A Longitudinal Cross-Domain Classifier Analysis

    ERIC Educational Resources Information Center

    Bussu, G.; Jones, E. J. H.; Charman, T.; Johnson, M. H.; Buitelaar, J. K.; Baron-Cohen, S.; Bedford, R.; Bolton, P.; Blasi, A.; Chandler, S.; Cheung, C.; Davies, K.; Elsabbagh, M.; Fernandes, J.; Gammer, I.; Garwood, H.; Gliga, T.; Guiraud, J.; Hudry, K.; Liew, M.; Lloyd-Fox, S.; Maris, H.; O'Hara, L.; Pasco, G.; Pickles, A.; Ribeiro, H.; Salomone, E.; Tucker, L.; Volein, A.

    2018-01-01

    We integrated multiple behavioural and developmental measures from multiple time-points using machine learning to improve early prediction of individual Autism Spectrum Disorder (ASD) outcome. We examined Mullen Scales of Early Learning, Vineland Adaptive Behavior Scales, and early ASD symptoms between 8 and 36 months in high-risk siblings (HR; n…

  20. Efficiency analysis of numerical integrations for finite element substructure in real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Wang, Jinting; Lu, Liqiao; Zhu, Fei

    2018-01-01

    Finite element (FE) analysis is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; the task execution time (TET) decreases, allowing the scale of the numerical substructure model to increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under large time steps and large time delays.
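    As a reference point for the algorithms compared above, the central difference method for an undamped single-DOF system can be sketched in a few lines (values are illustrative, not from the study):

```python
import numpy as np

# Hedged sketch of the central difference method (CDM) for
# m*u'' + k*u = 0 with natural period T = 1 s. The explicit update
# u[i+1] = 2u[i] - u[i-1] + dt^2 * a[i] needs no equation solving.
m, k = 1.0, (2.0 * np.pi) ** 2     # gives T = 2*pi*sqrt(m/k) = 1 s
dt = 0.01                          # well below the CDM stability limit T/pi
n = int(2.0 / dt)                  # integrate two full periods

u = np.zeros(n + 1)
u[0] = 1.0                         # initial displacement, zero velocity
u[1] = u[0] + 0.5 * dt**2 * (-k / m * u[0])   # Taylor startup step
for i in range(1, n):
    u[i + 1] = 2 * u[i] - u[i - 1] + dt**2 * (-k / m * u[i])

print(round(float(u[-1]), 2))      # 1.0: back near the start after 2 periods
```

The absence of any solve in the update loop is why CDM wins when the damping matrix is diagonal; a non-diagonal damping matrix reintroduces coupled equations, which is where methods like Gui-λ become attractive.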

  1. Effective long wavelength scalar dynamics in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Ian; Rigopoulos, Gerasimos, E-mail: ian.moss@newcastle.ac.uk, E-mail: gerasimos.rigopoulos@ncl.ac.uk

    We discuss the effective infrared theory governing a light scalar's long wavelength dynamics in de Sitter spacetime. We show how the separation of scales around the physical curvature radius k / a ∼ H can be performed consistently with a window function and how short wavelengths can be integrated out in the Schwinger-Keldysh path integral formalism. At leading order, and for time scales Δt ≫ H⁻¹, this results in the well-known Starobinsky stochastic evolution. However, our approach allows for the computation of quantum UV corrections, generating an effective potential on which the stochastic dynamics takes place. The long wavelength stochastic dynamical equations are now second order in time, incorporating temporal scales Δt ∼ H⁻¹ and resulting in a Kramers equation for the probability distribution—more precisely the Wigner function—in contrast to the more usual Fokker-Planck equation. This feature allows us to non-perturbatively evaluate, within the stochastic formalism, not only expectation values of field correlators, but also the stress-energy tensor of φ.
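    For orientation, the "well-known Starobinsky stochastic evolution" referred to above is, in its standard textbook form, the overdamped Langevin equation and its associated Fokker-Planck equation (quoted here as a sketch in common notation, not the paper's own second-order Kramers dynamics):

```latex
\dot{\varphi} = -\frac{V'(\varphi)}{3H} + \xi(t),
\qquad
\langle \xi(t)\,\xi(t') \rangle = \frac{H^{3}}{4\pi^{2}}\,\delta(t-t'),
\qquad
\partial_t P(\varphi,t) = \partial_\varphi\!\left[\frac{V'(\varphi)}{3H}\,P\right]
  + \frac{H^{3}}{8\pi^{2}}\,\partial_\varphi^{2} P .
```

    The paper's second-order equations promote this to a Kramers equation in the phase-space variables (φ, π).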

  2. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representation from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical

  3. Impact of integration of sexual and reproductive health services on consultation duration times: results from the Integra Initiative.

    PubMed

    Siapka, Mariana; Obure, Carol Dayo; Mayhew, Susannah H; Sweeney, Sedona; Fenty, Justin; Vassall, Anna

    2017-11-01

    The lack of human resources is a key challenge in scaling up HIV services in Africa's health care system. Integrating HIV services could potentially increase their effectiveness and optimize the use of limited resources and clinical staff time. We examined the impact of integration of provider initiated HIV counselling and testing (PITC) and family planning (FP counselling and FP provision) services on duration of consultation to assess the impact of PITC and FP integration on staff workload. This study was conducted in 24 health facilities in Kenya under the Integra Initiative, a non-randomized, pre/post intervention trial to evaluate the impact of integrated HIV and sexual and reproductive health services on health and service outcomes. We compared the time spent providing PITC-only services, FP-only services and integrated PITC/FP services. We used log-linear regression to assess the impact of plausible determinants on the duration of clients' consultation times. Median consultation duration times were highest for PITC-only services (30 min), followed by integrated services (10 min) and FP-only services (8 min). Times for PITC-only and FP-only services were 69.7% higher (95% Confidence Intervals (CIs) 35.8-112.0) and 43.9% lower (95% CIs -55.4 to -29.6) than times spent on these services when delivered as an integrated service, respectively. The reduction in consultation times with integration suggests a potential reduction in workload. The higher consultation time for PITC-only could be because more pre- and post-counselling is provided at these stand-alone services. In integrated PITC/FP services, the duration of the visit fell below that required by HIV testing guidelines, and the service mix between counselling and testing substantially changed.
Integration of HIV with FP services may compromise the quality of services delivered and care must be taken to clearly specify and monitor appropriate consultation duration times and procedures during the process of integrating HIV and FP services. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
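    The log-linear specification can be made concrete with a short sketch: regressing the log of consultation duration on service-type dummies makes each coefficient interpretable as a percentage difference relative to integrated services, which is how figures such as 69.7% and -43.9% arise. The data and coefficient values below are synthetic, invented for illustration; this is not the study's dataset.

```python
import numpy as np

# Synthetic illustration of a log-linear duration regression.
# Baseline (both dummies zero) = integrated PITC/FP visit.
rng = np.random.default_rng(0)
n = 300
pitc_only = rng.integers(0, 2, n)                   # dummy: PITC-only visit
fp_only = (1 - pitc_only) * rng.integers(0, 2, n)   # dummy: FP-only visit
log_dur = 2.3 + 0.53 * pitc_only - 0.58 * fp_only + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), pitc_only, fp_only])
beta, *_ = np.linalg.lstsq(X, log_dur, rcond=None)  # OLS fit
pct_diff = (np.exp(beta[1:]) - 1) * 100             # % difference vs integrated
```

    Exponentiating a fitted coefficient and subtracting one converts it to the percentage effect reported in the abstract.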

  4. Impact of integration of sexual and reproductive health services on consultation duration times: results from the Integra Initiative

    PubMed Central

    Siapka, Mariana; Obure, Carol Dayo; Mayhew, Susannah H; Fenty, Justin; Initiative, Integra; Vassall, Anna

    2017-01-01

    The lack of human resources is a key challenge in scaling up HIV services in Africa’s health care system. Integrating HIV services could potentially increase their effectiveness and optimize the use of limited resources and clinical staff time. We examined the impact of integration of provider initiated HIV counselling and testing (PITC) and family planning (FP counselling and FP provision) services on duration of consultation to assess the impact of PITC and FP integration on staff workload. This study was conducted in 24 health facilities in Kenya under the Integra Initiative, a non-randomized, pre/post intervention trial to evaluate the impact of integrated HIV and sexual and reproductive health services on health and service outcomes. We compared the time spent providing PITC-only services, FP-only services and integrated PITC/FP services. We used log-linear regression to assess the impact of plausible determinants on the duration of clients’ consultation times. Median consultation duration times were highest for PITC-only services (30 min), followed by integrated services (10 min) and FP-only services (8 min). Times for PITC-only and FP-only services were 69.7% higher (95% Confidence Intervals (CIs) 35.8–112.0) and 43.9% lower (95% CIs −55.4 to −29.6) than times spent on these services when delivered as an integrated service, respectively. The reduction in consultation times with integration suggests a potential reduction in workload. The higher consultation time for PITC-only could be because more pre- and post-counselling is provided at these stand-alone services. In integrated PITC/FP services, the duration of the visit fell below that required by HIV testing guidelines, and the service mix between counselling and testing substantially changed.
Integration of HIV with FP services may compromise the quality of services delivered and care must be taken to clearly specify and monitor appropriate consultation duration times and procedures during the process of integrating HIV and FP services. PMID:29194545

  5. Techniques for control of long-term reliability of complex integrated circuits. I - Reliability assurance by test vehicle qualification.

    NASA Technical Reports Server (NTRS)

    Van Vonno, N. W.

    1972-01-01

    Development of an alternate approach to the conventional methods of reliability assurance for large-scale integrated circuits. The product treated is a large-scale T²L (transistor-transistor logic) array designed for space applications. The concept used is that of qualification of product by evaluation of the basic processing used in fabricating the product, providing an insight into its potential reliability. Test vehicles are described which enable evaluation of device characteristics, surface condition, and various parameters of the two-level metallization system used. Evaluation of these test vehicles is performed on a lot qualification basis, with the lot consisting of one wafer. Assembled test vehicles are evaluated by high-temperature stress at 300°C for short time durations. Stressing at these temperatures provides a rapid method of evaluation and permits a go/no-go decision to be made on the wafer lot in a timely fashion.

  6. Scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted cylindrical element

    NASA Astrophysics Data System (ADS)

    Tang, Zhanqi; Jiang, Nan

    2018-05-01

    This study reports the modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were performed at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. It was found that the large-scale fluctuations exert a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions around the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was made for small-scale frequency modulation; however, the dissipative-scale frequency modulation appears to be independent of the cylindrical perturbation. The phase-relationship observations indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The dependence of this phase relationship on the integral time scale is also discussed. Furthermore, the discrepancy between the small- and dissipative-scale time shifts relative to the large-scale motions was examined, indicating that the small-scale amplitude/frequency leads the dissipative scales.
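    The amplitude-modulation analysis described above can be sketched numerically. A common single-point recipe (in the spirit of Mathis-style decompositions, which may differ in detail from the authors' procedure) correlates the large-scale signal with the low-pass-filtered envelope of the small-scale signal; the cutoff choice and the synthetic test signal below are assumptions for illustration.

```python
import numpy as np

def hilbert_envelope(x):
    # analytic-signal envelope via FFT (numpy-only Hilbert transform, even n)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def lowpass(x, cutoff_bins):
    # ideal spectral low-pass filter: keep the first cutoff_bins rfft bins
    X = np.fft.rfft(x)
    X[cutoff_bins:] = 0
    return np.fft.irfft(X, len(x))

def am_coefficient(u, cutoff_bins):
    uL = lowpass(u, cutoff_bins)                       # large-scale component
    uS = u - uL                                        # small-scale component
    env = lowpass(hilbert_envelope(uS), cutoff_bins)   # filtered envelope
    uL0 = uL - uL.mean()
    e0 = env - env.mean()
    return (uL0 * e0).sum() / np.sqrt((uL0**2).sum() * (e0**2).sum())

# synthetic signal: small scales amplified whenever the large scale is high
t = np.linspace(0, 1, 4096, endpoint=False)
uL = np.sin(2 * np.pi * 3 * t)
u = uL + (1 + 0.5 * uL) * np.sin(2 * np.pi * 200 * t)
```

    For this synthetic signal the coefficient comes out close to 1, indicating strong amplitude modulation of the small scales by the large scales.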

  7. Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology

    PubMed Central

    Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted

    2014-01-01

    The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active-contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we use the refined object identification produced in the first variant to perform the standard MIA with the exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966

  8. An Integrated Knowledge Framework to Characterize and Scaffold Size and Scale Cognition (FS2C)

    NASA Astrophysics Data System (ADS)

    Magana, Alejandra J.; Brophy, Sean P.; Bryan, Lynn A.

    2012-09-01

    Size and scale cognition is a critical ability associated with reasoning about concepts across the disciplines of science, technology, engineering, and mathematics. As such, researchers and educators have identified the need for young learners and their educators to become scale-literate. Informed by the developmental psychology literature and recent findings in nanoscale science and engineering education, we propose an integrated knowledge framework for characterizing and scaffolding size and scale cognition called the FS2C framework. Five ad hoc assessment tasks were designed, informed by the FS2C framework, with the goal of identifying participants' understandings of size and scale. The findings revealed participants' difficulty in discerning the different sizes of microscale and nanoscale objects, and a low level of sophistication in identifying scale worlds. Results also showed that the larger the difference in size between two objects, the more difficult it was for participants to identify how many times one object is bigger or smaller than the other. Similarly, participants had difficulty estimating the approximate sizes of sub-macroscopic objects as well as the sizes of very large objects. Accurately locating objects on a logarithmic scale also proved challenging for participants.

  9. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
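    The scale-invariance claim can be illustrated with a toy version of the model: a firing-rate variable that ramps to a threshold, with trial-to-trial variability in the ramp slope. This sketch collapses the within-trial Poisson noise into across-trial drift variability for brevity, so it illustrates the scaling property rather than the full model; all parameter values are invented.

```python
import numpy as np

# Toy ramp-to-threshold timer: response time = threshold / drift.
# Drift varies across trials with a fixed coefficient of variation, so
# response-time distributions superpose when rescaled by their means
# (the scale-invariance property described above).
rng = np.random.default_rng(1)

def timed_responses(mean_drift, z=1.0, drift_cv=0.2, n_trials=2000):
    drifts = rng.normal(mean_drift, drift_cv * mean_drift, n_trials)
    drifts = np.clip(drifts, 1e-3, None)    # keep drifts positive
    return z / drifts                       # first-passage time of the ramp

short = timed_responses(1.0)     # ~1 s intervals
long_ = timed_responses(0.25)    # ~4 s intervals
cv_short = short.std() / short.mean()
cv_long = long_.std() / long_.mean()
```

    The coefficient of variation is the same for short and long intervals even though the means differ fourfold, which is the signature of scale-invariant timing.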

  10. Nuclear resonant scattering measurements on ⁵⁷Fe by multichannel scaling with a 64-pixel silicon avalanche photodiode linear-array detector.

    PubMed

    Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M

    2014-11-01

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and a depletion depth of 10 μm. An ultrafast front-end circuit allows the X-ray detector to achieve a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.
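    The multichannel-scaling measurement amounts to histogramming photon arrival times (relative to the synchrotron pulse) into 1 ns bins, from which the nuclear decay curve is read off. The sketch below is illustrative, not the detector firmware: the event count is invented, and 141 ns is the standard mean lifetime of the 14.4 keV ⁵⁷Fe level.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 141.0        # ns, mean lifetime of the 14.4 keV 57Fe level (standard value)
arrivals = rng.exponential(tau, 50_000)        # simulated delayed nuclear photons

# multichannel scaling: 1024 contiguous 1 ns bins, as in the detector above
spectrum, edges = np.histogram(arrivals, bins=1024, range=(0, 1024))

# estimate the decay constant from the log of the binned counts
centers = 0.5 * (edges[:-1] + edges[1:])
mask = spectrum > 20                            # avoid log of near-empty bins
slope = np.polyfit(centers[mask], np.log(spectrum[mask]), 1)[0]
tau_est = -1.0 / slope
```

    Fitting the logged counts recovers the lifetime from the simulated time spectrum.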

  11. Integrated situational awareness for cyber attack detection, analysis, and mitigation

    NASA Astrophysics Data System (ADS)

    Cheng, Yi; Sagduyu, Yalin; Deng, Julia; Li, Jason; Liu, Peng

    2012-06-01

    Real-time cyberspace situational awareness is critical for securing and protecting today's enterprise networks from various cyber threats. When a security incident occurs, network administrators and security analysts need to know what exactly has happened in the network, why it happened, and what actions or countermeasures should be taken to quickly mitigate the potential impacts. In this paper, we propose an integrated cyberspace situational awareness system for efficient cyber attack detection, analysis and mitigation in large-scale enterprise networks. Essentially, a cyberspace common operational picture will be developed, which is a multi-layer graphical model and can efficiently capture and represent the statuses, relationships, and interdependencies of various entities and elements within and among different levels of a network. Once shared among authorized users, this cyberspace common operational picture can provide an integrated view of the logical, physical, and cyber domains, and a unique visualization of disparate data sets to support decision makers. In addition, advanced analyses, such as Bayesian Network analysis, will be explored to address the information uncertainty, dynamic and complex cyber attack detection, and optimal impact mitigation issues. All the developed technologies will be further integrated into an automatic software toolkit to achieve near real-time cyberspace situational awareness and impact mitigation in large-scale computer networks.

  12. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  13. Integrating Phenological, Trait and Environmental Data For Continental Scale Analysis: A Community Approach

    NASA Astrophysics Data System (ADS)

    Weltzin, J. F.; Walls, R.; Guralnick, R. P.; Rosemartin, A.; Deck, J.; Powers, L. A.

    2014-12-01

    There is a wealth of biodiversity and environmental data that can provide the basis for addressing global scale questions of societal concern. However, our ability to discover, access and integrate these data for use in broader analyses is hampered by the lack of standardized languages and systems. New tools (e.g. ontologies, data standards, integration tools, unique identifiers) are being developed that enable establishment of a framework for linked and open data. Relative to other domains, these tools are nascent in biodiversity and environmental sciences and will require effort to develop, though work can capitalize on lessons learned from previous efforts. Here we discuss needed next steps to provide consistently described and formatted ecological data for immediate application in ecological analysis, focusing on integrating phenology, trait and environmental data to understand local to continental-scale biophysical processes and inform natural resource management practices. As more sources of data become available at finer spatial and temporal resolution, e.g., from national standardized earth observing systems (e.g., NEON, LTER and LTAR Networks, USA NPN), these challenges will become more acute. Here we provide an overview of the standards and ontology development landscape specifically related to phenological and trait data, and identify requirements to overcome current challenges. Second, we outline a workflow for formatting and integrating existing datasets to address key scientific and resource management questions such as: "What traits determine differential phenological responses to changing environmental conditions?" or "What is the role of granularity of observation, and of spatiotemporal scale, in controlling phenological responses to different driving variables?" Third, we discuss methods to semantically annotate datasets to greatly decrease time needed to assemble heterogeneous data for use in ecological analyses on varying spatial scales. 
We close by making a call to interested community members for a working group to model phenology, trait and environmental data products from continental-scale efforts (e.g. NEON, USA-NPN and others) focusing on ways to assure discoverability and interoperability.

  14. The NASA Energy and Water Cycle Extreme (NEWSE) Integration Project

    NASA Technical Reports Server (NTRS)

    House, P. R.; Lapenta, W.; Schiffer, R.

    2008-01-01

    Skillful predictions of water and energy cycle extremes (flood and drought) are elusive. To better understand the mechanisms responsible for water and energy extremes, and to make decisive progress in predicting these extremes, the collaborative NASA Energy and Water cycle Extremes (NEWSE) Integration Project is studying these extremes in the U.S. Southern Great Plains (SGP) during 2006-2007, including their relationships with continental and global scale processes, and assessment of their predictability on multiple space and time scales. It is our hypothesis that an integrative analysis of observed extremes which reflects the current understanding of the role of SST and soil moisture variability influences on atmospheric heating and forcing of planetary waves, incorporating recently available global and regional hydrometeorological datasets (i.e., precipitation, water vapor, clouds, etc.) in conjunction with advances in data assimilation, can lead to new insights into the factors that lead to persistent drought and flooding. We will show initial results of this project, whose goals are to provide an improved definition, attribution and prediction on sub-seasonal to interannual time scales, improved understanding of the mechanisms of decadal drought and its predictability, including the impacts of SST variability and deep soil moisture variability, and improved monitoring/attributions, with transition to applications; a bridging of the gap between hydrological forecasts and stakeholders (utilization of probabilistic forecasts, education, forecast interpretation for different sectors, assessment of uncertainties for different sectors, etc.).

  15. Landscape-scale water balance monitoring with an iGrav superconducting gravimeter in a field enclosure

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas; Reich, Marvin; Mikolaj, Michal; Creutzfeldt, Benjamin; Schroeder, Stephan; Wziontek, Hartmut

    2017-04-01

    In spite of the fundamental role of the landscape water balance for the Earth's water and energy cycles, monitoring the water balance and its components beyond the point scale is notoriously difficult due to the multitude of flow and storage processes and their spatial heterogeneity. Here, we present the first deployment of an iGrav superconducting gravimeter (SG) in a minimized field enclosure on a grassland site for integrative monitoring of water storage changes. Results of the field SG were compared to data provided by a nearby SG located in the controlled environment of an observatory building. For wet-temperate climate conditions, the system proves to provide gravity time series that are as precise as those of the observatory SG. At the same time, the field SG is more sensitive to hydrological variations than the observatory SG. We demonstrate that the gravity variations observed by the field setup are almost independent of the depth below the terrain surface where water storage changes occur (contrary to SGs in buildings), and thus the field SG system directly observes the total water storage change, i.e., the water balance, in its surroundings in an integrative way. We provide a framework to single out the water balance components actual evapotranspiration and lateral subsurface discharge from the gravity time series on annual to daily time scales. With about 99% and 85% of the gravity signal originating within a radius of 4000 and 200 m around the instrument, respectively, this setup paves the way towards gravimetry as a continuous hydrological field monitoring technique at the landscape scale.

  16. Multiple time step integrators in ab initio molecular dynamics.

    PubMed

    Luehr, Nathan; Markland, Thomas E; Martínez, Todd J

    2014-02-28

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
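    The splitting idea described above is the same one used in classical r-RESPA integrators: take many small steps of the cheap fast force inside each expensive slow-force step. The sketch below applies it to a toy stiff-plus-weak spring system, an assumption for illustration, not the paper's fragment-decomposition or range-separation splitting.

```python
import numpy as np

# Multiple-time-step (r-RESPA-style) integrator: the fast force is
# integrated with n_inner velocity-Verlet steps inside each outer step,
# while the slow force is applied as half-kicks at the outer boundaries.
def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m      # slow-force half kick
    for _ in range(n_inner):                 # inner velocity-Verlet loop
        v += 0.5 * dt * f_fast(x) / m
        x += dt * v
        v += 0.5 * dt * f_fast(x) / m
    v += 0.5 * dt_outer * f_slow(x) / m      # slow-force half kick
    return x, v

# toy split: stiff spring (fast) plus a weak extra pull (slow)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.1 * x
```

    Because each sub-map is symplectic, the composed scheme conserves energy to within a bounded oscillation even though the slow force is evaluated n_inner times less often, which is the source of the speedups quoted above.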

  17. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales, 2) a multiresolution presentation of the heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy and 4) semi-analytical properties that increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Each variable is therefore analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements to the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines with only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed that resolves all time scales within each global time step, i.e., the algorithm uses smaller time steps only along lines where the solution changes rapidly.
Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil used for the accurate calculation of spatial derivatives. Since the common approach for wavelets and splines uses a finite difference operator, we developed here a collocation operator that includes both solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, the results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  18. Compact component for integrated quantum optic processing

    PubMed Central

    Sahu, Partha Pratim

    2015-01-01

    Quantum interference is indispensable for deriving integrated quantum optic technologies (1–2). For further progress in the large-scale integration of quantum optic circuits, we have introduced for the first time a two-mode interference (TMI) coupler as an ultra-compact component. The quantum interference varying with coupling length, corresponding to the coupling ratio, is studied, and the largest HOM dip, with peak visibility ~0.963 ± 0.009, is found at the half coupling length of the TMI coupler. Our results also demonstrate complex quantum interference with high fabrication tolerance and quantum visibility in the TMI coupler. PMID:26584759
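    The dependence of the HOM dip on coupling ratio follows the textbook two-photon interference relation for a lossless two-port coupler; the function below is that standard relation (a sketch, not the paper's measured device response), with k the power coupling ratio.

```python
# Textbook Hong-Ou-Mandel visibility for a lossless coupler with power
# coupling ratio k: V = 2*k*(1 - k) / (k**2 + (1 - k)**2).
# V = 1 at k = 0.5, i.e. at the half coupling length.
def hom_visibility(k):
    return 2 * k * (1 - k) / (k**2 + (1 - k)**2)
```

    hom_visibility(0.5) returns 1, matching the expectation that the deepest dip occurs at the half coupling length; the measured ~0.963 visibility reflects fabrication and mode-matching imperfections.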

  19. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of atmospheric event origins at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as Python-Fortran code, demonstrates high performance on a set of model and real data.
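    The closed-form idea can be illustrated on a toy grid search. With Gaussian arrival-time likelihoods and an unknown origin time, the origin time can be eliminated analytically (it enters as a common shift, so the profile likelihood depends only on demeaned residuals), leaving a standard-function evaluation at each node instead of a numerical integral. The geometry, celerity, and uncertainty below are invented; the real BISL also treats celerity and azimuth probabilistically.

```python
import numpy as np

stations = np.array([[0.0, 50.0], [40.0, -10.0],
                     [-30.0, -30.0], [-20.0, 35.0]])   # km, invented geometry
src = np.array([5.0, 8.0])                             # true source (km)
celerity = 0.30    # km/s, assumed constant
sigma_t = 2.0      # s, combined picking + propagation uncertainty

# noise-free synthetic arrival times (up to a common unknown origin time)
arrivals = np.linalg.norm(stations - src, axis=1) / celerity

# evaluate the profile log-likelihood on a grid of candidate sources
xg, yg = np.meshgrid(np.linspace(-60, 60, 241), np.linspace(-60, 60, 241))
nodes = np.stack([xg.ravel(), yg.ravel()], axis=1)
pred = np.linalg.norm(nodes[:, None, :] - stations[None, :, :], axis=2) / celerity

# eliminating the origin time in closed form leaves demeaned residuals,
# so no numerical integration over origin time is needed
resid = arrivals - pred
resid -= resid.mean(axis=1, keepdims=True)
loglik = -0.5 * (resid**2).sum(axis=1) / sigma_t**2
best = nodes[np.argmax(loglik)]
```

    With exact data, the maximum of the profile likelihood falls on the grid node nearest the true source; every node costs only a few array operations.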

  20. Legally speaking: Federal Trade Commission issues positive advisory opinions on antitrust implications of clinical integration.

    PubMed

    Wright, Robert

    2009-10-01

    From time to time, groups of physicians in an area may determine that they would benefit from "integrating" their practices into an IPA, PHO, or other joint venture. The anticipated benefits may include economies of scale, the ability to coordinate care between primary care physicians and specialists, the provision of disease management services for patients with certain conditions, or a myriad of other reasons. A key characteristic of these proposed integrated models is the ability of the group as a whole to negotiate with insurance companies and self-funded health care plans. When a group reaches the point of negotiating collectively for the fees that a payor is going to pay for various services throughout the plan, possible antitrust implications arise.

  1. Integration of Air Quality & Exposure Models for Health Studies

    EPA Science Inventory

    The presentation describes a new community-scale tool called exposure model for individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM using outdoor concentrations, questionnaires, weather, and time-location information. In this modeling ...

  2. Integrating terrestrial through aquatic processing of water, carbon and nitrogen over hot, cold and lukewarm moments in mixed land use catchments

    NASA Astrophysics Data System (ADS)

    Band, L. E.; Lin, L.; Duncan, J. M.

    2017-12-01

    A major challenge in understanding and managing freshwater volumes and quality in mixed land use catchments is the detailed heterogeneity of topography, soils, canopy, and inputs of water and biogeochemicals. The short space and time scale dynamics of sources, transport and processing of water, carbon and nitrogen in natural and built environments can have a strong influence on the timing and magnitude of watershed runoff and nutrient production, ecosystem cycling and export. Hydroclimate variability induces a functional interchange of terrestrial and aquatic environments across their transition zone with the temporal and spatial expansion and contraction of soil wetness, standing and flowing water over seasonal, diurnal and storm event time scales. Variation in sources and retention of nutrients at these scales needs to be understood and represented to design optimal mitigation strategies. This paper discusses the conceptual framework used to design both simulation and measurement approaches, and explores these dynamics using an integrated terrestrial-aquatic watershed model of coupled water-carbon-nitrogen processes at resolutions necessary to resolve "hot spot/hot moment" phenomena in two well studied catchments in Long Term Ecological Research sites. The potential utility of this approach for design and assessment of urban green infrastructure and stream restoration strategies is illustrated.

  3. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    PubMed

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
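    The probabilistic combination step described above can be sketched on synthetic data (these are made-up softmax posteriors, not the paper's network outputs): per-frame class posteriors from each modality are combined over time by summing log-probabilities, and the two modalities are then fused by multiplying their combined posteriors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_classes, true_class = 20, 7, 3

def noisy_posteriors(evidence):
    """Per-frame softmax posteriors with weak per-frame evidence for true_class."""
    logits = rng.normal(0.0, 1.0, (n_frames, n_classes))
    logits[:, true_class] += evidence
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combine_over_time(p):
    # Product of per-frame posteriors, computed in log space for stability.
    logp = np.log(p).sum(axis=0)
    w = np.exp(logp - logp.max())
    return w / w.sum()

audio = combine_over_time(noisy_posteriors(1.0))   # passive audio modality
sonar = combine_over_time(noisy_posteriors(1.0))   # micro-Doppler sonar modality
fused = audio * sonar
fused /= fused.sum()
print(int(np.argmax(fused)))   # recovers the true class
```

    Weak evidence that is unreliable frame by frame becomes decisive once accumulated over time, and fusing the two modalities sharpens the decision further, mirroring the 94%/91% to 98% improvement the abstract reports.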

  4. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    DTIC Science & Technology

    2011-01-01

    ...present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. ...across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α...

  5. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within CMS experiment at LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  6. Operational data products to support phenological research and applications at local to continental scales

    NASA Astrophysics Data System (ADS)

    Weltzin, J. F.

    2017-12-01

    Phenological data from a variety of platforms - across a range of spatial and temporal scales - are required to support research, natural resource management, and policy- and decision-making in a changing world. Observational and modeled phenological data, especially when integrated with associated biophysical data (e.g., climate, land-use/land-cover, hydrology), have great potential to provide multi-faceted information critical to decision support systems, vulnerability and risk assessments, change detection applications, and early-warning and forecasting systems for natural and modified ecosystems. The USA National Phenology Network (USA-NPN; www.usanpn.org) is a national-scale science and monitoring initiative focused on understanding the drivers and feedback effects of phenological variation in changing environments. The Network maintains a centralized database of over 10M ground-based observations of plants and animals from 1954 to the present, and leverages these data to produce operational data products for use by a variety of audiences, including researchers and resource managers. This presentation highlights our operational data products, including the tools, maps, and services that facilitate discovery, accessibility, and usability of integrated phenological information. We describe (1) the data download tool, a customizable GUI that provides geospatially referenced raw, bounded, or summarized organismal and climatological data and associated metadata (including calendars, time-series curves, and XY graphs), (2) the visualization tool, which provides opportunities to explore, visualize, and export or download both organismal and modeled (gridded) products at daily time-steps and relatively fine spatial resolutions (2.5 km to 4 km) for the period 1980 to 6 days into the future, and (3) web services that enable custom query and download of map, feature, and cover services in a variety of standard formats. These operational products facilitate scaling of integrated phenological and associated data to landscapes and regions, and enable novel investigations of biophysical interactions at unprecedented scales, e.g., continental-scale migrations.

  7. Non-autonomous Hénon-Heiles systems

    NASA Astrophysics Data System (ADS)

    Hone, Andrew N. W.

    1998-07-01

    Scaling similarity solutions of three integrable PDEs, namely the Sawada-Kotera, fifth order KdV and Kaup-Kupershmidt equations, are considered. It is shown that the resulting ODEs may be written as non-autonomous Hamiltonian equations, which are time-dependent generalizations of the well-known integrable Hénon-Heiles systems. The (time-dependent) Hamiltonians are given by logarithmic derivatives of the tau-functions (inherited from the original PDEs). The ODEs for the similarity solutions also have inherited Bäcklund transformations, which may be used to generate sequences of rational solutions as well as other special solutions related to the first Painlevé transcendent.

  8. CMOS Time-Resolved, Contact, and Multispectral Fluorescence Imaging for DNA Molecular Diagnostics

    PubMed Central

    Guo, Nan; Cheung, Ka Wai; Wong, Hiu Tung; Ho, Derek

    2014-01-01

    Instrumental limitations such as bulkiness and high cost prevent the fluorescence technique from becoming ubiquitous for point-of-care deoxyribonucleic acid (DNA) detection and other in-field molecular diagnostics applications. The complementary metal-oxide-semiconductor (CMOS) technology, which benefits from process scaling, provides several advanced capabilities such as high integration density, high-resolution signal processing, and low power consumption, enabling sensitive, integrated, and low-cost fluorescence analytical platforms. In this paper, CMOS time-resolved, contact, and multispectral imaging are reviewed. Recently reported CMOS fluorescence analysis microsystem prototypes are surveyed to highlight the present state of the art. PMID:25365460

  9. Printed dose-recording tag based on organic complementary circuits and ferroelectric nonvolatile memories

    PubMed Central

    Nga Ng, Tse; Schwartz, David E.; Mei, Ping; Krusor, Brent; Kor, Sivkheng; Veres, Janos; Bröms, Per; Eriksson, Torbjörn; Wang, Yong; Hagel, Olle; Karlsson, Christer

    2015-01-01

    We have demonstrated a printed electronic tag that monitors time-integrated sensor signals and writes to nonvolatile memories for later readout. The tag is additively fabricated on flexible plastic foil and comprises a thermistor divider, complementary organic circuits, and two nonvolatile memory cells. With a supply voltage below 30 V, the threshold temperatures can be tuned between 0 °C and 80 °C. The time-temperature dose measurement is calibrated for minute-scale integration. The two memory bits are sequentially written in a thermometer code to provide an accumulated dose record. PMID:26307438

  10. Van der Waals epitaxial growth and optoelectronics of large-scale WSe2/SnS2 vertical bilayer p-n junctions.

    PubMed

    Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian

    2017-12-04

    High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe2/SnS2 vertical bilayer p-n junctions on SiO2/Si substrates, with lateral sizes reaching up to the millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10^-14 A and a highest on-off ratio of up to 10^7. Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all the directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.

  11. Spectral Definition of the Characteristic Times for Anomalous Diffusion in a Potential

    NASA Astrophysics Data System (ADS)

    Kalmykov, Yuri P.; Coffey, William T.; Titov, Serguey V.

    Characteristic times of the noninertial fractional diffusion of a particle in a potential are defined in terms of three time constants, viz., the integral, effective, and longest relaxation times. These times are described using the eigenvalues of the corresponding Fokker-Planck operator for the normal diffusion. Knowledge of them is sufficient to accurately predict the anomalous relaxation behavior for all time scales of interest. As a particular example, we consider the subdiffusion of a planar rotor in a double-well potential.
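    The spectral definitions summarized above can be made concrete with a small numerical sketch. Assuming a correlation function of the form C(t) = Σ_k c_k exp(-λ_k t), with λ_k the eigenvalues of the normal-diffusion Fokker-Planck operator, the three characteristic times reduce to simple sums; the amplitudes and eigenvalues below are invented for illustration.

```python
import numpy as np

lam = np.array([0.1, 1.0, 5.0])   # Fokker-Planck eigenvalues (illustrative)
c = np.array([0.6, 0.3, 0.1])     # mode amplitudes in C(t) = sum_k c_k e^{-lam_k t}

tau_longest = 1.0 / lam[0]                 # longest relaxation time, 1/lambda_1
tau_int = np.sum(c / lam) / np.sum(c)      # integral relaxation time, int C dt / C(0)
tau_eff = np.sum(c) / np.sum(c * lam)      # effective relaxation time, C(0)/|C'(0)|
print(tau_longest, tau_int, tau_eff)       # 10.0, ~6.32, ~1.16
```

    The three times differ because each weights the eigenvalue spectrum differently: the integral time is dominated by slow modes, the effective time by fast ones, and the longest time by the smallest eigenvalue alone.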

  12. Application of Collocated GPS and Seismic Sensors to Earthquake Monitoring and Early Warning

    PubMed Central

    Li, Xingxing; Zhang, Xiaohong; Guo, Bofeng

    2013-01-01

    We explore the use of collocated GPS and seismic sensors for earthquake monitoring and early warning. The GPS and seismic data collected during the 2011 Tohoku-Oki (Japan) and the 2010 El Mayor-Cucapah (Mexico) earthquakes are analyzed using a tightly coupled integration. The performance of the integrated results is validated by both time- and frequency-domain analysis. We detect the P-wave arrival, observe small-scale features of the movement from the integrated results, and locate the epicenter. Meanwhile, permanent offsets are extracted from the integrated displacements with high accuracy and used for reliable fault slip inversion and magnitude estimation. PMID:24284765

  13. The fossil record of phenotypic integration and modularity: A deep-time perspective on developmental and evolutionary dynamics.

    PubMed

    Goswami, Anjali; Binder, Wendy J; Meachen, Julie; O'Keefe, F Robin

    2015-04-21

    Variation is the raw material for natural selection, but the factors shaping variation are still poorly understood. Genetic and developmental interactions can direct variation, but there has been little synthesis of these effects with the extrinsic factors that can shape biodiversity over large scales. The study of phenotypic integration and modularity has the capacity to unify these aspects of evolutionary study by estimating genetic and developmental interactions through the quantitative analysis of morphology, allowing for combined assessment of intrinsic and extrinsic effects. Data from the fossil record in particular are central to our understanding of phenotypic integration and modularity because they provide the only information on deep-time developmental and evolutionary dynamics, including trends in trait relationships and their role in shaping organismal diversity. Here, we demonstrate the important perspective on phenotypic integration provided by the fossil record with a study of Smilodon fatalis (saber-toothed cats) and Canis dirus (dire wolves). We quantified temporal trends in size, variance, phenotypic integration, and direct developmental integration (fluctuating asymmetry) through 27,000 y of Late Pleistocene climate change. Both S. fatalis and C. dirus showed a gradual decrease in magnitude of phenotypic integration and an increase in variance and the correlation between fluctuating asymmetry and overall integration through time, suggesting that developmental integration mediated morphological response to environmental change in the later populations of these species. These results are consistent with experimental studies and represent, to our knowledge, the first deep-time validation of the importance of developmental integration in stabilizing morphological evolution through periods of environmental change.

  14. The fossil record of phenotypic integration and modularity: A deep-time perspective on developmental and evolutionary dynamics

    PubMed Central

    Goswami, Anjali; Binder, Wendy J.; Meachen, Julie; O’Keefe, F. Robin

    2015-01-01

    Variation is the raw material for natural selection, but the factors shaping variation are still poorly understood. Genetic and developmental interactions can direct variation, but there has been little synthesis of these effects with the extrinsic factors that can shape biodiversity over large scales. The study of phenotypic integration and modularity has the capacity to unify these aspects of evolutionary study by estimating genetic and developmental interactions through the quantitative analysis of morphology, allowing for combined assessment of intrinsic and extrinsic effects. Data from the fossil record in particular are central to our understanding of phenotypic integration and modularity because they provide the only information on deep-time developmental and evolutionary dynamics, including trends in trait relationships and their role in shaping organismal diversity. Here, we demonstrate the important perspective on phenotypic integration provided by the fossil record with a study of Smilodon fatalis (saber-toothed cats) and Canis dirus (dire wolves). We quantified temporal trends in size, variance, phenotypic integration, and direct developmental integration (fluctuating asymmetry) through 27,000 y of Late Pleistocene climate change. Both S. fatalis and C. dirus showed a gradual decrease in magnitude of phenotypic integration and an increase in variance and the correlation between fluctuating asymmetry and overall integration through time, suggesting that developmental integration mediated morphological response to environmental change in the later populations of these species. These results are consistent with experimental studies and represent, to our knowledge, the first deep-time validation of the importance of developmental integration in stabilizing morphological evolution through periods of environmental change. PMID:25901310

  15. GEWEX America Prediction Project (GAPP) Science and Implementation Plan

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The purpose of this Science and Implementation Plan is to describe GAPP science objectives and the activities required to meet these objectives, both specifically for the near-term and more generally for the longer-term. The GEWEX Americas Prediction Project (GAPP) is part of the Global Energy and Water Cycle Experiment (GEWEX) initiative that is aimed at observing, understanding and modeling the hydrological cycle and energy fluxes at various time and spatial scales. The mission of GAPP is to demonstrate skill in predicting changes in water resources over intraseasonal-to-interannual time scales, as an integral part of the climate system.

  16. Big whorls carry little whorls

    NASA Technical Reports Server (NTRS)

    Hunt, J. C. R.; Buell, J. C.; Wray, A. A.

    1987-01-01

    The aim of the research was to explore the space-time structure of homogeneous turbulence by computing and then interpreting the two-point spectra and correlations of the velocity and pressure fields. Many of these statistics are of considerable practical importance. In particular, it is of interest to compare the different time and length integral scales and microscales for Eulerian and Lagrangian quantities, and to compare the space and time spectra.
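    One of the statistics mentioned, the integral time scale, can be estimated from a single velocity record as the integral of its normalized autocorrelation, commonly truncated at the first zero crossing. A minimal sketch on a synthetic signal (an Ornstein-Uhlenbeck surrogate with a known time scale, not the study's turbulence data):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n, tau_true = 0.01, 100_000, 0.5

# Ornstein-Uhlenbeck surrogate: exponential autocorrelation with scale tau_true.
a = np.exp(-dt / tau_true)
eps = np.sqrt(1.0 - a * a) * rng.normal(size=n)
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = a * u[i - 1] + eps[i]

u -= u.mean()
f = np.fft.rfft(u, 2 * n)                  # FFT autocorrelation, zero-padded
acf = np.fft.irfft(f * np.conj(f))[:n]
acf /= acf[0]                              # normalize so acf[0] = 1

first_zero = np.argmax(acf < 0.0)          # truncate at the first zero crossing
T_int = acf[:first_zero].sum() * dt
print(T_int)                               # close to tau_true = 0.5
```

    The Lagrangian analogue applies the same estimator to velocities sampled along particle trajectories rather than at a fixed point, which is precisely the Eulerian-Lagrangian comparison the abstract raises.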

  17. Assessment and Optimization of Lidar Measurement Availability for Wind Turbine Control (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholbrock, F. A.; Fleming, P.; Wright, A.

    2014-02-01

    Integrating Lidar to improve wind turbine controls is a potential breakthrough for reducing the cost of wind energy. By providing undisturbed wind measurements up to 400 m in front of the rotor, Lidar may provide an accurate update of the turbine inflow with a preview time of several seconds. Focusing on loads, several studies have evaluated the potential reductions achievable with integrated Lidar, either in simulation or in full-scale field testing.

  18. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
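    The cumulative saturation cut-off idea can be sketched in a few lines: given a T2 relaxation-time distribution, select the T2 at which the cumulative amplitude reaches a chosen fraction, and compare it with the commonly adopted logarithmic-mean estimator. The distribution and the 30% cut-off below are invented for illustration, not the paper's data.

```python
import numpy as np

t2 = np.logspace(-3, 1, 200)                         # relaxation times, s
amp = np.exp(-0.5 * ((np.log10(t2) + 0.5) / 0.4) ** 2)  # toy T2 distribution
amp /= amp.sum()                                     # amplitudes sum to 1

t2_logmean = 10 ** np.sum(amp * np.log10(t2))        # logarithmic-mean T2

cut = 0.30                                           # 30% cumulative saturation
cum = np.cumsum(amp)
t2_cut = t2[np.searchsorted(cum, cut)]               # T2 at the cut-off
print(t2_logmean, t2_cut)
```

    A low cumulative cut-off selects a T2 from the small-pore side of the distribution, which is why cut-off-based estimators can outperform the logarithmic mean when permeability is controlled by pore-throat rather than pore-body sizes.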

  19. A Biological Perspective on the Meaning of Time

    NASA Technical Reports Server (NTRS)

    Rothschild, Lynn J.

    2014-01-01

    We have become impatient waiting for a web page to load, but the first member of our species evolved about 150,000 years ago - a geological instant as brief and as transitory as a text message. The shortest generation time of a bacterium is a sprint at under ten minutes, whereas a 200-year-old whale, turtle, or tree is not unknown. Life is a phenomenon that integrates processes ranging from the near-instantaneous reactions of photosynthesis to the more stately pace of evolution. Here I will elucidate these processes with radically different time scales that go into creating and maintaining the diversity of life on Earth, the clocks that nature uses to time them, and how modern biology is being used to alter the natural time scales.

  20. Heating in Integrable Time-Periodic Systems

    NASA Astrophysics Data System (ADS)

    Ishii, Takashi; Kuwahara, Tomotaka; Mori, Takashi; Hatano, Naomichi

    2018-06-01

    We investigate a heating phenomenon in periodically driven integrable systems that can be mapped to free-fermion models. We find that heating to the high-temperature state, a typical scenario in nonintegrable systems, can also appear in integrable time-periodic systems; the amount of energy absorption rises drastically near a frequency threshold where the Floquet-Magnus expansion diverges. As the driving period increases, we also observe that the effective temperatures of the generalized Gibbs ensemble for conserved quantities go to infinity. Using scaling analysis, we reveal that, in the limit of infinite system size and driving period, the steady state after a long time is equivalent to the infinite-temperature state. We obtain the asymptotic behaviors L^-1 and T^-2 for how the steady state approaches the infinite-temperature state as the system size L and the driving period T increase.

  1. Generation of skeletal mechanism by means of projected entropy participation indices

    NASA Astrophysics Data System (ADS)

    Paolucci, Samuel; Valorani, Mauro; Ciottoli, Pietro Paolo; Galassi, Riccardo Malpica

    2017-11-01

    When the dynamics of reactive systems develop very slow and very fast time scales separated by a range of active time scales, with gaps in the fast/active and slow/active time scales, then it is possible to achieve multi-scale adaptive model reduction along with the integration of the ODEs using the G-Scheme. The scheme assumes that the dynamics is decomposed into active, slow, fast, and invariant subspaces. We derive expressions that establish a direct link between time scales and entropy production by using estimates provided by the G-Scheme. To calculate the contribution to entropy production, we resort to a standard model of a constant-pressure, adiabatic batch reactor, where the mixture temperature of the reactants is initially set above the auto-ignition temperature. Numerical experiments show that the contribution to entropy production of the fast subspace is of the same magnitude as the error threshold chosen for the identification of the decomposition of the tangent space, and that the contribution of the slow subspace is generally much smaller than that of the active subspace. The information on entropy production associated with the reactions within each subspace is used to define an entropy participation index that is subsequently utilized for model reduction.

  2. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  3. A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability.

    PubMed

    Shaffer, Fred; McCraty, Rollin; Zerr, Christopher L

    2014-01-01

    Heart rate variability (HRV), the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operate on different time scales to adapt to challenges and achieve optimal performance. This article briefly reviews neural regulation of the heart, and its basic anatomy, the cardiac cycle, and the sinoatrial and atrioventricular pacemakers. The cardiovascular regulation center in the medulla integrates sensory information and input from higher brain centers, and afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. This article reviews sympathetic and parasympathetic influences on the heart, and examines the interpretation of HRV and the association between reduced HRV, risk of disease and mortality, and the loss of regulatory capacity. This article also discusses the intrinsic cardiac nervous system and the heart-brain connection, through which afferent information can influence activity in the subcortical and frontocortical areas, and motor cortex. It also considers new perspectives on the putative underlying physiological mechanisms and properties of the ultra-low-frequency (ULF), very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) bands. Additionally, it reviews the most common time and frequency domain measurements as well as standardized data collection protocols. In its final section, this article integrates Porges' polyvagal theory, Thayer and colleagues' neurovisceral integration model, Lehrer et al.'s resonance frequency model, and the Institute of HeartMath's coherence model. The authors conclude that a coherent heart is not a metronome because its rhythms are characterized by both complexity and stability over longer time scales. Future research should expand understanding of how the heart and its intrinsic nervous system influence the brain.
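    The frequency-domain measurements the review surveys (LF and HF band powers and their ratio) can be sketched on a synthetic, evenly resampled RR-interval series. The band edges follow the standard LF/HF taxonomy named in the abstract; the signal itself, with one oscillation in each band, is an assumption for illustration. ULF and VLF estimates require much longer records and are omitted, and real analyses must first resample the irregular beat-to-beat intervals.

```python
import numpy as np

fs = 4.0                        # Hz, resampled tachogram rate
t = np.arange(0, 300, 1 / fs)   # a 5-minute record
rr = (0.8
      + 0.03 * np.sin(2 * np.pi * 0.10 * t)    # LF oscillation at 0.10 Hz
      + 0.02 * np.sin(2 * np.pi * 0.25 * t))   # HF oscillation at 0.25 Hz

x = rr - rr.mean()
freqs = np.fft.rfftfreq(x.size, 1 / fs)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
df = freqs[1] - freqs[0]

def band_power(lo, hi):
    m = (freqs >= lo) & (freqs < hi)
    return psd[m].sum() * df     # integrate the PSD over the band

lf = band_power(0.04, 0.15)      # low-frequency band
hf = band_power(0.15, 0.40)      # high-frequency band
print(round(lf / hf, 2))         # 2.25, i.e. (0.03/0.02)^2
```

    Because power scales with the square of oscillation amplitude, the LF/HF ratio here is exactly (0.03/0.02)^2; on real tachograms a windowed estimator such as Welch's method is used instead of a single periodogram.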

  4. A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability

    PubMed Central

    Shaffer, Fred; McCraty, Rollin; Zerr, Christopher L.

    2014-01-01

    Heart rate variability (HRV), the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operate on different time scales to adapt to challenges and achieve optimal performance. This article briefly reviews neural regulation of the heart, and its basic anatomy, the cardiac cycle, and the sinoatrial and atrioventricular pacemakers. The cardiovascular regulation center in the medulla integrates sensory information and input from higher brain centers, and afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. This article reviews sympathetic and parasympathetic influences on the heart, and examines the interpretation of HRV and the association between reduced HRV, risk of disease and mortality, and the loss of regulatory capacity. This article also discusses the intrinsic cardiac nervous system and the heart-brain connection, through which afferent information can influence activity in the subcortical and frontocortical areas, and motor cortex. It also considers new perspectives on the putative underlying physiological mechanisms and properties of the ultra-low-frequency (ULF), very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) bands. Additionally, it reviews the most common time and frequency domain measurements as well as standardized data collection protocols. In its final section, this article integrates Porges' polyvagal theory, Thayer and colleagues' neurovisceral integration model, Lehrer et al.'s resonance frequency model, and the Institute of HeartMath's coherence model. The authors conclude that a coherent heart is not a metronome because its rhythms are characterized by both complexity and stability over longer time scales. Future research should expand understanding of how the heart and its intrinsic nervous system influence the brain. PMID:25324790

  5. Regression-Based Identification of Behavior-Encoding Neurons During Large-Scale Optical Imaging of Neural Activity at Cellular Resolution

    PubMed Central

    Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre

    2011-01-01

    The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
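    The regression-based identification idea can be sketched on synthetic data (this is not the paper's pipeline): correlate each cell's fluorescence time series with a behavioral regressor, here a toy eye-position trace, and keep the cells whose correlation exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
n_t, n_cells = 1000, 50
eye = np.cumsum(rng.normal(size=n_t))          # toy eye-position trace
eye = (eye - eye.mean()) / eye.std()

encoding = np.zeros(n_cells)
encoding[:10] = 1.0                            # first 10 cells encode behavior
F = encoding[:, None] * eye[None, :] + 0.8 * rng.normal(size=(n_cells, n_t))

# Pearson correlation of each fluorescence trace with the behavior regressor.
Fz = (F - F.mean(axis=1, keepdims=True)) / F.std(axis=1, keepdims=True)
r = Fz @ eye / n_t
hits = np.flatnonzero(np.abs(r) > 0.5)
print(hits)                                    # the ten encoding cells, 0-9
```

    In practice the regressor would be a behavioral variable convolved with a calcium-indicator response kernel, and the threshold chosen against a null distribution, but the selection principle is the same.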

  6. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DOE PAGES

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe; ...

    2017-04-07

    The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course absolute quantitative metabolomics. This approach, termed “unsteady-state flux balance analysis” (uFBA), is applied to four cellular systems: three dynamic and one steady-state as a negative control. uFBA and FBA predictions are contrasted, and uFBA is found to be more accurate in predicting dynamic metabolic flux states for red blood cells, platelets, and Saccharomyces cerevisiae. Notably, only uFBA predicts that stored red blood cells metabolize TCA intermediates to regenerate important cofactors, such as ATP, NADH, and NADPH. These pathway usage predictions were subsequently validated through 13C isotopic labeling and metabolic flux analysis in stored red blood cells. Utilizing time-course metabolomics data, uFBA provides an accurate method to predict metabolic physiology at the cellular scale for dynamic systems.
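The steady-state linear program underlying classical FBA can be sketched in a few lines; the toy three-reaction network below is an illustrative assumption, not the uFBA implementation (uFBA, as described in the abstract, additionally replaces the zero right-hand side with measured rates of metabolite change from time-course metabolomics):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A (v1), A -> B (v2), B -> product (v3).
# Classical FBA: maximize product flux v3 subject to S v = 0 and flux bounds.
S = np.array([[1, -1,  0],   # metabolite A mass balance
              [0,  1, -1]])  # metabolite B mass balance
c = [0, 0, -1]               # linprog minimizes, so negate the objective
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
optimal_flux = res.x  # the uptake cap propagates down the chain: [10, 10, 10]
```

In an unsteady-state variant, `b_eq` would hold measured metabolite accumulation or depletion rates for each time interval instead of zeros.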

  7. Toward a U.S. National Phenological Assessment

    NASA Astrophysics Data System (ADS)

    Henebry, Geoffrey M.; Betancourt, Julio L.

    2010-01-01

    Third USA National Phenology Network (USA-NPN) and Research Coordination Network (RCN) Annual Meeting; Milwaukee, Wisconsin, 5-9 October 2009; Directional climate change will have profound and lasting effects throughout society that are best understood through fundamental physical and biological processes. One such process is phenology: how the timing of recurring biological events is affected by biotic and abiotic forces. Phenology is an early and integrative indicator of climate change readily understood by nonspecialists. Phenology affects the planting, maturation, and harvesting of food and fiber; pollination; timing and magnitude of allergies and disease; recreation and tourism; water quantity and quality; and ecosystem function and resilience. Thus, phenology is the gateway to climatic effects on both managed and unmanaged ecosystems. Adaptation to climatic variability and change will require integration of phenological data and models with climatic forecasts at seasonal to decadal time scales. Changes in phenologies have already manifested myriad effects of directional climate change. As these changes continue, it is critical to establish a comprehensive suite of benchmarks that can be tracked and mapped at local to continental scales with observations and climate models.

  9. An Integrated Scale for Measuring an Organizational Learning System

    ERIC Educational Resources Information Center

    Jyothibabu, C.; Farooq, Ayesha; Pradhan, Bibhuti Bhusan

    2010-01-01

    Purpose: The purpose of this paper is to develop an integrated measurement scale for an organizational learning system by capturing the learning enablers, learning results and performance outcome in an organization. Design/methodology/approach: A new measurement scale was developed by integrating and modifying two existing scales, identified…

  10. The proximal-to-distal sequence in upper-limb motions on multiple levels and time scales.

    PubMed

    Serrien, Ben; Baeyens, Jean-Pierre

    2017-10-01

    The proximal-to-distal sequence is a phenomenon that can be observed in a large variety of motions of the upper limbs in both humans and other mammals. The mechanisms behind this sequence are not completely understood, and motor control theories able to explain this phenomenon are currently incomplete. The aim of this narrative review is to take a theoretical constraints-led approach to the proximal-to-distal sequence and provide a broad multidisciplinary overview of relevant literature. This sequence exists at multiple levels (brain, spine, muscles, kinetics and kinematics) and on multiple time scales (motion, motor learning and development, growth and possibly even evolution). We hypothesize that the proximodistal spatiotemporal direction on each time scale and level provides part of the organismic constraints that guide the dynamics at the other levels and time scales. The constraints-led approach in this review may serve as a first step towards integration of evidence and a framework for further experimentation to reveal the dynamics of the proximal-to-distal sequence. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Is physiotherapy integrated virtual walking effective on pain, function, and kinesiophobia in patients with non-specific low-back pain? Randomised controlled trial.

    PubMed

    Yilmaz Yelvar, Gul Deniz; Çırak, Yasemin; Dalkılınç, Murat; Parlak Demir, Yasemin; Guner, Zeynep; Boydak, Ayşenur

    2017-02-01

    According to the literature, virtual reality has been found to reduce pain and kinesiophobia in patients with chronic pain. The purpose of the study was to investigate the short-term effect of virtual reality on pain, function, and kinesiophobia in patients with subacute and chronic non-specific low-back pain. In this randomised controlled study, 44 patients were randomly assigned to traditional physiotherapy (control group, 22 subjects) or virtual walking integrated physiotherapy (experimental group, 22 subjects). Before and after treatment, the Visual Analog Scale (VAS), Tampa Kinesiophobia Scale (TKS), Oswestry Disability Index (ODI), Nottingham Health Profile (NHP), Timed Up and Go Test (TUG), 6-Minute Walk Test (6MWT), and Single-Leg Balance Test were assessed. The interaction effect between group and time was assessed by using repeated-measures analysis of covariance. After treatment, both groups showed improvement in all parameters. However, VAS, TKS, TUG, and 6MWT scores showed significant differences in favor of the experimental group. Virtual walking integrated physiotherapy reduces pain and kinesiophobia, and improves function in patients with subacute and chronic non-specific low-back pain in the short term.

  12. Decadal-Scale Forecasting of Climate Drivers for Marine Applications.

    PubMed

    Salinger, J; Hobday, A J; Matear, R J; O'Kane, T J; Risbey, J S; Dunstan, P; Eveson, J P; Fulton, E A; Feng, M; Plagányi, É E; Poloczanska, E S; Marshall, A G; Thompson, P A

    Climate influences marine ecosystems on a range of time scales, from weather-scale (days) through to climate-scale (hundreds of years). Understanding of interannual to decadal climate variability and its impacts on marine industries has received less attention. Predictability up to 10 years ahead may come from large-scale climate modes in the ocean that can persist over these time scales. In Australia the key drivers of climate variability affecting the marine environment are the Southern Annular Mode, the Indian Ocean Dipole, the El Niño/Southern Oscillation, and the Interdecadal Pacific Oscillation; each has phases that are associated with different ocean circulation patterns and regional environmental variables. The roles of these drivers are illustrated with three case studies of extreme events: a marine heatwave in Western Australia, a coral bleaching of the Great Barrier Reef, and flooding in Queensland. Statistical and dynamical approaches are described to generate forecasts of climate drivers that can subsequently be translated to useful information for marine end users making decisions at these time scales. Considerable investment is still needed to support decadal forecasting, including improvement of ocean-atmosphere models, enhancement of observing systems on all scales to support initialization of forecasting models, collection of important biological data, and integration of forecasts into decision support tools. Collaboration between forecast developers and marine resource sectors (fisheries, aquaculture, tourism, biodiversity management, infrastructure) is needed to support forecast-based tactical and strategic decisions that reduce environmental risk over annual to decadal time scales. © 2016 Elsevier Ltd. All rights reserved.
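As a hedged illustration of the statistical (persistence-based) side of such forecasts, a minimal AR(1) fit-and-forecast of a synthetic climate index might look like the sketch below; the model choice and data are assumptions for illustration, not the approaches described in the paper:

```python
import numpy as np

def fit_ar1(y):
    """Least-squares fit of y[t] = phi * y[t-1] + noise for a demeaned series."""
    y = y - y.mean()
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

def forecast_ar1(last_anomaly, phi, steps):
    """Mean forecast: the anomaly decays geometrically toward climatology."""
    return last_anomaly * phi ** np.arange(1, steps + 1)

# Synthetic climate index with known persistence phi = 0.8
rng = np.random.default_rng(1)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

phi_hat = fit_ar1(y)                         # recovered persistence, near 0.8
fc = forecast_ar1(y[-1] - y.mean(), phi_hat, steps=10)
```

The persistence parameter sets the useful forecast horizon: skill decays roughly as phi^k at lead k, which is why slow ocean modes are the main source of decadal predictability.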

  13. On the Role of Multi-Scale Processes in CO2 Storage Security and Integrity

    NASA Astrophysics Data System (ADS)

    Pruess, K.; Kneafsey, T. J.

    2008-12-01

    Consideration of multiple scales in subsurface processes usually refers to the spatial domain, where we may attempt to relate process descriptions and parameters from pore and bench (Darcy) scale to much larger field and regional scales. However, multiple scales occur also in the time domain, and processes extending over a broad range of time scales may be very relevant to CO2 storage and containment. In some cases, such as in the convective instability induced by CO2 dissolution in saline waters, space and time scales are coupled in the sense that perturbations induced by CO2 injection will grow concurrently over many orders of magnitude in both space and time. In other cases, CO2 injection may induce processes that occur on short time scales, yet may affect large regions. Possible examples include seismicity that may be triggered by CO2 injection, or hypothetical release events such as "pneumatic eruptions" that may discharge substantial amounts of CO2 over a short time period. This paper will present recent advances in our experimental and modeling studies of multi-scale processes. Specific examples that will be discussed include (1) the process of CO2 dissolution-diffusion-convection (DDC), that can greatly accelerate the rate at which free-phase CO2 is stored as aqueous solute; (2) self-enhancing and self-limiting processes during CO2 leakage through faults, fractures, or improperly abandoned wells; and (3) porosity and permeability reduction from salt precipitation near CO2 injection wells, and mitigation of corresponding injectivity loss. This work was supported by the Office of Basic Energy Sciences and by the Zero Emission Research and Technology project (ZERT) under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy.

  14. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
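The central idea, a linear rise to threshold with Poisson-like noise whose variance scales with the drift, can be sketched with a simple simulation; this is an illustration of the mechanism, not the authors' model code. With this noise scaling, the coefficient of variation of crossing times comes out roughly independent of the timed interval (the scale invariance noted above):

```python
import numpy as np

def timed_responses(target, theta=1.0, c=0.04, n_trials=2000, seed=2):
    """Simulate threshold crossings of a noisy accumulator whose drift is
    chosen so the threshold theta is reached, on average, at `target`.
    Noise variance proportional to the drift (Poisson-like spiking) makes
    the coefficient of variation of crossing times independent of `target`."""
    rng = np.random.default_rng(seed)
    a = theta / target                        # drift rate
    sigma = np.sqrt(c * a)                    # noise scales with sqrt(drift)
    dt = target / 1000.0
    n_steps = 3000                            # 3x the mean crossing time
    inc = a * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(inc, axis=1)
    first_cross = np.argmax(paths >= theta, axis=1) + 1  # first step at/over threshold
    return first_cross * dt

short = timed_responses(1.0)
long_ = timed_responses(3.0)
cv_short = short.std() / short.mean()
cv_long = long_.std() / long_.mean()          # roughly equal to cv_short: scalar timing
```

For a first-passage process with drift a and noise variance c*a, the mean crossing time is theta/a while the CV is sqrt(c/theta), independent of the timed duration, which is the signature Weber-law behavior the model explains.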

  15. First Results on the Variability of Mid- and High-Latitude Ionospheric Electric Fields at 1-Second Time Scales

    NASA Astrophysics Data System (ADS)

    Ruohoniemi, J. M.; Greenwald, R. A.; Oksavik, K.; Baker, J. B.

    2007-12-01

    The electric fields at high latitudes are often modeled as a static pattern in the absence of variation in solar wind parameters or geomagnetic disturbance. However, temporal variability in the local electric fields on time scales of minutes for stable conditions has been reported and characterized statistically as an intrinsic property amounting to turbulence. We describe the results of applying a new technique to SuperDARN HF radar observations of ionospheric plasma convection at middle and high latitudes that gives views of the variability of the electric fields at sub-second time scales. We address the question of whether there is a limit to the temporal scale of the electric field variability and consider whether the turbulence on minute time scales is due to organized but unresolved behavior. The basis of the measurements is the ability to record raw samples from the individual multipulse sequences that are transmitted during the standard 3 or 6-second SuperDARN integration period; a backscattering volume is then effectively sampled at a cadence of 200 ms. The returns from the individual sequences are often sufficiently well-ordered to permit a sequence-by-sequence characterization of the electric field and backscattered power. We attempt a statistical characterization of the variability at these heretofore inaccessible time scales and consider how variability is influenced by solar wind and magnetospheric factors.

  16. Scaling laws and vortex profiles in two-dimensional decaying turbulence.

    PubMed

    Laval, J P; Chavanis, P H; Dubrulle, B; Sire, C

    2001-06-01

    We use high resolution numerical simulations over several hundred turnover times to study the influence of small-scale dissipation on vortex statistics in 2D decaying turbulence. A scaling regime is detected when the scaling laws are expressed in units of mean vorticity and integral scale, as predicted in Carnevale et al., Phys. Rev. Lett. 66, 2735 (1991), and it is observed that viscous effects spoil this scaling regime. The exponent controlling the decay of the number of vortices shows some trend toward xi=1, in agreement with a recent theory based on the Kirchhoff model [C. Sire and P. H. Chavanis, Phys. Rev. E 61, 6644 (2000)]. In terms of scaled variables, the vortices have a similar profile with a functional form related to the Fermi-Dirac distribution.
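The decay exponent xi is typically estimated from a log-log fit of vortex number against time; a minimal sketch on synthetic data (not the paper's simulations) might look like this:

```python
import numpy as np

def decay_exponent(t, n_vortices):
    """Estimate xi in N(t) ~ t**(-xi) by least squares in log-log space."""
    slope, _ = np.polyfit(np.log(t), np.log(n_vortices), 1)
    return -slope

# Synthetic vortex census with xi = 1 and mild multiplicative scatter
rng = np.random.default_rng(3)
t = np.logspace(0, 3, 40)                     # 1 to 1000 turnover times
n = 1000.0 / t * np.exp(0.05 * rng.standard_normal(40))
xi = decay_exponent(t, n)                     # close to the imposed value of 1
```

In practice one would also restrict the fit to the self-similar window, since (as the abstract notes) viscous effects spoil the scaling at late times.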

  17. Incorporation of DNA barcoding into a large-scale biomonitoring program: opportunities and pitfalls

    EPA Science Inventory

    Taxonomic identification of benthic macroinvertebrates is critical to protocols used to assess the biological integrity of aquatic ecosystems. The time, expense, and inherent error rate of species-level morphological identifications has necessitated use of genus- or family-level ...

  18. Mouse Activity across Time Scales: Fractal Scenarios

    PubMed Central

    Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.

    2014-01-01

    In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity (a typical 1/f complex pattern), while for large time intervals there is anti-correlation. High correlation at short intervals, in both the waking state and SWS, is related to highly coordinated muscle activity. In the waking state we associate high correlation both with muscle activity and with stereotyped mouse movements (grooming, walking, etc.). Conversely, the anti-correlation observed over large time scales during SWS appears related to a feedback autonomic response. The transition from a correlated regime at short scales to an anti-correlated regime at large scales during SWS is given by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time-scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each with a characteristic time correlation. Finally, we compare scaling properties of body acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to those reported previously for the human heartbeat.
Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
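A minimal, generic implementation of first-order DFA (an illustrative sketch, not the authors' analysis code) shows how the scaling exponent is obtained; for uncorrelated noise the exponent is near 0.5, with larger values indicating the persistent correlations discussed above:

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order Detrended Fluctuation Analysis: integrate the signal,
    detrend it linearly within windows of each scale, and fit the log-log
    slope of fluctuation magnitude versus window size."""
    profile = np.cumsum(x - x.mean())
    fluct = []
    for n in scales:
        n_windows = len(profile) // n
        t = np.arange(n)
        sq = []
        for w in range(n_windows):
            seg = profile[w * n:(w + 1) * n]
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            sq.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(sq)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

# White noise has no temporal correlations, so alpha should be near 0.5;
# accelerometer data in the correlated regime would give alpha closer to 1.
rng = np.random.default_rng(4)
white = rng.standard_normal(8000)
alpha = dfa_exponent(white, scales=[16, 32, 64, 128, 256])
```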

  19. Balanced optical-microwave phase detector for sub-femtosecond optical-RF synchronization

    DOE PAGES

    Peng, Michael Y.; Kalaydzhyan, Aram; Kärtner, Franz X.

    2014-10-23

    We demonstrate that balanced optical-microwave phase detectors (BOMPDs) are capable of optical-RF synchronization with sub-femtosecond residual timing jitter for large-scale timing distribution systems. RF-to-optical synchronization is achieved with a long-term stability of < 1 fs RMS and < 7 fs pk-pk drift for over 10 hours and short-term stability of < 2 fs RMS jitter integrated from 1 Hz to 200 kHz, as well as optical-to-RF synchronization with 0.5 fs RMS jitter integrated from 1 Hz to 20 kHz. Moreover, we achieve a –161 dBc/Hz noise floor that integrates well into the sub-fs regime and measure a nominal 50-dB AM-PM suppression ratio with potential improvement via DC offset adjustment.

  20. A Multi-Scale, Integrated Approach to Representing Watershed Systems

    NASA Astrophysics Data System (ADS)

    Ivanov, Valeriy; Kim, Jongho; Fatichi, Simone; Katopodes, Nikolaos

    2014-05-01

    Understanding and predicting process dynamics across a range of scales are fundamental challenges for basic hydrologic research and practical applications. This is particularly true when larger-spatial-scale processes, such as surface-subsurface flow and precipitation, need to be translated to fine space-time scale dynamics of processes, such as channel hydraulics and sediment transport, that are often of primary interest. Inferring characteristics of fine-scale processes from uncertain coarse-scale climate projection information poses additional challenges. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport, tRIBS+VEGGIE-FEaST. The model is designed to take advantage of the current wealth of data representing watershed topography, vegetation, soil, and land use, as well as to explore the hydrological effects of physical factors and their feedback mechanisms over a range of scales. We illustrate how the modeling system connects the precipitation-runoff partitioning process to the dynamics of flow, erosion, and sedimentation, and how the soil's substrate condition can impact the latter processes, resulting in a non-unique response. We further illustrate an approach to using downscaled climate change information with a process-based model to infer the moments of hydrologic variables in future climate conditions and explore the impact of climate information uncertainty.

  1. Least-rattling feedback from strong time-scale separation

    NASA Astrophysics Data System (ADS)

    Chvykov, Pavel; England, Jeremy

    2018-03-01

    In most interacting many-body systems associated with some "emergent phenomena," we can identify subgroups of degrees of freedom that relax on dramatically different time scales. Time-scale separation of this kind is particularly helpful in nonequilibrium systems where only the fast variables are subjected to external driving; in such a case, it may be shown through elimination of fast variables that the slow coordinates effectively experience a thermal bath of spatially varying temperature. In this paper, we investigate how such a temperature landscape arises according to how the slow variables affect the character of the driven quasisteady state reached by the fast variables. Brownian motion in the presence of spatial temperature gradients is known to lead to the accumulation of probability density in low-temperature regions. Here, we focus on the implications of attraction to low effective temperature for the long-term evolution of slow variables. After quantitatively deriving the temperature landscape for a general class of overdamped systems using a path-integral technique, we then illustrate in a simple dynamical system how the attraction to low effective temperature has a fine-tuning effect on the slow variable, selecting configurations that bring about exceptionally low force fluctuation in the fast-variable steady state. We furthermore demonstrate that a particularly strong effect of this kind can take place when the slow variable is tuned to bring about orderly, integrable motion in the fast dynamics that avoids thermalizing energy absorbed from the drive. We thus point to a potentially general feedback mechanism in multi-time-scale active systems that leads to the exploration of slow-variable space, as if in search of fine tuning for a "least-rattling" response in the fast coordinates.
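The attraction to low effective temperature can be illustrated with a toy overdamped simulation; the one-dimensional periodic landscape below and the Ito convention for the multiplicative noise are simplifying assumptions for illustration, not the paper's setup. Under the Ito convention, the stationary density of a pure diffusion goes as 1/T(x), so occupancy concentrates in the cool region:

```python
import numpy as np

# Overdamped Brownian particle on a ring with position-dependent effective
# temperature T(x); with Ito-convention noise, the stationary density is
# proportional to 1/T(x), so the particle accumulates where T(x) is low.
def effective_temperature(x):
    return 0.2 + 0.8 * np.sin(np.pi * x) ** 2   # coolest near x = 0

rng = np.random.default_rng(5)
dt, n_steps = 1e-3, 200_000
x = 0.5                                          # start in the hot region
occupancy_low_t = 0
for _ in range(n_steps):
    x += np.sqrt(2.0 * effective_temperature(x) * dt) * rng.standard_normal()
    x %= 1.0                                     # periodic domain [0, 1)
    if x < 0.25 or x >= 0.75:                    # the low-temperature half
        occupancy_low_t += 1
frac_low_t = occupancy_low_t / n_steps           # well above 0.5
```

For this landscape the equilibrium occupancy of the cool half is about 0.73, so even a single trajectory started in the hot region spends most of its time near the temperature minimum.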

  2. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent emission intensity reduction (with respect to 2005 levels) along with large scale renewable energy targets (100GW solar, 60GW wind, and 10GW biomass energy by 2022) in the INDCs submitted at the Paris agreement. But large scale integration of renewable energy is a complex process which faces a number of challenges, including capital intensity, matching intermittent generation to load with limited storage capacity, and maintaining reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze its implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal and gas fired units discretely with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios. The base case scenario (no RE addition), an INDC scenario (with 100GW solar, 60GW wind, 10GW biomass), and a low RE scenario (50GW solar, 30GW wind) have been created to analyze the implications of large scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the associated investment decisions. The study also examines operational reliability and flexibility requirements of the system for integrating renewables.

  3. Chromatin Landscapes of Retroviral and Transposon Integration Profiles

    PubMed Central

    Badhai, Jitendra; Rust, Alistair G.; Rad, Roland; Hilkens, John; Berns, Anton; van Lohuizen, Maarten; Wessels, Lodewyk F. A.; de Ridder, Jeroen

    2014-01-01

    The ability of retroviruses and transposons to insert their genetic material into host DNA makes them widely used tools in molecular biology, cancer research and gene therapy. However, these systems have biases that may strongly affect research outcomes. To address this issue, we generated very large datasets of unselected integrations in the mouse genome for the Sleeping Beauty (SB) and piggyBac (PB) transposons, and the Mouse Mammary Tumor Virus (MMTV). We analyzed (epi)genomic features to generate bias maps at both local and genome-wide scales. MMTV showed a remarkably uniform distribution of integrations across the genome. More distinct preferences were observed for the two transposons, with PB showing remarkable resemblance to bias profiles of the Murine Leukemia Virus. Furthermore, we present a model where target site selection is directed at multiple scales. At a large scale, target site selection is similar across systems, and defined by domain-oriented features, namely expression of proximal genes, proximity to CpG islands and to genic features, chromatin compaction and replication timing. Notable differences between the systems are mainly observed at smaller scales, and are directed by a diverse range of features. To study the effect of these biases on integration sites occupied under selective pressure, we turned to insertional mutagenesis (IM) screens. In IM screens, putative cancer genes are identified by finding frequently targeted genomic regions, or Common Integration Sites (CISs). Within three recently completed IM screens, we identified 7%–33% putative false positive CISs, which are likely not the result of the oncogenic selection process. Moreover, results indicate that PB, compared to SB, is more suited to tag oncogenes. PMID:24721906
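A common baseline for calling CISs is to compare per-window integration counts against a uniform-background Poisson model; the sketch below is a generic illustration with synthetic data and hypothetical names, not the screening statistics used in the paper (which account for the measured integration biases):

```python
import numpy as np
from scipy.stats import poisson

def common_integration_sites(positions, genome_length, window, alpha=0.05):
    """Flag genomic windows whose integration count exceeds a uniform-background
    Poisson expectation (Bonferroni-corrected), a simple CIS-style test."""
    n_windows = genome_length // window
    counts, _ = np.histogram(positions, bins=n_windows, range=(0, genome_length))
    lam = len(positions) / n_windows              # expected count per window
    pvals = poisson.sf(counts - 1, lam)           # P(X >= observed count)
    return np.flatnonzero(pvals < alpha / n_windows)

# Synthetic screen: uniform background plus one positively selected locus
rng = np.random.default_rng(6)
background = rng.integers(0, 1_000_000, size=1000)
hot = rng.integers(500_000, 510_000, size=30)     # enrichment in window 50
cis = common_integration_sites(np.concatenate([background, hot]),
                               genome_length=1_000_000, window=10_000)
```

The abstract's point is precisely that the uniform-background assumption in this kind of test is unsafe: windows favored by the vector's intrinsic bias will be flagged without any oncogenic selection, producing the 7%–33% putative false positives reported.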

  4. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in grain-size-specific concentration measurements across multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis reveals no statistically significant peaks in energy density, indicating the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability that is larger in scale than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
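The question of how long before a point-integrated mean stabilizes can be sketched with a simple running-mean convergence check; the function and the 5% tolerance below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def stabilization_index(series, tol=0.05):
    """First index after which the running mean stays within a fractional
    tolerance of the full-record mean (a simple stability criterion)."""
    running = np.cumsum(series) / np.arange(1, len(series) + 1)
    target = running[-1]                         # full-record mean
    outside = np.abs(running - target) > tol * abs(target)
    if not outside.any():
        return 0
    return int(np.flatnonzero(outside)[-1]) + 1

# Synthetic 5.5-minute concentration record sampled at 1 Hz (330 samples)
rng = np.random.default_rng(7)
conc = 100.0 + 10.0 * rng.standard_normal(330)
settle = stabilization_index(conc)               # samples needed for a stable mean
```

Comparing `settle` against a typical 30-second bottle duration reproduces the kind of bookkeeping in the abstract: records that settle within 30 samples correspond to the ~20% of cases where a bottle sample is representative.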

  5. A Spectral Element Discretisation on Unstructured Triangle / Tetrahedral Meshes for Elastodynamics

    NASA Astrophysics Data System (ADS)

    May, Dave A.; Gabriel, Alice-A.

    2017-04-01

    The spectral element method (SEM) defined over quadrilateral and hexahedral element geometries has proven to be a fast, accurate and scalable approach to study wave propagation phenomena. In the context of regional scale seismology and/or simulations incorporating finite earthquake sources, the geometric restrictions associated with hexahedral elements can limit the applicability of the classical quad./hex. SEM. Here we describe a continuous Galerkin spectral element discretisation defined over unstructured meshes composed of triangles (2D) or tetrahedra (3D). The method uses a stable, nodal basis constructed from PKD polynomials and thus retains the spectral accuracy and low dispersive properties of the classical SEM, in addition to the geometric versatility provided by unstructured simplex meshes. For the particular basis and quadrature rule we have adopted, the discretisation results in a mass matrix which is not diagonal, thereby mandating that linear solvers be utilised. To that end, we have developed efficient solvers and preconditioners which are robust with respect to the polynomial order (p), and possess high arithmetic intensity. Furthermore, we also consider using implicit time integrators, together with a p-multigrid preconditioner, to circumvent the CFL condition. Implicit time integrators become particularly relevant when solving problems on poor quality meshes, or meshes containing elements with a widely varying range of length scales - both of which frequently arise when meshing non-trivial geometries. We demonstrate the applicability of the new method by examining a number of two- and three-dimensional wave propagation scenarios. These scenarios serve to characterise the accuracy and cost of the new method. Lastly, we assess the potential benefits of using implicit time integrators for regional scale wave propagation simulations.

  6. On the estimation and detection of the Rees-Sciama effect

    NASA Astrophysics Data System (ADS)

    Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.

    2017-02-01

    Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code, HYDRA, and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42/h megaparsecs is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (i) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (ii) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (iii) the RS anisotropy cannot be detected either directly (in temperature CMB maps) or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42/h megaparsecs, where the density contrast is not strongly non-linear, high accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.

  7. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, Noel

    2013-04-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment that allows interactive data and algorithm development and exploration, and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, users are able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required, either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing.
Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure; publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth; and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
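    The lazy, just-in-time computation model described above, where nothing runs until a result is actually needed, can be illustrated with a minimal deferred-evaluation graph. This is a generic sketch of the idea, not the Earth Engine API; the node names are hypothetical stand-ins:

```python
# A minimal deferred-evaluation node: nothing executes until .compute() is
# called, mirroring the "computed lazily" model (generic sketch, not the
# Earth Engine API).
class Lazy:
    def __init__(self, fn, *deps):
        self.fn, self.deps = fn, deps
        self._cache = None          # memoize so shared nodes run only once

    def compute(self):
        if self._cache is None:
            args = [d.compute() if isinstance(d, Lazy) else d for d in self.deps]
            self._cache = self.fn(*args)
        return self._cache

# Build a pipeline; defining it performs no work at all
scene = Lazy(lambda: list(range(10)))                       # stand-in: load a scene
masked = Lazy(lambda px: [v for v in px if v > 2], scene)   # stand-in: apply a mask
stats = Lazy(lambda px: sum(px) / len(px), masked)          # stand-in: a reducer

result = stats.compute()    # only now does the whole chain execute
```

    Deferring work this way is what makes interactive previews cheap: only the pixels actually requested for display or export trigger computation.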

  8. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, N.

    2012-12-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment that allows interactive data and algorithm development and exploration, and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, users are able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geospatial data. All computation is performed lazily; nothing is computed until it is required, either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing.
Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure; publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth; and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.

  9. Robot-assisted vs. sensory integration training in treating gait and balance dysfunctions in patients with multiple sclerosis: a randomized controlled trial

    PubMed Central

    Gandolfi, Marialuisa; Geroin, Christian; Picelli, Alessandro; Munari, Daniele; Waldner, Andreas; Tamburin, Stefano; Marchioretto, Fabio; Smania, Nicola

    2014-01-01

    Background: Extensive research on both healthy subjects and patients with central nervous system damage has elucidated a crucial role of postural adjustment reactions and central sensory integration processes in generating and “shaping” locomotor function, respectively. Whether robot-assisted gait devices might improve these functions in multiple sclerosis (MS) patients has not been fully investigated in the literature. Purpose: The aim of this study was to compare the effectiveness of end-effector robot-assisted gait training (RAGT) and sensory integration balance training (SIBT) in improving walking and balance performance in patients with MS. Methods: Twenty-two patients with MS (EDSS: 1.5–6.5) were randomly assigned to two groups. The RAGT group (n = 12) underwent end-effector system training. The SIBT group (n = 10) underwent specific balance exercises. Each patient received twelve 50-min treatment sessions (2 days/week). A blinded rater evaluated patients before and after treatment as well as 1 month post treatment. Primary outcomes were walking speed and the Berg Balance Scale. Secondary outcomes were the Activities-specific Balance Confidence Scale, Sensory Organization Balance Test, stabilometric assessment, Fatigue Severity Scale, cadence, step length, single and double support time, and the Multiple Sclerosis Quality of Life-54. Results: Between-group comparisons showed no significant differences on primary and secondary outcome measures over time. Within-group comparisons showed significant improvements in both groups on the Berg Balance Scale (P = 0.001). Changes approaching significance were found in gait speed (P = 0.07) only in the RAGT group. Significant changes in balance task-related domains during standing and walking conditions were found in the SIBT group. Conclusion: Balance disorders in patients with MS may be ameliorated by RAGT and by SIBT. PMID:24904361

  10. Science of Integrated Approaches to Natural Resources Management

    NASA Astrophysics Data System (ADS)

    Tengberg, Anna; Valencia, Sandra

    2017-04-01

    To meet multiple environmental objectives, integrated programming is becoming increasingly important for the Global Environmental Facility (GEF), the financial mechanism of the multilateral environmental agreements, including the United Nations Convention to Combat Desertification (UNCCD). Integration of multiple environmental, social and economic objectives also contributes to the achievement of the Sustainable Development Goals (SDGs) in a timely and cost-effective way. However, integration is often not well defined. This paper therefore focuses on identifying key aspects of integration and assessing their implementation in natural resources management (NRM) projects. To that end, we draw on systems thinking literature, and carry out an analysis of a random sample of GEF integrated projects and in-depth case studies demonstrating lessons learned and good practices in addressing land degradation and other NRM challenges. We identify numerous challenges and opportunities of integrated approaches that need to be addressed in order to maximise the catalytic impact of the GEF during problem diagnosis, project design, implementation and governance. We highlight the need for projects to identify clearer system boundaries and main feedback mechanisms within those boundaries, in order to effectively address drivers of environmental change. We propose a theory of change for Integrated Natural Resources Management (INRM) projects, where short-term environmental and socio-economic benefits will first accrue at the local level. Implementation of improved INRM technologies and practices at the local level can be extended through spatial planning, strengthening of innovation systems, and financing and incentive mechanisms at the watershed and/or landscape/seascape level to sustain and enhance ecosystem services at larger scales and longer time spans. 
We conclude that the evolving scientific understanding of factors influencing social, technical and institutional innovations and transitions towards sustainable management of natural resources should be harnessed and integrated into GEF's influencing models and theory of change, and be coupled with updated approaches for learning, adaptive management and scaling up.

  11. VME rollback hardware for time warp multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Robb, Michael J.; Buzzell, Calvin A.

    1992-01-01

    The purpose of the research effort is to develop and demonstrate innovative hardware to implement specific rollback and timing functions required for efficient queue management and precision timekeeping in multiprocessor discrete event simulations. The previously completed phase 1 effort demonstrated the technical feasibility of building hardware modules which eliminate the state saving overhead of the Time Warp paradigm used in distributed simulations on multiprocessor systems. The current phase 2 effort will build multiple pre-production rollback hardware modules integrated with a network of Sun workstations, and the integrated system will be tested by executing a Time Warp simulation. The rollback hardware will be designed to interface with the greatest number of multiprocessor systems possible. The authors believe that the rollback hardware will provide for significant speedup of large scale discrete event simulation problems and allow multiprocessors using Time Warp to dramatically increase performance.

  12. An investigation of turbulent transport in the extreme lower atmosphere

    NASA Technical Reports Server (NTRS)

    Koper, C. A., Jr.; Sadeh, W. Z.

    1975-01-01

    A model is proposed in which the Lagrangian autocorrelation is expressed by a domain integral over a set of usual Eulerian autocorrelations acquired concurrently at all points within a turbulence box, along with a method for ascertaining the statistical stationarity of the turbulent velocity by creating an equivalent ensemble, in order to investigate the flow in the extreme lower atmosphere. Simultaneous measurements of turbulent velocity on a turbulence line along the wake axis were carried out utilizing a longitudinal array of five remotely operated hot-wire anemometers. The stationarity test revealed that the turbulent velocity can be approximated as a realization of a weakly self-stationary random process. Based on the Lagrangian autocorrelation, it is found that: (1) large diffusion times predominated; (2) ratios of Lagrangian to Eulerian time and spatial scales were smaller than unity; and (3) short and long diffusion time scales and diffusion spatial scales were constrained within their Eulerian counterparts.
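    The Eulerian time scales compared above are conventionally obtained by integrating the velocity autocorrelation up to its first zero crossing. A minimal sketch of that estimator on a synthetic AR(1) "velocity" record with a known integral time scale (illustrative only, not the authors' turbulence-box procedure):

```python
import numpy as np

def autocorr(u):
    """FFT-based, normalized autocorrelation of a fluctuating series."""
    u = u - u.mean()
    n = len(u)
    f = np.fft.rfft(u, 2 * n)                 # zero-pad to avoid wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def integral_time_scale(u, dt):
    """Trapezoid-integrate the autocorrelation up to its first zero crossing."""
    rho = autocorr(u)
    cross = np.argmax(rho <= 0.0)             # index of first non-positive lag
    stop = cross if cross > 0 else len(rho)
    return dt * (rho[:stop].sum() - 0.5 * rho[0])

# Synthetic check: AR(1) noise whose integral time scale is ~1 second
rng = np.random.default_rng(0)
dt = 0.01
alpha = np.exp(-dt)                            # alpha = exp(-dt/T) with T = 1 s
u = np.zeros(200_000)
for i in range(1, len(u)):
    u[i] = alpha * u[i - 1] + rng.standard_normal()
T = integral_time_scale(u, dt)                 # expect a value near 1.0
```

    The first-zero-crossing cutoff is one common convention; integrating further into the noisy tail of the sample autocorrelation degrades the estimate.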

  13. A Power Efficient Exaflop Computer Design for Global Cloud System Resolving Climate Models.

    NASA Astrophysics Data System (ADS)

    Wehner, M. F.; Oliker, L.; Shalf, J.

    2008-12-01

    Exascale computers would allow routine ensemble modeling of the global climate system at the cloud-system-resolving scale. Power and cost requirements of traditional-architecture systems are likely to delay such capability for many years. We present an alternative route to the exascale using embedded processor technology to design a system optimized for ultra-high-resolution climate modeling. These power-efficient processors, used in consumer electronic devices such as mobile phones, portable music players, and cameras, can be tailored to the specific needs of scientific computing. We project that a system capable of integrating a kilometer-scale climate model a thousand times faster than real time could be designed and built on a five-year time scale for US$75M, with a power consumption of 3 MW. This is cheaper, more power-efficient, and available sooner than any other existing technology.

  14. Fractal Characterization of Multitemporal Scaled Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Lam, Nina Siu-Ngan; Qiu, Hong-lie

    1998-01-01

    Scale is an "innate" concept in geographic information systems (GISs). It is recognized as something intrinsic to the ingestion, storage, manipulation, analysis, modeling, and output of space and time data within a GIS purview, yet the relative meaning and ramifications of scaling spatial and temporal data from this perspective remain enigmatic. As GISs become more sophisticated as a product of more robust software and more powerful computer systems, there is an urgent need to examine the issue of scale and its relationship to the whole body of spatiotemporal data imparted in GISs. Scale is fundamental to the characterization of geo-spatial data as represented in GISs, but we have relatively little insight into the effects of, or how to measure the effects of, scale in representing multiscaled data; i.e., data that are acquired in different formats (e.g., map, digital) and exist in varying spatial, temporal, and, in the case of remote sensing data, radiometric configurations. This is particularly true in the emerging era of integrated GISs (IGISs), wherein spatial data in a variety of formats (e.g., raster, vector) are combined with multiscaled remote sensing data, making it possible to perform highly sophisticated space-time data analyses and modeling. Moreover, the complexities associated with the integration of multiscaled data sets in a multitude of formats are exacerbated by confusion over what the term "scale" means from a multidisciplinary perspective; i.e., "scale" takes on significantly different meanings depending on one's disciplinary background and spatial perspective, which can lead to substantive confusion in the input, manipulation, analysis, and output of IGISs (Quattrochi, 1993). Hence, we must begin to look at the universality of scale and to develop the theory, methods, and techniques necessary to advance knowledge on the "Science of Scale" across the wide range of spatial disciplines that use GISs.

  15. A Microcomputer-Based Program for Printing Check Plots of Integrated Circuits Specified in Caltech Intermediate Form.

    DTIC Science & Technology

    1984-12-01

    only four transistors[5]. Each year since that time, the semiconductor industry has consistently improved the quality of the fabrication techniques...rarely took place at universities and was almost exclusively confined to industry. IC design techniques were developed, tested, and taught only in the...community, it is not uncommon for industry to borrow ideas and even particular programs from these university-designed tools. The Very Large Scale Integration

  16. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands from observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands, at a scale that captures demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time-series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive parameter-estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling, and provide a foundation for developing a real-time integrated demand-hydraulic model.
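    The double-seasonal idea, capturing daily and weekly autocorrelation through lagged terms, can be sketched with a plain least-squares seasonal AR fit. The lag set and the synthetic hourly "demand" below are hypothetical stand-ins, not the study's actual model:

```python
import numpy as np

LAGS = (1, 24, 168)    # hypothetical lag-1, daily (24 h), and weekly (168 h) terms

def fit_seasonal_ar(y, lags=LAGS):
    """Ordinary least-squares fit of y[t] on an intercept plus seasonal lags."""
    p = max(lags)
    X = np.column_stack([y[p - L : len(y) - L] for L in lags])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast_one_step(y, coef, lags=LAGS):
    """One-step-ahead forecast from the most recent lagged observations."""
    x = np.concatenate([[1.0], [y[-L] for L in lags]])
    return float(x @ coef)

# Synthetic hourly 'demand' with daily and weekly cycles plus noise
rng = np.random.default_rng(1)
t = np.arange(24 * 7 * 20)                     # twenty weeks of hourly data
y = (10.0 + 3.0 * np.sin(2 * np.pi * t / 24)
     + 1.5 * np.sin(2 * np.pi * t / 168)
     + 0.2 * rng.standard_normal(t.size))

coef = fit_seasonal_ar(y[:-1])                 # hold out the final hour
pred = forecast_one_step(y[:-1], coef)         # forecast that held-out hour
```

    A full double-seasonal model would also model the error term (as in seasonal ARIMA) and update coefficients adaptively, as the study describes.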

  17. Emergent Phototactic Responses of Cyanobacteria under Complex Light Regimes

    PubMed Central

    Chau, Rosanna Man Wah

    2017-01-01

    ABSTRACT Environmental cues can stimulate a variety of single-cell responses, as well as collective behaviors that emerge within a bacterial community. These responses require signal integration and transduction, which can occur on a variety of time scales and often involve feedback between processes, for example, between growth and motility. Here, we investigate the dynamics of responses of the phototactic, unicellular cyanobacterium Synechocystis sp. PCC6803 to complex light inputs that simulate the natural environments that cells typically encounter. We quantified single-cell motility characteristics in response to light of different wavelengths and intensities. We found that red and green light primarily affected motility bias rather than speed, while blue light inhibited motility altogether. When light signals were simultaneously presented from different directions, cells exhibited phototaxis along the vector sum of the light directions, indicating that cells can sense and combine multiple signals into an integrated motility response. Under a combination of antagonistic light signal regimes (phototaxis-promoting green light and phototaxis-inhibiting blue light), the ensuing bias was continuously tuned by competition between the wavelengths, and the community response was dependent on both bias and cell growth. The phototactic dynamics upon a rapid light shift revealed a wavelength dependence on the time scales of photoreceptor activation/deactivation. Thus, Synechocystis cells achieve exquisite integration of light inputs at the cellular scale through continuous tuning of motility, and the pattern of collective behavior depends on single-cell motility and population growth. PMID:28270586

  18. Counseling Framework for HIV-Serodiscordant Couples on the Integrated Use of Antiretroviral Therapy and Pre-exposure Prophylaxis for HIV Prevention.

    PubMed

    Morton, Jennifer F; Celum, Connie; Njoroge, John; Nakyanzi, Agnes; Wakhungu, Imeldah; Tindimwebwa, Edna; Ongachi, Snaidah; Sedah, Eric; Okwero, Emmanuel; Ngure, Kenneth; Odoyo, Josephine; Bulya, Nulu; Haberer, Jessica E; Baeten, Jared M; Heffron, Renee

    2017-01-01

    For HIV-serodiscordant couples, integrated delivery of antiretroviral therapy (ART) for HIV-positive partners and time-limited pre-exposure prophylaxis (PrEP) for negative partners virtually eliminates HIV transmission. Standardized messaging, sensitive to the barriers and motivators to HIV treatment and prevention, is needed for widespread scale-up of this approach. Within the Partners Demonstration Project, a prospective interventional project among 1013 serodiscordant couples in Kenya and Uganda, we offered ART to eligible HIV-positive partners and PrEP to HIV-negative partners before ART initiation and through the HIV-positive partner's first 6 months of ART use. We conducted individual and group discussions with counseling staff to elicit the health communication framework and key messages about ART and PrEP that were delivered to couples. Counseling sessions for serodiscordant couples about PrEP and ART included discussions of HIV serodiscordance, PrEP and ART initiation and integrated use, and PrEP discontinuation. ART messages emphasized daily, lifelong use for treatment and prevention, adherence, viral suppression, resistance, side effects, and safety of ART during pregnancy. PrEP messages emphasized daily dosing, time-limited PrEP use until the HIV-positive partner sustained 6 months of high adherence to ART, adherence, safety during conception, side effects, and other risks for HIV. Counseling messages for HIV-serodiscordant couples are integral to the delivery of time-limited PrEP as a "bridge" to ART-driven viral suppression. Their incorporation into programmatic scale-up will maximize intervention impact on the global epidemic.

  19. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1-, 2- or 3-dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
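    The explicit/implicit trade-off behind the scheme choices above can be seen on a classic stiff scalar test problem: backward Euler remains stable at a step size far beyond the explicit stability limit. This is a textbook illustration, not BATS-R-US code:

```python
# Stiff scalar test problem du/dt = -k*u with k = 1000: the exact solution
# decays, but forward (explicit) Euler is only stable for dt < 2/k = 0.002.
k, dt, nsteps = 1000.0, 0.01, 100    # dt is 5x the explicit stability limit

u_explicit = 1.0
u_implicit = 1.0
for _ in range(nsteps):
    u_explicit = u_explicit + dt * (-k * u_explicit)   # forward Euler: u *= (1 - dt*k)
    u_implicit = u_implicit / (1.0 + dt * k)           # backward Euler: u /= (1 + dt*k)

# u_explicit has blown up (amplified by -9 each step); u_implicit has decayed.
```

    The implicit step costs a solve (trivial here, a linear system in general), which is exactly the price the multi-scheme approach weighs against the explicit CFL restriction.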

  20. Numerical integration and optimization of motions for multibody dynamic systems

    NASA Astrophysics Data System (ADS)

    Aguilar Mayans, Joan

    This thesis considers the optimization and simulation of motions involving rigid-body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid-body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space onto which we map existing real-world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid-body dynamics. We analyze the accuracy and stability of the different integrators in three-dimensional space and the rotation space SO(3). Integrators with accuracy higher than first order perform more efficiently than first-order-accurate integrators, even in the presence of contacts. The third part uses Finite-Time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that time-scale separation causes for alternative solution methods.
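    Integrators that advance rotations multiplicatively through the exponential map stay on SO(3) by construction, one of the concerns when comparing integrators in the rotation space. A minimal Lie-group Euler sketch using the Rodrigues formula (an illustrative example, not the thesis' integrators):

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix with hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Rodrigues formula: exponential map from so(3) to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Lie-group Euler step for R' = R hat(omega): R <- R @ expm(dt * omega)
R = np.eye(3)
omega = np.array([0.3, -0.2, 0.9])   # constant body-frame angular velocity
dt = 0.01
for _ in range(1000):
    R = R @ expm_so3(dt * omega)
# Unlike additive updates of R's entries, R remains a rotation matrix
# (orthogonal, determinant +1) to round-off error.
```

    For constant omega the composed steps equal a single exponential of the total rotation vector, which gives a convenient correctness check.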

  1. Real-time micro-modelling of city evacuations

    NASA Astrophysics Data System (ADS)

    Löhner, Rainald; Haug, Eberhard; Zinggerling, Claudio; Oñate, Eugenio

    2018-01-01

    A methodology to integrate geographical information system (GIS) data with large-scale pedestrian simulations has been developed. Advances in automatic data acquisition and archiving from GIS databases, automatic input for pedestrian simulations, as well as scalable pedestrian simulation tools have made it possible to simulate pedestrians at the individual level for complete cities in real time. An example that simulates the evacuation of the city of Barcelona demonstrates that this is now possible. This is the first step towards a fully integrated crowd prediction and management tool that takes into account not only data gathered in real time from cameras, cell phones or other sensors, but also merges these with advanced simulation tools to predict the future state of the crowd.

  2. Long-term forecasting of internet backbone traffic.

    PubMed

    Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe

    2005-09-01

    We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
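    The trend/fluctuation decomposition described above can be mimicked, very roughly, by subtracting a long-window moving average, a crude stand-in for the wavelet MRA approximation at a coarse level. The data below are synthetic, not the SNMP measurements:

```python
import numpy as np

def moving_average(x, window):
    """Centered moving average: a crude one-level 'approximation' smoother."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Synthetic hourly 'traffic': slow linear growth + a 12-hour cycle + noise
rng = np.random.default_rng(2)
t = np.arange(24 * 120)                       # about four months, hourly
signal = 0.01 * t + 5.0 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(t.size)

trend = moving_average(signal, 24 * 7)        # one-week window averages out the cycle
fluct = signal - trend                        # dominated by the 12-h component
```

    With the long-term trend and the 12-h fluctuation separated, each component can then be modeled and forecast on its own, as the paper does with linear and ARIMA models.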

  3. SWIFT Differentiated Technical Assistance. White Paper

    ERIC Educational Resources Information Center

    McCart, Amy; McSheehan, Michael; Sailor, Wayne; Mitchiner, Melinda; Quirk, Carol

    2016-01-01

    The Schoolwide Integrated Framework for Transformation (SWIFT) employs six technical assistance (TA) practices that support an initial transformation process while simultaneously building system capacity to sustain and scale up equity-based inclusion in additional schools and districts over time. This paper explains these individual practices and…

  4. The evolution of ecosystem ascendency in a complex systems based model.

    PubMed

    Brinck, Katharina; Jensen, Henrik Jeldtoft

    2017-09-07

    General patterns in ecosystem development can shed light on the driving forces behind ecosystem formation and recovery, and have long been of interest. In recent years, the need for integrative and process-oriented approaches to capture ecosystem growth, development and organisation, as well as the scope of information theory as a descriptive tool, has been addressed from various sides. However, data collection of ecological network flows is difficult and tedious, and comprehensive models are lacking. We use a hierarchical version of the Tangled Nature Model of evolutionary ecology to study the relationship between structure, flow and organisation in model ecosystems, their development over evolutionary time scales, and their relation to ecosystem stability. Our findings support the validity of ecosystem ascendency as a meaningful measure of ecosystem organisation, which increases over evolutionary time scales and drops significantly during periods of disturbance. The results suggest a general trend towards both higher integrity and increased stability driven by functional and structural ecosystem coadaptation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The Trapping Index: How to integrate the Eulerian and the Lagrangian approach for the computation of the transport time scales of semi-enclosed basins.

    PubMed

    Cucco, Andrea; Umgiesser, Georg

    2015-09-15

    In this work, we investigated whether the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed for both simplified test cases and a realistic domain: the Venice Lagoon. The results confirmed that the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful for avoiding misleading interpretations that arise from evaluating the basin renewal features from an Eulerian-only or a Lagrangian-only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    PubMed Central

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for “temporal receptive windows” that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision-making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or EEG/MEG) by taking into account inter-areal heterogeneity. PMID:26439530

  7. Multiple-scale neuroendocrine signals connect brain and pituitary hormone rhythms

    PubMed Central

    Romanò, Nicola; Guillou, Anne; Martin, Agnès O; Mollard, Patrice

    2017-01-01

    Small assemblies of hypothalamic “parvocellular” neurons release their neuroendocrine signals at the median eminence (ME) to control long-lasting pituitary hormone rhythms essential for homeostasis. How such rapid hypothalamic neurotransmission leads to slowly evolving hormonal signals remains unknown. Here, we show that the temporal organization of dopamine (DA) release events in freely behaving animals relies on a set of characteristic features that are adapted to the dynamic dopaminergic control of pituitary prolactin secretion, a key reproductive hormone. First, locally generated DA release signals are organized over more than four orders of magnitude (0.001 Hz–10 Hz). Second, these DA events are finely tuned within and between frequency domains as building blocks that recur over days to weeks. Third, an integration time window is detected across the ME and consists of high-frequency DA discharges that are coordinated within the minutes range. Thus, a hierarchical combination of time-scaled neuroendocrine signals displays local–global integration to connect brain–pituitary rhythms and pace hormone secretion. PMID:28193889

  8. Flow topologies and turbulence scales in a jet-in-cross-flow

    DOE PAGES

    Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem

    2015-04-03

    This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of [Su and Mungal JFM 2004]. The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful to design optimized grids and improve numerical predictions in similar configurations.
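In practice, the integral time scale mapped from a time-resolved probe is obtained by integrating the autocorrelation of the velocity fluctuations. A minimal, self-contained sketch on a synthetic signal (not the LES data; truncating the integral at the first zero crossing is one common convention, and the correlation time T = 1 s of the synthetic signal is an assumption for the check):

```python
import numpy as np

def integral_time_scale(u, dt):
    """Estimate the integral time scale of a fluctuating signal u
    sampled at interval dt: integrate the normalized autocorrelation
    up to its first zero crossing."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()                      # work with fluctuations only
    n = len(u)
    # FFT-based (zero-padded) autocorrelation, normalized so rho[0] = 1
    f = np.fft.rfft(u, 2 * n)
    rho = np.fft.irfft(f * np.conj(f))[:n]
    rho /= rho[0]
    # truncate at the first zero crossing and integrate (rectangle rule)
    crossing = np.argmax(rho <= 0.0) if np.any(rho <= 0.0) else n
    return rho[:crossing].sum() * dt

# synthetic check: an AR(1)/Ornstein-Uhlenbeck-like signal with a known
# correlation time T = 1.0 s should give an integral scale close to T
rng = np.random.default_rng(0)
dt, T, n = 0.01, 1.0, 100_000
noise = rng.normal(0.0, np.sqrt(dt), n)
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = u[i - 1] * (1.0 - dt / T) + noise[i]
print(f"estimated integral time scale: {integral_time_scale(u, dt):.2f} s")
```

As the abstract notes, long sampling periods matter: the truncation point wanders with the noise floor of the autocorrelation estimate, which is what controls the scatter of such maps.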

  9. Turbulent transport measurements with a laser Doppler velocimeter.

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time-average spectrum contains information only for times shorter than the Lagrangian-integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much-longer-wavelength radiation, or a different mode of signal analysis, e.g., FM detection.

  10. The role of topography on catchment‐scale water residence time

    USGS Publications Warehouse

    McGuire, K.J.; McDonnell, Jeffery J.; Weiler, M.; Kendall, C.; McGlynn, B.L.; Welker, J.M.; Seibert, J.

    2005-01-01

    The age, or residence time, of water is a fundamental descriptor of catchment hydrology, revealing information about the storage, flow pathways, and source of water in a single integrated measure. While there has been tremendous recent interest in residence time estimation to characterize watersheds, there are relatively few studies that have quantified residence time at the watershed scale, and fewer still that have extended those results beyond single catchments to larger landscape scales. We examined topographic controls on residence time for seven catchments (0.085–62.4 km2) that represent diverse geologic and geomorphic conditions in the western Cascade Mountains of Oregon. Our primary objective was to determine the dominant physical controls on catchment‐scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network. We found that base flow mean residence times for exponential distributions ranged from 0.8 to 3.3 years. Mean residence time showed no correlation to basin area (r2 < 0.01) but instead was correlated (r2 = 0.91) to catchment terrain indices representing the flow path distance and flow path gradient to the stream network. These results illustrate that landscape organization (i.e., topography) rather than basin area controls catchment‐scale transport. Results from this study may provide a framework for describing scale‐invariant transport across climatic and geologic conditions, whereby the internal form and structure of the basin defines the first‐order control on base flow residence time.
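The convolution approach described above can be sketched in a few lines (synthetic data and an assumed exponential transit-time distribution, not the authors' model or isotope records): the stream isotope signal is the precipitation signal convolved with the transit-time distribution, and for a sinusoidal input the amplitude damping returns the mean residence time analytically.

```python
import numpy as np

# Assumed setup: daily time step, 20 years of data, tau = 365 days.
dt = 1.0                      # time step [days]
n = 365 * 20                  # 20 years of daily values
t = np.arange(n) * dt
tau = 365.0                   # mean residence time to recover [days]

# exponential transit-time distribution, discretized and normalized
g = np.exp(-t / tau)
g /= g.sum()

# sinusoidal precipitation delta-18O signal (annual cycle around a mean)
omega = 2.0 * np.pi / 365.0
precip = -10.0 + 3.0 * np.sin(omega * t)

# stream signal = precipitation convolved with the transit-time
# distribution; analyze the second half to skip the spin-up period
stream = np.convolve(precip, g)[:n]
A_in = 3.0
A_out = 0.5 * (stream[n // 2:].max() - stream[n // 2:].min())

# for an exponential distribution the amplitude damping is
# A_out/A_in = 1/sqrt(1 + (omega*tau)^2), so tau can be inverted
tau_est = np.sqrt((A_in / A_out) ** 2 - 1.0) / omega
print(f"recovered mean residence time: {tau_est:.0f} days")
```

Fitting the convolution model to observed seasonal isotope cycles, rather than inverting an idealized sine wave, is the practical version of this calculation.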

  11. A comparative ecological risk assessment of Orimulsion and Fuel Oil No. 6 in the coastal marine environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harwell, M.; Ault, J.; Gentile, J.

    1995-12-31

    The conduct of comparative ecological risk assessments (CERA) resulting from the release of anthropogenic stressors into coastal marine environments requires theoretical and methodological innovations to integrate contaminant exposure with populations at risk over time and space scales. Consequently, predicted risks must be scaled to allow comparisons of relative ecological impacts in three physical dimensions plus time. This study was designed to compare the risks from hypothetical spills of Orimulsion and Fuel Oil No. 6 into the Tampa Bay ecosystem. The CERA framework used in this study integrates numerical hydrodynamic and transport-and-fate, toxicological, and biological models with extensive spatially explicit databases that describe the distributions of critical species and habitats. The presentation of the comparative ecological risks is facilitated by visualization and GIS techniques to allow realistic comparisons of toxicant exposures and their co-occurrence with key biological resources over time and across the seascape. A scaling methodology is presented that uses toxicological data as scalars for graphically representing the ecological effects associated with exposure levels for each scenario simulation. The CERA model serves as an interactive tool for assessing the relative ecological consequences of a range of potential exposure scenarios and for forecasting the longer-term productivity of critical biological resources and habitats that are key to ecosystem structure and function.

  12. Integration and Typologies of Vulnerability to Climate Change: A Case Study from Australian Wheat Sheep Zones.

    PubMed

    Huai, Jianjun

    2016-09-27

    Although the integrated indicator methods have become popular for assessing vulnerability to climate change, their proliferation has introduced a confusing array of scales and indicators that cause a science-policy gap. I argue for a clear adaptation pathway in an "integrative typology" of regional vulnerability that matches appropriate scales, optimal measurements and adaptive strategies in a six-dimensional and multi-level analysis framework of integration and typology inspired by the "5W1H" questions: "Who is concerned about how to adapt to the vulnerability of what to what in some place (where) at some time (when)?" Using the case of the vulnerability of wheat, barley and oats to drought in Australian wheat sheep zones during 1978-1999, I answer the "5W1H" questions through establishing the "six typologies" framework. I then optimize the measurement of vulnerability through contrasting twelve kinds of vulnerability scores with the divergence of crop yields from their regional mean. Through identifying the socioeconomic constraints, I propose seven generic types of crop-drought vulnerability and local adaptive strategy. Our results illustrate that the process of assessing vulnerability and selecting adaptations can be enhanced using a combination of integration, optimization and typology, which emphasize dynamic transitions and transformations between integration and typology.

  13. Understanding relationships among ecosystem services across spatial scales and over time

    NASA Astrophysics Data System (ADS)

    Qiu, Jiangxiao; Carpenter, Stephen R.; Booth, Eric G.; Motew, Melissa; Zipper, Samuel C.; Kucharik, Christopher J.; Loheide, Steven P., II; Turner, Monica G.

    2018-05-01

    Sustaining ecosystem services (ES), mitigating their tradeoffs and avoiding unfavorable future trajectories are pressing social-environmental challenges that require enhanced understanding of their relationships across scales. Current knowledge of ES relationships is often constrained to one spatial scale or one snapshot in time. In this research, we integrated biophysical modeling with future scenarios to examine changes in relationships among eight ES indicators from 2001–2070 across three spatial scales—grid cell, subwatershed, and watershed. We focused on the Yahara Watershed (Wisconsin) in the Midwestern United States—an exemplar for many urbanizing agricultural landscapes. Relationships among ES indicators changed over time; some relationships exhibited high interannual variations (e.g. drainage vs. food production, nitrate leaching vs. net ecosystem exchange) and even reversed signs over time (e.g. perennial grass production vs. phosphorus yield). Robust patterns were detected for relationships among some regulating services (e.g. soil retention vs. water quality) across three spatial scales, but other relationships lacked simple scaling rules. This was especially true for relationships of food production vs. water quality, and drainage vs. number of days with runoff >10 mm, which differed substantially across spatial scales. Our results also showed that local tradeoffs between food production and water quality do not necessarily scale up, so reducing local tradeoffs may be insufficient to mitigate such tradeoffs at the watershed scale. We further synthesized these cross-scale patterns into a typology of factors that could drive changes in ES relationships across scales: (1) effects of biophysical connections, (2) effects of dominant drivers, (3) combined effects of biophysical linkages and dominant drivers, and (4) artificial scale effects, and concluded with management implications. 
Our study highlights the importance of taking a dynamic perspective and accounting for spatial scales in monitoring and management to sustain future ES.

  14. Integrating field plots, lidar, and landsat time series to provide temporally consistent annual estimates of biomass from 1990 to present

    Treesearch

    Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang

    2015-01-01

    We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...

  15. Using Landsat Time-Series and LiDAR to Inform Aboveground Forest Biomass Baselines in Northern Minnesota, USA

    Treesearch

    Ram K. Deo; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Michael J. Falkowski; Warren B. Cohen

    2017-01-01

    The publicly accessible archive of Landsat imagery and increasing regional-scale LiDAR acquisitions offer an opportunity to periodically estimate aboveground forest biomass (AGB) from 1990 to the present to align with the reporting needs of National Greenhouse Gas Inventories (NGHGIs). This study integrated Landsat time-series data, a state-wide LiDAR dataset, and a recent...

  16. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
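The binning idea can be sketched as follows. This is an illustrative sketch, not the author's model code: the saturating stress function and the Beta-distributed sub-grid wetness field are both assumptions, chosen only to show why evaluating a nonlinear function at the grid mean differs from integrating it over a binned pdf.

```python
import numpy as np

def stress(w):
    """Hypothetical saturating wetness-stress function (illustrative)."""
    return w ** 2 / (w ** 2 + 0.2 ** 2)

rng = np.random.default_rng(1)
w_local = rng.beta(2.0, 5.0, size=100_000)   # sub-grid soil wetness field

# naive: evaluate the nonlinear function at the grid-average wetness
naive = stress(w_local.mean())

# binned pdf: histogram the sub-grid wetness into 10 bins, evaluate the
# function at bin centers, and weight by bin probability
counts, edges = np.histogram(w_local, bins=10, range=(0.0, 1.0))
centers = 0.5 * (edges[:-1] + edges[1:])
p = counts / counts.sum()
binned = np.sum(p * stress(centers))

# reference: the true area-average of the local fluxes
exact = stress(w_local).mean()
print(f"naive {naive:.3f}  binned {binned:.3f}  exact {exact:.3f}")
```

With 10 bins the integrated value tracks the true area average closely, while the single grid-mean evaluation carries a systematic bias, consistent with the accuracy claim in the abstract.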

  17. Addressing challenges in scaling up TB and HIV treatment integration in rural primary healthcare clinics in South Africa (SUTHI): a cluster randomized controlled trial protocol.

    PubMed

    Naidoo, Kogieleum; Gengiah, Santhanalakshmi; Yende-Zuma, Nonhlanhla; Padayatchi, Nesri; Barker, Pierre; Nunn, Andrew; Subrayen, Priashni; Abdool Karim, Salim S

    2017-11-13

    A large and compelling clinical evidence base has shown that integrated TB and HIV services lead to reductions in human immunodeficiency virus (HIV)- and tuberculosis (TB)-associated mortality and morbidity. Despite official policies and guidelines recommending TB and HIV care integration, its poor implementation has resulted in TB and HIV remaining the commonest causes of death in several countries in sub-Saharan Africa, including South Africa. This study aims to reduce mortality due to TB-HIV co-infection through a quality improvement strategy for scaling up of TB and HIV treatment integration in rural primary healthcare clinics in South Africa. The study is designed as an open-label cluster randomized controlled trial. Sixteen clinic supervisors who oversee 40 primary health care (PHC) clinics in two rural districts of KwaZulu-Natal, South Africa will be randomized to either the control group (provision of standard government guidance for TB-HIV integration) or the intervention group (provision of standard government guidance with active enhancement of TB-HIV care integration through a quality improvement approach). The primary outcome is all-cause mortality among TB-HIV patients. Secondary outcomes include time to antiretroviral therapy (ART) initiation among TB-HIV co-infected patients, as well as TB and HIV treatment outcomes at 12 months. In addition, factors that may affect the intervention, such as conditions in the clinic and staff availability, will be closely monitored and documented. This study has the potential to address the gap between the establishment of TB-HIV care integration policies and guidelines and their implementation in the provision of integrated care in PHC clinics. 
If successful, an evidence-based intervention comprising change ideas, tools, and approaches for quality improvement could inform the future rapid scale up, implementation, and sustainability of improved TB-HIV integration across sub-Sahara Africa and other resource-constrained settings. Clinicaltrials.gov, NCT02654613 . Registered 01 June 2015.

  18. On the effects of surrogacy of energy dissipation in determining the intermittency exponent in fully developed turbulence

    NASA Astrophysics Data System (ADS)

    Cleve, J.; Greiner, M.; Sreenivasan, K. R.

    2003-03-01

    The two-point correlation function of the energy dissipation, obtained from a one-point time record of an atmospheric boundary layer, reveals a rigorous power law scaling with intermittency exponent μ approx 0.20 over almost the entire inertial range of scales. However, for the related integral moment, the power law scaling is restricted to the upper part of the inertial range only. This observation is explained in terms of the operational surrogacy of the construction of energy dissipation, which influences the behaviour of the correlation function for small separation distances.
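Extracting an intermittency exponent from a two-point correlation amounts to a log-log power-law fit over the scaling range. A minimal sketch on synthetic data (the value μ = 0.20 comes from the abstract; the correlation values and their 2% scatter are fabricated for illustration, not the atmospheric record):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true = 0.20
r = np.logspace(0, 3, 40)          # separations spanning the scaling range
corr = r ** (-mu_true) * np.exp(rng.normal(0.0, 0.02, r.size))  # 2% scatter

# the intermittency exponent is minus the slope of the log-log fit
slope, intercept = np.polyfit(np.log(r), np.log(corr), 1)
mu_est = -slope
print(f"estimated intermittency exponent: {mu_est:.3f}")
```

The abstract's point is precisely that this fit is only trustworthy where the power law actually holds: for the surrogate integral moment, restricting the fit range to the upper inertial range would change the estimate.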

  19. Interactive graphical computer-aided design system

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1975-01-01

    System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.

  20. Power Grid Data Analysis with R and Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin

    This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.

  1. Analysis of DNA Sequences by an Optical Time-Integrating Correlator: Proposal

    DTIC Science & Technology

    1991-11-01

    Table-of-contents excerpts: Current Technology; 3.0 Time-Integrating Correlator; 4.0 Representations of the DNA Bases; 5.0 DNA Analysis Strategy; 6.0 Strategy for Coarse... Figure excerpts: (1) correlation peak formed by the AxB term and (2) pedestal formed by the A + B terms; Figure 4: Short representations of the DNA bases where each...linear scale. Table 1: Short representations of the DNA bases where each base is represented by a 7-bit-long pseudorandom

  2. Inertial-Range Reconnection in Magnetohydrodynamic Turbulence and in the Solar Wind.

    PubMed

    Lalescu, Cristian C; Shi, Yi-Kang; Eyink, Gregory L; Drivas, Theodore D; Vishniac, Ethan T; Lazarian, Alexander

    2015-07-10

    In situ spacecraft data on the solar wind show events identified as magnetic reconnection with wide outflows and extended "X lines," 10^3–10^4 times ion scales. To understand the role of turbulence at these scales, we make a case study of an inertial-range reconnection event in a magnetohydrodynamic simulation. We observe stochastic wandering of field lines in space, breakdown of standard magnetic flux freezing due to Richardson dispersion, and a broadened reconnection zone containing many current sheets. The coarse-grain magnetic geometry is like large-scale reconnection in the solar wind, however, with a hyperbolic flux tube or apparent X line extending over integral length scales.

  3. Do foreign exchange and equity markets co-move in Latin American region? Detrended cross-correlation approach

    NASA Astrophysics Data System (ADS)

    Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.

    2016-11-01

    This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. To this end, we analyzed monthly time series for Latin American countries over the period from 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) to detect co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rate and stock price for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina have positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated at shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rate and stock prices, which were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology for measuring cross-correlation when dealing with non-stationarity of time series. Third, most previous studies employed one or two time scales using co-integration and vector autoregressive approaches; not much is known about co-movements at varying time scales between foreign exchange and stock markets, and the ρDCCA coefficient helps fill this gap.
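The ρDCCA coefficient can be sketched as follows (an illustrative implementation with non-overlapping boxes and linear detrending, not the authors' code; the two synthetic series sharing a common component are an assumption for the demonstration):

```python
import numpy as np

def rho_dcca(x, y, s):
    """Detrended cross-correlation coefficient of two series at time
    scale s: integrate each series into a profile, remove a local
    linear trend in non-overlapping boxes of length s, and normalize
    the detrended covariance by the two detrended variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    t = np.arange(s)
    f_xy = f_xx = f_yy = 0.0
    for k in range(len(X) // s):
        seg = slice(k * s, (k + 1) * s)
        rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)
        ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
        f_xy += (rx * ry).mean()
        f_xx += (rx * rx).mean()
        f_yy += (ry * ry).mean()
    return f_xy / np.sqrt(f_xx * f_yy)

# example: two noisy series sharing a common component show a positive
# cross-correlation, here evaluated at one scale
rng = np.random.default_rng(3)
common = rng.normal(size=5000)
x = common + 0.5 * rng.normal(size=5000)
y = common + 0.5 * rng.normal(size=5000)
print(f"rho_DCCA at scale 16: {rho_dcca(x, y, 16):.2f}")
```

Sweeping s over a range of scales, as the paper does for exchange rates and stock prices, yields the scale-dependent correlation profile.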

  4. A user-defined data type for the storage of time series data allowing efficient similarity screening.

    PubMed

    Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor

    2012-07-16

    The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
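The SAX symbolization underlying the access method can be sketched briefly (an illustrative Python version, not the PostgreSQL data type itself; the breakpoints are hardcoded for a 4-letter alphabet):

```python
import numpy as np

def sax(series, n_segments, breakpoints=(-0.674, 0.0, 0.674),
        alphabet="abcd"):
    """SAX word for a series: z-normalize, reduce to n_segments means
    via piecewise aggregate approximation (PAA), then discretize the
    means against equiprobable N(0,1) breakpoints (here the quartiles
    of the standard normal, for a 4-letter alphabet)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, paa))

# a rising ramp maps to a monotonically non-decreasing word
print(sax(np.arange(32.0), 8))  # -> aabbccdd
```

Because similar series map to similar short words, the words can be indexed with ordinary string machinery, which is what makes a database access method out of the scheme.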

  5. ClearFuels-Rentech Integrated Biorefinery Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, Joshua

    The project Final Report describes validation of the performance of the integration of two technologies that were each proven individually at pilot scale and were demonstrated together as a pilot-scale integrated biorefinery. The integrated technologies were a larger-scale version of ClearFuels' (CF) advanced flexible biomass-to-syngas thermochemical high efficiency hydrothermal reformer (HEHTR) and Rentech's (RTK) existing synthetic gas-to-liquids (GTL) technology.

  6. Crops In Silico: Generating Virtual Crops Using an Integrative and Multi-scale Modeling Platform.

    PubMed

    Marshall-Colon, Amy; Long, Stephen P; Allen, Douglas K; Allen, Gabrielle; Beard, Daniel A; Benes, Bedrich; von Caemmerer, Susanne; Christensen, A J; Cox, Donna J; Hart, John C; Hirst, Peter M; Kannan, Kavya; Katz, Daniel S; Lynch, Jonathan P; Millar, Andrew J; Panneerselvam, Balaji; Price, Nathan D; Prusinkiewicz, Przemyslaw; Raila, David; Shekar, Rachel G; Shrivastava, Stuti; Shukla, Diwakar; Srinivasan, Venkatraman; Stitt, Mark; Turk, Matthew J; Voit, Eberhard O; Wang, Yu; Yin, Xinyou; Zhu, Xin-Guang

    2017-01-01

    Multi-scale models can facilitate whole plant simulations by linking gene networks, protein synthesis, metabolic pathways, physiology, and growth. Whole plant models can be further integrated with ecosystem, weather, and climate models to predict how various interactions respond to environmental perturbations. These models have the potential to fill in missing mechanistic details and generate new hypotheses to prioritize directed engineering efforts. Outcomes will potentially accelerate improvement of crop yield, sustainability, and increase future food security. It is time for a paradigm shift in plant modeling, from largely isolated efforts to a connected community that takes advantage of advances in high performance computing and mechanistic understanding of plant processes. Tools for guiding future crop breeding and engineering, understanding the implications of discoveries at the molecular level for whole plant behavior, and improved prediction of plant and ecosystem responses to the environment are urgently needed. The purpose of this perspective is to introduce Crops in silico (cropsinsilico.org), an integrative and multi-scale modeling platform, as one solution that combines isolated modeling efforts toward the generation of virtual crops, which is open and accessible to the entire plant biology community. The major challenges involved both in the development and deployment of a shared, multi-scale modeling platform, which are summarized in this prospectus, were recently identified during the first Crops in silico Symposium and Workshop.

  8. [Development of a scale to measure Korean ego-integrity in older adults].

    PubMed

    Chang, Sung Ok; Kong, Eun Sook; Kim, Kwuy Bun; Kim, Nam Cho; Kim, Ju Hee; Kim, Chun Gill; Kim, Hee Kyung; Song, Mi Soon; Ahn, Soo Yeon; Lee, Kyung Ja; Lee, Young Whee; Chon, Si Ja; Cho, Nam Ok; Cho, Myung Ok; Choi, Kyung Sook

    2007-04-01

    Ego-integrity in older adults is the central concept related to quality of life in later life. Therefore, for effective interventions to enhance the quality of later life, a scale to measure ego-integrity in older adults is necessary. This study was carried out to develop a scale to measure ego-integrity in older adults. This study utilized cronbach's alpha in analyzing the reliability of the collected data and expert group, and factor analysis and item analysis to analyze validity. Seventeen items were selected from a total of 21 items. Cronbach's alpha coefficient for internal consistency was .88 for the 17 items of ego-integrity in the older adults scale. Three factors evolved by factor analysis, which explained 50.71% of the total variance. The scale for measuring ego-integrity in Korean older adults in this study was evaluated as a tool with a high degree of reliability and validity.

  9. The Fermi-Pasta-Ulam Problem and Its Underlying Integrable Dynamics

    NASA Astrophysics Data System (ADS)

    Benettin, G.; Christodoulidi, H.; Ponno, A.

    2013-07-01

    This paper is devoted to a numerical study of the familiar α + β FPU model. Precisely, we here discuss, revisit and combine together two main ideas on the subject: (i) In the system, at small specific energy ε = E/N, two well separated time-scales are present: in the former one a kind of metastable state is produced, while in the second, much larger one, such an intermediate state evolves and reaches statistical equilibrium. (ii) FPU should be interpreted as a perturbed Toda model, rather than (as is typical) as a linear model perturbed by nonlinear terms. In the view we here present and support, the former time scale is the one in which FPU is essentially integrable, its dynamics being almost indistinguishable from the Toda dynamics: the Toda actions stay constant for FPU too (while the usual linear normal modes do not), the angles fill their almost invariant torus, and nothing else happens. The second time scale is instead the one in which the Toda actions significantly evolve, and statistical equilibrium is possible. We study both FPU-like initial states, in which only a few degrees of freedom are excited, and generic initial states extracted randomly from an (approximated) microcanonical distribution. The study is based on a close comparison between the behavior of FPU and Toda in various situations. The main technical novelty is the study of the correlation functions of the Toda constants of motion in the FPU dynamics; such a study allows us to provide a good definition of the equilibrium time τ, i.e. of the second time scale, for generic initial data. Our investigation shows that τ is stable in the thermodynamic limit, i.e. the limit of large N at fixed ε, and that by reducing ε (ideally, the temperature), τ approximately grows following a power law τ ~ ε^(-a), with a = 5/2.

  10. Experimental investigation of 4 K pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Gao, J. L.; Matsubara, Y.

    During the last decades superconducting electronics has been the most prominent area of research for small scale applications of superconductivity. It has experienced quite a stormy development, from individual low frequency devices to devices with high integration density and picosecond switching time. Nowadays it offers small losses, high speed and the potential for large scale integration and is superior to semiconducting devices in many ways — apart from the need for cooling by liquid helium for devices based on classical superconductors like niobium, or cooling by liquid nitrogen or cryocoolers (40 K to 77 K) for high-Tc superconductors like YBa2Cu3O7. This article gives a short overview over the current state of the art on typical devices out of the main application areas of superconducting electronics.

  11. Integrating Systems Health Management with Adaptive Controls for a Utility-Scale Wind Turbine

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Goebel, Kai; Trinh, Khanh V.; Balas, Mark J.; Frost, Alan M.

    2011-01-01

    Increasing turbine up-time and reducing maintenance costs are key technology drivers for wind turbine operators. Components within wind turbines are subject to considerable stresses due to unpredictable environmental conditions resulting from rapidly changing local dynamics. Systems health management aims to assess the state of health of components within a wind turbine, to estimate remaining life, and to aid in autonomous decision-making that minimizes damage. Advanced adaptive controls can enable optimized operations and, at the same time, provide the enabling technology for systems health management goals. The work reported herein explores the integration of condition monitoring of wind turbine blades with contingency management and adaptive controls. Results are demonstrated using a high-fidelity simulator of a utility-scale wind turbine.

  12. Sediment dynamics over multiple time scales in Dyke Marsh Preserve (Potomac River, VA)

    NASA Astrophysics Data System (ADS)

    Palinkas, C. M.; Walters, D.

    2010-12-01

    Tidal freshwater marshes are critical components of fluvial and estuarine ecosystems, yet sediment dynamics within them have not received as much attention as their saltwater counterparts. This study examines sedimentation in Dyke Marsh Preserve, located on the Potomac River (VA), focusing on understanding the spatial variability present over multiple time scales. Bimonthly sediment data were collected using ceramic tiles, and seasonal- and decadal-scale sedimentation was determined via 7Be (half-life 53.3 days) and 210Pb (half-life 22.3 years), respectively. Results were also compared to SET data collected by the National Park Service since 2006. Preliminary data indicate that sites at lower elevations have higher sedimentation rates, likely related to their proximity to the sediment source. Mass accumulation rates generally decreased with increasing time scale, such that the seasonal rates were greater than the SET-derived accretion rates, which were in turn greater than the decadal-scale rates. However, the bimonthly rates were the lowest observed, probably because the sampling period (May-October 2010) did not include the main depositional period of the year, which would be integrated by the other techniques.

  13. Majority logic gate for 3D magnetic computing.

    PubMed

    Eichwald, Irina; Breitkreutz, Stephan; Ziemys, Grazvydas; Csaba, György; Porod, Wolfgang; Becherer, Markus

    2014-08-22

    For decades now, microelectronic circuits have been exclusively built from transistors. An alternative way is to use nano-scaled magnets for the realization of digital circuits. This technology, known as nanomagnetic logic (NML), may offer significant improvements in terms of power consumption and integration densities. Further advantages of NML are: non-volatility, radiation hardness, and operation at room temperature. Recent research focuses on the three-dimensional (3D) integration of nanomagnets. Here we show, for the first time, a 3D programmable magnetic logic gate. Its computing operation is based on physically field-interacting nanometer-scaled magnets arranged in a 3D manner. The magnets possess a bistable magnetization state representing the Boolean logic states '0' and '1.' Magneto-optical and magnetic force microscopy measurements prove the correct operation of the gate over many computing cycles. Furthermore, micromagnetic simulations confirm the correct functionality of the gate even for a size in the nanometer-domain. The presented device demonstrates the potential of NML for three-dimensional digital computing, enabling the highest integration densities.

  14. Hybrid integration of III-V semiconductor lasers on silicon waveguides using optofluidic microbubble manipulation

    PubMed Central

    Jung, Youngho; Shim, Jaeho; Kwon, Kyungmook; You, Jong-Bum; Choi, Kyunghan; Yu, Kyoungsik

    2016-01-01

    Optofluidic manipulation mechanisms have been successfully applied to micro/nano-scale assembly and handling applications in biophysics, electronics, and photonics. Here, we extend the laser-based optofluidic microbubble manipulation technique to achieve hybrid integration of compound semiconductor microdisk lasers on the silicon photonic circuit platform. The microscale compound semiconductor block trapped on the microbubble surface can be precisely assembled on a desired position using photothermocapillary convective flows induced by focused laser beam illumination. Strong light absorption within the micro-scale compound semiconductor object allows real-time and on-demand microbubble generation. After the assembly process, we verify that electromagnetic radiation from the optically-pumped InGaAsP microdisk laser can be efficiently coupled to the single-mode silicon waveguide through vertical evanescent coupling. Our simple and accurate microbubble-based manipulation technique may provide a new pathway for realizing high precision fluidic assembly schemes for heterogeneously integrated photonic/electronic platforms as well as microelectromechanical systems. PMID:27431769

  15. Highly Uniform Carbon Nanotube Field-Effect Transistors and Medium Scale Integrated Circuits.

    PubMed

    Chen, Bingyan; Zhang, Panpan; Ding, Li; Han, Jie; Qiu, Song; Li, Qingwen; Zhang, Zhiyong; Peng, Lian-Mao

    2016-08-10

    Top-gated p-type field-effect transistors (FETs) have been fabricated in batches from carbon nanotube (CNT) network thin films prepared from CNT solution; they show high yield and highly uniform performance, with a small threshold-voltage distribution (standard deviation 34 mV). Building on these FETs, various logic and arithmetic gates, shifters, and D-latch circuits were designed and demonstrated with rail-to-rail output. In particular, a 4-bit adder consisting of 140 p-type CNT FETs was demonstrated with higher packing density and lower supply voltage than other published integrated circuits based on CNT films, which indicates that CNT-based integrated circuits can reach medium scale. In addition, a 2-bit multiplier has been realized for the first time. Benefiting from the high uniformity and suitable threshold voltage of the CNT FETs, all of the fabricated circuits can be driven by a single voltage as small as 2 V.

  16. Health care delivery update: Part 1. Trends: less and more integration, bundled services, rethinking IPAs.

    PubMed

    Ellwood, P M

    1988-01-01

    Vertical integration of national medical firms that contract with physicians has slowed dramatically. At the same time, several top-level group practices, taking advantage of reputations for excellence, are integrating vertically on a national or regional scale. A shift from buying well to actually managing medical care will separate the "prospective supermeds" that learned to collaborate with physicians from those that are attempting to manipulate them. In view of the budget deficit and the needs for long-term care, Congress is likely to espouse more drastic Part B cost-cutting measures such as a physician PPO or an indexed relative-value scale. An emerging feature in health care is the growing variety of prospective payment arrangements in which the price for various combinations of services is set in advance. To be truly competitive, medical care organizations will have to be more selective, choosing physicians because they are cooperative and economical and because they are capable practitioners.

  17. Challenges in scaling NLO generators to leadership computers

    NASA Astrophysics Data System (ADS)

    Benjamin, D.; Childers, JT; Hoeche, S.; LeCompte, T.; Uram, T.

    2017-10-01

    Exascale computing resources are roughly a decade away and will be capable of 100 times more computing than current supercomputers. In the last year, Energy Frontier experiments crossed a milestone of 100 million core-hours used at the Argonne Leadership Computing Facility, Oak Ridge Leadership Computing Facility, and NERSC. The Fortran-based leading-order parton generator Alpgen was successfully scaled to millions of threads to achieve this level of usage on Mira. Sherpa and MadGraph are next-to-leading-order generators used heavily by LHC experiments for simulation. Integration times for high-multiplicity or rare processes can take a week or more on standard Grid machines, even when using all 16 cores. We describe our ongoing work to scale the Sherpa generator to thousands of threads on leadership-class machines and to reduce run-times to less than a day. This work allows the experiments to leverage large-scale parallel supercomputers for event generation today, freeing tens of millions of grid hours for other work, and paving the way for future applications (simulation, reconstruction) on these and future supercomputers.

  18. Hierarchical coarse-graining model for photosystem II including electron and excitation-energy transfer processes.

    PubMed

    Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi

    2014-03-01

    We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
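    The reduction idea described above can be sketched on a toy three-species network (an invented example, not the paper's photosystem II model; all species and rate constants below are illustrative assumptions): with A <-> B fast and B -> C slow, the quantity s = A + B is conserved by the fast process and obeys the reduced slow equation ds/dt = -ks * B_eq(s), where B_eq(s) = s * kf / (kf + kb) is the fast-equilibrium partitioning.

```python
# Toy sketch of time-scale separation and coarse-graining (illustrative,
# not the paper's model): A <-> B (fast rates kf, kb), B -> C (slow rate ks).

def full_model(a, b, c, kf, kb, ks, dt, steps):
    """Finely integrated rate equations (explicit Euler)."""
    for _ in range(steps):
        da = -kf * a + kb * b
        db = kf * a - kb * b - ks * b
        dc = ks * b
        a += dt * da; b += dt * db; c += dt * dc
    return a, b, c

def reduced_model(s, kf, kb, ks, dt, steps):
    """Coarse-grained slow variable s = A + B with fast equilibrium assumed."""
    frac = kf / (kf + kb)          # fraction of s sitting in B at fast equilibrium
    for _ in range(steps):
        s += dt * (-ks * frac * s)
    return s

kf, kb, ks = 100.0, 50.0, 0.1      # fast rates >> slow rate
a, b, c = full_model(1.0, 0.0, 0.0, kf, kb, ks, dt=1e-4, steps=200000)
s = reduced_model(1.0, kf, kb, ks, dt=1e-4, steps=200000)
print(abs((a + b) - s))            # coarse-grained s tracks A + B closely
```

The same trick applied recursively, scale by scale, is the hierarchical reduction the abstract describes.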

  19. Transport and Lagrangian Statistics in Rotating Stratified Turbulence

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. L.

    2015-12-01

    Transport plays a crucial role in geophysical flows, both in the atmosphere and in the ocean. Transport in such flows is ultimately controlled by small-scale turbulence, although the large scales are in geostrophic balance between pressure gradient, gravity and Coriolis forces. As a result of the seemingly random nature of the flow, single particles are dispersed by the flow, and on time scales significantly longer than the eddy turn-over time they undergo a diffusive motion whose diffusion coefficient is the integral of the velocity correlation function. On intermediate time scales, in homogeneous, isotropic turbulence (HIT), the separation between particle pairs has been argued to grow with time according to the Richardson law: <(Δx)^2(t)> ~ t^3, with a proportionality constant that depends on the initial particle separation. The description of the phenomena associated with the dispersion of single particles, or of particle pairs, ultimately rests on relatively simple statistical properties of the flow velocity transporting the particles, in particular on its temporal correlation function. In this work, we investigate particle dispersion in the anisotropic case of rotating stratified turbulence, examining whether the dependence on initial particle separation differs from HIT, particularly in the presence of an inverse cascade.
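    The diffusion coefficient mentioned above is the time integral of the velocity correlation function (a Green-Kubo relation). As a minimal numerical check, assume an exponentially correlated velocity, <v(0)v(t)> = σ² exp(-t/τc) (an illustrative choice, not the paper's flow); the integral then equals σ²τc, which the sketch below reproduces by simple quadrature.

```python
import math

# Illustrative check of D = integral_0^inf <v(0)v(t)> dt for an
# exponentially correlated velocity; the analytic answer is sigma2 * tau_c.
sigma2, tau_c = 0.5, 2.0
dt, T = 1e-3, 40.0                      # integrate far past tau_c
n = int(T / dt)

corr = [sigma2 * math.exp(-i * dt / tau_c) for i in range(n + 1)]
# trapezoidal rule for the time integral of the correlation function
D = dt * (0.5 * corr[0] + sum(corr[1:-1]) + 0.5 * corr[-1])
print(D)                                # close to sigma2 * tau_c = 1.0
```

On intermediate scales the pair-separation statistics instead follow the Richardson t^3 growth quoted in the abstract; the diffusive result above applies only to single particles at long times.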

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales, are enabling improved accuracy and fidelity in the simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  1. DEMONSTRATION OF A MULTI-SCALE INTEGRATED MONITORING AND ASSESSMENT IN NY/NJ HARBOR

    EPA Science Inventory

    The Clean Water Act (CWA) requires states and tribes to assess the overall quality of their waters (Sec 305(b)), determine whether that quality is changing over time, identify problem areas and management actions necessary to resolve those problems, and evaluate the effectiveness...

  2. An integrative view of phylogenetic comparative methods: connections to population genetics, community ecology, and paleobiology.

    PubMed

    Pennell, Matthew W; Harmon, Luke J

    2013-06-01

    Recent innovations in phylogenetic comparative methods (PCMs) have spurred a renaissance of research into the causes and consequences of large-scale patterns of biodiversity. In this paper, we review these advances. We also highlight the potential of comparative methods to integrate across fields and focus on three examples where such integration might be particularly valuable: quantitative genetics, community ecology, and paleobiology. We argue that PCMs will continue to be a key set of tools in evolutionary biology, shedding new light on how evolutionary processes have shaped patterns of biodiversity through deep time. © 2013 New York Academy of Sciences.

  3. Scaling and scale invariance of conservation laws in Reynolds transport theorem framework

    NASA Astrophysics Data System (ADS)

    Haltas, Ismail; Ulusoy, Suleyman

    2015-07-01

    Scale invariance is the case where the solution of a physical process at a specified time-space scale can be linearly related to the solution of the process at another time-space scale. Recent studies investigated the scale invariance conditions of hydrodynamic processes by applying one-parameter Lie scaling transformations to the governing equations of the processes. Scale invariance of a physical process is usually achieved under certain conditions on the scaling ratios of the variables and parameters involved in the process. The foundational axioms of hydrodynamics are the conservation laws, namely, conservation of mass, conservation of linear momentum, and conservation of energy from continuum mechanics. They are formulated using the Reynolds transport theorem. Conventionally, the Reynolds transport theorem formulates the conservation equations in integral form. Yet, the differential form of the conservation equations can also be derived for an infinitesimal control volume. In the formulation of the governing equation of a process, one or more of the conservation laws and, sometimes, a constitutive relation are combined together. Differential forms of the conservation equations are used in the governing partial differential equations of the processes. Therefore, differential conservation equations constitute the fundamentals of the governing equations of the hydrodynamic processes. Applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework, instead of applying it to the governing partial differential equations, may lead to more fundamental conclusions on the scaling and scale invariance of the hydrodynamic processes. This study will investigate the scaling behavior and scale invariance conditions of the hydrodynamic processes by applying the one-parameter Lie scaling transformation to the conservation laws in the Reynolds transport theorem framework.
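    As a concrete illustration of the one-parameter Lie scaling described above (a textbook example, not taken from this abstract), consider the 1-D differential continuity equation under the scaling x* = λ^α x, t* = λ^β t, u* = λ^γ u, ρ* = λ^δ ρ. Substituting the scaled variables gives

```latex
\frac{\partial \rho^{*}}{\partial t^{*}}
  + \frac{\partial \left(\rho^{*} u^{*}\right)}{\partial x^{*}}
  = \lambda^{\delta-\beta}\,\frac{\partial \rho}{\partial t}
  + \lambda^{\delta+\gamma-\alpha}\,\frac{\partial \left(\rho u\right)}{\partial x}
```

The equation is scale invariant precisely when the two exponents agree, δ - β = δ + γ - α, i.e. γ = α - β: the velocity scaling ratio must equal the length ratio divided by the time ratio. Conditions of this type on the scaling ratios are what the study derives systematically for the full set of conservation laws.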

  4. Association of schizophrenia onset age and white matter integrity with treatment effect of D-cycloserine: a randomized placebo-controlled double-blind crossover study.

    PubMed

    Takiguchi, Kazuo; Uezato, Akihito; Itasaka, Michio; Atsuta, Hidenori; Narushima, Kenji; Yamamoto, Naoki; Kurumaji, Akeo; Tomita, Makoto; Oshima, Kazunari; Shoda, Kosaku; Tamaru, Mai; Nakataki, Masahito; Okazaki, Mitsutoshi; Ishiwata, Sayuri; Ishiwata, Yasuyoshi; Yasuhara, Masato; Arima, Kunimasa; Ohmori, Tetsuro; Nishikawa, Toru

    2017-07-12

    It has been reported that drugs which promote N-methyl-D-aspartate-type glutamate receptor function by stimulating the glycine modulatory site of the receptor improve negative symptoms and cognitive dysfunction in schizophrenia patients being treated with antipsychotic drugs. We performed a placebo-controlled double-blind crossover study involving 41 schizophrenia patients in which D-cycloserine 50 mg/day was added on, and the influence of onset age and the association with white matter integrity on MR diffusion tensor imaging were investigated for the first time. The patients were evaluated using the Positive and Negative Syndrome Scale (PANSS), Scale for the Assessment of Negative Symptoms (SANS), Brief Assessment of Cognition in Schizophrenia (BACS), and other scales. D-cycloserine did not improve positive or negative symptoms or cognitive dysfunction in schizophrenia. The investigation in consideration of onset age suggests that D-cycloserine may aggravate negative symptoms of early-onset schizophrenia. A better treatment effect of D-cycloserine on BACS was observed when the white matter integrity of the sagittal stratum/cingulum/fornix stria terminalis/genu of corpus callosum/external capsule was higher, and a better treatment effect on PANSS general psychopathology (PANSS-G) was observed when the white matter integrity of the splenium of corpus callosum was higher. In contrast, better treatment effects of D-cycloserine on PANSS-G and SANS-IV were observed when the white matter integrity of the (left) posterior thalamic radiation was lower. These results suggest that the response to D-cycloserine is influenced by onset age and white matter integrity. UMIN Clinical Trials Registry (number UMIN000000468). Registered 18 August 2006.

  5. Integrated modelling of crop production and nitrate leaching with the Daisy model.

    PubMed

    Manevski, Kiril; Børgesen, Christen D; Li, Xiaoxin; Andersen, Mathias N; Abrahamsen, Per; Hu, Chunsheng; Hansen, Søren

    2016-01-01

    An integrated modelling strategy was designed and applied to the Soil-Vegetation-Atmosphere Transfer model Daisy for simulation of crop production and nitrate leaching under a pedo-climatic and agronomic environment different from that of the model's original parameterisation. The points of significance and caution in the strategy are:
    • Model preparation should include detailed field data, owing to the high complexity of the soil and crop processes simulated with a process-based model, and should reflect the study objectives. Including interactions between parameters in a sensitivity analysis better accounts for the impacts of measured variables on the outputs.
    • Model evaluation on several independent data sets increases robustness, at least on coarser time scales such as month or year. It produces a valuable platform for adapting the model to new crops or for improving the existing parameter set. On the daily time scale, validation for highly dynamic variables such as soil water transport remains challenging.
    • Model application is demonstrated with relevance for scientists and regional managers.
    The integrated modelling strategy is applicable to other process-based models similar to Daisy. It is envisaged that the strategy establishes model capability as a useful research/decision-making tool, and that it increases knowledge transferability, reproducibility and traceability.

  6. Data assimilation experiments using the diffusive back and forth nudging for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Ruggiero, G. A.; Ourmières, Y.; Cosme, E.; Blum, J.; Auroux, D.; Verron, J.

    2014-07-01

    The Diffusive Back and Forth Nudging (DBFN) is an easy-to-implement iterative data assimilation method based on the well-known nudging method. It consists of a sequence of forward and backward model integrations within a given time window, both of them using a feedback term toward the observations. In the DBFN, the nudging asymptotic behavior is thus translated into an infinite number of iterations within a bounded time domain. In this method, the backward integration is carried out with what is called the backward model, which is basically the forward model with the sign of the time step reversed. To maintain numerical stability, the diffusion terms also have their sign reversed, giving a diffusive character to the algorithm. In this article, the ability of the DBFN to control a primitive equation ocean model is investigated. In this kind of model, non-resolved scales are modeled by diffusion operators which dissipate the energy that cascades from large to small scales. We therefore analyze the DBFN approximations and their consequences for the set-up of the data assimilation system. Our main result is that the DBFN may provide results comparable to those produced by a 4Dvar implementation, with a much simpler implementation and a shorter CPU time for convergence.
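    The forward/backward nudging sequence is simple to prototype. The sketch below applies it to a scalar toy model (the model, gain K and observation choices are illustrative assumptions, far simpler than NEMO, and there is no diffusion term to flip): the forward sweep adds a feedback K*(y_obs - x), the backward sweep flips the sign of the feedback and integrates in reverse, and iterating recovers the true initial state from a wrong first guess.

```python
import math

# Toy Back and Forth Nudging on dx/dt = -lam*x with perfect observations.
lam, K, dt, n = 0.5, 5.0, 0.01, 500
t = [k * dt for k in range(n + 1)]
x0_true = 2.0
obs = [x0_true * math.exp(-lam * tk) for tk in t]   # observations from truth

x0 = 0.0                                            # wrong initial guess
for _ in range(5):                                  # BFN iterations
    # forward sweep: dx/dt = -lam*x + K*(obs - x)
    x = x0
    for k in range(n):
        x += dt * (-lam * x + K * (obs[k] - x))
    # backward sweep: dx/dt = -lam*x - K*(obs - x), stepped in reverse time
    for k in range(n, 0, -1):
        x -= dt * (-lam * x - K * (obs[k] - x))
    x0 = x                                          # updated initial state

print(abs(x0 - x0_true))                            # small after a few sweeps
```

In the DBFN proper the backward model would additionally reverse the sign of the diffusion terms, which is what gives the scheme its diffusive character and its numerical stability in a primitive equation model.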

  7. Advancing parabolic operators in thermodynamic MHD models: Explicit super time-stepping versus implicit schemes with Krylov solvers

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.

    2017-05-01

    We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
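    The implicit side of the comparison is easy to sketch in miniature. The code below (an illustrative toy, not MAS or the authors' PCG/ILU0 setup) advances 1-D diffusion with backward Euler at a time-step roughly 100x the explicit stability limit h²/2, solving the SPD system each step with plain conjugate gradient; for this constant-diagonal matrix a point-Jacobi preconditioner would amount to a constant rescaling, so it is omitted.

```python
import math

# Backward Euler for u_t = u_xx on (0,1), Dirichlet BCs, solved with CG.
N = 64
h = 1.0 / (N + 1)
dt = 100 * h * h / 2              # far beyond the explicit limit h^2/2

def apply_A(u):
    """A = I + (dt/h^2) * tridiag(-1, 2, -1); SPD, Dirichlet boundaries."""
    r = dt / (h * h)
    out = []
    for i in range(N):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < N - 1 else 0.0
        out.append(u[i] + r * (2 * u[i] - left - right))
    return out

def cg(b, tol=1e-10, maxit=500):
    """Unpreconditioned conjugate gradient for A x = b."""
    x = [0.0] * N
    r = b[:]                      # residual for the zero initial guess
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# one smooth sine mode; it decays monotonically, never blowing up
u = [math.sin(math.pi * (i + 1) * h) for i in range(N)]
for _ in range(10):
    u = cg(u)
print(max(abs(v) for v in u))     # bounded decay despite the huge time-step
```

An explicit scheme at this dt would be violently unstable; super time-stepping methods such as RKL2 achieve the same kind of stability while remaining explicit, which is the trade-off the paper measures at scale.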

  8. The Plant Phenology Ontology: A New Informatics Resource for Large-Scale Integration of Plant Phenology Data.

    PubMed

    Stucky, Brian J; Guralnick, Rob; Deck, John; Denny, Ellen G; Bolmgren, Kjell; Walls, Ramona

    2018-01-01

    Plant phenology - the timing of plant life-cycle events, such as flowering or leafing out - plays a fundamental role in the functioning of terrestrial ecosystems, including human agricultural systems. Because plant phenology is often linked with climatic variables, there is widespread interest in developing a deeper understanding of global plant phenology patterns and trends. Although phenology data from around the world are currently available, truly global analyses of plant phenology have so far been difficult because the organizations producing large-scale phenology data are using non-standardized terminologies and metrics during data collection and data processing. To address this problem, we have developed the Plant Phenology Ontology (PPO). The PPO provides the standardized vocabulary and semantic framework that is needed for large-scale integration of heterogeneous plant phenology data. Here, we describe the PPO, and we also report preliminary results of using the PPO and a new data processing pipeline to build a large dataset of phenology information from North America and Europe.

  9. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  10. Anomalous diffusion and scaling in coupled stochastic processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bel, Golan; Nemenman, Ilya

    2009-01-01

    Inspired by problems in biochemical kinetics, we study statistical properties of an overdamped Langevin process whose friction coefficient depends on the state of a similar, unobserved process. Integrating out the latter, we derive the Fokker-Planck equation for the probability distribution of the former. This has the form of a diffusion equation with a time-dependent diffusion coefficient, resulting in anomalous diffusion. The diffusion exponent cannot be predicted using a simple scaling argument, and anomalous scaling appears as well. The diffusion exponent of the Weiss-Havlin comb model is derived as a special case, and the same exponent holds even for weakly coupled processes. We compare our theoretical predictions with numerical simulations and find excellent agreement. The findings caution against treating biochemical systems with unobserved dynamical degrees of freedom by means of a standard diffusive Langevin description.

  11. A 72 × 60 Angle-Sensitive SPAD Imaging Array for Lens-less FLIM.

    PubMed

    Lee, Changhyuk; Johnson, Ben; Jung, TaeSung; Molnar, Alyosha

    2016-09-02

    We present a 72 × 60, angle-sensitive single photon avalanche diode (A-SPAD) array for lens-less 3D fluorescence lifetime imaging. An A-SPAD pixel consists of (1) a SPAD to provide precise photon arrival times, operated in a time-resolved mode to avoid stimulus-induced saturation, and (2) diffraction gratings integrated on top of the SPAD to extract the incident angles of the incoming light. The combination enables mapping of fluorescent sources with different lifetimes in 3D space down to the micrometer scale. Furthermore, the chip presented herein integrates pixel-level counters to reduce the output data rate and to enable precise timing control. The array is implemented in standard 180 nm complementary metal-oxide-semiconductor (CMOS) technology and characterized without any post-processing.

  12. A 72 × 60 Angle-Sensitive SPAD Imaging Array for Lens-less FLIM

    PubMed Central

    Lee, Changhyuk; Johnson, Ben; Jung, TaeSung; Molnar, Alyosha

    2016-01-01

    We present a 72 × 60, angle-sensitive single photon avalanche diode (A-SPAD) array for lens-less 3D fluorescence lifetime imaging. An A-SPAD pixel consists of (1) a SPAD to provide precise photon arrival times, operated in a time-resolved mode to avoid stimulus-induced saturation, and (2) diffraction gratings integrated on top of the SPAD to extract the incident angles of the incoming light. The combination enables mapping of fluorescent sources with different lifetimes in 3D space down to the micrometer scale. Furthermore, the chip presented herein integrates pixel-level counters to reduce the output data rate and to enable precise timing control. The array is implemented in standard 180 nm complementary metal-oxide-semiconductor (CMOS) technology and characterized without any post-processing. PMID:27598170

  13. Chip-scale sensor system integration for portable health monitoring.

    PubMed

    Jokerst, Nan M; Brooke, Martin A; Cho, Sang-Yeon; Shang, Allan B

    2007-12-01

    The revolution in integrated circuits over the past 50 years has produced inexpensive computing and communications systems that are powerful and portable. The technologies for these integrated chip-scale sensing systems, which will be miniature, lightweight, and portable, are emerging with the integration of sensors with electronics, optical systems, micromachines, microfluidics, and chemical and biological materials (soft/wet material integration with traditional dry/hard semiconductor materials). Hence, we stand at a threshold for health monitoring technology that promises to provide wearable biochemical sensing systems that are comfortable, unobtrusive, wireless, and battery-operated, yet that continuously monitor health status and can transmit compressed data signals at regular intervals, or alarm conditions immediately. In this paper, we explore recent results in chip-scale sensor integration technology for health monitoring. The development of inexpensive chip-scale biochemical optical sensors, such as microresonators, that are customizable for high sensitivity and coupled with rapid prototyping will be discussed. Ground-breaking work in the integration of chip-scale optical systems to support these optical sensors will be highlighted, and the development of inexpensive Si complementary metal-oxide semiconductor circuitry (which makes up the vast majority of computational systems today) for signal processing and wireless communication, with local receivers that lie directly on the chip-scale sensor head itself, will be examined.

  14. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. To better understand hydrological changes, it is of crucial importance to determine how, and to what extent, trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant across time-scales (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach that integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector).
This approach consists of three steps: (1) decomposing large-scale climate and hydrological signals (SLP field, precipitation or streamflow) using discrete wavelet multiresolution analysis; (2) generating a statistical downscaling model per time-scale; (3) summing all scale-dependent models to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in both precipitation and streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation and streamflow on interannual to interdecadal time-scales could be interpreted in terms of influence of the Gulf-Stream oceanic front on atmospheric circulation.
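The scale-wise decomposition of step 1, and the exact additivity exploited in step 3, can be illustrated with a minimal Haar-style block-average decomposition. This is a pure-Python sketch with hypothetical names, not the study's actual wavelet code (a real analysis would use a proper discrete wavelet family from a dedicated library):

```python
# Illustrative Haar-style multiresolution decomposition (pure Python).
# Each detail component isolates variability at one dyadic time-scale;
# the components sum back to the original series exactly (step 3).

def multiresolution(x, levels):
    """Split series x (length divisible by 2**levels) into `levels`
    detail components plus one smooth residual."""
    components = []
    approx = list(x)
    for j in range(1, levels + 1):
        block = 2 ** j
        smooth = []
        for i in range(0, len(approx), block):
            mean = sum(approx[i:i + block]) / block
            smooth.extend([mean] * block)   # hold each block mean constant
        # detail at this scale = what the coarser smooth cannot explain
        components.append([a - s for a, s in zip(approx, smooth)])
        approx = smooth
    components.append(approx)               # slowest residual trend
    return components
```

In the approach described above, a separate downscaling model (step 2) would then be fitted to each returned component before summing the scale-dependent reconstructions.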

  15. Compulsive use of social networking sites in Belgium: prevalence, profile, and the role of attitude toward work and school.

    PubMed

    De Cock, Rozane; Vangeel, Jolien; Klein, Annabelle; Minotte, Pascal; Rosas, Omar; Meerkerk, Gert-Jan

    2014-03-01

    A representative sample (n=1,000) of the Belgian population aged 18 years and older filled out an online questionnaire on their Internet use in general and their use of social networking sites (SNS) in particular. We measured total time spent on the Internet, time spent on SNS, number of SNS profiles, gender, age, schooling level, income, job occupation, and leisure activities, and we integrated several psychological scales such as the Quick Big Five and the Mastery Scale. Hierarchical multiple regression modeling shows that gender and age explain an important part of the variance in compulsive SNS use scores (5%), as do the psychological scales (20%), but attitude toward school (an additional 3%) and income (2.5%) also add to the explained variance in predictive models of compulsive SNS use.
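Hierarchical regression of the kind reported here enters predictors in blocks and records the increment in explained variance per block. A minimal pure-Python sketch, with illustrative data and variable names (a real analysis would use a statistics package):

```python
# Hierarchical OLS sketch: enter predictor blocks one at a time and
# report cumulative R^2 after each block; the increments are the
# "additional explained variance" figures quoted in such studies.

def _solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def r_squared(columns, y):
    """R^2 of OLS with intercept; `columns` is a list of predictor lists."""
    X = [[1.0] + [col[i] for col in columns] for i in range(len(y))]
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(y))) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(y))) for a in range(k)]
    beta = _solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def hierarchical_r2(blocks, y):
    """Cumulative R^2 after entering each successive block of predictors."""
    entered, out = [], []
    for block in blocks:
        entered.extend(block)
        out.append(r_squared(entered, y))
    return out
```

Differences between successive entries of the returned list correspond to the per-block increments in explained variance.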

  16. Improving the integration of recreation management with management of other natural resources by applying concepts of scale from ecology.

    PubMed

    Morse, Wayde C; Hall, Troy E; Kruger, Linda E

    2009-03-01

    In this article, we examine how issues of scale affect the integration of recreation management with the management of other natural resources on public lands. We present two theories used to address scale issues in ecology and explore how they can improve the two most widely applied recreation-planning frameworks. The theory of patch dynamics and hierarchy theory are applied to the recreation opportunity spectrum (ROS) and the limits of acceptable change (LAC) recreation-planning frameworks. These frameworks have been widely adopted internationally, and improving their ability to integrate with other aspects of natural resource management has significant social and conservation implications. We propose that incorporating ecologic criteria and scale concepts into these recreation-planning frameworks will improve the foundation for integrated land management by resolving issues of incongruent boundaries, mismatched scales, and multiple-scale analysis. Specifically, we argue that whereas the spatially explicit process of the ROS facilitates integrated decision making, its lack of ecologic criteria, broad extent, and large patch size decrease its usefulness for integration at finer scales. The LAC provides explicit considerations for weighing competing values, but measurement of recreation disturbances within an LAC analysis is often done at too fine a grain and at too narrow an extent for integration with other recreation and resource concerns. We suggest that planners should perform analysis at multiple scales when making management decisions that involve trade-offs among competing values. The United States Forest Service is used as an example to discuss how resource-management agencies can improve this integration.

  17. Simultaneous temporally resolved DPIV and pressure measurements of symmetric oscillations in a scaled-up vocal fold model

    NASA Astrophysics Data System (ADS)

    Ringenberg, Hunter; Rogers, Dylan; Wei, Nathaniel; Krane, Michael; Wei, Timothy

    2017-11-01

    The objective of this study is to apply experimental data to the theoretical framework of Krane (2013), in which the principal aeroacoustic source is expressed in terms of vocal fold drag, glottal jet dynamic head, and glottal exit volume flow, reconciling formal theoretical aeroacoustic descriptions of phonation with more traditional lumped-element descriptions. These quantities appear in the integral equations of motion for phonatory flow. In this way, time-resolved velocity field measurements can be used to compute time-resolved estimates of the relevant terms in the integral equations of motion, including the phonation aeroacoustic source strength. A simplified 10x-scale vocal fold model from Krane, et al. (2007) was used to examine symmetric, i.e. 'healthy', oscillatory motion of the vocal folds. By using water as the working fluid, very high spatial and temporal resolution was achieved. Temporal variation of the transglottal pressure was measured simultaneously with the flow at the vocal fold model mid-height. Experiments were dynamically scaled to examine a range of frequencies corresponding to the male and female voice. The simultaneity of the pressure and flow measurements provides new insights into the aeroacoustics associated with vocal fold oscillations. Supported by NIH Grant No. 2R01 DC005642-11.

  18. Fully-Integrated Simulation of Conjunctive Use from Field to Basin Scales: Development of a Surface Water Operations Module for MODFLOW-OWHM

    NASA Astrophysics Data System (ADS)

    Ferguson, I. M.; Boyce, S. E.; Hanson, R. T.; Llewellyn, D.

    2014-12-01

    It is well established that groundwater pumping affects surface-water availability by intercepting groundwater that would otherwise discharge to streams and/or by increasing seepage from surface-water channels. Conversely, surface-water management operations affect groundwater availability by altering the timing, location, and quantity of groundwater recharge and demand. Successful conjunctive use may require analysis with an integrated approach that accounts for the many interactions and feedbacks between surface-water and groundwater availability and their joint management. In order to improve simulation and analysis of conjunctive use, Bureau of Reclamation and USGS are collaborating to develop a surface-water operations module within MODFLOW One Water Hydrologic Flow Model (MF-OWHM), a new version of the USGS Modular Groundwater Flow Model (MODFLOW). Here we describe the development and application of the surface-water operations module. We provide an overview of the conceptual approach used to simulate surface-water operations—including surface-water storage, allocation, release, diversion, and delivery on monthly to seasonal time frames—in a fully-integrated manner. We then present results from a recent case study analysis of the Rio Grande Project, a large-scale irrigation project located in New Mexico and Texas, under varying surface-water operations criteria and climate conditions. Case study results demonstrate the importance of integrated hydrologic simulation of surface water and groundwater operations in analysis and management of conjunctive-use systems.
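The kind of monthly operations step the module simulates (storage, allocation, release, delivery) can be sketched in a few lines. This is a toy water balance under illustrative rules, not the MF-OWHM formulation; all names and the allocation policy are hypothetical:

```python
# Toy monthly surface-water operations step: update reservoir storage
# with inflow, deliver to diversion demands in priority order subject to
# available storage and an allocation fraction, then spill any excess.

def operate_month(storage, inflow, demands, capacity, allocation=1.0):
    """Return (new_storage, deliveries, spill) for one monthly time step."""
    storage += inflow
    deliveries = []
    for d in demands:                       # demands in priority order
        release = min(allocation * d, storage)
        deliveries.append(release)
        storage -= release
    spill = max(0.0, storage - capacity)    # excess passes downstream
    storage -= spill
    return storage, deliveries, spill
```

In an integrated model, the deliveries would in turn drive recharge and pumping demand in the groundwater flow solution, closing the feedback loop described above.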

  19. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  20. An Integrated Approach for Urban Earthquake Vulnerability Analyses

    NASA Astrophysics Data System (ADS)

    Düzgün, H. S.; Yücemen, M. S.; Kalaycioglu, H. S.

    2009-04-01

    The earthquake risk for an urban area has increased over the years due to the increasing complexity of urban environments. The main reasons are the location of major cities in hazard-prone areas, growth in urbanization and population, and rising wealth measures. In recent years, physical examples of these factors have been observed through the growing costs of major disasters in urban areas, which have stimulated a demand for in-depth evaluation of possible strategies to manage the large-scale damaging effects of earthquakes. Understanding and formulating urban earthquake risk requires consideration of a wide range of risk aspects, which can be handled by developing an integrated approach. In such an integrated approach, an interdisciplinary view should be incorporated into the risk assessment. Risk assessment for an urban area requires prediction of the vulnerabilities of the elements at risk in the urban area and integration of the individual vulnerability assessments. However, due to the complex nature of an urban environment, estimating vulnerabilities and integrating them necessitates the development of integrated approaches in which the vulnerabilities of social, economic, structural (building stock and infrastructure), cultural and historical heritage assets are estimated for a given urban area over a given time period. In this study, an integrated urban earthquake vulnerability assessment framework, which considers the vulnerability of the urban environment in a holistic manner and performs the vulnerability assessment at the smallest administrative unit, namely the neighborhood scale, is proposed. The main motivation behind this approach is the inability to implement existing vulnerability assessment methodologies for countries like Turkey, where the required data are usually missing or inadequate and decision makers seek prioritization of their limited resources for risk reduction in the administrative districts for which they are responsible. 
The methodology integrates socio-economic, structural, coastal, ground-condition, and organizational vulnerabilities, as well as accessibility to critical services, within the framework. The proposed framework has the following eight components: seismic hazard analysis, soil response analysis, tsunami inundation analysis, structural vulnerability analysis, socio-economic vulnerability analysis, accessibility to critical services, GIS-based integrated vulnerability assessment, and visualization of vulnerabilities in a 3D virtual city model. The integrated model for the various vulnerabilities obtained for the urban area is developed in a GIS environment using the individual vulnerability assessments for the considered elements at risk, and serves as the backbone of the spatial decision support system. The stages followed in the model are: determination of a common mapping unit for each aspect of urban earthquake vulnerability, formation of a geo-database for the vulnerabilities, evaluation of urban vulnerability based on multi-attribute utility theory with various weighting algorithms, and mapping of the evaluated integrated earthquake risk in a geographic information system (GIS) at the neighborhood scale. The framework is also applicable to larger geographical mapping scales, for example, the building scale. When illustrating the results at the building scale, 3D visualizations with remote sensing data are used so that decision-makers can easily interpret the outputs. The proposed vulnerability assessment framework is flexible and can easily be applied to urban environments at various geographical scales with different mapping units. The obtained total vulnerability maps for the urban area provide a baseline for the development of risk reduction strategies by decision makers. Moreover, as several aspects of the elements at risk for an urban area are considered through the vulnerability analyses, the effect of changes in vulnerability conditions on the total can easily be determined. 
The developed approach also enables decision makers to monitor temporal and spatial changes in the urban environment due to implementation of risk reduction strategies.
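The GIS integration stage described above, evaluating urban vulnerability with multi-attribute utility theory and weighting algorithms, amounts in its simplest additive form to normalizing each indicator across mapping units and combining them with a weighted sum. A sketch with hypothetical indicator names and weights:

```python
# Additive multi-attribute aggregation sketch: min-max normalize each
# vulnerability indicator over the mapping units (e.g. neighborhoods),
# then combine with weights that sum to one.

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def integrated_vulnerability(indicators, weights):
    """indicators: dict name -> list of raw scores, one per mapping unit."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    n = len(next(iter(indicators.values())))
    norm = {k: normalize(v) for k, v in indicators.items()}
    return [sum(weights[k] * norm[k][i] for k in indicators) for i in range(n)]
```

Different weighting algorithms (equal weights, expert elicitation, etc.) simply change the `weights` dictionary; the resulting scores can then be mapped per neighborhood in the GIS.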

  1. Regional climate model sensitivity to domain size

    NASA Astrophysics Data System (ADS)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large-scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) as the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows the detection of important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
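The diagnostics of the perfect-model approach can be sketched in one dimension: a low-pass filter emulates the coarse driving data, the small scales are the residual, and nested runs are scored against the big-brother reference with a spatial correlation. The running-mean filter below is illustrative only; the study uses a spectral low-pass filter:

```python
# Perfect-model diagnostics sketch (1-D): low-pass a field to emulate
# coarse LBC, extract small-scale residuals, and compute the spatial
# (Pearson) correlation between a nested run and the reference.

def low_pass(field, window):
    half = window // 2
    out = []
    for i in range(len(field)):
        lo, hi = max(0, i - half), min(len(field), i + half + 1)
        out.append(sum(field[lo:hi]) / (hi - lo))
    return out

def small_scales(field, window):
    """Small-scale features = field minus its low-pass component."""
    return [f - s for f, s in zip(field, low_pass(field, window))]

def spatial_correlation(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)
```

Applying `small_scales` near the inflow boundary of a little-brother field is how the underestimation of transient-eddy variability would show up in such a diagnostic.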

  2. The Impact of In-situ Chemical Oxidation on Contaminant Mass Discharge: Linking Source-Zone and Plume-Scale Characterizations of Remediation Performance

    NASA Astrophysics Data System (ADS)

    Brusseau, M. L.; Carroll, K. C.; Baker, J. B.; Allen, T.; DiGuiseppi, W.; Hatton, J.; Morrison, C.; Russo, A. E.; Berkompas, J. L.

    2011-12-01

    A large-scale permanganate-based in-situ chemical oxidation (ISCO) effort has been conducted over the past ten years at a federal Superfund site in Tucson, AZ, for which trichloroethene (TCE) is the primary contaminant of concern. Remediation performance was assessed by examining the impact of treatment on contaminant mass discharge, an approach that has been used for only a very few prior ISCO projects. Contaminant mass discharge tests were conducted before and after permanganate injection to measure the impact at the source-zone scale. The results indicate that ISCO caused a significant reduction in mass discharge (approximately 75%). The standard approach of characterizing discharge at the source-zone scale was supplemented with additional characterization at the plume scale, which was evaluated by examining the change in contaminant mass discharge associated with the pump-and-treat system. The integrated contaminant mass discharge decreased by approximately 70%, consistent with the source-zone-scale measurements. The integrated mass discharge rebounded from 0.1 to 0.2 kg/d within one year after cessation of permanganate injections, after which it has been stable for several years. Collection of the integrated contaminant mass discharge data throughout the ISCO treatment period provided a high-resolution, real-time analysis of the site-wide impact of ISCO, thereby linking source-zone remediation to impacts on overall risk. The results indicate that ISCO was successful in reducing contaminant mass discharge at this site, which comprises a highly heterogeneous subsurface environment. Analysis of TCE sediment concentration data for core material collected before and after ISCO supports the hypothesis that the remaining mass discharge is associated in part with poorly-accessible contaminant mass residing within lower-permeability zones.
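A contaminant mass discharge of the kind measured here is, in essence, the integral of concentration times water flux over a control plane. A minimal sketch with illustrative units and values (the site measurements themselves involve pumping tests, not this simple summation):

```python
# Mass discharge across a control plane, discretized into monitoring
# cells: sum of concentration (g/m^3) x specific discharge (m/d) x
# cell area (m^2), converted to kg/d.

def mass_discharge(conc_g_per_m3, q_m_per_d, area_m2):
    """Return total mass discharge in kg/d across the control plane."""
    grams_per_day = sum(c * q * a
                        for c, q, a in zip(conc_g_per_m3, q_m_per_d, area_m2))
    return grams_per_day / 1000.0

def percent_reduction(before_kg_d, after_kg_d):
    """Treatment performance as a percent drop in mass discharge."""
    return 100.0 * (before_kg_d - after_kg_d) / before_kg_d
```

Comparing the before- and after-treatment values with `percent_reduction` gives figures of the ~70-75% form reported in the abstract.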

  3. Scaling up paediatric HIV care with an integrated, family-centred approach: an observational case study from Uganda.

    PubMed

    Luyirika, Emmanuel; Towle, Megan S; Achan, Joyce; Muhangi, Justus; Senyimba, Catherine; Lule, Frank; Muhe, Lulu

    2013-01-01

    Family-centred HIV care models have emerged as an approach to better target children and their caregivers for HIV testing and care, and further provide integrated health services for the family unit's range of care needs. While there is significant international interest in family-centred approaches, there is a dearth of research on operational experiences in implementation and scale-up. Our retrospective case study examined best practices and enabling factors during scale-up of family-centred care in ten health facilities and ten community clinics supported by a non-governmental organization, Mildmay, in Central Uganda. Methods included key informant interviews with programme management and families, and a desk review of hospital management information systems (HMIS) uptake data. In the 84 months following the scale-up of the family-centred approach in HIV care, Mildmay experienced a 50-fold increase of family units registered in HIV care, a 40-fold increase of children enrolled in HIV care, and nearly universal coverage of paediatric cotrimoxazole prophylaxis. The Mildmay experience emphasizes the importance of streamlining care to maximize paediatric capture. This includes integrated service provision, incentivizing care-seeking as a family, creating child-friendly service environments, and minimizing missed paediatric testing opportunities by institutionalizing early infant diagnosis and provider-initiated testing and counselling. Task-shifting towards nurse-led clinics with community outreach support enabled rapid scale-up, as did an active management structure that allowed for real-time review and corrective action. The Mildmay experience suggests that family-centred approaches are operationally feasible, produce strong coverage outcomes, and can be well-managed during rapid scale-up.

  4. Impact of in situ chemical oxidation on contaminant mass discharge: linking source-zone and plume-scale characterizations of remediation performance.

    PubMed

    Brusseau, M L; Carroll, K C; Allen, T; Baker, J; Diguiseppi, W; Hatton, J; Morrison, C; Russo, A; Berkompas, J

    2011-06-15

    A large-scale permanganate-based in situ chemical oxidation (ISCO) effort has been conducted over the past ten years at a federal Superfund site in Tucson, AZ, for which trichloroethene (TCE) is the primary contaminant of concern. Remediation performance was assessed by examining the impact of treatment on contaminant mass discharge, an approach that has been used for only a very few prior ISCO projects. Contaminant mass discharge tests were conducted before and after permanganate injection to measure the impact at the source-zone scale. The results indicate that ISCO caused a significant reduction in mass discharge (approximately 75%). The standard approach of characterizing discharge at the source-zone scale was supplemented with additional characterization at the plume scale, which was evaluated by examining the change in contaminant mass discharge associated with the pump-and-treat system. The integrated contaminant mass discharge decreased by approximately 70%, consistent with the source-zone-scale measurements. The integrated mass discharge rebounded from 0.1 to 0.2 kg/d within one year after cessation of permanganate injections, after which it has been stable for several years. Collection of the integrated contaminant mass discharge data throughout the ISCO treatment period provided a high-resolution, real-time analysis of the site-wide impact of ISCO, thereby linking source-zone remediation to impacts on overall risk. The results indicate that ISCO was successful in reducing contaminant mass discharge at this site, which comprises a highly heterogeneous subsurface environment. Analysis of TCE sediment concentration data for core material collected before and after ISCO supports the hypothesis that the remaining mass discharge is associated in part with poorly accessible contaminant mass residing within lower-permeability zones.

  5. Resampling to accelerate cross-correlation searches for continuous gravitational waves from binary systems

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Krishnan, Badri; Papa, Maria Alessandra; Whelan, John T.; Zhang, Yuanhao

    2018-02-01

    Continuous-wave (CW) gravitational waves (GWs) call for computationally-intensive methods. Low signal-to-noise ratio signals need templated searches with long coherent integration times and thus fine parameter-space resolution. Longer integration increases sensitivity. Low-mass x-ray binaries (LMXBs) such as Scorpius X-1 (Sco X-1) may emit accretion-driven CWs at strains reachable by current ground-based observatories. Binary orbital parameters induce phase modulation. This paper describes how resampling corrects binary and detector motion, yielding source-frame time series used for cross-correlation. Compared to the previous, detector-frame, templated cross-correlation method, used for Sco X-1 on data from the first Advanced LIGO observing run (O1), resampling is about 20 × faster in the costliest, most-sensitive frequency bands. Speed-up factors depend on integration time and search setup. The speed could be reinvested into longer integration with a forecast sensitivity gain, 20 to 125 Hz median, of approximately 51%, or from 20 to 250 Hz, 11%, given the same per-band cost and setup. This paper's timing model enables future setup optimization. Resampling scales well with longer integration, and at 10 × unoptimized cost could reach respectively 2.83 × and 2.75 × median sensitivities, limited by spin-wandering. Then an O1 search could yield a marginalized-polarization upper limit reaching torque-balance at 100 Hz. Frequencies from 40 to 140 Hz might be probed in equal observing time with 2 × improved detectors.
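The core idea of resampling, re-interpolating a detector-frame time series onto times that are uniform in the source frame so that later stages see a demodulated series, can be sketched as follows. The constant delay model here is a placeholder; a real search computes the delay from the binary orbit and the detector ephemeris:

```python
# Resampling sketch: evaluate a detector-frame series at the detector
# arrival times of uniformly spaced source-frame samples.

def linear_interp(t, times, values):
    """Piecewise-linear interpolation of (times, values) at t."""
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            w = (t - times[i]) / (times[i + 1] - times[i])
            return (1 - w) * values[i] + w * values[i + 1]
    raise ValueError("t outside sampled span")

def resample_to_source_frame(det_times, det_values, delay, n, dt):
    """Sample the series at source-frame times k*dt; the corresponding
    detector-frame arrival time is k*dt + delay(k*dt)."""
    return [linear_interp(k * dt + delay(k * dt), det_times, det_values)
            for k in range(n)]
```

The computational gain reported in the paper comes from doing this once per sky/orbit template and then correlating in the source frame, rather than applying phase corrections template-by-template in the detector frame.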

  6. Concentration-discharge relationships to understand the interplay between hydrological and biogeochemical processes: insights from data analysis and numerical experiments in headwater catchments.

    NASA Astrophysics Data System (ADS)

    De Dreuzy, J. R.; Marçais, J.; Moatar, F.; Minaudo, C.; Courtois, Q.; Thomas, Z.; Longuevergne, L.; Pinay, G.

    2017-12-01

    Integration of hydrological and biogeochemical processes leads to emergent patterns at the catchment scale. Monitoring in rivers reflects the aggregation of these effects. While discharge time series have been measured for decades, high-frequency water quality monitoring in rivers now provides prominent measurements to characterize the interplay between hydrological and biogeochemical processes, especially to infer the processes that happen in the heterogeneous subsurface. However, we still lack frameworks to relate observed patterns to specific processes, because of the "organized complexity" of hydrological systems. Indeed, it is unclear what controls, for example, patterns in concentration-discharge (C/Q) relationships due to non-linear processes and hysteresis effects. Here we develop a non-intensive process-based model to test how the integration of different landforms (i.e. geological heterogeneities and structures, topographical features) with different biogeochemical reactivity assumptions (e.g. reactive zone locations) can shape the overall water quality time series. With numerical experiments, we investigate typical patterns in high-frequency C/Q relationships. In headwater basins, we found that typical hysteretic patterns in C/Q relationships observed in data time series can be attributed to differences in where water and solutes are stored across the hillslope. At the catchment scale, though, these effects tend to average out by integrating contrasting hillslope landforms. Together these results suggest that the information contained in headwater water quality monitoring can be used to understand how hydrochemical processes determine downstream conditions.
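The hysteretic C/Q patterns discussed above are commonly quantified with a hysteresis index: compare concentration on the rising and falling limbs of an event at a matched discharge. One simple illustrative definition (there are several in the literature):

```python
# Simple hysteresis index for a C/Q event: interpolate concentration at
# a common discharge level on the rising and falling limbs; a positive
# difference indicates a clockwise loop (higher C on the rising limb).

def interp_at(q_target, q, c):
    for i in range(len(q) - 1):
        lo, hi = sorted((q[i], q[i + 1]))
        if lo <= q_target <= hi and q[i + 1] != q[i]:
            w = (q_target - q[i]) / (q[i + 1] - q[i])
            return (1 - w) * c[i] + w * c[i + 1]
    raise ValueError("q_target not bracketed")

def hysteresis_index(q, c, frac=0.5):
    """q, c: event series from rise start to recession end; frac sets the
    discharge level (as a fraction of the event range) where limbs are
    compared."""
    peak = q.index(max(q))
    q_mid = min(q) + frac * (max(q) - min(q))
    c_rise = interp_at(q_mid, q[:peak + 1], c[:peak + 1])
    c_fall = interp_at(q_mid, q[peak:], c[peak:])
    return c_rise - c_fall
```

In a model comparison of the kind described, the sign and magnitude of this index per hillslope configuration would be the pattern attributed to differing water and solute storage locations.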

  7. The impact of HIV/SRH service integration on workload: analysis from the Integra Initiative in two African settings

    PubMed Central

    2014-01-01

    Background There is growing interest in integration of HIV and sexual and reproductive health (SRH) services as a way to improve the efficiency of human resources (HR) for health in low- and middle-income countries. Although this is supported by a wealth of evidence on the acceptability and clinical effectiveness of service integration, there is little evidence on whether staff in general health services can easily absorb HIV services. Methods We conducted a descriptive analysis of HR integration through task shifting/sharing and staff workload in the context of the Integra Initiative - a large-scale five-year evaluation of HIV/SRH integration. We describe the level, characteristics and changes in HR integration in the context of wider efforts to integrate HIV/SRH, and explore the impact of HR integration on staff workload. Results Improvements in the range of services provided by staff (HR integration) were more likely to be achieved in facilities which also improved other elements of integration. While there was no overall relationship between integration and workload at the facility level, HIV/SRH integration may be most influential on staff workload for provider-initiated HIV testing and counselling (PITC) and postnatal care (PNC) services, particularly where HIV care and treatment services are being supported with extra SRH/HIV staffing. Our findings therefore suggest that there may be potential for further efficiency gains through integration, but overall the pace of improvement is slow. Conclusions This descriptive analysis explores the effect of HIV/SRH integration on staff workload through economies of scale and scope in high- and medium-HIV-prevalence settings. We find some evidence to suggest that there is potential to improve productivity through integration but, at the same time, significant challenges are being faced and the pace of productivity gains is slow. 
We recommend that efforts to implement integration are assessed in the broader context of HR planning to ensure that neither staff nor patients are negatively impacted by integration policy. PMID:25103923

  8. The impact of HIV/SRH service integration on workload: analysis from the Integra Initiative in two African settings.

    PubMed

    Sweeney, Sedona; Obure, Carol Dayo; Terris-Prestholt, Fern; Darsamo, Vanessa; Michaels-Igbokwe, Christine; Muketo, Esther; Nhlabatsi, Zelda; Warren, Charlotte; Mayhew, Susannah; Watts, Charlotte; Vassall, Anna

    2014-08-07

    There is growing interest in integration of HIV and sexual and reproductive health (SRH) services as a way to improve the efficiency of human resources (HR) for health in low- and middle-income countries. Although this is supported by a wealth of evidence on the acceptability and clinical effectiveness of service integration, there is little evidence on whether staff in general health services can easily absorb HIV services. We conducted a descriptive analysis of HR integration through task shifting/sharing and staff workload in the context of the Integra Initiative - a large-scale five-year evaluation of HIV/SRH integration. We describe the level, characteristics and changes in HR integration in the context of wider efforts to integrate HIV/SRH, and explore the impact of HR integration on staff workload. Improvements in the range of services provided by staff (HR integration) were more likely to be achieved in facilities which also improved other elements of integration. While there was no overall relationship between integration and workload at the facility level, HIV/SRH integration may be most influential on staff workload for provider-initiated HIV testing and counselling (PITC) and postnatal care (PNC) services, particularly where HIV care and treatment services are being supported with extra SRH/HIV staffing. Our findings therefore suggest that there may be potential for further efficiency gains through integration, but overall the pace of improvement is slow. This descriptive analysis explores the effect of HIV/SRH integration on staff workload through economies of scale and scope in high- and medium-HIV-prevalence settings. We find some evidence to suggest that there is potential to improve productivity through integration but, at the same time, significant challenges are being faced and the pace of productivity gains is slow. 
We recommend that efforts to implement integration are assessed in the broader context of HR planning to ensure that neither staff nor patients are negatively impacted by integration policy.

  9. InfoSymbiotics/DDDAS - The power of Dynamic Data Driven Applications Systems for New Capabilities in Environmental -, Geo-, and Space- Sciences

    NASA Astrophysics Data System (ADS)

    Darema, F.

    2016-12-01

    InfoSymbiotics/DDDAS embodies the power of Dynamic Data Driven Applications Systems (DDDAS), a concept whereby an executing application model is dynamically integrated, in a feedback loop, with the real-time data-acquisition and control components, as well as other data sources of the application system. Advanced capabilities can be created through such new computational approaches in modeling and simulation, and in instrumentation methods; these include: enhancing the accuracy of the application model; speeding up the computation to allow faster and more comprehensive models of a system, and creating decision support systems with the accuracy of full-scale simulations. In addition, the notion of controlling instrumentation processes by the executing application results in more efficient management of application data and addresses the challenge of how to architect and dynamically manage large sets of heterogeneous sensors and controllers, an advance over the static and ad-hoc ways of today - with DDDAS these sets of resources can be managed adaptively and in optimized ways. Large-Scale-Dynamic-Data encompasses the next wave of Big Data, namely dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems, through multitudes of heterogeneous sensors and controllers instrumenting these systems, and where opportunities and challenges at these "large scales" relate not only to data size but to the heterogeneity in data, data collection modalities, fidelities, and timescales, ranging from real-time data to archival data. In tandem with this important dimension of dynamic data, there is an extended view of Big Computing, which includes the collective computing by networked assemblies of multitudes of sensors and controllers, these ranging from the high end to the real time, seamlessly integrated and unified, and comprising Large-Scale-Big-Computing. 
InfoSymbiotics/DDDAS engenders transformative impact in many application domains, ranging from the nano-scale to the terra-scale and to the extra-terra-scale. The talk will address opportunities for new capabilities together with corresponding research challenges, with illustrative examples from several application areas including environmental sciences, geosciences, and space sciences.

  10. Role of "the frame cycle time" in portal dose imaging using an aS500-II EPID.

    PubMed

    Al Kattar Elbalaa, Zeina; Foulquier, Jean Noel; Orthuon, Alexandre; Elbalaa, Hanna; Touboul, Emmanuel

    2009-09-01

    This paper evaluates the role of an acquisition parameter, the frame cycle time (FCT), in the performance of an aS500-II EPID. The work rests on a study of the Varian aS500-II EPID and the image acquisition system 3 (IAS3). We are interested in integrated acquisition using the asynchronous mode. To better understand the image acquisition operation, we investigated the influence of the frame cycle time on the speed of acquisition, the pixel value of the averaged gray-scale frame, and the noise, using 6 and 15 MV X-ray beams and dose rates of 1-6 Gy/min on 2100 C/D linacs. In the integrated mode not synchronized to beam pulses, only one parameter, the frame cycle time (FCT), influences the pixel value. The pixel value of the averaged gray-scale frame is proportional to this parameter. When the FCT < 55 ms (speed of acquisition V(f/s) > 18 frames/s), the speed of acquisition becomes unstable and leads to a fluctuation of the portal dose response. A timing instability and saturation are detected when the dose per frame exceeds 1.53 MU/frame. Rules were deduced to avoid saturation and to optimize this dosimetric mode. The choice of the acquisition parameter is essential for accurate portal dose imaging.

  11. Integrated spatiotemporal characterization of dust sources and outbreaks in Central and East Asia

    NASA Astrophysics Data System (ADS)

    Darmenova, Kremena T.

    The potential of atmospheric dust aerosols to modify the Earth's environment and climate has been recognized for some time. However, predicting the diverse impact of dust poses several significant challenges. One is to quantify the complex spatial and temporal variability of dust burden in the atmosphere. Another is to quantify the fraction of dust originating from human-made sources. This thesis focuses on the spatiotemporal characterization of sources and dust outbreaks in Central and East Asia by integrating ground-based data, satellite multisensor observations, and modeling. A new regional dust modeling system capable of operating over a span of scales was developed. The modeling system consists of a dust module DuMo, which incorporates several dust emission schemes of different complexity, and the PSU/NCAR mesoscale model MM5, which offers a variety of physical parameterizations and flexible nesting capability. The modeling system was used to perform for the first time a comprehensive study of the timing, duration, and intensity of individual dust events in Central and East Asia. Determining the uncertainties caused by the choice of model physics, especially the boundary layer parameterization, and the dust production scheme was the focus of our study. Implications for assessments of the anthropogenic dust fraction in these regions were also addressed. Focusing on Spring 2001, an analysis of routine surface meteorological observations and satellite multi-sensor data was carried out in conjunction with modeling to determine the extent to which the integrated data set can be used to characterize the spatiotemporal distribution of dust plumes at a range of temporal scales, addressing the active dust sources in China and Mongolia as well as mid-range and trans-Pacific long-range transport of dust outbreaks on a case-by-case basis.
This work demonstrates that adequate and consistent characterization of individual dust events is central to establishing a reliable climatology, ultimately leading to improved assessments of dust impacts on the environment and climate. This will also help to identify the appropriate temporal and spatial scales for adequate intercomparison between model results and observational data as well as for developing an integrated analysis methodology for dust studies.

  12. Integration and Typologies of Vulnerability to Climate Change: A Case Study from Australian Wheat Sheep Zones

    PubMed Central

    Huai, Jianjun

    2016-01-01

    Although the integrated indicator methods have become popular for assessing vulnerability to climate change, their proliferation has introduced a confusing array of scales and indicators that cause a science-policy gap. I argue for a clear adaptation pathway in an “integrative typology” of regional vulnerability that matches appropriate scales, optimal measurements and adaptive strategies in a six-dimensional and multi-level analysis framework of integration and typology inspired by the “5W1H” questions: “Who is concerned about how to adapt to the vulnerability of what to what in some place (where) at some time (when)?” Using the case of the vulnerability of wheat, barley and oats to drought in Australian wheat sheep zones during 1978–1999, I answer the “5W1H” questions through establishing the “six typologies” framework. I then optimize the measurement of vulnerability through contrasting twelve kinds of vulnerability scores with the divergence of crop yields from their regional mean. Through identifying the socioeconomic constraints, I propose seven generic types of crop-drought vulnerability and local adaptive strategy. Our results illustrate that the process of assessing vulnerability and selecting adaptations can be enhanced using a combination of integration, optimization and typology, which emphasize dynamic transitions and transformations between integration and typology. PMID:27670975

  13. Timing Studies of X Persei and the Discovery of Its Transient Quasi-periodic Oscillation Feature

    NASA Technical Reports Server (NTRS)

    Acuner, Z.; Inam, S. C.; Sahiner, S.; Serim, M. M.; Baykal, A.; Swank, J.

    2014-01-01

    We present a timing analysis of X Persei (X Per) using observations made between 1998 and 2010 with the Proportional Counter Array (PCA) onboard the Rossi X-ray Timing Explorer (RXTE) and with the INTEGRAL Soft Gamma-Ray Imager (ISGRI). All pulse arrival times obtained from the RXTE-PCA observations are phase-connected and a timing solution is obtained using these arrival times. We update the long-term pulse frequency history of the source by measuring its pulse frequencies using RXTE-PCA and ISGRI data. From the RXTE-PCA data, the relation between the frequency derivative and X-ray flux suggests accretion via the companion's stellar wind. However, the detection of a transient quasi-periodic oscillation feature, peaking at approximately 0.2 Hz, suggests the existence of an accretion disc. We find that double-break models fit the average power spectra well, which suggests that the source has at least two different accretion flow components dominating the overall flow. From the power spectrum of frequency derivatives, we measure a power-law index of approximately -1, which implies that, on short time-scales, disc accretion dominates over noise, while on time-scales longer than the viscous time-scales, the noise dominates. From pulse profiles, we find a correlation between the pulse fraction and the count rate of the source.

  14. The stellar populations of M 33

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van den Bergh, S.

    1991-07-01

    A review is given of present ideas on the evolution and stellar content of the Triangulum nebula = M 33 = NGC 598. The disk of M 33 is embedded in a halo of globular clusters, metal-poor red giants, and RR Lyrae stars. Its nuclear bulge component is weak, suggesting that the halos of galaxies are not extensions of their bulges to large radii. The ages of M 33 clusters do not appear to exhibit a hiatus in their star-forming history like that which is observed in the Large Magellanic Cloud (LMC). Young and intermediate-age clusters with luminosities rivaling the populous clusters in the LMC are rare in M 33. The integrated light of the semistellar nucleus of M 33, which contains the strongest X-ray source in the Local Group, is dominated by a young metal-rich population. At optical wavelengths the disk scale length of M 33 is 9.6 arcmin, which is similar to the 9.9 arcmin scale length of OB associations. The ratio of the nova rate in M 33 to that in M 31 is approximately equal to the ratio of their luminosities. This suggests that the nova rate in a galaxy is not determined entirely by the integrated luminosity of old bulge stars. The gas-depletion time scale in the central region of M 33 is found to be about 1.7 × 10^9 yr, which is significantly shorter than a Hubble time. 141 refs.

  15. Time-localized wavelet multiple regression and correlation

    NASA Astrophysics Data System (ADS)

    Fernández-Macho, Javier

    2018-02-01

    This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales featuring an acute divide across timescales at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
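The contrast between a smooth weight function and the rectangular rolling window can be sketched with a simple moving weighted correlation. This is an illustrative stand-in, not the paper's wavelet estimator; the window shape, width, and data below are assumptions.

```python
import numpy as np

# Weighted Pearson correlation of two windows under weights w.
def weighted_corr(x, y, w):
    w = w / w.sum()
    mx, my = w @ x, w @ y
    cov = w @ ((x - mx) * (y - my))
    return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))

# Moving correlation along time; a Gaussian window tapers smoothly,
# unlike the rectangular rolling window criticized in the abstract.
def moving_corr(x, y, half=16, kind="gaussian"):
    n = len(x)
    out = np.full(n, np.nan)
    idx = np.arange(-half, half + 1)
    w = (np.exp(-0.5 * (idx / (half / 2)) ** 2)
         if kind == "gaussian" else np.ones(idx.size))
    for t in range(half, n - half):
        out[t] = weighted_corr(x[t - half:t + half + 1],
                               y[t - half:t + half + 1], w)
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = x + 0.2 * rng.standard_normal(200)   # strongly comoving series
r = moving_corr(x, y)
print(round(float(np.nanmean(r)), 2))    # high local correlation throughout
```

The spectral argument in the paper is that the rectangular window's sidelobes leak energy across scales, whereas tapered windows concentrate the estimate at the scale of interest.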

  16. Simulating mesoscale coastal evolution for decadal coastal management: A new framework integrating multiple, complementary modelling approaches

    NASA Astrophysics Data System (ADS)

    van Maanen, Barend; Nicholls, Robert J.; French, Jon R.; Barkwith, Andrew; Bonaldo, Davide; Burningham, Helene; Brad Murray, A.; Payo, Andres; Sutherland, James; Thornhill, Gillian; Townend, Ian H.; van der Wegen, Mick; Walkden, Mike J. A.

    2016-03-01

    Coastal and shoreline management increasingly needs to consider morphological change occurring at decadal to centennial timescales, especially that related to climate change and sea-level rise. This requires the development of morphological models operating at a mesoscale, defined by time and length scales of the order 10^1 to 10^2 years and 10^1 to 10^2 km. So-called 'reduced complexity' models that represent critical processes at scales not much smaller than the primary scale of interest, and are regulated by capturing the critical feedbacks that govern landform behaviour, are proving effective as a means of exploring emergent coastal behaviour at a landscape scale. Such models tend to be computationally efficient and are thus easily applied within a probabilistic framework. At the same time, reductionist models, built upon a more detailed description of hydrodynamic and sediment transport processes, are capable of application at increasingly broad spatial and temporal scales. More qualitative modelling approaches are also emerging that can guide the development and deployment of quantitative models, and these can be supplemented by varied data-driven modelling approaches that can achieve new explanatory insights from observational datasets. Such disparate approaches have hitherto been pursued largely in isolation by mutually exclusive modelling communities. Brought together, they have the potential to facilitate a step change in our ability to simulate the evolution of coastal morphology at scales that are most relevant to managing erosion and flood risk. Here, we advocate and outline a new integrated modelling framework that deploys coupled mesoscale reduced complexity models, reductionist coastal area models, data-driven approaches, and qualitative conceptual models.
Integration of these heterogeneous approaches gives rise to model compositions that can potentially resolve decadal- to centennial-scale behaviour of diverse coupled open coast, estuary and inner shelf settings. This vision is illustrated through an idealised composition of models for a ~ 70 km stretch of the Suffolk coast, eastern England. A key advantage of model linking is that it allows a wide range of real-world situations to be simulated from a small set of model components. However, this process involves more than just the development of software that allows for flexible model coupling. The compatibility of radically different modelling assumptions remains to be carefully assessed, and testing and evaluating the uncertainties of models in composition are areas that require further attention.

  17. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    NASA Astrophysics Data System (ADS)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments are now allowing us to think and act differently. This study presents some recent developments in coastal morphological modeling focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega nourishment in The Netherlands to illustrate the added value of this integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One of the advantages is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon and corresponding morphological behavior than with the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating the long wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute the bed changes in the intertidal and dry beach area. In order to enable flexible couplings between the three abovementioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach steered by a control module that uses a hydrodynamic time series as input (see figure). In addition, a parallel online coupling, with information exchange at each time step, will be made with the AeoLiS model that predicts the bed level changes at the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach.
Detailed comparisons are made between the observed and computed morphological behaviour for the Sand Engine on an aggregated as well as sub-system level.
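The serial coupling steered by a control module, as described above, can be sketched with a toy BMI-style switch between two models. This is a hypothetical illustration only: the classes, rates, and threshold below are invented for the sketch and are not the Delft3D FM, XBeach, or AeoLiS interfaces.

```python
# Toy sketch of serial model coupling: a control module reads a
# hydrodynamic time series and advances either a calm-weather model or
# a storm model for each interval (only one model active at a time).
# All names, rates, and the storm threshold are illustrative assumptions.
class ToyBedModel:
    def __init__(self, rate):
        self.bed_change = 0.0   # accumulated bed level change
        self.rate = rate        # change per unit time while active
    def update(self, dt):
        self.bed_change += self.rate * dt

def run_coupled(wave_heights, dt=1.0, storm_threshold=3.0):
    calm = ToyBedModel(rate=0.1)    # slow background evolution
    storm = ToyBedModel(rate=1.0)   # rapid storm response
    for h in wave_heights:          # hydrodynamic time series as input
        model = storm if h > storm_threshold else calm
        model.update(dt)            # serial coupling: hand off control
    return calm.bed_change, storm.bed_change

calm_total, storm_total = run_coupled([1.0, 2.0, 4.0, 5.0, 1.5])
print(round(calm_total, 1), round(storm_total, 1))
```

The design point mirrored here is that the control module, not the models themselves, decides which component advances over each interval of the forcing record.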

  18. An integral equation formulation for rigid bodies in Stokes flow in three dimensions

    NASA Astrophysics Data System (ADS)

    Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan

    2017-03-01

    We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O(n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.

  19. Effects of real time control of sewer systems on treatment plant performance and receiving water quality.

    PubMed

    Frehmann, T; Niemann, A; Ustohal, P; Geiger, W F

    2002-01-01

    Four individual mathematical submodels simulating different subsystems of urban drainage were coupled into an integral model. The submodels (for surface runoff, flow in the sewer system, wastewater treatment plant and receiving water) were calibrated on the basis of field data measured in an existing urban catchment investigation. Three different strategies for controlling the discharge in the sewer network were defined and implemented in the integral model. The impact of these control measures was quantified by representative immission state-parameters of the receiving water. The results reveal that the effect of a control measure may be ambivalent, depending on which component of the complex drainage system is considered. Furthermore, it is demonstrated that the drainage system in the catchment investigation can be considerably optimised towards environmental protection and operation efficiency if an appropriate real time control on the integral scale is applied.

  20. On coarse projective integration for atomic deposition in amorphous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Claire Y.; Sinno, Talid; Han, Sang M.

    2015-10-07

    Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the “equation-free” framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the “lifting” operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
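The "burst then leap" structure of coarse projective integration can be illustrated on a toy slow variable. This is a minimal sketch of the generic equation-free scheme, not the paper's deposition model: a trivial decay equation stands in for the microscopic simulator, and all step sizes are assumptions.

```python
def fine_step(y, dt_fine):
    # stand-in for the expensive microscopic (atomistic) simulator;
    # here a simple decay dy/dt = -0.5*y integrated with forward Euler
    return y + dt_fine * (-0.5 * y)

def projective_integrate(y0, t_end, burst_steps=10, dt_fine=0.01, dt_proj=0.2):
    y, t = y0, 0.0
    while t < t_end:
        # burst: run the fine simulator briefly and record the trajectory
        ys = [y]
        for _ in range(burst_steps):
            ys.append(fine_step(ys[-1], dt_fine))
        t_burst = burst_steps * dt_fine
        slope = (ys[-1] - ys[0]) / t_burst   # estimated coarse dy/dt
        # project: take a large outer Euler step using the coarse slope
        y = ys[-1] + dt_proj * slope
        t += t_burst + dt_proj
    return y

y = projective_integrate(1.0, t_end=3.0)
print(round(y, 3))  # roughly tracks the exact decay exp(-1.5) ≈ 0.223
```

The projective step covers twenty times the burst length here, which is the source of the speed-up; the trade-off is a projection error that grows with the ratio of outer to inner step.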

  1. Community Integration and Quality of Life in Aphasia after Stroke.

    PubMed

    Lee, Hyejin; Lee, Yuna; Choi, Hyunsoo; Pyun, Sung-Bom

    2015-11-01

    To examine community integration and contributing factors in people with aphasia (PWA) following stroke and to investigate the relationship between community integration and quality of life (QOL). Thirty PWA and 42 age- and education-matched control subjects were involved. Main variables were as follows: socioeconomic status, mobility, and activity of daily living (ADL) (Modified Barthel Index), language function [Frenchay Aphasia Screening Test (FAST)], depression [Geriatric Depression Scale (GDS)], Community Integration Questionnaire (CIQ) and Stroke and Aphasia Quality of Life Scale-39 (SAQOL-39). Differences between aphasia and control groups and factors affecting community integration and QOL were analyzed. Home and social integration and productive activity were significantly decreased in the aphasia group compared to the control group (total CIQ score: 8.5 vs. 18.3 points, respectively). Amount of time spent outside the home and frequency of social contact were also significantly reduced in the aphasia group. Total mean score on the SAQOL-39 was 2.75±0.80 points and was significantly correlated with economic status, gait performance, ADL, depressive mood, and social domain score on the CIQ. Depression score measured by GDS was the single most important factor for the prediction of QOL, but the FAST score was significantly correlated only with the communication domain of the SAQOL-39. Community activities of PWA were very limited, and depression was highly associated with decreased community integration and QOL. Enhancing social participation and reducing emotional distress should be emphasized for rehabilitation of PWA.

  2. Kalman-Predictive-Proportional-Integral-Derivative (KPPID) Temperature Control

    NASA Astrophysics Data System (ADS)

    Fluerasu, Andrei; Sutton, Mark

    2003-09-01

    With third generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments [1] on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding over- or undershooting. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings in order to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, to minimize the response time and to avoid over- or undershooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
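The combination of model-based prediction, PID feedback, and Kalman filtering of noisy readings can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the first-order plant, all gains, and the noise levels are assumptions chosen for the sketch.

```python
import numpy as np

def thermal_model(T, u, dt=0.1, T_env=20.0):
    # toy first-order plant: relaxation toward ambient plus heater input u
    return T + dt * (-0.1 * (T - T_env) + u)

def run_kppid(setpoint=100.0, steps=2000, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    T = 20.0                  # true sample temperature
    x, p = 20.0, 1.0          # Kalman estimate of T and its variance
    integ, prev_err = 0.0, 0.0
    Kp, Ki, Kd = 2.0, 0.5, 0.1   # illustrative PID gains
    q, r = 0.01, 1.0             # process / measurement noise variances
    for _ in range(steps):
        # PID acts on the *filtered* temperature estimate, not raw readings
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = max(0.0, Kp * err + Ki * integ + Kd * deriv)  # heater power
        T = thermal_model(T, u, dt)                # plant evolves
        x, p = thermal_model(x, u, dt), p + q      # predict with same model
        z = T + rng.normal(0.0, 1.0)               # noisy thermocouple reading
        k = p / (p + r)                            # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p        # measurement update
    return T

print(round(run_kppid(), 1))  # settles near the 100-degree setpoint
```

Using the same thermal model for both the plant prediction and the feedforward step is what lets the filter track fast ramps without the lag a plain smoothing filter would introduce.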

  3. Tornado outbreak variability follows Taylor's power law of fluctuation scaling and increases dramatically with severity.

    PubMed

    Tippett, Michael K; Cohen, Joel E

    2016-02-29

    Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from 'outbreaks' consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954-2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor's power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.
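Taylor's power law relates group variances to group means as variance = a · mean^b, and its parameters are conventionally estimated by least squares in log-log space. A minimal sketch of that fit on synthetic data (the grouping and numbers are illustrative, not the paper's tornado statistics):

```python
import numpy as np

# Fit Taylor's law  var = a * mean^b  by linear regression of
# log(variance) on log(mean).
def fit_taylor_law(means, variances):
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

# Synthetic groups constructed so variance scales as mean^2, the
# exponent associated with multiplicative growth in the abstract.
means = np.array([2.0, 4.0, 8.0, 16.0])
variances = means ** 2
a, b = fit_taylor_law(means, variances)
print(round(b, 3))  # → 2.0
```

An exponent near 2 means the variance grows with the square of the mean, so relative fluctuations stay constant as the mean rises, which is why extreme outbreaks become more frequent than a fixed-variance model would predict.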

  4. Tornado outbreak variability follows Taylor's power law of fluctuation scaling and increases dramatically with severity

    PubMed Central

    Tippett, Michael K.; Cohen, Joel E.

    2016-01-01

    Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from ‘outbreaks' consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954–2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor's power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related. PMID:26923210

  5. Tornado outbreak variability follows Taylor's power law of fluctuation scaling and increases dramatically with severity

    NASA Astrophysics Data System (ADS)

    Tippett, Michael K.; Cohen, Joel E.

    2016-02-01

    Tornadoes cause loss of life and damage to property each year in the United States and around the world. The largest impacts come from `outbreaks' consisting of multiple tornadoes closely spaced in time. Here we find an upward trend in the annual mean number of tornadoes per US tornado outbreak for the period 1954-2014. Moreover, the variance of this quantity is increasing more than four times as fast as the mean. The mean and variance of the number of tornadoes per outbreak vary according to Taylor's power law of fluctuation scaling (TL), with parameters that are consistent with multiplicative growth. Tornado-related atmospheric proxies show similar power-law scaling and multiplicative growth. Path-length-integrated tornado outbreak intensity also follows TL, but with parameters consistent with sampling variability. The observed TL power-law scaling of outbreak severity means that extreme outbreaks are more frequent than would be expected if mean and variance were independent or linearly related.

  6. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Roehl, U.; Westerhold, T.; Laskar, J.

    2012-12-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ~54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash-17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  7. Time scale controversy: Accurate orbital calibration of the early Paleogene

    NASA Astrophysics Data System (ADS)

    Westerhold, Thomas; Röhl, Ursula; Laskar, Jacques

    2012-06-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to ˜54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash-17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  8. Ideas for Future GPS Timing Improvements

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    Having recently met stringent criteria for full operational capability (FOC) certification, the Global Positioning System (GPS) now has higher customer expectations than ever before. In order to maintain customer satisfaction, and to meet the even higher customer demands of the future, the GPS Master Control Station (MCS) must play a critical role in the process of carefully refining the performance and integrity of the GPS constellation, particularly in the area of timing. This paper will present an operational perspective on several ideas for improving timing in GPS. These ideas include the desire for improving MCS - US Naval Observatory (USNO) data connectivity, an improved GPS-Coordinated Universal Time (UTC) prediction algorithm, a more robust Kalman Filter, and more features in the GPS reference time algorithm (the GPS composite clock), including frequency step resolution, a more explicit use of the basic time scale equation, and dynamic clock weighting. Current MCS software meets the exceptional challenge of managing an extremely complex constellation of 24 navigation satellites. The GPS community will, however, always seek to improve upon this performance and integrity.
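The basic time scale equation behind a composite clock forms ensemble time as a weighted average of each clock's deviation from its own prediction, so no single physical clock defines the time scale. A minimal sketch of that idea (illustrative only, not the MCS implementation; the numbers and weights are invented):

```python
# Composite-clock sketch: the ensemble time offset is the weighted mean
# of each clock's deviation (reading minus its predicted reading).
# Weights would in practice reflect each clock's stability.
def ensemble_offset(readings, predictions, weights):
    total = sum(weights)
    return sum(w * (x - xp)
               for x, xp, w in zip(readings, predictions, weights)) / total

# three hypothetical clocks, deviations in nanoseconds
readings    = [5.0, -3.0, 1.0]
predictions = [4.0, -2.0, 1.5]
weights     = [0.5, 0.3, 0.2]
print(round(ensemble_offset(readings, predictions, weights), 3))  # → 0.1
```

Dynamic clock weighting, one of the ideas listed above, amounts to updating the weights over time as clock stability estimates change, without disturbing the ensemble average itself.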

  9. During running in place, grid cells integrate elapsed time and distance run

    PubMed Central

    Kraus, Benjamin J.; Brandon, Mark P.; Robinson, Robert J.; Connerney, Michael A.; Hasselmo, Michael E.; Eichenbaum, Howard

    2015-01-01

    The spatial scale of grid cells may be provided by self-generated motion information or by external sensory information from environmental cues. To determine whether grid cell activity reflects distance traveled or elapsed time independent of external information, we recorded grid cells as animals ran in place on a treadmill. Grid cell activity was only weakly influenced by location but most grid cells and other neurons recorded from the same electrodes strongly signaled a combination of distance and time, with some signaling only distance or time. Grid cells were more sharply tuned to time and distance than non-grid cells. Many grid cells exhibited multiple firing fields during treadmill running, parallel to the periodic firing fields observed in open fields, suggesting a common mode of information processing. These observations indicate that, in the absence of external dynamic cues, grid cells integrate self-generated distance and time information to encode a representation of experience. PMID:26539893

  10. Using Citizen Science Observations to Model Species Distributions Over Space, Through Time, and Across Scales

    NASA Astrophysics Data System (ADS)

    Kelling, S.

    2017-12-01

    The goal of Biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on earth's natural systems, and for developing science-based environmental policies. To gain insight about species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers, with NASA Earth Imagery using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales—from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.

  11. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    PubMed

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb-confined (e.g. donor-based spin qubits) or electrostatically confined (e.g. quantum-dot-based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher-dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  12. Integrating hydrologic and geophysical data to constrain coastal surficial aquifer processes at multiple spatial and temporal scales

    USGS Publications Warehouse

    Schultz, Gregory M.; Ruppel, Carolyn; Fulton, Patrick; Hyndman, David W.; Day-Lewis, Frederick D.; Singha, Kamini

    2007-01-01

    Since 1997, repeated, coincident geophysical surveys and extensive hydrologic studies in shallow monitoring wells have been used to study static and dynamic processes associated with surface water-groundwater interaction at a range of spatial scales at the estuarine and ocean boundaries of an undeveloped, permeable barrier island in the Georgia part of the U.S. South Atlantic Bight. Because geophysical and hydrologic data measure different parameters, at different resolution and precision, and over vastly different spatial scales, reconciling the coincident data or even combining complementary inversion, hydrogeochemical analyses and well-based groundwater monitoring, and, in some cases, limited vegetation mapping to demonstrate the utility of an integrative, multidisciplinary approach for elucidating groundwater processes at spatial scales (tens to thousands of meters) that are often difficult to capture with traditional hydrologic approaches. The case studies highlight regional aquifer characteristics, varying degrees of lateral saltwater intrusion at estuarine boundaries, complex subsurface salinity gradients at the ocean boundary, and imaging of submarsh groundwater discharge and possible free convection in the pore waters of a clastic marsh. This study also documents the use of geophysical techniques for detecting temporal changes in groundwater salinity regimes under natural (not forced) gradients at intratidal to interannual (1998-200 Southeastern U.S.A. drought) time scales.

  13. Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation

    USGS Publications Warehouse

    Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.

    2000-01-01

    Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity and assessment effectiveness often can be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways for designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.

  14. Multiscale information modelling for heart morphogenesis

    NASA Astrophysics Data System (ADS)

    Abdulla, T.; Imms, R.; Schleich, J. M.; Summers, R.

    2010-07-01

    Science is made feasible by the adoption of common systems of units. As research has become more data intensive, especially in the biomedical domain, it requires the adoption of a common system of information models, to make explicit the relationship between one set of data and another, regardless of format. This is being realised through the OBO Foundry to develop a suite of reference ontologies, and NCBO Bioportal to provide services to integrate biomedical resources and functionality to visualise and create mappings between ontology terms. Biomedical experts tend to be focused at one level of spatial scale, be it biochemistry, cell biology, or anatomy. Likewise, the ontologies they use tend to be focused at a particular level of scale. There is increasing interest in a multiscale systems approach, which attempts to integrate between different levels of scale to gain understanding of emergent effects. This is a return to physiological medicine with a computational emphasis, exemplified by the worldwide Physiome initiative, and the European Union funded Network of Excellence in the Virtual Physiological Human. However, little work has been done on how information modelling itself may be tailored to a multiscale systems approach. We demonstrate how this can be done for the complex process of heart morphogenesis, which requires multiscale understanding in both time and spatial domains. Such an effort enables the integration of multiscale metrology.

  15. Advanced Grid-Friendly Controls Demonstration for Utility-Scale

    Science.gov Websites

    NREL, CAISO, and First Solar conducted demonstration tests of a utility-scale PV power plant in CAISO's footprint, working with vendors, integrators, and utilities to develop and evaluate photovoltaic (PV) power plants with advanced grid-friendly capabilities.

  16. A CMOS VLSI IC for Real-Time Opto-Electronic Two-Dimensional Histogram Generation

    DTIC Science & Technology

    1993-12-01

    Keywords: VLSI (very large scale integration) design; MAGIC; CMOS; optics; image processing.

  17. Fast Time Domain Integral Equation Solvers for Large-Scale Electromagnetic Analysis

    DTIC Science & Technology

    2004-10-01

    A paper on this topic coauthored by Mingyu Lu and Eric Michielssen received the Best Student Paper award at the 2001 IEEE Antennas and Propagation International…

  18. Integrating remote sensing, GIS and dynamic models for landscape-level simulation of forest insect disturbance

    USDA-ARS?s Scientific Manuscript database

    Cellular automata (CA) is a powerful tool for modeling the evolution of macroscopic-scale phenomena, as it couples time, space, and variables together while remaining in a simplified form. However, such application has remained challenging in landscape-level chronic forest insect epidemics due to the h...

  19. Towards a Critical Theory of Educational Technology

    ERIC Educational Resources Information Center

    Okan, Zuhal

    2007-01-01

    The purpose of this study is to offer a critical consideration of current initiatives, and common sense discourses, forcing educators to adopt and integrate educational technology on a large scale. This study argues that it is time, in the relative absence of a critical debate, to ask questions that should precede a wholesale adoption of…

  20. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    ERIC Educational Resources Information Center

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…

  1. Coalescence computations for large samples drawn from populations of time-varying sizes

    PubMed Central

    Polanski, Andrzej; Szczesna, Agnieszka; Garbulowski, Mateusz; Kimmel, Marek

    2017-01-01

    We present new results concerning probability distributions of times in the coalescence tree and expected allele frequencies for the coalescent with large sample sizes. The obtained results are based on computational methodologies, which involve combining coalescence time scale changes with techniques of integral transformations and using analytical formulae for infinite products. We show applications of the proposed methodologies for computing probability distributions of times in the coalescence tree and their limits, for evaluation of the accuracy of approximate expressions for times in the coalescence tree and expected allele frequencies, and for the analysis of a large human mitochondrial DNA dataset. PMID:28170404
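    As background to the coalescence-time computations above: under the constant-size Kingman coalescent, while k lineages remain the waiting time to the next coalescence is exponential with rate k(k-1)/2 (in coalescent units), so the expected time to the most recent common ancestor of n samples is the sum of 2/(k(k-1)) for k = 2..n, which telescopes to 2(1 - 1/n). The sketch below is standard coalescent theory, not the paper's methodology for time-varying population sizes.

```python
import random
from fractions import Fraction

def expected_tmrca(n):
    """E[T_MRCA] for a sample of size n, constant-size Kingman coalescent,
    in coalescent units (2N generations): sum_{k=2..n} 2/(k(k-1)) = 2(1 - 1/n)."""
    return sum(Fraction(2, k * (k - 1)) for k in range(2, n + 1))

def simulate_tmrca(n, seed=0):
    """Draw one TMRCA: while k lineages remain, the waiting time is
    exponentially distributed with rate k(k-1)/2."""
    rng = random.Random(seed)
    t = 0.0
    for k in range(n, 1, -1):
        t += rng.expovariate(k * (k - 1) / 2)
    return t

print(expected_tmrca(10))  # 9/5, i.e. 2*(1 - 1/10)
```

    For time-varying population sizes, the standard trick (the "coalescence time scale changes" the abstract refers to) is to rescale time so that the variable-size process becomes a constant-size coalescent on the rescaled axis.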

  2. Turbulent transport measurements with a laser Doppler velocimeter

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of the phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectrum is obtained in terms of the space-time correlation function of the fluid. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters, the time-averaged spectrum contains information only for times shorter than the Lagrangian integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much longer wavelength radiation, or a different mode of signal analysis, e.g., FM detection.

  3. Ultrafast Three-Dimensional Integrated Imaging of Strain in Core/Shell Semiconductor/Metal Nanostructures

    DOE PAGES

    Cherukara, Mathew J.; Sasikumar, Kiran; DiChiara, Anthony; ...

    2017-11-07

    Visualizing the dynamical response of material heterointerfaces is increasingly important for the design of hybrid materials and structures with tailored properties for use in functional devices. In situ characterization of nanoscale heterointerfaces such as metal-semiconductor interfaces, which exhibit a complex interplay between lattice strain, electric potential, and heat transport at subnanosecond time scales, is particularly challenging. In this work, we use a laser pump/X-ray probe form of Bragg coherent diffraction imaging (BCDI) to visualize in three dimensions the deformation of the core of a model core/shell semiconductor-metal (ZnO/Ni) nanorod following laser heating of the shell. We observe a rich interplay of radial, axial, and shear deformation modes acting at different time scales that are induced by the strain from the Ni shell. We construct experimentally informed models by directly importing the reconstructed crystal from the ultrafast experiment into a thermo-electromechanical continuum model. The model elucidates the origin of the deformation modes observed experimentally. Our integrated imaging approach represents an invaluable tool to probe strain dynamics across mixed interfaces under operando conditions.

  4. The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.

    2006-12-01

    Extended MHD models, which incorporate two-fluid effects, are promising candidates for enhancing understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code. Adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.

  5. Ultrafast Three-Dimensional Integrated Imaging of Strain in Core/Shell Semiconductor/Metal Nanostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherukara, Mathew J.; Sasikumar, Kiran; DiChiara, Anthony

    Visualizing the dynamical response of material heterointerfaces is increasingly important for the design of hybrid materials and structures with tailored properties for use in functional devices. In situ characterization of nanoscale heterointerfaces such as metal-semiconductor interfaces, which exhibit a complex interplay between lattice strain, electric potential, and heat transport at subnanosecond time scales, is particularly challenging. In this work, we use a laser pump/X-ray probe form of Bragg coherent diffraction imaging (BCDI) to visualize in three dimensions the deformation of the core of a model core/shell semiconductor-metal (ZnO/Ni) nanorod following laser heating of the shell. We observe a rich interplay of radial, axial, and shear deformation modes acting at different time scales that are induced by the strain from the Ni shell. We construct experimentally informed models by directly importing the reconstructed crystal from the ultrafast experiment into a thermo-electromechanical continuum model. The model elucidates the origin of the deformation modes observed experimentally. Our integrated imaging approach represents an invaluable tool to probe strain dynamics across mixed interfaces under operando conditions.

  6. Ultrafast Three-Dimensional Integrated Imaging of Strain in Core/Shell Semiconductor/Metal Nanostructures.

    PubMed

    Cherukara, Mathew J; Sasikumar, Kiran; DiChiara, Anthony; Leake, Steven J; Cha, Wonsuk; Dufresne, Eric M; Peterka, Tom; McNulty, Ian; Walko, Donald A; Wen, Haidan; Sankaranarayanan, Subramanian K R S; Harder, Ross J

    2017-12-13

    Visualizing the dynamical response of material heterointerfaces is increasingly important for the design of hybrid materials and structures with tailored properties for use in functional devices. In situ characterization of nanoscale heterointerfaces such as metal-semiconductor interfaces, which exhibit a complex interplay between lattice strain, electric potential, and heat transport at subnanosecond time scales, is particularly challenging. In this work, we use a laser pump/X-ray probe form of Bragg coherent diffraction imaging (BCDI) to visualize in three dimensions the deformation of the core of a model core/shell semiconductor-metal (ZnO/Ni) nanorod following laser heating of the shell. We observe a rich interplay of radial, axial, and shear deformation modes acting at different time scales that are induced by the strain from the Ni shell. We construct experimentally informed models by directly importing the reconstructed crystal from the ultrafast experiment into a thermo-electromechanical continuum model. The model elucidates the origin of the deformation modes observed experimentally. Our integrated imaging approach represents an invaluable tool to probe strain dynamics across mixed interfaces under operando conditions.

  7. Nanowire-nanopore transistor sensor for DNA detection during translocation

    NASA Astrophysics Data System (ADS)

    Xie, Ping; Xiong, Qihua; Fang, Ying; Qing, Quan; Lieber, Charles

    2011-03-01

    Nanopore sequencing, a promising low-cost, high-throughput sequencing technique, was proposed more than a decade ago. Due to the incompatibility between the small ionic current signal and the fast translocation speed, and the technical difficulties of large-scale integration of nanopores for direct ionic-current sequencing, alternative methods relying on integrated DNA sensors have been proposed, such as capacitive coupling or tunnelling current, but none of them had been experimentally demonstrated. Here we show, for the first time, an amplified sensor signal experimentally recorded from a nanowire-nanopore field-effect transistor sensor during DNA translocation. Independent multi-channel recording was also demonstrated for the first time. Our results suggest that the signal arises from a highly localized potential change caused by DNA translocation under non-balanced buffer conditions. Given that this method may produce larger signals for smaller nanopores, we hope our experiment can be a starting point for a new generation of nanopore sequencing devices with larger signal, higher bandwidth and large-scale multiplexing capability, finally realizing the ultimate goal of low-cost, high-throughput sequencing.

  8. Photonic content-addressable memory system that uses a parallel-readout optical disk

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.

    1995-11-01

    We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.

  9. Climate Dynamics and Experimental Prediction (CDEP) and Regional Integrated Science Assessments (RISA) Programs at NOAA Office of Global Programs

    NASA Astrophysics Data System (ADS)

    Bamzai, A.

    2003-04-01

    This talk will highlight science and application activities of the CDEP and RISA programs at NOAA OGP. CDEP, through a set of Applied Research Centers (ARCs), supports NOAA's program of quantitative assessments and predictions of global climate variability and its regional implications on time scales of seasons to centuries. The RISA program consolidates results from ongoing disciplinary process research under an integrative framework. Examples of joint CDEP-RISA activities will be presented. Future directions and programmatic challenges will also be discussed.

  10. Extending the Shared Socioeconomic Pathways for sub-national impacts, adaptation, and vulnerability studies

    DOE PAGES

    Absar, Syeda Mariya; Preston, Benjamin L.

    2015-05-25

    The exploration of alternative socioeconomic futures is an important aspect of understanding the potential consequences of climate change. While socioeconomic scenarios are common and, at times, essential tools for the impact, adaptation and vulnerability and integrated assessment modeling research communities, their approaches to scenario development have historically been quite distinct. However, increasing convergence of impact, adaptation and vulnerability and integrated assessment modeling research in terms of scales of analysis suggests there may be value in the development of a common framework for socioeconomic scenarios. The Shared Socioeconomic Pathways (SSPs) represent an opportunity for the development of such a common framework. However, the scales at which these global storylines have been developed are largely incommensurate with the sub-national scales at which impact, adaptation and vulnerability, and increasingly integrated assessment modeling, studies are conducted. Our objective for this study was to develop sub-national and sectoral extensions of the global SSP storylines in order to identify future socioeconomic challenges for adaptation for the U.S. Southeast. A set of nested qualitative socioeconomic storyline elements, integrated storylines, and accompanying quantitative indicators were developed through an application of the Factor-Actor-Sector framework. Finally, in addition to revealing challenges and opportunities associated with the use of the SSPs as a basis for more refined scenario development, this study generated sub-national storyline elements and storylines that can subsequently be used to explore the implications of alternative sub-national socioeconomic futures for the assessment of climate change impacts and adaptation.

  11. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. 
This approach basically consisted in (1) decomposing both signals (SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling with respect to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
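    The three-step procedure described above (decompose by scale, fit one model per scale, sum the scale-wise models) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it uses a simple additive smoothing-based decomposition in place of their discrete wavelet transform, and ordinary least squares in place of the full ESD model; all function names are our own.

```python
import numpy as np

def mra(x, levels, width=2):
    """Additive multiresolution decomposition by successive smoothing:
    x = d_1 + ... + d_levels + s_levels, where detail d_j captures
    variability around scale width**(j+1) samples. The components sum
    back to x exactly (a telescoping sum)."""
    comps, s = [], np.asarray(x, dtype=float)
    for j in range(levels):
        k = width ** (j + 1)
        smooth = np.convolve(s, np.ones(k) / k, mode="same")
        comps.append(s - smooth)   # detail at this scale
        s = smooth
    comps.append(s)                # final coarse approximation
    return comps

def multiresolution_esd(predictor, predictand, levels=3):
    """Fit one linear model per time scale, then sum the scale-wise fits
    to reconstruct the predictand."""
    px, py = mra(predictor, levels), mra(predictand, levels)
    recon = np.zeros(len(predictand), dtype=float)
    for cx, cy in zip(px, py):
        A = np.vstack([cx, np.ones_like(cx)]).T   # slope + intercept
        coef, *_ = np.linalg.lstsq(A, cy, rcond=None)
        recon += A @ coef
    return recon

# Demo on a synthetic two-scale signal: a stand-in "large-scale predictor"
# (SLP-like) and a proportional "local predictand" (streamflow-like).
t = np.linspace(0, 20, 256)
slp = np.sin(t) + 0.3 * np.cos(5 * t)
flow = 2.0 * slp
recon = multiresolution_esd(slp, flow, levels=3)
```

    The key property exploited here is that a scale-dependent relationship (e.g., a different regression slope at interannual versus interdecadal scales) can be captured even when a single all-scales regression cannot.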

  12. Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale

    NASA Astrophysics Data System (ADS)

    Sobolev, S. V.; Muldashev, I. A.

    2015-12-01

    Subduction is an essentially multi-scale process, with time scales spanning from the geological scale to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 sec during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during the postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique allows one to follow in detail the deformation process during the entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, the postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by the viscoelastic relaxation of the mantle wedge, with viscosity strongly varying with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake for the day-to-year time range. 
We will also present results of the modelling of deformation of the upper plate during multiple earthquake cycles at time scales of hundreds of thousands to millions of years, and discuss the effect of great earthquakes in changing the long-term stress field in the upper plate.
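    The adaptive time-stepping described above (collapse to the minimum step at instability, gradual recovery toward the multi-year maximum) can be sketched as a simple controller. Only the 40 s and 5 yr bounds come from the abstract; the growth factor, the slip-velocity threshold, and the function name are illustrative assumptions.

```python
DT_MIN = 40.0                    # seconds: coseismic resolution (from abstract)
DT_MAX = 5 * 365.25 * 86400.0    # ~5 years: interseismic step (from abstract)
GROWTH = 2.0                     # geometric step recovery (assumed)
V_SEISMIC = 1e-3                 # m/s slip rate flagging instability (assumed)

def next_dt(dt, slip_velocity):
    """Adaptive stepping in the spirit of the paper: collapse the step to
    its minimum when an instability (earthquake) is detected, then let it
    recover geometrically toward the long-term maximum."""
    if abs(slip_velocity) > V_SEISMIC:
        return DT_MIN
    return min(dt * GROWTH, DT_MAX)

# Synthetic slip-rate sequence: quiet, quiet, earthquake, postseismic decay.
dt, history = DT_MAX, []
for v in [1e-10, 1e-10, 0.5, 1e-4, 1e-6, 1e-10]:
    dt = next_dt(dt, v)
    history.append(dt)
print(history)  # step collapses to 40 s at the event, then doubles each step
```

    A controller of this shape lets one simulation resolve both a one-minute rupture and million-year tectonic loading without switching codes.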

  13. A spectral radius scaling semi-implicit iterative time stepping method for reactive flow simulations with detailed chemistry

    NASA Astrophysics Data System (ADS)

    Xie, Qing; Xiao, Zhixiang; Ren, Zhuyin

    2018-09-01

    A spectral radius scaling semi-implicit time stepping scheme has been developed for simulating unsteady compressible reactive flows with detailed chemistry, in which the spectral radius in the LU-SGS scheme is augmented to account for viscous/diffusive and reactive terms and a scalar matrix is proposed to approximate the chemical Jacobian using the minimum species destruction time scale. The performance of the semi-implicit scheme, together with a third-order explicit Runge-Kutta scheme and a Strang splitting scheme, has been investigated in auto-ignition and in laminar premixed and non-premixed flames of three representative fuels: hydrogen, methane, and n-heptane. Results show that the minimum species destruction time scale can well represent the smallest chemical time scale in reactive flows and that the proposed scheme can significantly increase the allowable time steps in simulations. The scheme is stable when the time step is as large as 10 μs, which is about three to five orders of magnitude larger than the smallest time scales in the various tests considered. For the test flames considered, the semi-implicit scheme achieves second-order accuracy in time. Moreover, the errors in quantities of interest are smaller than those from the Strang splitting scheme, indicating the accuracy gained when the reaction and transport terms are solved in a coupled manner. Results also show that the relative efficiency of different schemes depends on the fuel mechanism and test flame. When the minimum time scale in a reactive flow is governed by transport processes instead of chemical reactions, the proposed semi-implicit scheme is more efficient than the splitting scheme. Otherwise, the relative efficiency depends on the cost of sub-iterations for convergence within each time step and of the integration of the chemistry substep. The capability of the compressible reacting flow solver and the proposed semi-implicit scheme is then demonstrated by capturing hydrogen detonation waves. 
Finally, the performance of the proposed method is demonstrated in a two-dimensional hydrogen/air diffusion flame.
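
    The stability benefit of treating a stiff chemical source term implicitly can be seen in a toy problem. The sketch below is illustrative only (a single linear species with an assumed destruction timescale tau, not the paper's LU-SGS solver): the explicit update is unstable for dt much larger than tau, while a point-implicit update that augments the diagonal with the destruction rate 1/tau stays stable.

```python
# Toy illustration (not the paper's solver): a stiff linear source term
# dy/dt = -y / tau with tau = 1e-6 s, integrated with dt = 1e-5 s (10x tau).
# Explicit Euler blows up; the point-implicit update, whose diagonal is
# augmented by the destruction rate 1/tau (analogous to augmenting the
# spectral radius with the minimum species destruction timescale), is stable.

tau = 1e-6   # fastest (destruction) timescale, s -- assumed value
dt = 1e-5    # time step, 10x larger than tau
steps = 100

y_exp = 1.0
y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + dt * (-y_exp / tau)   # explicit Euler: amplification |1 - dt/tau| = 9
    y_imp = y_imp / (1.0 + dt / tau)      # point-implicit (backward Euler): factor 1/11

print(abs(y_exp) > 1e6)     # True: explicit solution has diverged
print(0.0 <= y_imp < 1e-3)  # True: implicit solution decayed stably
```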

  14. Robotically Assembled Aerospace Structures: Digital Material Assembly using a Gantry-Type Assembler

    NASA Technical Reports Server (NTRS)

    Trinh, Greenfield; Copplestone, Grace; O'Connor, Molly; Hu, Steven; Nowak, Sebastian; Cheung, Kenneth; Jenett, Benjamin; Cellucci, Daniel

    2017-01-01

    This paper evaluates the development of automated assembly techniques for discrete lattice structures using a multi-axis gantry-type CNC machine. These lattices are made of discrete components called digital materials. We present the development of a specialized end effector that works in conjunction with the CNC machine to assemble these lattices. With this configuration we are able to place voxels at a rate of 1.5 per minute. The scalability of digital material structures, owing to their incremental modular assembly, is one of their key traits and an important metric of interest. We investigate the build times of a 5×5 beam structure at scales of 1 meter (325 parts), 10 meters (3,250 parts), and 30 meters (9,750 parts). With the current configuration, a single end effector performing serial assembly from a globally fixed feed station at the edge of the build volume, the build time increases according to a scaling law of n⁴, where n is the build scale. Build times can be reduced significantly by integrating feed systems into the gantry itself, resulting in a scaling law of n³. A completely serial assembly process will encounter time limitations as the build scale increases. Automated assembly for digital materials can produce high-performance structures from discrete parts, and techniques such as built-in feed systems, parallelization, and optimization of the fastening process will yield much higher throughput.
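
    The reported scaling laws imply a large payoff from on-board feed at scale. A minimal sketch of the arithmetic (the power-law prefactor is an assumption for illustration; only the exponents n⁴ and n³ come from the abstract):

```python
# Illustrative arithmetic only: with a fixed feed station the reported build
# time grows as n^4; with gantry-integrated feed it grows as n^3, where n is
# the build scale in metres. The prefactor c is an assumed constant.

def build_time(n, exponent, c=1.0):
    """Relative build time at build scale n under a power-law model."""
    return c * n ** exponent

# Going from a 1 m beam to a 30 m beam:
slowdown_fixed_feed = build_time(30, 4) / build_time(1, 4)  # 30**4 = 810000
slowdown_onboard = build_time(30, 3) / build_time(1, 3)     # 30**3 = 27000
print(slowdown_fixed_feed / slowdown_onboard)  # 30.0: on-board feed is 30x better at n=30
```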

  16. Introduction of a conceptual model for integrating the MMPI-2-RF into HCR-20V3 violence risk assessments and associations between the MMPI-2-RF and institutional violence.

    PubMed

    Tarescavage, Anthony M; Glassmire, David M; Burchett, Danielle

    2016-12-01

    Reflecting the need to prevent violence, structured professional judgment assessment tools have been developed specifically to assess the likelihood of future violence. These tools typically integrate data from clinical interviews and collateral records to assist in the conceptualization of violence risk, but objective psychological testing may also be useful in completing the instruments. The authors describe the advantages of using the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in this manner with the Historical-Clinical-Risk Management-20 Version 3 (HCR-20V3). Accordingly, they have two purposes. First, they sought to identify conceptual links between the constructs measured by the tools and introduce a model to integrate MMPI-2-RF findings into an HCR-20V3 risk assessment. Second, although the authors did not have collateral HCR-20V3 ratings, they sought to examine associations between the MMPI-2-RF scales and future violence in a sample of 303 psychiatric patients (233 males, 70 females) adjudicated not guilty by reason of insanity. The authors found that the MMPI-2-RF scales demonstrated significant, meaningful associations with a count of future violent acts at the hospital. The largest associations involved scales measuring emotional dysregulation and externalizing dysfunction. These associations were qualified by relative risk ratio analyses indicating that patients producing elevations on these scales were at 1.5 to 2.5 times greater risk of future violence than those without elevations. Overall, the findings indicated that most MMPI-2-RF scales conceptually linked to the HCR-20V3 risk factors were associated with future violence. In light of these findings, the authors discuss recommendations for integrating the MMPI-2-RF when interpreting HCR-20V3 risk factors. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. A Multi-Scale Integrated Approach to Representing Watershed Systems: Significance and Challenges

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ivanov, V. Y.; Katopodes, N.

    2013-12-01

    A range of processes that supply services and goods to human society originate at the watershed level. Predicting watershed response to forcing conditions is of high interest for many practical societal problems, yet it remains challenging because of two significant properties of watershed systems: connectivity and non-linearity. Connectivity implies that disturbances arising at any larger scale necessarily propagate to and affect local-scale processes; their local effects in turn influence other processes, often through nonlinear relationships. Physically-based, process-scale modeling is needed to understand and properly assess the non-linear interactions between watershed processes. We have developed an integrated model simulating hydrological processes, flow dynamics, erosion, and sediment transport: tRIBS-OFM-HRM (Triangulated Irregular Network-based Real-time Integrated Basin Simulator - Overland Flow Model - Hairsine and Rose Model). This coupled model offers the advantage of exploring the hydrological effects of watershed physical factors such as topography, vegetation, and soil, as well as their feedback mechanisms. Several examples investigating the effects of vegetation on flow movement, the role of the soil's substrate in sediment dynamics, and the driving role of topography in morphological processes are illustrated. We show how this comprehensive modeling tool can help understand the interconnections and nonlinearities of the physical system, e.g., how vegetation affects hydraulic resistance depending on slope, vegetation cover fraction, discharge, and bed roughness condition; how the soil's substrate condition impacts erosion processes, with a non-unique characteristic at the scale of a zero-order catchment; and how topographic changes affect spatial variations of morphologic variables. Owing to the feedback and compensatory nature of the mechanisms operating in different watershed compartments, we conclude that the key to representing watershed systems lies in an integrated, interdisciplinary approach, whereby a physically-based model is used for assessments and evaluations associated with future changes in land use, climate, and ecosystems.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and a depletion depth of 10 μm. An ultrafast front-end circuit allows the X-ray detector to achieve a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.

  19. Application of Time Series Insar Technique for Deformation Monitoring of Large-Scale Landslides in Mountainous Areas of Western China

    NASA Astrophysics Data System (ADS)

    Qu, T.; Lu, P.; Liu, C.; Wan, H.

    2016-06-01

    Western China is highly susceptible to landslide hazards; as a result, landslide detection and early warning are of great importance. This work employs the SBAS (Small Baseline Subset) InSAR technique for detection and monitoring of large-scale landslides in Li County, Sichuan Province, Western China. The time series InSAR analysis is performed using descending scenes acquired in TerraSAR-X StripMap mode since 2014 to obtain the spatial distribution of surface displacements of this giant landslide. The time series results identify a distinct deformation zone on the landslide body, with a rate of up to 150 mm/yr. The deformation acquired by the SBAS technique is validated against inclinometers in several boreholes from in-situ monitoring. The integration of InSAR time series displacements and ground-based monitoring data provides reliable data support for the forecasting and monitoring of large-scale landslides.

  20. Ultrafast dynamics and stabilization in chip-scale optical frequency combs (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Huang, Shu Wei

    2017-02-01

    Optical frequency comb technology has been the cornerstone for scientific breakthroughs such as precision frequency metrology, the re-definition of time, extreme light-matter interaction, and attosecond science. Recently emerged Kerr-active microresonators are promising alternatives to the current benchmark femtosecond laser platform. These chip-scale frequency combs, or Kerr combs, are unique in their compact footprints and offer the potential for monolithic electronic and feedback integration, thereby expanding the already remarkable applications of optical frequency combs. In this talk, I will first report the generation and characterization of low-phase-noise Kerr frequency combs. Measurements of the Kerr comb ultrafast dynamics and phase noise will be presented and discussed. Then I will describe novel strategies to fully stabilize Kerr comb line frequencies towards chip-scale optical frequency synthesizers with a relative uncertainty better than 2.7×10⁻¹⁶. I will show that the unique generation physics of the Kerr frequency comb can provide intrinsic self-referenced access to the Kerr comb line frequencies. The strategy improves the optical frequency stability by more than two orders of magnitude, while preserving the Kerr comb's key advantages of low size, weight, and power (SWaP) and the potential for chip-scale electronic and photonic integration.

  1. Sequencing Data Discovery and Integration for Earth System Science with MetaSeek

    NASA Astrophysics Data System (ADS)

    Hoarfrost, A.; Brown, N.; Arnosti, C.

    2017-12-01

    Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that are not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping-off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible; where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path toward illuminating ecosystem-scale interactions between biological and abiotic processes.

  2. Higher-order phase transitions on financial markets

    NASA Astrophysics Data System (ADS)

    Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.

    2010-08-01

    Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market, where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (additionally using the saddle-point approximation) the scaling, or power-dependent, form of the partition function, Z(q'). It diverges for any negative scaling power q' (which justifies the name "anomalous"), while for positive powers it scales with the general exponent τ(q'). This exponent is a nonanalytic (singular), noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central quantity; it takes the form of a convolution (or superstatistics, used, e.g., for describing turbulence as well as financial markets). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for describing the transient photocurrent measured in amorphous glassy material) and the Gaussian one sometimes used in this context (e.g., for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between a phase where high-frequency trading is most visible and a phase defined by low-frequency trading. The specific order of the phase transition depends directly on the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors is formulated.
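
    Schematically, the central quantities named above can be written as follows (a hedged reconstruction in standard CTRW/superstatistics notation, with normalization constants omitted; ψ is the pausing-time distribution, K its stretched-exponential kernel, and τ₀ an assumed characteristic time):

```latex
% Pausing-time distribution as a superstatistical convolution of exponentials
\psi(t) \;=\; \int_0^{\infty} \frac{\mathrm{d}\tau}{\tau}\, K(\tau)\, e^{-t/\tau},
\qquad
K(\tau) \;\propto\; \exp\!\left[-\left(\tau/\tau_0\right)^{\alpha}\right].

% Partition function over interevent times: anomalous scaling behavior
Z(q') \;\sim\; (\Delta t)^{\tau(q')} \quad (q' > 0),
\qquad
Z(q') \,\to\, \infty \quad (q' < 0).
```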

  3. Age-related alterations in the fractal scaling of cardiac interbeat interval dynamics

    NASA Technical Reports Server (NTRS)

    Iyengar, N.; Peng, C. K.; Morin, R.; Goldberger, A. L.; Lipsitz, L. A.

    1996-01-01

    We postulated that aging is associated with disruption of the fractal-like long-range correlations that characterize healthy sinus rhythm cardiac interval dynamics. Ten young (21-34 yr) and 10 elderly (68-81 yr) rigorously screened healthy subjects underwent 120 min of continuous supine resting electrocardiographic recording. We analyzed the interbeat interval time series using standard time- and frequency-domain statistics and using a fractal measure, detrended fluctuation analysis, to quantify long-range correlation properties. In healthy young subjects, interbeat intervals demonstrated fractal scaling, with scaling exponents (α) from the fluctuation analysis close to a value of 1.0. In the group of healthy elderly subjects, the interbeat interval time series had two scaling regions. Over the short range, interbeat interval fluctuations resembled a random walk process (Brownian noise, α = 1.5), whereas over the longer range they resembled white noise (α = 0.5). Short-range (αs) and long-range (αl) scaling exponents were significantly different in the elderly subjects compared with the young (αs = 1.12 ± 0.19 vs. 0.90 ± 0.14, respectively, P = 0.009; αl = 0.75 ± 0.17 vs. 0.99 ± 0.10, respectively, P = 0.002). The crossover behavior from one scaling region to another could be modeled as a first-order autoregressive process, which closely fit the data from four elderly subjects. This implies that a single characteristic time scale may dominate heartbeat control in these subjects. The age-related loss of fractal organization in heartbeat dynamics may reflect the degradation of integrated physiological regulatory systems and may impair an individual's ability to adapt to stress.
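
    Detrended fluctuation analysis, the fractal measure used above, can be sketched in a few lines. The code below is a generic first-order DFA (an illustrative implementation, not the authors' analysis code): integrate the series, detrend it in windows of size s, and read the scaling exponent α off the slope of log F(s) versus log s.

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis: returns the scaling
    exponent alpha, the slope of log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))  # integrated series (the "profile")
    fluctuations = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)  # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

# Uncorrelated (white) noise should give alpha near 0.5; a random walk
# built from it would give alpha near 1.5, matching the regimes above.
rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
print(dfa_exponent(white))  # close to 0.5 for uncorrelated noise
```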

  4. Intercalibration of radioisotopic and astrochronologic time scales for the Cenomanian-Turonian boundary interval, western interior Basin, USA

    USGS Publications Warehouse

    Meyers, S.R.; Siewert, S.E.; Singer, B.S.; Sageman, B.B.; Condon, D.J.; Obradovich, J.D.; Jicha, B.R.; Sawyer, D.A.

    2012-01-01

    We develop an intercalibrated astrochronologic and radioisotopic time scale for the Cenomanian-Turonian boundary (CTB) interval near the Global Stratotype Section and Point in Colorado, USA, where orbitally influenced rhythmic strata host bentonites that contain sanidine and zircon suitable for ⁴⁰Ar/³⁹Ar and U-Pb dating. Paired ⁴⁰Ar/³⁹Ar and U-Pb ages are determined from four bentonites that span the Vascoceras diartianum to Pseudaspidoceras flexuosum ammonite biozones, utilizing both newly collected material and legacy sanidine samples of J. Obradovich. Comparison of the ⁴⁰Ar/³⁹Ar and U-Pb results underscores the strengths and limitations of each system, and supports an astronomically calibrated Fish Canyon sanidine standard age of 28.201 Ma. The radioisotopic data and published astrochronology are employed to develop a new CTB time scale, using two statistical approaches: (1) a simple integration that yields a CTB age of 93.89 ± 0.14 Ma (2σ; total radioisotopic uncertainty), and (2) a Bayesian intercalibration that explicitly accounts for orbital time scale uncertainty and yields a CTB age of 93.90 ± 0.15 Ma (95% credible interval; total radioisotopic and orbital time scale uncertainty). Both approaches firmly anchor the floating orbital time scale, and the Bayesian technique yields astronomically recalibrated radioisotopic ages for individual bentonites, with analytical uncertainties at the permil level of resolution and total uncertainties below 2‰. Using our new results, the duration between the Cenomanian-Turonian and the Cretaceous-Paleogene boundaries is 27.94 ± 0.16 Ma, with an uncertainty of less than one-half of a long eccentricity cycle. © 2012 Geological Society of America.

  5. Dimensionality and integrals of motion of the Trappist-1 planetary system

    NASA Astrophysics Data System (ADS)

    Floß, Johannes; Rein, Hanno; Brumer, Paul

    2018-04-01

    The number of isolating integrals of motion of the Trappist-1 system - a late M-dwarf orbited by seven Earth-sized planets - was determined numerically, using an adapted version of the correlation dimension method. It was found that over the investigated time-scales of up to 20 000 years the number of isolating integrals of motion is the same as one would find for a system of seven non-interacting planets - despite the fact that the planets in the Trappist-1 system are strongly interacting. Considering perturbed versions of the Trappist-1 system shows that the system may occupy an atypical part of phase-space with high stability. These findings are consistent with earlier studies.
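
    The correlation dimension method referred to above estimates dimensionality from the scaling of the pair-correlation sum C(r) with radius r. Below is a minimal Grassberger-Procaccia-style sketch (illustrative only; the paper uses an adapted version of the method), checked on a curve of known dimension.

```python
import numpy as np

def correlation_dimension(points, r1, r2):
    """Grassberger-Procaccia-style estimate: the slope of log C(r) versus
    log r between radii r1 and r2, where C(r) is the fraction of point
    pairs closer than r. A generic sketch, not the paper's adapted method."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]  # unique pair distances

    def C(r):
        return np.mean(dists < r)

    return (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))

# Points evenly spaced on a unit circle form a one-dimensional set,
# so the correlation dimension should come out close to 1.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
dim = correlation_dimension(circle, 0.05, 0.2)
print(dim)  # close to 1.0
```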

  7. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.

  8. GCR Modulation by Small-Scale Features in the Interplanetary Medium

    NASA Astrophysics Data System (ADS)

    Jordan, A. P.; Spence, H. E.; Blake, J. B.; Mulligan, T. L.; Shaul, D. N.; Galametz, M.

    2007-12-01

    In an effort to uncover the properties of structures in the interplanetary medium (IPM) that modulate galactic cosmic rays (GCR) on short time-scales (from hours to days), we study periods of differing conditions in the IPM. We analyze GCR variations from spacecraft both inside and outside the magnetosphere, using the High Sensitivity Telescope (HIST) on Polar and the Spectrometer for INTEGRAL (SPI). We seek causal correlations between the observed GCR modulations and structures in the solar wind plasma and interplanetary magnetic field, as measured concurrently with ACE and/or Wind. Our analysis spans time-/size-scale variations ranging from classic Forbush decreases (Fds), to substructure embedded within Fds, to much smaller amplitude and shorter duration variations observed during comparatively benign interplanetary conditions. We compare and contrast the conditions leading to the range of different GCR responses to modulating structures in the IPM.

  9. Materials integrity in microsystems: a framework for a petascale predictive-science-based multiscale modeling and simulation system

    NASA Astrophysics Data System (ADS)

    To, Albert C.; Liu, Wing Kam; Olson, Gregory B.; Belytschko, Ted; Chen, Wei; Shephard, Mark S.; Chung, Yip-Wah; Ghanem, Roger; Voorhees, Peter W.; Seidman, David N.; Wolverton, Chris; Chen, J. S.; Moran, Brian; Freeman, Arthur J.; Tian, Rong; Luo, Xiaojuan; Lautenschlager, Eric; Challoner, A. Dorian

    2008-09-01

    Microsystems have become an integral part of our lives and can be found in homeland security, medical science, aerospace applications, and beyond. Many critical microsystem applications are in harsh environments, in which long-term reliability must be guaranteed and repair is not feasible. For example, gyroscope microsystems on satellites need to function for over 20 years under severe radiation, thermal cycling, and shock loading. Hence, predictive-science-based, verified and validated computational models and algorithms to predict the performance and materials integrity of microsystems in these situations are needed. Confidence in these predictions is improved by quantifying uncertainties and approximation errors. With no full-system testing and only limited sub-system testing, petascale computing is certainly necessary to span both time and space scales and to reduce the uncertainty in the prediction of long-term reliability. This paper presents the necessary steps to develop such a predictive-science-based multiscale modeling and simulation system. The development of this system will be focused on the prediction of the long-term performance of a gyroscope microsystem. The environmental effects to be considered include radiation, thermo-mechanical cycling, and shock. Since there will be many material performance issues, attention is restricted to creep resulting from thermal aging and radiation-enhanced mass diffusion, material instability due to radiation and thermo-mechanical cycling, and damage and fracture due to shock. To meet these challenges, we aim to develop an integrated multiscale software analysis system that spans length scales from the atomistic scale to the scale of the device. The proposed software system will include molecular mechanics, phase field evolution, micromechanics, and continuum mechanics software, along with state-of-the-art model identification strategies in which atomistic properties are calibrated by quantum calculations. We aim to predict the long-term (in excess of 20 years) integrity of the resonator, electrode base, multilayer metallic bonding pads, and vacuum seals in a prescribed mission. Although multiscale simulations are efficient in the sense that they focus the most computationally intensive models and methods on only the portions of the space-time domain that need them, the execution of the multiscale simulations associated with evaluating materials and device integrity for aerospace microsystems will require the application of petascale computing. A component-based software strategy will be used in the development of our massively parallel multiscale simulation system. This approach will allow us to take full advantage of existing single-scale modeling components. An extensive, pervasive thrust in the software system development is verification, validation, and uncertainty quantification (UQ). Each component and the integrated software system need to be carefully verified. A UQ methodology that determines the quality of predictive information available from experimental measurements, and packages that information in a form suitable for UQ at various scales, needs to be developed. Experiments to validate the model at the nanoscale, microscale, and macroscale are proposed. The development of a petascale predictive-science-based multiscale modeling and simulation system will advance the field of predictive multiscale science so that it can be used to reliably analyze problems of unprecedented complexity, where limited testing resources can be adequately replaced by petascale computational power and advanced verification, validation, and UQ methodologies.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dahlburg, Jill; Corones, James; Batchelor, Donald

    Fusion is potentially an inexhaustible energy source whose exploitation requires a basic understanding of high-temperature plasmas. The development of a science-based predictive capability for fusion-relevant plasmas is a challenge central to fusion energy science, in which numerical modeling has played a vital role for more than four decades. A combination of the very wide range of temporal and spatial scales, extreme anisotropy, the importance of geometric detail, and the requirement of causality (which makes it impossible to parallelize over time) makes this problem one of the most challenging in computational physics. Sophisticated computational models are under development for many individual features of magnetically confined plasmas, and increases in the scope and reliability of feasible simulations have been enabled by increased scientific understanding and improvements in computer technology. However, full predictive modeling of fusion plasmas will require qualitative improvements and innovations to enable cross-coupling of a wider variety of physical processes and to allow solution over a larger range of space and time scales. The exponential growth of computer speed, coupled with the high cost of large-scale experimental facilities, makes an integrated fusion simulation initiative a timely and cost-effective opportunity. Worldwide progress in laboratory fusion experiments provides the basis for a recent FESAC recommendation to proceed with a burning plasma experiment (see FESAC Review of Burning Plasma Physics Report, September 2001). Such an experiment, at the frontier of the physics of complex systems, would be a huge step in establishing the potential of magnetic fusion energy to contribute to the world's energy security. An integrated simulation capability would dramatically enhance the utilization of such a facility and lead to optimization of toroidal fusion plasmas in general. This science-based predictive capability, which was cited in the FESAC integrated planning document (IPPA, 2000), represents a significant opportunity for the DOE Office of Science to further the understanding of fusion plasmas to a level unparalleled worldwide.

  11. Understanding the source of multifractality in financial markets

    NASA Astrophysics Data System (ADS)

    Barunik, Jozef; Aste, Tomaso; Di Matteo, T.; Liu, Ruipeng

    2012-09-01

    In this paper, we use the generalized Hurst exponent approach to study the multi-scaling behavior of different financial time series. We show that this approach is robust and powerful in detecting different types of multi-scaling. We observe a puzzling phenomenon: an apparent increase in multifractality is measured in time series generated from shuffled returns, in which all time correlations are destroyed while the return distributions are preserved. This effect is robust and is reproduced in several real financial datasets, including stock market indices, exchange rates, and interest rates. To understand the origin of this effect we investigate different simulated time series by means of the Markov-switching multifractal model, autoregressive fractionally integrated moving average processes with stable innovations, fractional Brownian motion, and Lévy flights. Overall, we conclude that the multifractality observed in financial time series is mainly a consequence of the characteristic fat-tailed distribution of the returns, and that time correlations have the effect of decreasing the measured multifractality.
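
    The generalized Hurst exponent approach estimates H(q) from the scaling of the q-th order structure function ⟨|x(t+τ) − x(t)|^q⟩ ∼ τ^(qH(q)). Below is a minimal sketch of the generic estimator (an illustration, not the authors' implementation), checked on ordinary Brownian motion.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Generalized Hurst exponent H(q): fit the log-log slope of the q-th
    order structure function <|x(t+tau) - x(t)|^q> ~ tau^(q H(q)) and
    divide by q. A generic sketch of the estimator."""
    moments = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    slope, _ = np.polyfit(np.log(list(taus)), np.log(moments), 1)
    return slope / q

# Ordinary Brownian motion (cumulative sum of white noise) is monofractal
# with H(q) = 0.5 for all q, so H(2) should come out near 0.5.
rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(20000))
print(generalized_hurst(bm, q=2))  # close to 0.5
```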

  12. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2006-11-01

    Acknowledgements; List of symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  13. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2010-06-01

    Acknowledgements; List of symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  14. Marine Research Infrastructure collaboration in the COOPLUS project framework - Promoting synergies for marine ecosystems studies

    NASA Astrophysics Data System (ADS)

    Beranzoli, L.; Best, M.; Embriaco, D.; Favali, P.; Juniper, K.; Lo Bue, N.; Lara-Lopez, A.; Materia, P.; Ó Conchubhair, D.; O'Rourke, E.; Proctor, R.; Weller, R. A.

    2017-12-01

    Understanding the effects on marine ecosystems of multiple drivers at various scales, from regional (such as climate and ocean circulation) to local (such as seafloor gas emissions and harmful underwater noise), requires long time-series of integrated and standardised datasets. Large-scale research infrastructures for ocean observation are able to provide such time-series for a variety of ocean process physical parameters (mass and energy exchanges among the surface, water column and benthic boundary layer) that constitute important and necessary measures of environmental conditions and their development over time. Information deduced from these data is essential for the study, modelling and prediction of marine ecosystem changes and can reveal and potentially confirm deterioration and threats. The COOPLUS European Commission project brings together research infrastructures (RIs) with the aim of coordinating multilateral cooperation among RIs and identifying common priorities, actions, instruments and resources. COOPLUS will produce a Strategic Research and Innovation Agenda (SRIA), a shared roadmap for mid- to long-term collaboration. In particular, the marine RIs collaborating in COOPLUS, namely the European Multidisciplinary Seafloor and water column Observatory (EMSO, Europe), the Ocean Observatories Initiative (OOI, USA), Ocean Networks Canada (ONC), and the Integrated Marine Observing System (IMOS, Australia), represent a source of important data for researchers of marine ecosystems. The RIs can, in turn, receive suggestions from their user communities for implementing new measurements, stimulating cross-cutting collaborations, and improving data integration and standardisation. This poster provides a description of EMSO, OOI, ONC and IMOS for the benefit of marine ecosystem studies and presents examples of where the analyses of time-series have revealed noteworthy environmental conditions, temporal trends and events.

  15. NREL’s Controllable Grid Interface Saves Time and Resources, Improves Reliability of Renewable Energy Technologies; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The National Renewable Energy Laboratory's (NREL) controllable grid interface (CGI) test system at the National Wind Technology Center (NWTC) is one of two user facilities at NREL capable of testing and analyzing the integration of megawatt-scale renewable energy systems. The CGI specializes in testing of multimegawatt-scale wind and photovoltaic (PV) technologies as well as energy storage devices, transformers, control and protection equipment at medium-voltage levels, allowing the determination of the grid impacts of the tested technology.

  16. Realizing the electric-vehicle revolution

    NASA Astrophysics Data System (ADS)

    Tran, Martino; Banister, David; Bishop, Justin D. K.; McCulloch, Malcolm D.

    2012-05-01

    Full battery electric vehicles (BEVs) have become an important policy option to mitigate climate change, but there are major uncertainties in the scale and timing of market diffusion. Although there has been substantial work showing the potential energy and climate benefits of BEVs, demand-side factors, such as consumer behaviour, are less recognized in the debate. We show the importance of assessing BEV diffusion from an integrated perspective, focusing on key interactions between technology and behaviour across different scales, including power-system demand, charging infrastructure, vehicle performance, driving patterns and individual adoption behaviour.

  17. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    PubMed

    Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The favorable characteristics of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.

  18. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are the only viable current transducers at present. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor that influences the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and the effect is robust across different output voltage levels. The total integration error is limited within ±0.09 mV s-1 (0.005% s-1 of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.
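
    The capacitive leakage error addressed above can be illustrated with a simple leaky-integrator model, dV/dt = v_in/RC − V/τ_leak: over a long pulse the output droops below the ideal integral. The time constants below are hypothetical, chosen only to make the droop visible, and do not describe the authors' circuit:

```python
# Forward-Euler simulation of an ideal vs. leaky analogue integrator.
RC = 1.0          # integration time constant (s), hypothetical
tau_leak = 100.0  # capacitive leakage time constant (s), hypothetical
dt = 1e-5         # simulation step (s)
n = 100_000       # integrate a 1 s constant pulse
v_in = 1.0        # input voltage (V)

v_ideal = v_in * n * dt / RC      # exact integral of the pulse
v = 0.0
for _ in range(n):
    # leakage continuously discharges the integration capacitor
    v += (v_in / RC - v / tau_leak) * dt

droop = v_ideal - v               # error due to leakage alone (~0.5% here)
print(v_ideal, v, droop)
```

    A real-time feedback compensator of the kind described above would add back an estimate of the leaked charge; the point of the sketch is only that leakage, unlike drift, grows with the integrated voltage and therefore matters most in long-time pulse integration.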

  19. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on a HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is carried out in order to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach without significant loss of accuracy. The approach offers a comprehensive perspective that considers both short term dynamic behavior and long term degradation effects.
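
    The dual time scale idea — resolve only a short window of fast dynamics per macro-step, then advance the slow degradation state by one large step — can be sketched generically. The dynamics and constants below are illustrative stand-ins, not the paper's HT-PEM model:

```python
# Fast state v (e.g. a voltage) relaxes quickly toward an equilibrium set
# by a slow health parameter a, which degrades over hours to days.
tau_fast = 0.5    # s, fast relaxation time (illustrative)
k_deg = 1e-6      # 1/s, slow degradation rate (illustrative)
dt_fast = 0.05    # s, fast integration step
dt_slow = 3600.0  # s, one slow macro-step per hour

def v_eq(a):
    # equilibrium of the fast state shrinks as the slow state degrades
    return 0.8 * a

a, v = 1.0, 0.0
for _ in range(24):                       # 24 h of physical time
    # fast sub-loop: integrate 1 s of fast dynamics with `a` frozen,
    # then treat the fast state as quasi-steady for the rest of the hour
    for _ in range(int(1.0 / dt_fast)):
        v += (v_eq(a) - v) / tau_fast * dt_fast
    # slow macro-step: advance degradation by a full hour at once
    a += -k_deg * a * dt_slow

print(a, v)
```

    Only 24 × 20 fast steps are taken instead of the roughly 1.7 million a single-rate simulation at dt_fast would need over the same 24 h, which is where this class of method gets its run-time savings.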

  20. Micron-scale mapping of megagauss magnetic fields using optical polarimetry to probe hot electron transport in petawatt-class laser-solid interactions.

    PubMed

    Chatterjee, Gourab; Singh, Prashant Kumar; Robinson, A P L; Blackman, D; Booth, N; Culfa, O; Dance, R J; Gizzi, L A; Gray, R J; Green, J S; Koester, P; Kumar, G Ravindra; Labate, L; Lad, Amit D; Lancaster, K L; Pasley, J; Woolsey, N C; Rajeev, P P

    2017-08-21

    The transport of hot, relativistic electrons produced by the interaction of an intense petawatt laser pulse with a solid has garnered interest due to its potential application in the development of innovative x-ray sources and ion-acceleration schemes. We report on spatially and temporally resolved measurements of megagauss magnetic fields at the rear of a 50-μm thick plastic target, irradiated by a multi-picosecond petawatt laser pulse at an incident intensity of ~10^20 W/cm^2. The pump-probe polarimetric measurements with micron-scale spatial resolution reveal the dynamics of the magnetic fields generated by the hot electron distribution at the target rear. An annular magnetic field profile was observed ~5 ps after the interaction, indicating a relatively smooth hot electron distribution at the rear side of the plastic target. This is contrary to previous time-integrated measurements, which infer that such targets will produce highly structured hot electron transport. We measured large-scale filamentation of the hot electron distribution at the target rear only at later time-scales of ~10 ps, resulting in a commensurate large-scale filamentation of the magnetic field profile. Three-dimensional hybrid simulations corroborate our experimental observations and demonstrate a beam-like hot electron transport at initial time-scales that may be attributed to the local resistivity profile at the target rear.

  1. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.
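
    The detection principle — wavelet maxima that persist across scales mark an abrupt, low-Lipschitz-exponent transition — can be illustrated with a Haar-type multiscale difference. This toy sketch uses an invented signal and is not the authors' Lipschitz-exponent algorithm:

```python
import numpy as np

def haar_detail(x, scale):
    """Haar-like wavelet response: difference of the means of the
    blocks of `scale` samples on either side of each point."""
    n = len(x)
    out = np.zeros(n)
    for i in range(scale, n - scale):
        out[i] = np.mean(x[i:i + scale]) - np.mean(x[i - scale:i])
    return out

# Smooth oscillation plus noise, with a sudden "bistable" jump at t = 600
rng = np.random.default_rng(1)
t = np.arange(1000)
sig = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(1000)
sig[600:] += 2.0

# For a step discontinuity the response stays of the order of the step
# height at every scale, so the location of the maximum persists as the
# scale grows; smooth variations fade relative to it at small scales.
for s in (4, 8, 16):
    print(s, int(np.argmax(np.abs(haar_detail(sig, s)))))
```

    In the full algorithm, how the maxima decay (or fail to decay) across scales yields the Lipschitz exponent itself, which is what makes the detection robust under noise.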

  2. Passive advection of a vector field: Anisotropy, finite correlation time, exact solution, and logarithmic corrections to ordinary scaling

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.

    2015-10-01

    In this work we study the generalization of the problem considered in [Phys. Rev. E 91, 013002 (2015), 10.1103/PhysRevE.91.013002] to the case of finite correlation time of the environment (velocity) field. The model describes a vector (e.g., magnetic) field, passively advected by a strongly anisotropic turbulent flow. Inertial-range asymptotic behavior is studied by means of the field theoretic renormalization group and the operator product expansion. The advecting velocity field is Gaussian, with finite correlation time and preassigned pair correlation function. Due to the presence of the distinguished direction n, all the multiloop diagrams in this model vanish, so that the results obtained are exact. The inertial-range behavior of the model is described by two regimes (the limits of vanishing or infinite correlation time) that correspond to the two nontrivial fixed points of the RG equations. Their stability depends on the relation between the exponents in the energy spectrum E ∝ k⊥^(1-ξ) and the dispersion law ω ∝ k⊥^(2-η). In contrast to the well-known isotropic Kraichnan model, where various correlation functions exhibit anomalous scaling behavior with infinite sets of anomalous exponents, here the corrections to ordinary scaling are polynomials of logarithms of the integral turbulence scale L.

  3. Reconstructions of solar irradiance on centennial time scales

    NASA Astrophysics Data System (ADS)

    Krivova, Natalie; Solanki, Sami K.; Dasi Espuig, Maria; Kok Leng, Yeo

    Solar irradiance is the main external source of energy to Earth's climate system. The record of direct measurements covering less than 40 years is too short to study solar influence on Earth's climate, which calls for reconstructions of solar irradiance into the past with the help of appropriate models. An obvious requirement for a competitive model is its ability to reproduce observed irradiance changes, and a successful example of such a model is presented by the SATIRE family of models. Like most state-of-the-art models, SATIRE assumes that irradiance changes on time scales longer than approximately a day are caused by the evolving distribution of dark and bright magnetic features on the solar surface. The surface coverage by such features as a function of time is derived from solar observations, the choice of which depends on the time scale in question. Most accurate is the version of the model that employs full-disc spatially-resolved solar magnetograms and reproduces over 90% of the measured irradiance variation, including the overall decreasing trend in the total solar irradiance over the last four cycles. Since such magnetograms are only available for about four decades, reconstructions on time scales of centuries have to rely on disc-integrated proxies of solar magnetic activity, such as sunspot areas and numbers. Employing a surface flux transport model and sunspot observations as input, we have been able to produce synthetic magnetograms since 1700. This improves the temporal resolution of the irradiance reconstructions on centennial time scales. The most critical aspect of such reconstructions remains the uncertainty in the magnitude of the secular change.
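
    The central modelling assumption stated above — irradiance as a sum over surface components weighted by their time-dependent fill factors — reduces to a one-line bookkeeping identity. The component fluxes below are invented round numbers for illustration, not SATIRE's actual values:

```python
# Disc-integrated irradiance as a fill-factor-weighted sum of component
# fluxes (quiet Sun, dark spots, bright faculae). All values illustrative.
flux = {"quiet": 1365.0, "spot": 1360.0, "facula": 1366.0}  # W/m^2, assumed

def tsi(fill_spot, fill_facula):
    # fill factors are the fractions of the disc covered by each feature
    fill_quiet = 1.0 - fill_spot - fill_facula
    return (fill_quiet * flux["quiet"]
            + fill_spot * flux["spot"]
            + fill_facula * flux["facula"])

# Spots darken and faculae brighten relative to the quiet-Sun level
print(tsi(0.0, 0.0), tsi(0.002, 0.0), tsi(0.0, 0.01))
```

    In the actual models the fill factors come from magnetograms or sunspot proxies, and each component flux is a wavelength-dependent spectrum rather than a single number.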

  4. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would intensively affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, few studies dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop conditions differs by region and by crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model is based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model are input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, is input as irrigation water to the land surface sub-model. The timing and amount of irrigation water are simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields.
To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of climate change on crop productivity in a watershed. The first was carried out by the large-scale crop model alone. The second was carried out by the integrated model of the large-scale crop model and the H08 model. The former projected that changes in temperature and precipitation due to future climate change would increase the water stress in crops. Nevertheless, the latter projected that the increasing amount of agricultural water resources in the watershed would supply a sufficient amount of water for irrigation, consequently reducing the water stress. The integrated model demonstrated the importance of taking into account the water circulation in the watershed when predicting regional crop production.

  5. The collaborative historical African rainfall model: description and evaluation

    USGS Publications Warehouse

    Funk, Christopher C.; Michaelsen, Joel C.; Verdin, James P.; Artan, Guleid A.; Husak, Gregory; Senay, Gabriel B.; Gadain, Hussein; Magadazire, Tamuka

    2003-01-01

    In Africa the variability of rainfall in space and time is high, and the general availability of historical gauge data is low. This makes many food security and hydrologic preparedness activities difficult. In order to help overcome this limitation, we have created the Collaborative Historical African Rainfall Model (CHARM). CHARM combines three sources of information: climatologically aided interpolated (CAI) rainfall grids (monthly/0.5° ), National Centers for Environmental Prediction reanalysis precipitation fields (daily/1.875° ) and orographic enhancement estimates (daily/0.1° ). The first set of weights scales the daily reanalysis precipitation fields to match the gridded CAI monthly rainfall time series. This produces data with a daily/0.5° resolution. A diagnostic model of orographic precipitation, VDELB—based on the dot-product of the surface wind V and terrain gradient (DEL) and atmospheric buoyancy B—is then used to estimate the precipitation enhancement produced by complex terrain. Although the data are produced on 0.1° grids to facilitate integration with satellite-based rainfall estimates, the ‘true’ resolution of the data will be less than this value, and varies with station density, topography, and precipitation dynamics. The CHARM is best suited, therefore, to applications that integrate rainfall or rainfall-driven model results over large regions. The CHARM time series is compared with three independent datasets: dekadal satellite-based rainfall estimates across the continent, dekadal interpolated gauge data in Mali, and daily interpolated gauge data in western Kenya. These comparisons suggest reasonable accuracies (standard errors of about half a standard deviation) when data are aggregated to regional scales, even at daily time steps. Thus constrained, numerical weather prediction precipitation fields do a reasonable job of representing large-scale diurnal variations.
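
    The first weighting step described above — scaling the daily reanalysis fields so each month sums to the CAI monthly grid — is a per-cell renormalisation. A sketch for a single grid cell with toy numbers (the uniform fallback for dry reanalysis months is an assumption of this sketch, not necessarily CHARM's rule):

```python
import numpy as np

def scale_daily_to_monthly(daily, monthly_total):
    """Rescale one month of daily rainfall so it sums to the monthly
    estimate while preserving the daily temporal pattern."""
    daily = np.asarray(daily, dtype=float)
    s = daily.sum()
    if s == 0.0:
        # no reanalysis rain at all: spread the monthly total uniformly
        return np.full_like(daily, monthly_total / len(daily))
    return daily * (monthly_total / s)

daily = np.array([0.0, 5.0, 0.0, 12.0, 3.0])   # toy 5-day "month", mm
scaled = scale_daily_to_monthly(daily, 40.0)
print(scaled, scaled.sum())
```

    The daily pattern (which days are wet) comes from the reanalysis, while the monthly amount comes from the gauge-constrained CAI grid; the orographic VDELB term is then added on the finer 0.1° grid.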

  6. Time-scaling based sliding mode control for Neuromuscular Electrical Stimulation under uncertain relative degrees.

    PubMed

    Oliveira, Tiago Roux; Costa, Luiz Rennó; Catunda, João Marcos Yamasaki; Pino, Alexandre Visintainer; Barbosa, William; Souza, Márcio Nogueira de

    2017-06-01

    This paper addresses the application of the sliding mode approach to control arm movements by artificial recruitment of muscles using Neuromuscular Electrical Stimulation (NMES). Such a technique allows the activation of motor nerves using surface electrodes. The goal of the proposed control system is to move the upper limbs of subjects through electrical stimulation to achieve a desired elbow angular displacement. Since the human neuro-motor system has individual characteristics, being time-varying, nonlinear and subject to uncertainties, the use of advanced robust control schemes may represent a better solution than classical Proportional-Integral (PI) controllers and model-based approaches, while being simpler than more sophisticated strategies using fuzzy logic or neural networks usually applied to this control problem. The objective is the introduction of a new time-scaling based sliding mode control (SMC) strategy for NMES and its experimental evaluation. The main qualitative advantages of the proposed controller via the time-scaling procedure are its independence of knowledge of the plant relative degree and its design/tuning simplicity. The developed sliding mode strategy allows for chattering alleviation due to the impact of the integrator in smoothing the control signal. In addition, no differentiator is applied to construct the sliding surface. The stability analysis of the closed-loop system is also carried out by using singular perturbation methods. Experimental results are conducted with healthy volunteers as well as stroke patients. Quantitative results show a reduction of 45% in terms of root mean square (RMS) error (from 5.9° to [Formula: see text]) in comparison with the PI control scheme, which is similar to that obtained in the literature. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
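
    For orientation, the baseline technique can be reduced to a first-order textbook sketch: drive the sliding variable s = x − x_ref to zero with a switching control whose gain exceeds the disturbance bound. This generic regulator is only a minimal illustration, not the paper's time-scaling NMES controller:

```python
import numpy as np

# Plant: dx/dt = u + d(t), with unknown disturbance bounded by |d| <= 0.5.
# Control: u = -k * sign(s), with switching gain k > max |d|.
dt, T = 1e-3, 5.0
k = 2.0
x, x_ref = 1.0, 0.0
xs = []
for i in range(int(T / dt)):
    d = 0.5 * np.sin(2 * np.pi * i * dt)   # bounded disturbance
    s = x - x_ref                          # sliding variable
    u = -k * np.sign(s)                    # switching control
    x += (u + d) * dt                      # forward-Euler plant update
    xs.append(x)
print(abs(xs[-1]))
```

    The discontinuous sign term is what produces chattering; the time-scaling scheme in the paper passes the switching action through an integrator, so the signal actually applied to the muscle is continuous.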

  7. Multi-time scale energy management of wind farms based on comprehensive evaluation technology

    NASA Astrophysics Data System (ADS)

    Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.

    2017-11-01

    A novel energy management scheme for wind farms is proposed in this paper. Firstly, a novel comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a multi-time-scale scheduling method is developed. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal power generating units, hydroelectric generating sets and wind power plants. Finally, the power generation plan can be revised in a timely manner during on-line scheduling. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China. Simulation results have validated the proposed model and the corresponding solving algorithms.

  8. A small-scale turbulence model

    NASA Technical Reports Server (NTRS)

    Lundgren, T. S.

    1992-01-01

    A model for the small-scale structure of turbulence is reformulated in such a way that it may be conveniently computed. The model is an ensemble of randomly oriented structured two-dimensional vortices stretched by an axially symmetric strain flow. The energy spectrum of the resulting flow may be expressed as a time integral involving only the enstrophy spectrum of the time-evolving two-dimensional cross-section flow, which may be obtained numerically. Examples are given in which a k^(-5/3) spectrum is obtained by this method without using large wave number asymptotic analysis. The k^(-5/3) inertial range spectrum is shown to be related to the existence of a self-similar enstrophy-preserving range in the two-dimensional enstrophy spectrum. The results are insensitive to the time dependence of the strain rate, including even intermittent on-or-off strains.

  9. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  10. Adaptation of ICT Integration Approach Scale to Kosovo Culture: A Study of Validity and Reliability Analysis

    ERIC Educational Resources Information Center

    Kervan, Serdan; Tezci, Erdogan

    2018-01-01

    The aim of this study is to adapt ICT integration approach scale to Kosovo culture, which measures ICT integration approaches of university faculty to teaching and learning process. The scale developed in Turkish has been translated into Albanian to provide linguistic equivalence. The survey was given to a total of 303 instructors [161 (53.1%)…

  11. Integration Science and Technology of Advanced Ceramics for Energy and Environmental Applications

    NASA Technical Reports Server (NTRS)

    Singh, M.

    2012-01-01

    The discovery of new and innovative materials has been known to culminate in major turning points in human history. The transformative impact and functional manifestation of new materials have been demonstrated in every historical era by their integration into new products, systems, assemblies, and devices. In modern times, the integration of new materials into usable products has a special relevance for the technological development and economic competitiveness of industrial societies. Advanced ceramic technologies dramatically impact the energy and environmental landscape due to potential wide scale applications in all aspects of energy production, storage, distribution, conservation, and efficiency. Examples include gas turbine propulsion systems, fuel cells, thermoelectrics, photovoltaics, distribution and transmission systems based on superconductors, nuclear power generation, and waste disposal. Robust ceramic integration technologies enable hierarchical design and manufacturing of intricate ceramic components starting with geometrically simpler units that are subsequently joined to themselves and/or to metals to create components with progressively higher levels of complexity and functionality. However, for the development of robust and reliable integrated systems with optimum performance under different operating conditions, the detailed understanding of various thermochemical and thermomechanical factors is critical. Different approaches are required for the integration of ceramic-metal and ceramic-ceramic systems across length scales (macro to nano). In this presentation, a few examples of integration of ceramic to metals and ceramic to ceramic systems will be presented. Various challenges and opportunities in design, fabrication, and testing of integrated similar (ceramic-ceramic) and dissimilar (ceramic-metal) material systems will be discussed. 
Potential opportunities and need for the development of innovative design philosophies, approaches, and integrated system testing under simulated application conditions will also be presented.

  12. A fast low-power optical memory based on coupled micro-ring lasers

    NASA Astrophysics Data System (ADS)

    Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.

    2004-11-01

    The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.

  13. The Education and Public Engagement (EPE) Component of the Ocean Observatories Initiative (OOI): Enabling Near Real-Time Data Use in Undergraduate Classrooms

    NASA Astrophysics Data System (ADS)

    Glenn, S. M.; Companion, C.; Crowley, M.; deCharon, A.; Fundis, A. T.; Kilb, D. L.; Levenson, S.; Lichtenwalner, C. S.; McCurdy, A.; McDonnell, J. D.; Overoye, D.; Risien, C. M.; Rude, A.; Wieclawek, J., III

    2011-12-01

    The National Science Foundation's Ocean Observatories Initiative (OOI) is constructing observational and computer infrastructure that will provide sustained ocean measurements to study climate variability, ocean circulation, ecosystem dynamics, air-sea exchange, seafloor processes, and plate-scale geodynamics over the next ~25-30 years. To accomplish this, the Consortium for Ocean Leadership established four Implementing Organizations: (1) Regional Scale Nodes; (2) Coastal and Global Scale Nodes; (3) Cyberinfrastructure (CI); and (4) Education and Public Engagement (EPE). The EPE, which we represent, was recently established to provide a new layer of cyber-interactivity for educators to bring near real-time data, images and videos of our Earth's oceans into their learning environments. Our focus over the next four years is engaging educators of undergraduates and free-choice learners. Demonstration projects of the OOI capabilities will use an Integrated Education Toolkit to access OOI data through the Cyberinfrastructure's On Demand Measurement Processing capability. We will present our plans to develop six education infrastructure software modules: Education Web Services (middleware), Visualization Tools, Concept Map and Lab/Lesson Builders, Collaboration Tools, and an Education Resources Database. The software release of these tools is staggered to coincide with other major OOI releases. The first release will include stand-alone versions of the first four EPE modules (Fall 2012). Next, all six EPE modules will be integrated within the OOI cyber-framework (Fall 2013). The last release will include advanced capabilities for all six modules within a collaborative network that leverages the CI's Integrated Observatory Network (Fall 2014). We are looking for undergraduate and informal science educators to provide feedback and guidance on the project; please contact us if you are interested in partnering with us.

  14. Challenges and opportunities to improve understanding on wetland ecosystem and function at the local catchment scale: data fusion, data-model integration, and prediction uncertainty.

    NASA Astrophysics Data System (ADS)

    Yeo, I. Y.

    2016-12-01

    Wetlands are valuable landscape features that provide important ecosystem functions and services. The ecosystem processes in wetlands are highly dependent on the hydrology. However, hydroperiod (i.e., the dynamics of inundation extent) is highly variable spatially and temporally, and extremely difficult to predict owing to the complexity of hydrological processes within wetlands and their interaction with surrounding areas. This study reports the challenges and progress in assessing the catchment-scale benefits of wetlands for regulating the hydrological regime and improving water quality in an agricultural watershed. A process-based watershed model, the Soil and Water Assessment Tool (SWAT), was improved to simulate the cumulative impacts of wetlands on downstream waters. Newly developed remote sensing products from LiDAR intensity and time-series Landsat records, which show the inter-annual changes in inundation fraction, were utilized to describe the changing status of inundated areas within forested wetlands, develop spatially varying wetland parameters, and evaluate the predicted inundated areas at the landscape level. We outline the challenges of developing time-series inundation mapping products at high spatial and temporal resolution and of reconciling the catchment-scale model with moderate-resolution remote sensing products. We then highlight the importance of integrating spatialized information into model calibration and evaluation to address the issues of equifinality and prediction uncertainty. This integrated approach was applied to the upper region of the Choptank River Watershed, an agricultural watershed in the Coastal Plain of the Chesapeake Bay Watershed (US). In the Mid-Atlantic US, the provision of pollution regulation services by wetlands has been emphasized due to declining water quality within the Chesapeake Bay and its watersheds, and the preservation and restoration of wetlands has become a top priority for managing nonpoint source water pollution.

  15. A Pseudo-Vertical Equilibrium Model for Slow Gravity Drainage Dynamics

    NASA Astrophysics Data System (ADS)

    Becker, Beatrix; Guo, Bo; Bandilla, Karl; Celia, Michael A.; Flemisch, Bernd; Helmig, Rainer

    2017-12-01

    Vertical equilibrium (VE) models are computationally efficient and have been widely used for modeling fluid migration in the subsurface. However, they rely on the assumption of instant gravity segregation of the two fluid phases which may not be valid especially for systems that have very slow drainage at low wetting phase saturations. In these cases, the time scale for the wetting phase to reach vertical equilibrium can be several orders of magnitude larger than the time scale of interest, rendering conventional VE models unsuitable. Here we present a pseudo-VE model that relaxes the assumption of instant segregation of the two fluid phases by applying a pseudo-residual saturation inside the plume of the injected fluid that declines over time due to slow vertical drainage. This pseudo-VE model is cast in a multiscale framework for vertically integrated models with the vertical drainage solved as a fine-scale problem. Two types of fine-scale models are developed for the vertical drainage, which lead to two pseudo-VE models. Comparisons with a conventional VE model and a full multidimensional model show that the pseudo-VE models have much wider applicability than the conventional VE model while maintaining the computational benefit of the conventional VE model.

  16. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
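    The storage/recomputation trade-off described above can be illustrated with a toy version of two-level checkpointing. This is a schematic sketch, not the Nek5000 implementation: the placeholder dynamics, the function names, and the uniform checkpoint spacing are all invented, and the real scheme uses binomial rather than uniform in-memory checkpointing.

```python
# Toy two-level adjoint checkpointing: level 1 stores sparse "disk"
# checkpoints every k steps during the forward sweep; level 2 recomputes
# each segment from its checkpoint and holds the k intermediate states in
# "memory" while the adjoint is propagated backwards through the segment.

def forward_step(state):
    return state + 1          # placeholder forward time step

def adjoint_step(state, adj):
    return adj + state        # placeholder adjoint update using a stored state

def two_level_adjoint(n_steps, k):
    assert n_steps % k == 0
    # forward sweep: keep only every k-th state
    state = 0
    disk = {0: state}
    for i in range(1, n_steps + 1):
        state = forward_step(state)
        if i % k == 0:
            disk[i] = state
    # reverse sweep: newest segment first
    adj = 0
    for seg in range(n_steps // k - 1, -1, -1):
        mem = [disk[seg * k]]             # recompute the segment...
        for _ in range(k):
            mem.append(forward_step(mem[-1]))
        for s in reversed(mem[1:]):       # ...then adjoin it backwards
            adj = adjoint_step(s, adj)
    return adj
```

    With these placeholder dynamics the adjoint reduces to the sum of all forward states, so the result matches a full-storage reversal while only about n/k checkpoints plus k in-memory states are held at any time.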

  17. What is the Anthropocene?

    USGS Publications Warehouse

    Edwards, Lucy E.

    2015-01-01

    So far, however, the term “Anthropocene” has not been integrated into the official Geologic Time Scale, which geologists use to divide the past into named blocks based on the rock record. In 2016 or thereabouts, the International Commission on Stratigraphy—the scientific body that maintains the official Geologic Time Scale—will consider a proposal to formalize a definition of this term. It’s a decision that has both semantic and scientific implications and may have legal implications as well.

  18. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single-mode fiber using only a single narrowband laser source.

  19. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.
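    The dependency-graph idea behind this kind of composition can be sketched with a minimal forward-chaining composer. This is a hypothetical illustration, not the authors' system: the service names are invented, and the paper's graph-based search additionally guarantees a nonredundant composition, which this greedy sketch does not.

```python
# Each service is described by the parameters it consumes and produces.
# Starting from the user's known parameters, repeatedly add any service
# whose inputs are already satisfied until the goal parameter is produced.

def compose(services, known, goal):
    """services: {name: (inputs, outputs)} -> ordered plan or None."""
    known = set(known)
    plan = []
    changed = True
    while changed and goal not in known:
        changed = False
        for name, (ins, outs) in services.items():
            if name not in plan and set(ins) <= known:
                plan.append(name)        # service is invocable: add to plan
                known |= set(outs)       # its outputs become available
                changed = True
    return plan if goal in known else None

# Hypothetical healthcare services: fetch a record by id, then derive labs.
services = {
    "getPatient": (["id"], ["record"]),
    "getLabs": (["record"], ["labs"]),
}
print(compose(services, ["id"], "labs"))   # ['getPatient', 'getLabs']
```

    A greedy chainer like this may include services that are not strictly needed for the goal; producing the nonredundant compositions the paper targets requires pruning the plan against the dependency graph.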

  20. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602

  1. The Virtual Watershed Observatory: Cyberinfrastructure for Model-Data Integration and Access

    NASA Astrophysics Data System (ADS)

    Duffy, C.; Leonard, L. N.; Giles, L.; Bhatt, G.; Yu, X.

    2011-12-01

    The Virtual Watershed Observatory (VWO) is a concept where scientists, water managers, educators and the general public can create a virtual observatory from integrated hydrologic model results, national databases and historical or real-time observations via web services. In this paper, we propose a prototype for automated and virtualized web services software using national data products for climate reanalysis, soils, geology, terrain and land cover. The VWO has the broad purpose of making accessible water resource simulations, real-time data assimilation, calibration and archival at the scale of HUC 12 watersheds (Hydrologic Unit Code) anywhere in the continental US. Our prototype for model-data integration focuses on creating tools for fast data storage from selected national databases, as well as the computational resources necessary for a dynamic, distributed watershed simulation. The paper describes cyberinfrastructure tools and a workflow that attempt to resolve the problem of model-data accessibility and scalability such that individuals, research teams, managers and educators can create a VWO in a desired context. Examples are given for the NSF-funded Shale Hills Critical Zone Observatory and the European Critical Zone Observatories within the SoilTrEC project. In the future, implementation of VWO services will benefit from the development of cloud cyberinfrastructure as the prototype evolves toward data- and model-intensive computation for continental-scale water resource predictions.

  2. Field-scale comparison of frequency- and time-domain spectral induced polarization

    NASA Astrophysics Data System (ADS)

    Maurya, P. K.; Fiandaca, G.; Christiansen, A. V.; Auken, E.

    2018-05-01

    In this paper we present a comparison study of the time-domain (TD) and frequency-domain (FD) spectral induced polarization (IP) methods in terms of acquisition time, data quality, and spectral information retrieved from inversion. We collected TDIP and FDIP surface measurements on three profiles with identical electrode setups, at two different field sites with different lithology. In addition, TDIP data were collected in two boreholes using the El-Log drilling technique, in which apparent formation resistivity and chargeability values are measured during drilling using electrodes integrated within the stem auger.

  3. Stellar granulation as seen in disk-integrated intensity. II. Theoretical scaling relations compared with observations

    NASA Astrophysics Data System (ADS)

    Samadi, R.; Belkacem, K.; Ludwig, H.-G.; Caffau, E.; Campante, T. L.; Davies, G. R.; Kallinger, T.; Lund, M. N.; Mosser, B.; Baglin, A.; Mathur, S.; Garcia, R. A.

    2013-11-01

    Context. A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (τeff) and the root-mean-square (rms) brightness fluctuations (σ) associated with the granulation scale as a function of the peak frequency (νmax) of the solar-like oscillations. Aims: We aim at providing a theoretical background to the observed scaling relations based on a model developed in Paper I. Methods: We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model published in Paper I. For each PDS we derived the associated characteristic time (τeff) and the rms brightness fluctuations (σ) and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the measurements made on a large set of Kepler targets. Results: We derive theoretical scaling relations for τeff and σ, which show the same dependence on νmax as the observed scaling relations. In addition, we show that these quantities also scale as a function of the turbulent Mach number (ℳa) estimated at the photosphere. The theoretical scaling relations for τeff and σ match the observations well on a global scale. Quantitatively, the remaining discrepancies with the observations are found to be much smaller than previous theoretical calculations made for red giants. Conclusions: Our modelling provides additional theoretical support for the observed variations of σ and τeff with νmax. It also highlights the important role of ℳa in controlling the properties of the stellar granulation. However, the observations made with Kepler on a wide variety of stars cannot confirm the dependence of our scaling relations on ℳa. 
Measurements of the granulation background and detections of solar-like oscillations in a statistically sufficient number of cool dwarf stars will be required to confirm the dependence of the theoretical scaling relations on ℳa. Appendices are available in electronic form at http://www.aanda.org

  4. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
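    A model of this general type is straightforward to simulate directly. The Euler-Maruyama sketch below of a leaky integrate-and-fire neuron with a spike-triggered adaptation current is illustrative only: the specific equations and all parameter values are generic assumptions, not taken from the paper.

```python
import numpy as np

def simulate_aif(T=50.0, dt=1e-3, mu=1.5, D=0.01,
                 tau_a=5.0, delta_a=0.3, v_th=1.0, v_r=0.0, seed=0):
    """Leaky integrate-and-fire neuron with spike-triggered adaptation."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, a = v_r, 0.0
    vs = np.empty(n)
    spikes = []
    for i in range(n):
        # membrane:   dv = (mu - v - a) dt + sqrt(2D) dW
        # adaptation: da = -(a / tau_a) dt, plus a jump at each spike
        v += (mu - v - a) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        a += -(a / tau_a) * dt
        if v >= v_th:             # threshold crossing: reset and adapt
            spikes.append(i * dt)
            v = v_r
            a += delta_a          # spike-triggered increment of adaptation
        vs[i] = v
    return vs, spikes
```

    Histogramming the recorded membrane potentials then gives an empirical density against which weak-noise predictions, such as the convex shape reported in the abstract, can be compared.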

  5. Humidity profiles over the ocean

    NASA Technical Reports Server (NTRS)

    Liu, W. T.; Tang, Wenqing; Niiler, Pearn P.

    1991-01-01

    The variabilities of the atmospheric humidity profile over oceans from daily to interannual time scales were examined using 9 years of daily and semidaily radiosonde soundings at island stations extending from the Arctic to the South Pacific. The relative humidity profiles were found to have considerable temporal and geographic variabilities, contrary to the prevalent assumption. Principal component analysis of the specific humidity profiles was used to examine the applicability of a relation between the surface-level humidity and the integrated water vapor; this relation has been used to estimate large-scale evaporation from satellite data. The first principal component was found to correlate almost perfectly with the integrated water vapor. The fractional variance represented by this mode increases with increasing period. It reaches approximately 90 percent at two weeks and decreases sharply below one week, down to approximately 60 percent at the daily period. At low frequencies, the integrated water vapor appeared to be an adequate estimator of the humidity profile and the surface-level humidity. At periods shorter than a week, more than one independent estimator is needed.
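    The core statistical step, comparing the leading principal component of the humidity profiles with their vertical integral, can be reproduced on synthetic data. The profile shape, noise level, and dimensions below are invented for illustration; the study itself used radiosonde soundings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_levels = 365, 20

# Synthetic specific-humidity profiles dominated by one vertical mode:
# a fixed exponential profile modulated from day to day, plus level noise.
base = np.exp(-np.linspace(0, 3, n_levels))          # mean humidity profile
amp = rng.normal(1.0, 0.2, n_days)                   # day-to-day modulation
profiles = amp[:, None] * base + rng.normal(0, 0.02, (n_days, n_levels))

integrated = profiles.sum(axis=1)                    # column water vapor proxy
anom = profiles - profiles.mean(axis=0)

# PCA via SVD of the anomaly matrix; u[:, 0] * s[0] is the first PC series.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pc1 = u[:, 0] * s[0]

r = np.corrcoef(pc1, integrated)[0, 1]
print(round(abs(r), 3))      # correlation of PC1 with the vertical integral
```

    When one vertical mode dominates, as in this toy setup, the first principal component tracks the integrated water vapor almost perfectly, which mirrors the near-perfect correlation the abstract reports.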

  6. Circular Dichroism Control of Tungsten Diselenide (WSe2) Atomic Layers with Plasmonic Metamolecules.

    PubMed

    Lin, Hsiang-Ting; Chang, Chiao-Yun; Cheng, Pi-Ju; Li, Ming-Yang; Cheng, Chia-Chin; Chang, Shu-Wei; Li, Lance L J; Chu, Chih-Wei; Wei, Pei-Kuen; Shih, Min-Hsiung

    2018-05-09

    Controlling circularly polarized (CP) states of light is critical to the development of functional devices for key and emerging applications such as display technology and quantum communication, and a compact, circular-polarization-tunable photon source is a critical element for realizing these applications in chip-scale integrated systems. The atomic layers of transition metal dichalcogenides (TMDCs) exhibit intrinsic CP emission and are potential chiroptical materials for ultrathin CP photon sources. In this work, we demonstrated CP photon sources of TMDCs with device thicknesses of approximately 50 nm. CP photoluminescence from the atomic layers of tungsten diselenide (WSe2) was precisely controlled with chiral metamolecules (MMs), and the optical chirality of WSe2 was enhanced more than 4 times by integration with the MMs. Both enhanced and reversed circular dichroism were achieved. Through the integration of the novel gain material and plasmonic structure, both low-dimensional, a compact device capable of efficiently manipulating CP photon emission was realized. These ultrathin devices are suitable for important applications such as optical information technology and chip-scale biosensing.

  7. An integrated semiconductor device enabling non-optical genome sequencing.

    PubMed

    Rothberg, Jonathan M; Hinz, Wolfgang; Rearick, Todd M; Schultz, Jonathan; Mileski, William; Davey, Mel; Leamon, John H; Johnson, Kim; Milgrew, Mark J; Edwards, Matthew; Hoon, Jeremy; Simons, Jan F; Marran, David; Myers, Jason W; Davidson, John F; Branting, Annika; Nobile, John R; Puc, Bernard P; Light, David; Clark, Travis A; Huber, Martin; Branciforte, Jeffrey T; Stoner, Isaac B; Cawley, Simon E; Lyons, Michael; Fu, Yutao; Homer, Nils; Sedova, Marina; Miao, Xin; Reed, Brian; Sabina, Jeffrey; Feierstein, Erika; Schorn, Michelle; Alanjary, Mohammad; Dimalanta, Eileen; Dressman, Devin; Kasinskas, Rachel; Sokolsky, Tanya; Fidanza, Jacqueline A; Namsaraev, Eugeni; McKernan, Kevin J; Williams, Alan; Roth, G Thomas; Bustillo, James

    2011-07-20

    The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome.

  8. Integrated Mid-Continent Carbon Capture, Sequestration & Enhanced Oil Recovery Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian McPherson

    2010-08-31

    A consortium of research partners led by the Southwest Regional Partnership on Carbon Sequestration and industry partners, including CAP CO2 LLC, Blue Source LLC, Coffeyville Resources, Nitrogen Fertilizers LLC, Ash Grove Cement Company, Kansas Ethanol LLC, Headwaters Clean Carbon Services, Black & Veatch, and Schlumberger Carbon Services, conducted a feasibility study of a large-scale CCS commercialization project that included large-scale CO2 sources. The overall objective of this project, entitled the 'Integrated Mid-Continent Carbon Capture, Sequestration and Enhanced Oil Recovery Project' was to design an integrated system of US mid-continent industrial CO2 sources with CO2 capture, and geologic sequestration in deep saline formations and in oil field reservoirs with concomitant EOR. Findings of this project suggest that deep saline sequestration in the mid-continent region is not feasible without major financial incentives, such as tax credits or otherwise, that do not exist at this time. However, results of the analysis suggest that enhanced oil recovery with carbon sequestration is indeed feasible and practical for specific types of geologic settings in the Midwestern U.S.

  9. Analytical Theory of the Destruction Terms in Dissipation Rate Transport Equations

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1996-01-01

    Modeled dissipation rate transport equations are often derived by invoking various hypotheses to close correlations in the corresponding exact equations. D. C. Leslie suggested that these models might be derived instead from Kraichnan's wavenumber space integrals for inertial range transport power. This suggestion is applied to the destruction terms in the dissipation rate equations for incompressible turbulence, buoyant turbulence, rotating incompressible turbulence, and rotating buoyant turbulence. Model constants like C(epsilon 2) are expressed as integrals; convergence of these integrals implies the absence of Reynolds number dependence in the corresponding destruction term. The dependence of C(epsilon 2) on rotation rate emerges naturally; sensitization of the modeled dissipation rate equation to rotation is not required. A buoyancy related effect which is absent in the exact transport equation for temperature variance dissipation, but which sometimes improves computational predictions, also arises naturally. Both the presence of this effect and the appropriate time scale in the modeled transport equation depend on whether Bolgiano or Kolmogorov inertial range scaling applies. A simple application of these methods leads to a preliminary dissipation rate equation for rotating buoyant turbulence.

  10. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    PubMed

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  11. A multi-scale approach to designing therapeutics for tuberculosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  12. A multi-scale approach to designing therapeutics for tuberculosis

    DOE PAGES

    Linderman, Jennifer J.; Cilfone, Nicholas A.; Pienaar, Elsje; ...

    2015-04-20

    Approximately one third of the world’s population is infected with Mycobacterium tuberculosis. Limited information about how the immune system fights M. tuberculosis and what constitutes protection from the bacteria impacts our ability to develop effective therapies for tuberculosis. We present an in vivo systems biology approach that integrates data from multiple model systems and over multiple length and time scales into a comprehensive multi-scale and multi-compartment view of the in vivo immune response to M. tuberculosis. Lastly, we describe computational models that can be used to study (a) immunomodulation with the cytokines tumor necrosis factor and interleukin 10, (b) oral and inhaled antibiotics, and (c) the effect of vaccination.

  13. An experimental system for flood risk forecasting and monitoring at global scale

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Alfieri, Lorenzo; Kalas, Milan; Lorini, Valerio; Salamon, Peter

    2017-04-01

    Global flood forecasting and monitoring systems are nowadays a reality and are being applied by a wide range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk-based forecasting, combining streamflow estimations with expected inundated areas and flood impacts. Finally, emerging technologies such as crowdsourcing and social media monitoring can play a crucial role in flood disaster management and preparedness. Here, we present some recent advances in an experimental procedure for near real-time flood mapping and impact assessment. The procedure translates in near real-time the daily streamflow forecasts issued by the Global Flood Awareness System (GloFAS) into event-based flood hazard maps, which are then combined with exposure and vulnerability information at global scale to derive risk forecasts. Impacts of the forecasted flood events are evaluated in terms of flood-prone areas, potential economic damage, and affected population, infrastructures and cities. To increase the reliability of our forecasts we propose the integration of model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification and correction of impact forecasts. Finally, we present the results of preliminary tests which show the potential of the proposed procedure in supporting emergency response and management.

  14. Geometrical Patterning of Super-Hydrophobic Biosensing Transistors Enables Space and Time Resolved Analysis of Biological Mixtures.

    PubMed

    Gentile, Francesco; Ferrara, Lorenzo; Villani, Marco; Bettelli, Manuele; Iannotta, Salvatore; Zappettini, Andrea; Cesarelli, Mario; Di Fabrizio, Enzo; Coppedè, Nicola

    2016-01-12

    PEDOT:PSS is a conductive polymer that can be integrated into last-generation Organic Electrochemical Transistor (OECT) devices for biological inspection, identification and analysis. While a variety of reports in the literature have demonstrated the chemical and biological sensitivity of these devices, their ability to resolve complex mixtures remains controversial. Similar OECT devices display good time-dynamics behavior but lack spatial resolution. In this work, we integrated PEDOT:PSS with patterns of super-hydrophobic pillars in which a finite number of those pillars is independently controlled for site-selective measurement of a solution. We obtained a multifunctional, hierarchical OECT device that bridges the micro- to the nano-scales for specific, combined time- and space-resolved analysis of the sample. Due to the super-hydrophobic surface properties, the biological species in the drop are driven by convection, diffusion, and the externally applied electric field: the balance/unbalance between these forces causes the molecules to be transported differently within the drop volume depending on particle size, thus realizing a size-selective separation. Within this framework, the separation and identification of two different molecules, namely Cetyl Trimethyl Ammonium Bromide (CTAB) and adrenaline, in a biological mixture have been demonstrated, showing that geometrical control at the micro-nano scale imparts unprecedented selectivity to the devices.

  15. Geometrical Patterning of Super-Hydrophobic Biosensing Transistors Enables Space and Time Resolved Analysis of Biological Mixtures

    PubMed Central

    Gentile, Francesco; Ferrara, Lorenzo; Villani, Marco; Bettelli, Manuele; Iannotta, Salvatore; Zappettini, Andrea; Cesarelli, Mario; Di Fabrizio, Enzo; Coppedè, Nicola

    2016-01-01

    PEDOT:PSS is a conductive polymer that can be integrated into last-generation Organic Electrochemical Transistor (OECT) devices for biological inspection, identification and analysis. While a variety of reports in the literature have demonstrated the chemical and biological sensitivity of these devices, their ability to resolve complex mixtures remains controversial. Similar OECT devices display good time-dynamics behavior but lack spatial resolution. In this work, we integrated PEDOT:PSS with patterns of super-hydrophobic pillars in which a finite number of those pillars is independently controlled for site-selective measurement of a solution. We obtained a multifunctional, hierarchical OECT device that bridges the micro- to the nano-scales for specific, combined time- and space-resolved analysis of the sample. Due to the super-hydrophobic surface properties, the biological species in the drop are driven by convection, diffusion, and the externally applied electric field: the balance/unbalance between these forces causes the molecules to be transported differently within the drop volume depending on particle size, thus realizing a size-selective separation. Within this framework, the separation and identification of two different molecules, namely Cetyl Trimethyl Ammonium Bromide (CTAB) and adrenaline, in a biological mixture have been demonstrated, showing that geometrical control at the micro-nano scale imparts unprecedented selectivity to the devices. PMID:26753611

  16. Sharing Earth Observation Data for Health Management

    NASA Astrophysics Data System (ADS)

    Cox, E. L., Jr.

    2015-12-01

    While the global community is struck by pandemics and epidemics from time to time, the ability to fully utilize earth observations and integrate environmental information has been limited - until recently. Mature scientific understanding now makes new levels of situational awareness possible when the relevant data are available and shared in a timely and usable manner. Satellite and other remote sensing tools have been used to observe, monitor, assess and predict weather and water impacts for decades. In the last few years much of this work has focused on the ability to monitor changes on climate scales that suggest changes in the quantity and quality of ecosystem resources, and on the "one-health" approach, where trans-disciplinary links between environmental, animal and vegetative health may indicate the best ways to manage susceptibility to infectious disease or outbreaks. The scale of impacts and the availability of information from earth-observing satellites, airborne platforms, health tracking systems and surveillance networks offer new integrated tools. This presentation describes several recent events, such as Superstorm Sandy in the United States and the Ebola outbreak in Africa, where public health and health infrastructure were exposed to environmental hazards, and where lessons learned from disaster response about the ability to share data have been effective in risk reduction.

  17. Simulations of Dissipative Circular Restricted Three-body Problems Using the Velocity-scaling Correction Method

    NASA Astrophysics Data System (ADS)

    Wang, Shoucheng; Huang, Guoqing; Wu, Xin

    2018-02-01

    In this paper, we survey the effect of dissipative forces, including radiation pressure, Poynting–Robertson drag, and solar wind drag, on the motion of dust grains with negligible mass that are subject to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of the five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, but its integral invariant relation still makes the conventional fourth-order Runge–Kutta algorithm, combined with the velocity-scaling manifold correction scheme, applicable. The velocity-only correction method significantly suppresses the artificial dissipation and the rapid growth of trajectory errors seen in the uncorrected integration. The stability time of an orbit, regardless of whether it is chaotic in the conservative problem, is apparently longer in the corrected case than in the uncorrected case when the dissipative forces are included. Although the artificial dissipation is ruled out, the drag dissipation leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
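    The velocity-scaling correction named in this abstract can be sketched for the conservative planar circular restricted three-body problem, where the Jacobi constant C = 2*Omega - v^2 is exactly conserved: after each RK4 step the velocity is rescaled so the state again satisfies the initial C. This is a minimal illustration, not the authors' code; the mass ratio, step size, and initial state below are arbitrary assumed values.

```python
import numpy as np

MU = 0.001  # mass ratio of the secondary (illustrative value)

def omega_eff(x, y):
    # Effective potential of the planar circular restricted three-body problem
    r1 = np.hypot(x + MU, y)
    r2 = np.hypot(x - 1 + MU, y)
    return 0.5 * (x * x + y * y) + (1 - MU) / r1 + MU / r2

def deriv(s):
    # Equations of motion in the rotating (synodic) frame
    x, y, vx, vy = s
    r1c = ((x + MU) ** 2 + y ** 2) ** 1.5
    r2c = ((x - 1 + MU) ** 2 + y ** 2) ** 1.5
    ax = 2 * vy + x - (1 - MU) * (x + MU) / r1c - MU * (x - 1 + MU) / r2c
    ay = -2 * vx + y - (1 - MU) * y / r1c - MU * y / r2c
    return np.array([vx, vy, ax, ay])

def rk4_step(s, h):
    # Conventional fourth-order Runge-Kutta step
    k1 = deriv(s)
    k2 = deriv(s + 0.5 * h * k1)
    k3 = deriv(s + 0.5 * h * k2)
    k4 = deriv(s + h * k3)
    return s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def velocity_scaling(s, C):
    # Rescale (vx, vy) so the state again satisfies C = 2*Omega - v^2
    x, y, vx, vy = s
    v2_target = 2 * omega_eff(x, y) - C
    v2 = vx * vx + vy * vy
    if v2_target > 0 and v2 > 0:
        f = np.sqrt(v2_target / v2)
        return np.array([x, y, f * vx, f * vy])
    return s

# Integrate a short arc, correcting the velocity after every step
s = np.array([0.5, 0.0, 0.0, 1.0])  # illustrative initial state
C0 = 2 * omega_eff(s[0], s[1]) - (s[2] ** 2 + s[3] ** 2)
for _ in range(2000):
    s = velocity_scaling(rk4_step(s, 1e-3), C0)
C_err = abs(2 * omega_eff(s[0], s[1]) - (s[2] ** 2 + s[3] ** 2) - C0)
print(C_err)  # held near machine precision by the correction
```

    In the dissipative problem studied in the paper the conserved quantity is replaced by the integral invariant relation, but the scaling step has the same form.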

  18. Geometrical Patterning of Super-Hydrophobic Biosensing Transistors Enables Space and Time Resolved Analysis of Biological Mixtures

    NASA Astrophysics Data System (ADS)

    Gentile, Francesco; Ferrara, Lorenzo; Villani, Marco; Bettelli, Manuele; Iannotta, Salvatore; Zappettini, Andrea; Cesarelli, Mario; di Fabrizio, Enzo; Coppedè, Nicola

    2016-01-01

    PEDOT:PSS is a conductive polymer that can be integrated into last-generation Organic Electrochemical Transistor (OECT) devices for biological inspection, identification and analysis. While a variety of reports in the literature have demonstrated the chemical and biological sensitivity of these devices, their ability to resolve complex mixtures remains controversial. Similar OECT devices display good time-dynamics behavior but lack spatial resolution. In this work, we integrated PEDOT:PSS with patterns of super-hydrophobic pillars in which a finite number of those pillars is independently controlled for site-selective measurement of a solution. We obtained a multifunctional, hierarchical OECT device that bridges the micro- to the nano-scales for specific, combined time- and space-resolved analysis of the sample. Due to the super-hydrophobic surface properties, the biological species in the drop are driven by convection, diffusion, and the externally applied electric field: the balance/unbalance between these forces causes the molecules to be transported differently within the drop volume depending on particle size, thus realizing a size-selective separation. Within this framework, the separation and identification of two different molecules, namely Cetyl Trimethyl Ammonium Bromide (CTAB) and adrenaline, in a biological mixture have been demonstrated, showing that geometrical control at the micro-nano scale imparts unprecedented selectivity to the devices.

  19. A fully integrated standalone portable cavity ringdown breath acetone analyzer.

    PubMed

    Sun, Meixiu; Jiang, Chenyu; Gong, Zhiyong; Zhao, Xiaomeng; Chen, Zhuying; Wang, Zhennan; Kang, Meiling; Li, Yingxin; Wang, Chuji

    2015-09-01

    Breath analysis is a promising new technique for nonintrusive disease diagnosis and metabolic status monitoring. One challenging issue in using a breath biomarker for screening a particular disease is to find a quantitative relationship between the concentration of the breath biomarker and the clinical diagnostic parameters of that disease. To address this issue, we need a new instrument capable of conducting real-time, online breath analysis with high data throughput, so that a large-scale clinical test (more subjects) can be completed in a short period of time. In this work, we report a fully integrated, standalone, portable analyzer based on the cavity ringdown spectroscopy technique for near-real-time, online breath acetone measurements. The performance of the portable analyzer in measurements of breath acetone was interrogated and validated against certified gas chromatography-mass spectrometry. The results show that this new analyzer is useful for reliable online (online introduction of a breath sample without pre-treatment) breath acetone analysis with high sensitivity (57 ppb) and high data throughput (one data point per second). Subsequently, the validated breath analyzer was employed for acetone measurements in 119 human subjects under various situations. The instrument design, packaging, specifications, and future improvements are also described. From an optical ringdown cavity operated by the lab-built electronics reported previously to this fully integrated standalone new instrument, we have enabled a new scientific tool suited for large-scale breath acetone analysis and created an instrument platform that can even be adopted for the study of other breath biomarkers by using different lasers and ringdown mirrors covering the corresponding spectral fingerprints.

  20. A fully integrated standalone portable cavity ringdown breath acetone analyzer

    NASA Astrophysics Data System (ADS)

    Sun, Meixiu; Jiang, Chenyu; Gong, Zhiyong; Zhao, Xiaomeng; Chen, Zhuying; Wang, Zhennan; Kang, Meiling; Li, Yingxin; Wang, Chuji

    2015-09-01

    Breath analysis is a promising new technique for nonintrusive disease diagnosis and metabolic status monitoring. One challenging issue in using a breath biomarker for screening a particular disease is to find a quantitative relationship between the concentration of the breath biomarker and the clinical diagnostic parameters of that disease. To address this issue, we need a new instrument capable of conducting real-time, online breath analysis with high data throughput, so that a large-scale clinical test (more subjects) can be completed in a short period of time. In this work, we report a fully integrated, standalone, portable analyzer based on the cavity ringdown spectroscopy technique for near-real-time, online breath acetone measurements. The performance of the portable analyzer in measurements of breath acetone was interrogated and validated against certified gas chromatography-mass spectrometry. The results show that this new analyzer is useful for reliable online (online introduction of a breath sample without pre-treatment) breath acetone analysis with high sensitivity (57 ppb) and high data throughput (one data point per second). Subsequently, the validated breath analyzer was employed for acetone measurements in 119 human subjects under various situations. The instrument design, packaging, specifications, and future improvements are also described. From an optical ringdown cavity operated by the lab-built electronics reported previously to this fully integrated standalone new instrument, we have enabled a new scientific tool suited for large-scale breath acetone analysis and created an instrument platform that can even be adopted for the study of other breath biomarkers by using different lasers and ringdown mirrors covering the corresponding spectral fingerprints.

  1. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires accounting for the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids), whose gravity fields are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by computing and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimizations. High-accuracy results are also guaranteed no matter how close the computation points are to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretization cells of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
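    The baseline 3D GLQ evaluation of Newton's volume integral that the abstract describes can be sketched as follows. This is a generic textbook implementation for a single tesseroid (the paper's kernel-equivalence, adaptive-discretization, and OpenMP optimizations are not reproduced); the density and node count are illustrative assumptions.

```python
import numpy as np

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
RHO = 2670.0     # assumed constant density [kg/m^3] (illustrative)

def tesseroid_potential(r_lim, th_lim, lam_lim, rp, thp, lamp, n=10):
    """Newton's volume integral for the potential of one tesseroid,
    evaluated with an n-point Gauss-Legendre rule in each dimension.
    (r, th, lam) = radius, colatitude, longitude; (rp, thp, lamp) is
    the computation point."""
    x, w = np.polynomial.legendre.leggauss(n)
    def mapped(a, b):  # map nodes/weights from [-1, 1] to [a, b]
        return 0.5 * (b - a) * x + 0.5 * (b + a), 0.5 * (b - a) * w
    r, wr = mapped(*r_lim)
    th, wth = mapped(*th_lim)
    lam, wlam = mapped(*lam_lim)
    R, TH, LAM = np.meshgrid(r, th, lam, indexing="ij")
    W = wr[:, None, None] * wth[None, :, None] * wlam[None, None, :]
    # source-to-computation-point distance via the spherical law of cosines
    cospsi = (np.cos(TH) * np.cos(thp)
              + np.sin(TH) * np.sin(thp) * np.cos(LAM - lamp))
    ell = np.sqrt(R ** 2 + rp ** 2 - 2.0 * R * rp * cospsi)
    # integrand of Newton's integral in spherical coordinates
    return G * RHO * np.sum(W * R ** 2 * np.sin(TH) / ell)

# Sanity check: a "tesseroid" spanning a full sphere must reproduce G*M/rp
a, rp = 1000.0, 2000.0
V = tesseroid_potential((0.0, a), (0.0, np.pi), (0.0, 2 * np.pi), rp, 0.0, 0.0)
V_exact = G * (4.0 / 3.0) * np.pi * a ** 3 * RHO / rp
rel_err = abs(V - V_exact) / V_exact
print(rel_err)  # small relative quadrature error
```

    The accuracy loss near the source region that the abstract mentions corresponds to ell becoming small, where the integrand is nearly singular and a fixed-order rule like this one degrades; that is what the adaptive discretization addresses.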

  2. Translational bioinformatics in the era of real-time biomedical, health care and wellness data streams

    PubMed Central

    Miotto, Riccardo; Glicksberg, Benjamin S.; Morgan, Joseph W.; Dudley, Joel T.

    2017-01-01

    Monitoring and modeling biomedical, health care and wellness data from individuals, and converging such data on a population scale, have tremendous potential to improve understanding of the transition from the healthy state of human physiology to a disease setting. Wellness monitoring devices and companion software applications capable of generating alerts and sharing data with health care providers or social networks are now available. The accessibility and clinical utility of such data for disease or wellness research are currently limited. Designing methods for streaming data capture, real-time data aggregation, machine learning, predictive analytics and visualization solutions to integrate wellness or health monitoring data elements with the electronic medical records (EMRs) maintained by health care providers permits better utilization. Integration of population-scale biomedical, health care and wellness data would help to stratify patients for active health management and to understand clinically asymptomatic patients and underlying illness trajectories. In this article, we discuss various health-monitoring devices, their ability to capture the unique state of health represented in a patient and their application in individualized diagnostics, prognosis, and clinical or wellness intervention. We also discuss examples of translational bioinformatics approaches to integrating patient-generated data with existing EMRs, personal health records, patient portals and clinical data repositories. Briefly, translational bioinformatics methods, tools and resources are at the center of these advances in implementing real-time biomedical and health care analytics in the clinical setting. Furthermore, these advances are poised to play a significant role in clinical decision-making and the implementation of data-driven medicine and wellness care. PMID:26876889

  3. Large scale structure formation of the normal branch in the DGP brane world model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon

    2008-06-15

    In this paper, we study the large scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze, and Porrati brane world model) by applying the scaling method developed by Sawicki, Song, and Hu for solving the coupled perturbed equations of motion on-brane and off-brane. There is a detectable departure of the perturbed gravitational potential from the cold dark matter model with vacuum energy, even at the minimal deviation of the effective equation of state w{sub eff} below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect, which is strengthened in the self-accelerating branch of the DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.

  4. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability-wave description of the dominant large-scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory; their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  5. Conjugate-Gradient Algorithms For Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1993-01-01

    Algorithms for serial and parallel computation of the forward dynamics of multiple-link robotic manipulators by the conjugate-gradient method were developed. The parallel algorithms have potential for speeding up the computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors could be used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.

  6. Technical Assessment: Integrated Photonics

    DTIC Science & Technology

    2015-10-01

    in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this...communication networks. This has enabled services like the internet, high performance computing, and power-efficient large-scale data centers. The...signal processing, quantum information science, and optics for free space applications. However major obstacles challenge the implementation of

  7. Compulsory Book Reading at School and within Leisure

    ERIC Educational Resources Information Center

    Pavlovic, Slavica

    2015-01-01

    This paper deals with the attitudes of secondary school pupils towards compulsory book reading at school, being an integral part of the Croat language and literature teaching subject, and its possible impact on their book (not-)reading in their leisure time. It is based on research carried out through a five-point Likert-type scale in…

  8. Credit with Education and Title II Programs. Technical Note. Food and Nutrition Technical Assistance.

    ERIC Educational Resources Information Center

    Reid, Helen

    "Credit with Education" is a way to provide self-financing microfinance (or small-scale banking) to women, primarily in very poor rural areas, while at the same time providing education for business and family survival. Within the village banking environment, attempts to integrate education with village bank meetings have fallen into two…

  9. Meeting the Challenge of IS Curriculum Modernization: A Guide to Overhaul, Integration, and Continuous Improvement

    ERIC Educational Resources Information Center

    McGann, Sean T.; Frost, Raymond D.; Matta, Vic; Huang, Wayne

    2007-01-01

    Information Systems (IS) departments are facing challenging times as enrollments decline and the field evolves, thus necessitating large-scale curriculum changes. Our experience shows that many IS departments are in such a predicament as they have not evolved content quickly enough to keep it relevant, they do a poor job coordinating curriculum…

  10. Piecewise Linear Approach for Timing Simulation of VLSI (Very-Large-Scale-Integrated) Circuits on Serial and Parallel Computers.

    DTIC Science & Technology

    1987-12-01


  11. Decay of passive scalar fluctuations in axisymmetric turbulence

    NASA Astrophysics Data System (ADS)

    Yoshimatsu, Katsunori; Davidson, Peter A.; Kaneda, Yukio

    2016-11-01

    Passive scalar fluctuations in axisymmetric Saffman turbulence are examined theoretically and numerically. Theoretical predictions are verified by direct numerical simulation (DNS). According to the DNS, self-similar decay of the turbulence and persistence of the large-scale anisotropy are found in the fully developed turbulence. The DNS also confirms the time-independence of the Corrsin integral.

  12. Advancing effects analysis for integrated, large-scale wildfire risk assessment

    Treesearch

    Matthew P. Thompson; David E. Calkin; Julie W. Gilbertson-Day; Alan A. Ager

    2011-01-01

    In this article, we describe the design and development of a quantitative, geospatial risk assessment tool intended to facilitate monitoring trends in wildfire risk over time and to provide information useful in prioritizing fuels treatments and mitigation measures. The research effort is designed to develop, from a strategic view, a first approximation of how both...

  13. Using water stable isotopes to assess evaporation and water residence time of lakes in EPA’s National Lakes Assessment.

    EPA Science Inventory

    Stable isotopes of water (18O and 2H) can be very useful in large-scale monitoring programs because water samples are easy to collect and water isotopes integrate information about basic hydrological processes such as evaporation as a percentage of inflow (E/I), w...

  14. LSI logic for phase-control rectifiers

    NASA Technical Reports Server (NTRS)

    Dolland, C.

    1980-01-01

    Signals for controlling a phase-controlled rectifier circuit are generated by combinatorial logic that can be implemented in large-scale integration (LSI). The LSI circuit saves space, weight, and assembly time compared to previous controls that employ one-shot multivibrators, latches, and capacitors. The LSI logic functions by sensing the three phases of the ac power source and by comparing actual currents with intended currents.

  15. Multimodal integration of fMRI and EEG data for high spatial and temporal resolution analysis of brain networks

    PubMed Central

    Mantini, D.; Marzetti, L.; Corbetta, M.; Romani, G.L.; Del Gratta, C.

    2017-01-01

    Two major non-invasive brain mapping techniques, electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), have complementary advantages with regard to their spatial and temporal resolution. We propose an approach based on the integration of EEG and fMRI, enabling the EEG temporal dynamics of information processing to be characterized within spatially well-defined fMRI large-scale networks. First, the fMRI data are decomposed into networks by means of spatial independent component analysis (sICA), and those associated with intrinsic activity and/or responding to task performance are selected using information from the related time-courses. Next, the EEG data over all sensors are averaged with respect to event timing, thus calculating event-related potentials (ERPs). The ERPs are subjected to temporal ICA (tICA), and the resulting components are localized with the weighted minimum-norm least squares (WMNLS) algorithm, using the task-related fMRI networks as priors. Finally, the temporal contribution of each ERP component in the areas belonging to the fMRI large-scale networks is estimated. The proposed approach has been evaluated on visual target detection data. Our results confirm that two different components, commonly observed in EEG when presenting novel and salient stimuli, respectively, are related to neuronal activation in large-scale networks operating at different latencies and associated with different functional processes. PMID:20052528
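    The ICA decompositions at the heart of this pipeline can be illustrated with a minimal symmetric FastICA (tanh nonlinearity) in plain NumPy. This is a generic textbook implementation applied to synthetic waveforms, not the authors' code, and the demo signals and mixing matrix are arbitrary assumptions.

```python
import numpy as np

def fastica(X, n_iter=300, seed=0):
    """Minimal symmetric FastICA with tanh nonlinearity.
    X: array (n_signals, n_samples); returns estimated source time-courses."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # whiten using the eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n, m = Z.shape
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        Gx = np.tanh(W @ Z)                      # nonlinearity g
        Gp = 1.0 - Gx ** 2                       # its derivative g'
        W = Gx @ Z.T / m - np.diag(Gp.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)              # symmetric decorrelation:
        W = U @ Vt                               # W <- (W W^T)^(-1/2) W
    return W @ Z

# Demo: unmix two synthetic component waveforms from their mixtures
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # arbitrary mixing matrix
Y = fastica(A @ S)
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
print(corr.max(axis=1))  # each estimate matches one source (up to sign/order)
```

    In the paper's setting the same decomposition is applied spatially to the fMRI volumes (sICA) and temporally to the sensor-space ERPs (tICA); only the axis along which independence is assumed differs.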

  16. Frigatebird behaviour at the ocean-atmosphere interface: integrating animal behaviour with multi-satellite data.

    PubMed

    De Monte, Silvia; Cotté, Cedric; d'Ovidio, Francesco; Lévy, Marina; Le Corre, Matthieu; Weimerskirch, Henri

    2012-12-07

    Marine top predators such as seabirds are useful indicators of the integrated response of the marine ecosystem to environmental variability at different scales. Large-scale physical gradients constrain seabird habitat. Birds however respond behaviourally to physical heterogeneity at much smaller scales. Here, we use, for the first time, three-dimensional GPS tracking of a seabird, the great frigatebird (Fregata minor), in the Mozambique Channel. These data, which provide at the same time high-resolution vertical and horizontal positions, allow us to relate the behaviour of frigatebirds to the physical environment at the (sub-)mesoscale (10-100 km, days-weeks). Behavioural patterns are classified based on the birds' vertical displacement (e.g. fast/slow ascents and descents), and are overlaid on maps of physical properties of the ocean-atmosphere interface, obtained by a nonlinear analysis of multi-satellite data. We find that frigatebirds modify their behaviours concurrently to transport and thermal fronts. Our results suggest that the birds' co-occurrence with these structures is a consequence of their search not only for food (preferentially searched over thermal fronts) but also for upward vertical wind. This is also supported by their relationship with mesoscale patterns of wind divergence. Our multi-disciplinary method can be applied to forthcoming high-resolution animal tracking data, and aims to provide a mechanistic understanding of animals' habitat choice and of marine ecosystem responses to environmental change.

  17. The Resource Consumption Principle: Attention and Memory in Volumes of Neural Tissue

    NASA Astrophysics Data System (ADS)

    Montague, P. Read

    1996-04-01

    In the cerebral cortex, the small volume of the extracellular space in relation to the volume enclosed by synapses suggests an important functional role for this relationship. It is well known that there are atoms and molecules in the extracellular space that are absolutely necessary for synapses to function (e.g., calcium). I propose here the hypothesis that the rapid shift of these atoms and molecules from extracellular to intrasynaptic compartments represents the consumption of a shared, limited resource available to local volumes of neural tissue. Such consumption results in a dramatic competition among synapses for resources necessary for their function. In this paper, I explore a theory in which this resource consumption plays a critical role in the way local volumes of neural tissue operate. On short time scales, this principle of resource consumption permits a tissue volume to choose those synapses that function in a particular context and thereby helps to integrate the many neural signals that impinge on a tissue volume at any given moment. On longer time scales, the same principle aids in the stable storage and recall of information. The theory provides one framework for understanding how cerebral cortical tissue volumes integrate, attend to, store, and recall information. In this account, the capacity of neural tissue to attend to stimuli is intimately tied to the way tissue volumes are organized at fine spatial scales.

  18. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    NASA Astrophysics Data System (ADS)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden-to-ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to operate such pits safely. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system, developed over a number of years, and illustrates its advantages through a specific application. The presented case study shows how the high production slopes of a mine exceeding depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow-to-moderate landslide velocity. Monitoring data from the past four years are included in the database and can be analyzed to produce valuable results. Time-series correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.

  19. VizieR Online Data Catalog: SOFI and ISOCAM observations of Cha II (Persi+, 2003)

    NASA Astrophysics Data System (ADS)

    Persi, P.; Marenzi, A. R.; Gomez, M.; Olofsson, G.

    2003-01-01

    A region of approximately 28'x26' of Cha II, centered at RA = 13h 00min 47s, DE = -77° 06' 09" (2000), was surveyed with ISOCAM in raster mode at LW2(5-8.5μm) (TDT N.11500619) and LW3(12-18μm) (TDT N.11500620). All the frames were observed with a pixel field of view (PFOV) of 6", intrinsic integration time Tint=2.1s and ~15s integration time per sky position. The total integration time was 4472 s and 4474 s for LW2 and LW3, respectively. We obtained J, H, and Ks images of the central part of Cha II covering an area of 4.9'x4.9' with the SOFI near-IR camera at the ESO 3.58m New Technology Telescope (NTT) on the night of April 28, 2000 under very good seeing conditions (~0.3"). SOFI uses a 1024x1024 pixel HgCdTe array and provides a field of view of 299"x299" with a scale of 0.292"/pix. (2 data files).

  20. Study on application of dynamic monitoring of land use based on mobile GIS technology

    NASA Astrophysics Data System (ADS)

    Tian, Jingyi; Chu, Jian; Guo, Jianxing; Wang, Lixin

    2006-10-01

    Land use dynamic monitoring is an important means of keeping land use data updated in real time. Mobile GIS technology integrates GIS, GPS and the Internet. It can update historical data in real time with site-collected data and realize large-scale data updates with high precision. Monitoring methods for land use change data based on mobile GIS technology are discussed. A mobile GIS terminal was self-developed for this study using a GPS-25 OEM receiver and a notebook computer. The RTD (real-time differential) operation mode was selected. A mobile GIS system for dynamic monitoring of land use was developed with Visual C++ as the operation platform, the MapObjects control as the graphic platform and the MSComm control as the communication platform, realizing an organic integration of GPS, GPRS and GIS. The system provides basic functions such as data processing, graphic display, graphic editing, attribute query and navigation. Qinhuangdao city was selected as the experimental area. The study results show that the mobile GIS integration system for dynamic monitoring of land use developed in this study has practical application value.

  1. Some new thoughts about long-term precession formula

    NASA Astrophysics Data System (ADS)

    Vondrák, J.; Capitaine, N.; Wallace, P.

    2011-10-01

    In our preceding study (Vondrák et al. 2009) we formulated developments for the precessional contribution to the CIP X, Y coordinates suitable for use over long time intervals. They were fitted to IAU 2006 close to J2000.0 and, for more distant epochs, to the numerical integration of the ecliptic (using the integrator package Mercury 6) and of the general precession and obliquity (using Laskar's solution LA93). Now we define the boundary between precession and nutation (both are periodic) to avoid their overlap. We use the IAU 2006 model (based on Bretagnon's solution VSOP87 and the JPL planetary ephemeris DE406) to represent the precession of the ecliptic close to J2000.0, a new integration using Mercury 6 for more distant epochs, and Laskar's LA93 solution to represent general precession and obliquity. The goal is to obtain new developments for different sets of precession angles that fit modern observations near J2000.0 and, at the same time, the numerical integration of the translatory-rotatory motions of solar system bodies on scales of several thousand centuries.

  2. Qualitative Assessment of the Integration of HIV Services With Infant Routine Immunization Visits in Tanzania

    PubMed Central

    Wallace, Aaron; Kimambo, Sajida; Dafrossa, Lyimo; Rusibamayila, Neema; Rwebembera, Anath; Songoro, Juma; Arthur, Gilly; Luman, Elizabeth; Finkbeiner, Thomas; Goodson, James L.

    2015-01-01

    Background: In 2009, a project was implemented in 8 primary health clinics throughout Tanzania to explore the feasibility of integrating pediatric HIV prevention services with routine infant immunization visits. Methods: We conducted interviews with 64 conveniently sampled mothers of infants who had received integrated HIV and immunization services, and with 16 providers who delivered the integrated services, to qualitatively identify benefits and challenges of the intervention midway through project implementation. Findings: Mothers' perceived benefits of the integrated services included time savings and the opportunity to learn their child's HIV status and receive HIV treatment if necessary. Providers' perceived benefits included reaching mothers who would not have come for HIV testing alone. Mothers and providers reported similar challenges, including mothers' fear of HIV testing, poor spousal support, perceived mandatory HIV testing, poor patient flow affecting confidentiality of service delivery, heavier provider workloads, and community stigma against HIV-infected persons, the latter being a more frequent theme in rural than in urban locations. Interpretation: Future scale-up should ensure the privacy of these integrated services at clinics and provide community outreach to address stigma and perceived mandatory testing. Increasing human resources for health to address higher workloads and longer waiting times for proper patient flow is necessary in the long term. PMID:24326602

  3. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time, on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems, in which hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means of hardware implementation that is completely activity-dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time-domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low-power FPGA platform to demonstrate its hardware viability. Main results. Considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be performed with accuracy comparable to or better than reference methods, whilst requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
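
    A minimal sketch of level-crossing encoding applied to a discretely sampled test signal: an event is emitted each time the signal moves one quantization step away from the last event level, and a staircase reconstruction stays within one step of the input. The step size and sine test are illustrative choices, not the authors' parameters:

    ```python
    import numpy as np

    def level_crossing_encode(signal, delta):
        """Emit (sample_index, direction) events whenever the signal moves a
        full quantization step 'delta' away from the last event level."""
        events, level = [], signal[0]
        for i, x in enumerate(signal):
            while x >= level + delta:       # upward crossing(s)
                level += delta
                events.append((i, +1))
            while x <= level - delta:       # downward crossing(s)
                level -= delta
                events.append((i, -1))
        return events

    def level_crossing_decode(events, n, start_level, delta):
        """Rebuild a staircase approximation of the signal from the events."""
        out = np.empty(n)
        level, last = start_level, 0
        for i, d in events:
            out[last:i] = level
            level += d * delta
            last = i
        out[last:] = level
        return out

    # Reconstruction error of a level-crossing-coded sine stays within one step.
    t = np.linspace(0.0, 1.0, 1000)
    sig = np.sin(2 * np.pi * 3.0 * t)
    events = level_crossing_encode(sig, delta=0.05)
    recon = level_crossing_decode(events, len(sig), sig[0], delta=0.05)
    max_err = float(np.max(np.abs(sig - recon)))
    ```

    The event stream is sparse when the signal is quiet, which is what makes the representation attractive for activity-dependent hardware.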

  4. Real-time hydrological early warning system at national scale for surface water and groundwater with stakeholder involvement

    NASA Astrophysics Data System (ADS)

    He, X.; Stisen, S.; Henriksen, H. J.

    2015-12-01

    Hydrological models have been important tools to support decision making in water resource management over the past few decades. Nowadays, the frequent occurrence of extreme hydrological events has put focus on the development of real-time hydrological modeling and forecasting systems. Among the various types of hydrological models, only rainfall-runoff models for surface water are commonly used in an online, real-time fashion; there is no tradition of using integrated hydrological models for both surface water and groundwater from a large-scale perspective. At the Geological Survey of Denmark and Greenland (GEUS), we have set up and calibrated an integrated hydrological model that covers the entire nation, namely the DK-model. So far, the DK-model has only been used in offline mode for historical and future scenario simulations. Therefore, challenges arise when operating the DK-model in real-time mode due to a lack of technical experience and stakeholder awareness. In the present study, we demonstrate the process of bringing the DK-model online while actively involving stakeholders. Although the system is not yet fully operational, a prototype has been finished and presented to the stakeholders; it simulates groundwater levels, streamflow and water content in the root zone with a lead time of 48 hours, refreshed every 6 hours. The active involvement of stakeholders has provided valuable insights and feedback for future improvements.

  5. Future trends in transport and fate of diffuse contaminants in catchments, with special emphasis on stable isotope applications

    USGS Publications Warehouse

    Turner, J.; Albrechtsen, H.-J.; Bonell, M.; Duguet, J.-P.; Harris, B.; Meckenstock, R.; McGuire, K.; Moussa, R.; Peters, N.; Richnow, H.H.; Sherwood-Lollar, B.; Uhlenbrook, S.; van, Lanen H.

    2006-01-01

    A summary is provided of the first of a series of proposed Integrated Science Initiative workshops supported by the UNESCO International Hydrological Programme. The workshop brought together hydrologists, environmental chemists, microbiologists, stable isotope specialists and natural resource managers with the purpose of communicating new ideas on ways to assess microbial degradation processes and reactive transport at catchment scales. The focus was on diffuse contamination at catchment scales and the application of compound-specific isotope analysis (CSIA) in the assessment of biological degradation processes of agrochemicals. Major outcomes were identifying the linkage between water residence time distribution and rates of contaminant degradation, identifying the need for better information on compound specific microbial degradation isotope fractionation factors and the potential of CSIA in identifying key degradative processes. In the natural resource management context, a framework was developed where CSIA techniques were identified as practically unique in their capacity to serve as distributed integrating indicators of process across a range of scales (micro to diffuse) of relevance to the problem of diffuse pollution assessment. Copyright © 2006 John Wiley & Sons, Ltd.

  6. Monitoring Strategies for REDD+: Integrating Field, Airborne, and Satellite Observations of Amazon Forests

    NASA Technical Reports Server (NTRS)

    Morton, Douglas; Souza, Carlos, Jr.; Keller, Michael

    2012-01-01

    Large-scale tropical forest monitoring efforts in support of REDD+ (Reducing Emissions from Deforestation and forest Degradation plus enhancing forest carbon stocks) confront a range of challenges. REDD+ activities typically have short reporting time scales, diverse data needs, and low tolerance for uncertainties. Meeting these challenges will require innovative use of remote sensing data, including integrating data at different spatial and temporal resolutions. The global scientific community is engaged in developing, evaluating, and applying new methods for regional to global scale forest monitoring. Pilot REDD+ activities are underway across the tropics with support from a range of national and international groups, including SilvaCarbon, an interagency effort to coordinate US expertise on forest monitoring and resource management. Early actions on REDD+ have exposed some of the inherent tradeoffs that arise from the use of incomplete or inaccurate data to quantify forest area changes and related carbon emissions. Here, we summarize recent advances in forest monitoring to identify and target the main sources of uncertainty in estimates of forest area changes, aboveground carbon stocks, and Amazon forest carbon emissions.

  7. Visualization and Analysis of Multi-scale Land Surface Products via Giovanni Portals

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Kempler, Steven J.; Gerasimov, Irina V.

    2013-01-01

    Large volumes of MODIS land data products at multiple spatial resolutions have been integrated into the Giovanni online analysis system to support studies on land cover and land use changes, focused on the Northern Eurasia and Monsoon Asia regions through the LCLUC program. Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), providing a simple and intuitive way to visualize, analyze, and access Earth science remotely sensed and modeled data. Customized Giovanni Web portals (Giovanni-NEESPI and Giovanni-MAIRS) have been created to integrate land, atmospheric, cryospheric, and societal products, enabling researchers to do quick exploration and basic analyses of land surface changes, and their relationships to climate, at global and regional scales. This presentation shows a sample Giovanni portal page, lists selected data products in the system, and illustrates potential analyses with images and time series at global and regional scales, focusing on climatology and anomaly analysis. More information is available at the GES DISC MAIRS data support project portal: http://disc.sci.gsfc.nasa.gov/mairs.

  8. Are X-rays the key to integrated computational materials engineering?

    DOE PAGES

    Ice, Gene E.

    2015-11-01

    The ultimate dream of materials science is to predict materials behavior from composition and processing history. Owing to the growing power of computers, this long-time dream has recently found expression through worldwide excitement in a number of computation-based thrusts: integrated computational materials engineering, materials by design, computational materials design, three-dimensional materials physics and mesoscale physics. However, real materials have important crystallographic structures at multiple length scales, which evolve during processing and in service. Moreover, real materials properties can depend on the extreme tails in their structural and chemical distributions. This makes it critical to map structural distributions with sufficient resolution to resolve small structures and with sufficient statistics to capture the tails of distributions. For two-dimensional materials, there are high-resolution nondestructive probes of surface and near-surface structures with atomic or near-atomic resolution that can provide detailed structural, chemical and functional distributions over important length scales. However, there are no nondestructive three-dimensional probes with atomic resolution over the multiple length scales needed to understand most materials.

  9. Laboratory-Scale Simulation and Real-Time Tracking of a Microbial Contamination Event and Subsequent Shock-Chlorination in Drinking Water

    PubMed Central

    Besmer, Michael D.; Sigrist, Jürg A.; Props, Ruben; Buysschaert, Benjamin; Mao, Guannan; Boon, Nico; Hammes, Frederik

    2017-01-01

    Rapid contamination of drinking water in distribution and storage systems can occur due to pressure drop, backflow, cross-connections, accidents, and bio-terrorism. Small volumes of a concentrated contaminant (e.g., wastewater) can contaminate large volumes of water in a very short time with potentially severe negative health impacts. The technical limitations of conventional, cultivation-based microbial detection methods neither allow for timely detection of such contaminations, nor for the real-time monitoring of subsequent emergency remediation measures (e.g., shock-chlorination). Here we applied a newly developed continuous, ultra-high-frequency flow cytometry approach to track a rapid pollution event and subsequent disinfection of drinking water in an 80-min laboratory scale simulation. We quantified total (TCC) and intact (ICC) cell concentrations as well as flow cytometric fingerprints in parallel in real time with two different staining methods. The ingress of wastewater was detectable almost immediately (i.e., after 0.6% volume change), significantly changing TCC, ICC, and the flow cytometric fingerprint. Shock chlorination was rapid and detected in real time, causing membrane damage in the vast majority of bacteria (i.e., a drop of ICC from more than 380 cells μl⁻¹ to less than 30 cells μl⁻¹ within 4 min). Both of these effects as well as the final wash-in of fresh tap water followed calculated predictions well. Detailed and highly quantitative tracking of microbial dynamics at very short time scales and for different characteristics (e.g., concentration, membrane integrity) is feasible. This opens up multiple possibilities for targeted investigation of a myriad of bacterial short-term dynamics (e.g., disinfection, growth, detachment, operational changes) both in laboratory-scale research and full-scale system investigations in practice. PMID:29085343
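
    The "calculated predictions" for wash-in and wash-out events of this kind are commonly based on an ideally mixed vessel (CSTR) model, in which the concentration relaxes exponentially toward the inlet concentration. A minimal sketch with purely illustrative numbers (vessel size, flow rate and cell counts are assumptions, not the study's values):

    ```python
    import math

    def cstr_concentration(c0, c_in, flow_rate, volume, t):
        """Concentration in an ideally mixed vessel of 'volume' fed at
        'flow_rate' with inlet concentration c_in, starting from c0.
        Solves dC/dt = (Q/V) * (c_in - C); units must be consistent."""
        return c_in + (c0 - c_in) * math.exp(-flow_rate * t / volume)

    # Hypothetical wash-out: a 2 L vessel flushed at 0.1 L/min with tap water
    # at 100 cells/µL, starting from a contaminated 500 cells/µL, after 30 min.
    c = cstr_concentration(c0=500.0, c_in=100.0, flow_rate=0.1, volume=2.0, t=30.0)
    ```

    The time constant V/Q (here 20 min) sets how quickly a measured cell concentration should converge to the inlet value, which is what real-time cytometry data can be compared against.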

  10. Testing the limits of Paleozoic chronostratigraphic correlation via high-resolution (δ13Ccarb) biochemostratigraphy across the Llandovery–Wenlock (Silurian) boundary: Is a unified Phanerozoic time scale achievable?

    USGS Publications Warehouse

    Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.

    2010-01-01

    The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series, combined with the relative incompleteness of the Paleozoic stratigraphic record, have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.

  11. Scalable Manufacturing of Solderable and Stretchable Physiologic Sensing Systems.

    PubMed

    Kim, Yun-Soung; Lu, Jesse; Shih, Benjamin; Gharibans, Armen; Zou, Zhanan; Matsuno, Kristen; Aguilera, Roman; Han, Yoonjae; Meek, Ann; Xiao, Jianliang; Tolley, Michael T; Coleman, Todd P

    2017-10-01

    Methods for microfabrication of solderable and stretchable sensing systems (S4s) and a scaled production of adhesive-integrated active S4s for health monitoring are presented. S4s' excellent solderability is achieved by the sputter-deposited nickel-vanadium and gold pad metal layers and copper interconnection. The donor substrate, which is modified with "PI islands" to become selectively adhesive for the S4s, allows the heterogeneous devices to be integrated with large-area adhesives for packaging. The feasibility for S4-based health monitoring is demonstrated by developing an S4 integrated with a strain gauge and an onboard optical indication circuit. Owing to S4s' compatibility with the standard printed circuit board assembly processes, a variety of commercially available surface mount chip components, such as the wafer level chip scale packages, chip resistors, and light-emitting diodes, can be reflow-soldered onto S4s without modifications, demonstrating the versatile and modular nature of S4s. Tegaderm-integrated S4 respiration sensors are tested for robustness for cyclic deformation, maximum stretchability, durability, and biocompatibility for multiday wear time. The results of the tests and demonstration of the respiration sensing indicate that the adhesive-integrated S4s can provide end users a way for unobtrusive health monitoring. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. SeismoGeodesy: Combination of High Rate, Real-time GNSS and Accelerometer Observations and Rapid Seismic Event Notification for Earthquake Early Warning and Volcano Monitoring

    NASA Astrophysics Data System (ADS)

    Jackson, Michael; Zimakov, Leonid; Moessmer, Matthias

    2015-04-01

    Scientific GNSS networks are moving towards a model of real-time data acquisition, epoch-by-epoch storage integrity, and on-board real-time position and displacement calculations. This new paradigm allows the integration of real-time, high-rate GNSS displacement information with acceleration and velocity data to create very high-rate displacement records. The mating of these two instruments allows the creation of a new, very high-rate (200 Hz) displacement observable that has the full-scale displacement characteristics of GNSS and high-precision dynamic motions of seismic technologies. It is envisioned that these new observables can be used for earthquake early warning studies, volcano monitoring, and critical infrastructure monitoring applications. Our presentation will focus on the characteristics of GNSS, seismic, and strong motion sensors in high dynamic environments, including historic earthquakes replicated on a shake table over a range of displacements and frequencies. We will explore the optimum integration of these sensors from a filtering perspective including simple harmonic impulses over varying frequencies and amplitudes and under the dynamic conditions of various earthquake scenarios. We will also explore the tradeoffs between various GNSS processing schemes including real-time precise point positioning (PPP) and real-time kinematic (RTK) as applied to seismogeodesy. In addition we will discuss implementation of a Rapid Seismic Event Notification System that provides quick delivery of digital data from seismic stations to the acquisition and processing center and a full data integrity model for real-time earthquake notification that provides warning prior to significant ground shaking.
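
    One common way to blend the two sensor streams described above is a complementary filter: the drift-free but noisy GNSS displacement anchors the low frequencies, while the accelerometer supplies the high-frequency motion. The filter structure, gains and synthetic signals below are illustrative assumptions, not the presented system's algorithm:

    ```python
    import numpy as np

    def complementary_fuse(gnss_disp, accel, dt, tau):
        """Critically damped second-order complementary filter: integrate the
        accelerometer for displacement, and continuously pull position and
        velocity toward the GNSS solution with crossover time constant tau."""
        k1, k2 = 2.0 / tau, 1.0 / tau ** 2   # position / velocity gains
        est, vel = gnss_disp[0], 0.0
        out = np.empty(len(accel))
        for i in range(len(accel)):
            est += vel * dt                  # inertial prediction
            vel += accel[i] * dt
            err = gnss_disp[i] - est         # GNSS innovation
            est += k1 * err * dt
            vel += k2 * err * dt
            out[i] = est
        return out

    # Synthetic check (hypothetical numbers): 200 Hz data, 0.2 m sinusoidal
    # motion, accelerometer with a 0.05 m/s^2 bias, 1 cm GNSS noise.
    rng = np.random.default_rng(1)
    dt, tau = 0.005, 0.5
    t = np.arange(0.0, 10.0, dt)
    truth = 0.2 * np.sin(2 * np.pi * t)
    accel = -0.2 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t) + 0.05
    gnss = truth + rng.normal(0.0, 0.01, t.size)
    fused = complementary_fuse(gnss, accel, dt, tau)
    rms_error = float(np.sqrt(np.mean((fused - truth) ** 2)))
    ```

    Without the GNSS correction terms, the accelerometer bias alone would make the double-integrated displacement drift by meters over these 10 seconds; the fused estimate stays within centimeters after the initial transient.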

  13. Unfolding an electronic integrate-and-fire circuit.

    PubMed

    Carrillo, Humberto; Hoppensteadt, Frank

    2010-01-01

    Many physical and biological phenomena involve accumulation and discharge processes that can occur on significantly different time scales. Models of these processes have contributed to understanding excitability, self-sustained oscillations, and synchronization in arrays of oscillators. Integrate-and-fire (I+F) models are popular minimal fill-and-flush mathematical models. They are used in neuroscience to study spiking and phase locking in single neuron membranes and large-scale neural networks, and in a variety of applications in physics and electrical engineering. We show here how the classical first-order I+F model fits into the theory of nonlinear oscillators of van der Pol type by demonstrating that a particular second-order oscillator having small parameters converges in a singular perturbation limit to the I+F model. In this sense, our study provides a novel unfolding of such models and identifies a constructible electronic circuit that is closely related to I+F.
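
    The classical first-order I+F model referred to above can be stated in a few lines of simulation (fill-and-flush: linear accumulation dV/dt = I/C, instantaneous reset at a threshold). The unit parameter values are illustrative:

    ```python
    def integrate_and_fire(current, capacitance, threshold, dt, t_end):
        """Classical first-order integrate-and-fire: the state ramps as
        dV/dt = I/C and resets to zero on reaching the threshold."""
        v, t, spikes = 0.0, 0.0, []
        while t < t_end:
            v += (current / capacitance) * dt
            if v >= threshold:
                spikes.append(t)
                v = 0.0
            t += dt
        return spikes

    # With I = C = Vth = 1 the firing period is C * Vth / I = 1 s.
    spikes = integrate_and_fire(1.0, 1.0, 1.0, dt=1e-4, t_end=5.1)
    ```

    The constant inter-spike interval C·Vth/I is the accumulation-and-discharge cycle that the paper's second-order van der Pol-type oscillator reproduces in its singular perturbation limit.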

  14. Circuit engineering principles for construction of bipolar large-scale integrated circuit storage devices and very large-scale main memory

    NASA Astrophysics Data System (ADS)

    Neklyudov, A. A.; Savenkov, V. N.; Sergeyez, A. G.

    1984-06-01

    Memories are improved by increasing speed or the memory volume on a single chip. The most effective means for increasing speed in bipolar memories are current-control circuits with the lowest extraction times for a given power consumption (1/4 pJ/bit). The current-control circuitry involves multistage current switches and circuits accelerating transient processes in storage elements and links. Circuit principles for the design of bipolar memories with maximum speed for a given minimal circuit topology are analyzed. Two main classes of current-controlled storage are considered: ECL-type storage and super-integrated injection-type storage, with data capacities of N = 1/4 and N = 4/16, respectively. The circuits reduce logic voltage differentials and the volumes of word and discharge buses and control-circuit buses. The limiting speed is determined by the anti-interference requirements of the memory in storage and extraction modes.

  15. Ultrafast optical ranging using microresonator soliton frequency combs

    NASA Astrophysics Data System (ADS)

    Trocha, P.; Karpov, M.; Ganin, D.; Pfeiffer, M. H. P.; Kordts, A.; Wolf, S.; Krockenberger, J.; Marin-Palomo, P.; Weimann, C.; Randel, S.; Freude, W.; Kippenberg, T. J.; Koos, C.

    2018-02-01

    Light detection and ranging is widely used in science and industry. Over the past decade, optical frequency combs were shown to offer advantages in optical ranging, enabling fast distance acquisition with high accuracy. Driven by emerging high-volume applications such as industrial sensing, drone navigation, or autonomous driving, there is now a growing demand for compact ranging systems. Here, we show that soliton Kerr comb generation in integrated silicon nitride microresonators provides a route to high-performance chip-scale ranging systems. We demonstrate dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds along with ultrafast ranging at acquisition rates of 100 megahertz, allowing for in-flight sampling of gun projectiles moving at 150 meters per second. Combining integrated soliton-comb ranging systems with chip-scale nanophotonic phased arrays could enable compact ultrafast ranging systems for emerging mass applications.
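
    The Allan deviation quoted above is a standard measure of how measurement scatter falls with averaging time. A minimal non-overlapping estimator, checked against the known white-noise behavior (the check values are illustrative, not the paper's ranging data):

    ```python
    import numpy as np

    def allan_deviation(y, m):
        """Non-overlapping Allan deviation of series y at an averaging window
        of m samples (averaging time tau = m * sample interval)."""
        n = len(y) // m
        means = y[: n * m].reshape(n, m).mean(axis=1)
        return float(np.sqrt(0.5 * np.mean(np.diff(means) ** 2)))

    # For white noise of standard deviation sigma, the Allan deviation should
    # fall as sigma / sqrt(m): with sigma = 1 and m = 100, expect about 0.1.
    rng = np.random.default_rng(3)
    noise = rng.normal(0.0, 1.0, 100_000)
    adev = allan_deviation(noise, m=100)
    ```

    Plotting adev against m on log-log axes is the usual way to read off the averaging time at which a ranging system reaches a target precision.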

  16. An Integrative, Multi-Scale Computational Model of a Swimming Lamprey Fully Coupled to Its Fluid Environment and Incorporating Proprioceptive Feedback

    NASA Astrophysics Data System (ADS)

    Hamlet, C. L.; Hoffman, K.; Fauci, L.; Tytell, E.

    2016-02-01

    The lamprey is a model organism for both neurophysiology and locomotion studies. To study the role of sensory feedback as an organism moves through its environment, a 2D, integrative, multi-scale model of an anguilliform swimmer driven by neural activation from a central pattern generator (CPG) is constructed. The CPG in turn drives muscle kinematics and is fully coupled to the surrounding fluid. The system is numerically evolved in time using an immersed boundary framework producing an emergent swimming mode. Proprioceptive feedback to the CPG based on experimental observations adjust the activation signal as the organism interacts with its environment. Effects on the speed, stability and cost (metabolic work) of swimming due to nonlinear dependencies associated with muscle force development combined with proprioceptive feedback to neural activation are estimated and examined.

  17. Unidirectional waveguide grating antennas with uniform emission for optical phased arrays.

    PubMed

    Raval, Manan; Poulton, Christopher V; Watts, Michael R

    2017-07-01

    We demonstrate millimeter-scale optical waveguide grating antennas with unidirectional emission for integrated optical phased arrays. Unidirectional emission eliminates the fundamental problem of blind spots in the element factor of a phased array caused by reflections of antenna radiation within the substrate. Over 90% directionality is demonstrated using a design consisting of two silicon nitride layers. Furthermore, the perturbation strength along the antenna is apodized to achieve uniform emission for the first time, to the best of our knowledge, on a millimeter scale. This allows for a high effective aperture and receiving efficiency. The emission profile of the measured 3 mm long antenna has a standard deviation of 8.65% of the mean. These antennas are state of the art and will allow for integrated optical phased arrays with blind-spot-free high transmission output power and high receiving efficiency for LIDAR and free-space communication systems.
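
    The apodization goal described above, uniform emission along the grating, follows from a simple energy balance: if the local power emission rate α(z) satisfies α(z)·P(z) = const while dP/dz = −α(z)·P(z), the radiated power density is flat, which forces α to grow toward the far end as the guided power depletes. A sketch of this textbook energy-balance result (not the paper's specific two-layer silicon nitride design):

    ```python
    def uniform_emission_alpha(z, length, fraction):
        """Local power emission rate alpha(z) (1/m) giving a spatially uniform
        emitted power density along a grating of the given length, radiating
        'fraction' of the input power in total. With P(0) = 1:
        dP/dz = -alpha(z) P(z) and alpha(z) P(z) = fraction / length."""
        p = fraction / length      # constant emitted power density (per P0)
        return p / (1.0 - p * z)

    # Numerical check: integrating dP/dz = -alpha P along a 3 mm antenna that
    # radiates 90% of its input should leave 10% of the power at the far end.
    L, P, z, steps = 3e-3, 1.0, 0.0, 100_000
    dz = L / steps
    for _ in range(steps):
        P -= uniform_emission_alpha(z, L, 0.9) * P * dz
        z += dz
    ```

    Radiating 100% of the power would require α to diverge at the end of the grating, which is why practical designs emit only a finite fraction.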

  18. [Development of the theoretical framework and the item pool of the peri-operative recovery scale for integrative medicine].

    PubMed

    Su, Bi-ying; Liu, Shao-nan; Li, Xiao-yan

    2011-11-01

    To describe the train of thought and procedures for developing the theoretical framework and the item pool of the peri-operative recovery scale for integrative medicine, in preparation for the development of this scale and its psychometric testing. Under the guidance of Chinese medicine theory and of principles for developing psychometric scales, the theoretical framework and the item pool of the scale were initially laid out through literature retrieval, expert consultation, etc. The scale covers the domains of physical function, mental function, activity function, pain, and general assessment. In addition, social function is included, which is suitable for pre-operative testing and for long-term therapeutic efficacy testing after discharge from hospital. Each domain covers the correlated Zang-Fu organs, qi, blood, and patient-reported outcomes. In total, 122 items were initially included in the item pool according to the theoretical framework of the scale. The peri-operative recovery scale for integrative medicine embodies the combination of Chinese medicine theory and patient-reported outcome concepts, and can reasonably assess the peri-operative recovery outcomes of patients treated with integrative medicine.

  19. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading-order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x t^{-1/4} rather than x t^{-1/2} as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.

  20. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    DOE PAGES

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-02

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. However, how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer 'how far is far enough,' we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25–2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. 
For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.

  1. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    NASA Astrophysics Data System (ADS)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. 
For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
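
    The analysis pipeline described above (high-pass filter each site's time series, correlate site pairs, then fit a correlation length) can be sketched as follows. The moving-average high-pass and the synthetic two-site data are illustrative assumptions, not the authors' exact filter or data.

```python
import numpy as np

def high_pass(x, window):
    """Crude high-pass: subtract a centered moving average, whose width
    plays the role of the filter time constant tau."""
    trend = np.convolve(x, np.ones(window) / window, mode="same")
    return x - trend

def correlation_length(distances, correlations):
    """Fit C(d) = exp(-d / xi) by least squares on log C
    (using only positive correlations)."""
    d = np.asarray(distances, dtype=float)
    c = np.asarray(correlations, dtype=float)
    keep = c > 0
    slope = np.polyfit(d[keep], np.log(c[keep]), 1)[0]
    return -1.0 / slope

# Synthetic demo: two sites share a slow "weather" signal but have
# independent fast fluctuations, so high-pass filtering de-correlates them.
rng = np.random.default_rng(0)
n = 10_000
slow = np.sin(2 * np.pi * np.arange(n) / 2000.0)  # shared low-frequency signal
site_a = slow + rng.normal(size=n)
site_b = slow + rng.normal(size=n)

raw_corr = np.corrcoef(site_a, site_b)[0, 1]
hp_corr = np.corrcoef(high_pass(site_a, 200), high_pass(site_b, 200))[0, 1]
```

After filtering, only the independent fast fluctuations remain, so the inter-site correlation collapses, which is the qualitative mechanism behind the τ-dependent correlation length reported above.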

  2. MHD Modeling of the Solar Wind with Turbulence Transport and Heating

    NASA Technical Reports Server (NTRS)

    Goldstein, M. L.; Usmanov, A. V.; Matthaeus, W. H.; Breech, B.

    2009-01-01

    We have developed a magnetohydrodynamic model that describes the global axisymmetric steady-state structure of the solar wind near solar minimum, accounting for the transport of small-scale turbulence and the associated heating. The Reynolds-averaged mass, momentum, induction, and energy equations for the large-scale solar wind flow are solved simultaneously with the turbulence transport equations in the region from 0.3 to 100 AU. The large-scale equations include subgrid-scale terms due to turbulence, and the turbulence (small-scale) equations describe the effects of transport and (phenomenologically) dissipation of the MHD turbulence based on a few statistical parameters (turbulence energy, normalized cross-helicity, and correlation scale). The coupled set of equations is integrated numerically for a source dipole field on the Sun by a time-relaxation method in the corotating frame of reference. We present results on the plasma, magnetic field, and turbulence distributions throughout the heliosphere and on the role of the turbulence in the large-scale structure and temperature distribution in the solar wind.

  3. Integrated optic head for sensing a two-dimensional displacement of a grating scale

    NASA Astrophysics Data System (ADS)

    Ura, Shogo; Endoh, Toshiaki; Suhara, Toshiaki; Nishihara, Hiroshi

    1996-11-01

    An integrated optic sensor head was proposed for sensing a two-dimensional displacement of a scale consisting of crossed gratings. Two interferometers, crossing each other, are constructed by the integration of two pairs of linearly focusing grating couplers (LFGC's) and two pairs of photodiodes (PD's) on a Si substrate. Four beams radiated by the LFGC's from the sensor head overlap on the grating scale, and the beams are diffracted by the grating scale and interfere on the PD's. The period of the interference signal variation is just half of the scale grating period. The device was designed and fabricated with a grating scale of 3.2-μm period, and the sensing principle was experimentally confirmed.

  4. Entanglement and thermodynamics after a quantum quench in integrable systems.

    PubMed

    Alba, Vincenzo; Calabrese, Pasquale

    2017-07-25

    Entanglement and entropy are key concepts standing at the foundations of quantum and statistical mechanics. Recently, the study of quantum quenches revealed that these concepts are intricately intertwined. Although the unitary time evolution ensuing from a pure state maintains the system at zero entropy, local properties at long times are captured by a statistical ensemble with nonzero thermodynamic entropy, which is the entanglement accumulated during the dynamics. Therefore, understanding the entanglement evolution unveils how thermodynamics emerges in isolated systems. Alas, an exact computation of the entanglement dynamics was available so far only for noninteracting systems, whereas it was deemed unfeasible for interacting ones. Here, we show that the standard quasiparticle picture of the entanglement evolution, complemented with integrability-based knowledge of the steady state and its excitations, leads to a complete understanding of the entanglement dynamics in the space-time scaling limit. We thoroughly check our result for the paradigmatic Heisenberg chain.

  5. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
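
    As a concrete illustration of the kind of statistical check NIST Special Publication 800-22 applies, here is a minimal sketch of its first test, the frequency (monobit) test. This is only one of the suite's tests, shown here for flavor; the paper's evaluation uses the full suite plus TestU01.

```python
import math

def monobit_frequency_test(bits):
    """Frequency (monobit) test from NIST SP 800-22: returns a p-value for
    the null hypothesis that ones and zeros occur with equal probability.
    The sequence passes, at the customary level, when p >= 0.01."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # map 1 -> +1, 0 -> -1
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))
```

A balanced sequence yields p = 1.0, while a heavily biased one drives p toward zero and fails the test.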

  6. Entanglement and thermodynamics after a quantum quench in integrable systems

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo; Calabrese, Pasquale

    2017-07-01

    Entanglement and entropy are key concepts standing at the foundations of quantum and statistical mechanics. Recently, the study of quantum quenches revealed that these concepts are intricately intertwined. Although the unitary time evolution ensuing from a pure state maintains the system at zero entropy, local properties at long times are captured by a statistical ensemble with nonzero thermodynamic entropy, which is the entanglement accumulated during the dynamics. Therefore, understanding the entanglement evolution unveils how thermodynamics emerges in isolated systems. Alas, an exact computation of the entanglement dynamics was available so far only for noninteracting systems, whereas it was deemed unfeasible for interacting ones. Here, we show that the standard quasiparticle picture of the entanglement evolution, complemented with integrability-based knowledge of the steady state and its excitations, leads to a complete understanding of the entanglement dynamics in the space-time scaling limit. We thoroughly check our result for the paradigmatic Heisenberg chain.

  7. Entanglement and thermodynamics after a quantum quench in integrable systems

    PubMed Central

    Alba, Vincenzo; Calabrese, Pasquale

    2017-01-01

    Entanglement and entropy are key concepts standing at the foundations of quantum and statistical mechanics. Recently, the study of quantum quenches revealed that these concepts are intricately intertwined. Although the unitary time evolution ensuing from a pure state maintains the system at zero entropy, local properties at long times are captured by a statistical ensemble with nonzero thermodynamic entropy, which is the entanglement accumulated during the dynamics. Therefore, understanding the entanglement evolution unveils how thermodynamics emerges in isolated systems. Alas, an exact computation of the entanglement dynamics was available so far only for noninteracting systems, whereas it was deemed unfeasible for interacting ones. Here, we show that the standard quasiparticle picture of the entanglement evolution, complemented with integrability-based knowledge of the steady state and its excitations, leads to a complete understanding of the entanglement dynamics in the space–time scaling limit. We thoroughly check our result for the paradigmatic Heisenberg chain. PMID:28698379

  8. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211

  9. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
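
    The recursive computation both records above build on can be sketched in a few lines. This serial reference version (not the paper's row-parallel hardware decomposition) shows why each output value needs only a handful of additions, and why any rectangular box sum then costs four lookups regardless of filter size.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y+1, :x+1], via the standard recurrence
    ii(x, y) = i(x, y) + ii(x-1, y) + ii(x, y-1) - ii(x-1, y-1),
    implemented here with a running row sum."""
    img = np.asarray(img, dtype=np.int64)
    ii = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y, x]                      # cumulative sum along the row
            ii[y, x] = row + (ii[y - 1, x] if y else 0)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over img[top:bottom+1, left:right+1] with at most four lookups."""
    total = ii[bottom, right]
    if top:
        total -= ii[top - 1, right]
    if left:
        total -= ii[bottom, left - 1]
    if top and left:
        total += ii[top - 1, left - 1]
    return total
```

This constant-time box lookup is what lets SURF-style detectors evaluate rectangular features at a speed independent of filter size.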

  10. Time and timing in the acoustic recognition system of crickets

    PubMed Central

    Hennig, R. Matthias; Heller, Klaus-Gerhard; Clemens, Jan

    2014-01-01

    The songs of many insects exhibit precise timing as the result of repetitive and stereotyped subunits on several time scales. As these signals encode the identity of a species, time and timing are important for the recognition system that analyzes these signals. Crickets are a prominent example as their songs are built from sound pulses that are broadcast in a long trill or as a chirped song. This pattern appears to be analyzed on two timescales, short and long. Recent evidence suggests that song recognition in crickets relies on two computations with respect to time; a short linear-nonlinear (LN) model that operates as a filter for pulse rate and a longer integration time window for monitoring song energy over time. Therefore, there is a twofold role for timing. A filter for pulse rate shows differentiating properties for which the specific timing of excitation and inhibition is important. For an integrator, however, the duration of the time window is more important than the precise timing of events. Here, we first review evidence for the role of LN-models and integration time windows for song recognition in crickets. We then parameterize the filter part by Gabor functions and explore the effects of duration, frequency, phase, and offset as these will correspond to differently timed patterns of excitation and inhibition. These filter properties were compared with known preference functions of crickets and katydids. In a comparative approach, the power for song discrimination by LN-models was tested with the songs of over 100 cricket species. It is demonstrated how the acoustic signals of crickets occupy a simple 2-dimensional space for song recognition that arises from timing, described by a Gabor function, and time, the integration window. Finally, we discuss the evolution of recognition systems in insects based on simple sensory computations. PMID:25161622
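
    The Gabor parameterization explored above can be written down directly. The parameter names mirror the four properties varied in the study (duration, frequency, phase, offset), though the exact normalization here is an assumption for illustration.

```python
import numpy as np

def gabor(t, duration, freq, phase, offset=0.0):
    """Gabor filter: Gaussian envelope (width set by duration) times a
    cosine carrier. freq sets the preferred pulse rate, phase the relative
    timing of excitation and inhibition, offset a DC shift that moves the
    filter between differentiating and integrating behavior."""
    envelope = np.exp(-(t ** 2) / (2 * duration ** 2))
    return envelope * np.cos(2 * np.pi * freq * t + phase) + offset

def linear_response(stimulus, kernel):
    """Linear stage of the LN model: filter the song's pulse pattern."""
    return np.convolve(stimulus, kernel, mode="valid")
```

The second computation the review describes, the long integration window for song energy, corresponds to a plain moving average of the rectified response, for which only the window duration matters.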

  11. PPAR-gamma agonist pioglitazone modifies craving intensity and brain white matter integrity in patients with primary cocaine use disorder: a double-blind randomized controlled pilot trial.

    PubMed

    Schmitz, Joy M; Green, Charles E; Hasan, Khader M; Vincent, Jessica; Suchting, Robert; Weaver, Michael F; Moeller, F Gerard; Narayana, Ponnada A; Cunningham, Kathryn A; Dineley, Kelly T; Lane, Scott D

    2017-10-01

    Pioglitazone (PIO), a potent agonist of PPAR-gamma, is a promising candidate treatment for cocaine use disorder (CUD). We tested the effects of PIO on targeted mechanisms relevant to CUD: cocaine craving and brain white matter (WM) integrity. Feasibility, medication compliance and tolerability were evaluated. Two-arm double-blind randomized controlled proof-of-concept pilot trial of PIO or placebo (PLC). Single-site out-patient treatment research clinic in Houston, TX, USA. Thirty treatment-seeking adults, 18 to 60 years old, with CUD. Eighteen participants (8 = PIO; 10 = PLC) completed diffusion tensor imaging (DTI) of WM integrity at pre-/post-treatment. Study medication was dispensed at thrice weekly visits along with once-weekly cognitive behavioral therapy for 12 weeks. Measures of target engagement mechanisms of interest included cocaine craving assessed by the Brief Substance Craving Scale (BSCS), the Obsessive Compulsive Drug Use Scale (OCDUS), a visual analog scale (VAS) and change in WM integrity. Feasibility measures included number completing treatment, medication compliance (riboflavin detection) and tolerability (side effects, serious adverse events). Target engagement change in mechanisms of interest, defined as a ≥ 0.75 Bayesian posterior probability of an interaction existing favoring PIO over PLC, was demonstrated on measures of craving (BSCS, VAS) and WM integrity indexed by fractional anisotropy (FA) values. Outcomes indicated greater decrease in craving and greater increase in FA values in the PIO group. Feasibility was demonstrated by high completion rates among those starting treatment (21/26 = 80%) and medication compliance (≥ 80%). There were no reported serious adverse events for PIO. Compared with placebo, patients receiving pioglitazone show a higher likelihood of reduced cocaine craving and improved brain white matter integrity as a function of time in treatment. 
Pioglitazone shows good feasibility as a treatment for cocaine use disorder. © 2017 Society for the Study of Addiction.

  12. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders of magnitude in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD, for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics, for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Alternatively, the microscopic regime can be avoided completely by replacing it with a Markov State Model (MSM), pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.
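
    The Markov State Model substitution mentioned in the abstract amounts to propagating state probabilities with a precomputed transition matrix instead of simulating the microscopic dynamics explicitly. A minimal sketch, using a made-up two-state binding/unbinding matrix rather than anything path-sampled:

```python
import numpy as np

# Hypothetical 2-state MSM: state 0 = unbound, state 1 = bound.
# Row-stochastic transition matrix for one lag time.
T = np.array([[0.90, 0.10],
              [0.05, 0.95]])

def propagate(p0, T, steps):
    """Evolve a probability distribution over states: p_{k+1} = p_k @ T."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ T
    return p

def stationary(T):
    """Stationary distribution: left eigenvector of T with eigenvalue 1."""
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()
```

Once the matrix is estimated (here is where the path-sampling effort goes), long-time kinetics cost only matrix-vector products.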

  13. Self-aligned blocking integration demonstration for critical sub-40nm pitch Mx level patterning

    NASA Astrophysics Data System (ADS)

    Raley, Angélique; Mohanty, Nihar; Sun, Xinghua; Farrell, Richard A.; Smith, Jeffrey T.; Ko, Akiteru; Metz, Andrew W.; Biolsi, Peter; Devilliers, Anton

    2017-04-01

    Multipatterning has enabled continued scaling of chip technology at the 28nm node and beyond. Self-aligned double patterning (SADP) and self-aligned quadruple patterning (SAQP), as well as Litho-Etch/Litho-Etch (LELE) iterations, are widely used in the semiconductor industry to enable patterning below 193 nm immersion lithography resolutions for layers such as FIN, gate, and critical metal lines. Multipatterning requires the use of multiple masks, which is costly and increases process complexity as well as edge placement error variation, driven mostly by overlay. To mitigate the strict overlay requirements for advanced technology nodes (7nm and below), a self-aligned blocking integration is desirable. This integration trades off the overlay requirement for an etch selectivity requirement and enables the cut mask overlay tolerance to be relaxed from half pitch to three times half pitch. Self-alignment has become the latest trend to enable scaling, and self-aligned integrations are being pursued and investigated for various critical layers such as contact, via, and metal patterning. In this paper we propose and demonstrate a low-cost, flexible self-aligned blocking strategy for critical metal layer patterning for 7nm and beyond, from mask assembly to low-K dielectric etch. The integration is based on a 40nm pitch SADP flow with two cut masks, compatible with either cut or block integration, and employs dielectric films widely used in the back end of the line. As a consequence, this approach is compatible with traditional etch, deposition, and cleans tools that are optimized for dielectric etches. We will review the critical steps and selectivities required to enable this integration, along with benchmarking of each integration option (cut vs. block).

  14. Combination of High Rate, Real-Time GNSS and Accelerometer Observations and Rapid Seismic Event Notification for Earthquake Early Warning and Volcano Monitoring with a Focus on the Pacific Rim.

    NASA Astrophysics Data System (ADS)

    Zimakov, L. G.; Passmore, P.; Raczka, J.; Alvarez, M.; Jackson, M.

    2014-12-01

    Scientific GNSS networks are moving towards a model of real-time data acquisition, epoch-by-epoch storage integrity, and on-board real-time position and displacement calculations. This new paradigm allows the integration of real-time, high-rate GNSS displacement information with acceleration and velocity data to create very high-rate displacement records. The mating of these two instruments allows the creation of a new, very high-rate (200 sps) displacement observable that has the full-scale displacement characteristics of GNSS and high-precision dynamic motions of seismic technologies. It is envisioned that these new observables can be used for earthquake early warning studies, volcano monitoring, and critical infrastructure monitoring applications. Our presentation will focus on the characteristics of GNSS, seismic, and strong motion sensors in high dynamic environments, including historic earthquakes in Southern California and the Pacific Rim, replicated on a shake table, over a range of displacements and frequencies. We will explore the optimum integration of these sensors from a filtering perspective including simple harmonic impulses over varying frequencies and amplitudes and under the dynamic conditions of various earthquake scenarios. In addition we will discuss implementation of a Rapid Seismic Event Notification System that provides quick delivery of digital data from seismic stations to the acquisition and processing center and a full data integrity model for real-time earthquake notification that provides warning prior to significant ground shaking.
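
    The "optimum integration of these sensors from a filtering perspective" mentioned above is commonly approached with a complementary filter, which trusts GNSS at low frequency (drift-free but noisy) and the accelerometer-derived displacement at high frequency (precise but drifting). The single-pole form below is an illustrative textbook sketch, not the presenters' algorithm.

```python
def complementary_fuse(gnss, accel_disp, alpha=0.98):
    """Fuse two displacement series sampled on the same clock:
    high-pass the accelerometer-derived displacement (its drift is slow),
    low-pass the GNSS displacement (its noise is fast).
    alpha near 1 puts the crossover at a low frequency."""
    fused = [gnss[0]]
    for k in range(1, len(gnss)):
        # propagate with the accelerometer increment, then nudge toward GNSS
        predicted = fused[-1] + (accel_disp[k] - accel_disp[k - 1])
        fused.append(alpha * predicted + (1 - alpha) * gnss[k])
    return fused
```

With alpha = 0 the filter reduces to raw GNSS; with alpha near 1 the fast motion comes almost entirely from the accelerometer increments while GNSS removes the slow drift.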

  15. The Need for Integrated Approaches in Metabolic Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    This review highlights state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. We emphasize that a combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the "system" that is being manipulated: transcriptome, translatome, proteome, or reactome. By bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  16. Advanced Fiber-optic Monitoring System for Space-flight Applications

    NASA Technical Reports Server (NTRS)

    Hull, M. S.; VanTassell, R. L.; Pennington, C. D.; Roman, M.

    2005-01-01

    Researchers at Luna Innovations Inc. and the National Aeronautics and Space Administration's Marshall Space Flight Center (NASA MSFC) have developed an integrated fiber-optic sensor system for real-time monitoring of chemical contaminants and whole-cell bacterial pathogens in water. The system integrates interferometric and evanescent-wave optical fiber-based sensing methodologies with atomic force microscopy (AFM) and long-period grating (LPG) technology to provide versatile measurement capability for both micro- and nano-scale analytes. Sensors can be multiplexed in an array format and embedded in a totally self-contained laboratory card for use with an automated microfluidics platform.

  17. The Need for Integrated Approaches in Metabolic Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    Highlights include state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. A combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the “system” that is being manipulated: transcriptome, translatome, proteome, or reactome. Here, by bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  18. The Need for Integrated Approaches in Metabolic Engineering

    DOE PAGES

    Lechner, Anna; Brunk, Elizabeth; Keasling, Jay D.

    2016-08-15

    Highlights include state-of-the-art procedures for heterologous small-molecule biosynthesis, the associated bottlenecks, and new strategies that have the potential to accelerate future accomplishments in metabolic engineering. A combination of different approaches over multiple time and size scales must be considered for successful pathway engineering in a heterologous host. We have classified these optimization procedures based on the “system” that is being manipulated: transcriptome, translatome, proteome, or reactome. Here, by bridging multiple disciplines, including molecular biology, biochemistry, biophysics, and computational sciences, we can create an integral framework for the discovery and implementation of novel biosynthetic production routes.

  19. Quantum Quenches and Relaxation Dynamics in the Thermodynamic Limit

    NASA Astrophysics Data System (ADS)

    Mallayya, Krishnanand; Rigol, Marcos

    2018-02-01

    We implement numerical linked cluster expansions (NLCEs) to study dynamics of lattice systems following quantum quenches, and focus on a hard-core boson model in one-dimensional lattices. We find that, in the nonintegrable regime and within the accessible times, local observables exhibit exponential relaxation. We determine the relaxation rate as one departs from the integrable point and show that it scales quadratically with the strength of the integrability breaking perturbation. We compare the NLCE results with those from exact diagonalization calculations on finite chains with periodic boundary conditions, and show that NLCEs are far more accurate.
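
    Extracting a relaxation rate of the kind reported above from a time series O(t) ≈ O_∞ + A·e^(−Γt) is a log-linear fit. The sketch below uses synthetic data with an assumed quadratic law Γ = c·g², mirroring the scaling the paper reports; it is not NLCE output.

```python
import numpy as np

def relaxation_rate(t, obs, obs_inf):
    """Fit |O(t) - O_inf| = |A| * exp(-Gamma * t) by least squares
    on the logarithm of the residual; returns Gamma."""
    resid = np.abs(np.asarray(obs, dtype=float) - obs_inf)
    slope = np.polyfit(t, np.log(resid), 1)[0]
    return -slope

# Synthetic check of the quadratic scaling Gamma = c * g**2 in the
# integrability-breaking strength g (c = 3.0 chosen arbitrarily).
t = np.linspace(0.0, 5.0, 200)
rates = []
for g in (0.1, 0.2):
    gamma_true = 3.0 * g ** 2
    obs = 0.5 + 0.2 * np.exp(-gamma_true * t)
    rates.append(relaxation_rate(t, obs, 0.5))
```

Doubling g quadruples the fitted rate, which is the signature of the quadratic scaling.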

  20. Transcriptomics as a tool for assessing the scalability of mammalian cell perfusion systems.

    PubMed

    Jayapal, Karthik P; Goudar, Chetan T

    2014-01-01

    DNA microarray-based transcriptomics have been used to determine the time course of laboratory and manufacturing-scale perfusion bioreactors in an attempt to characterize cell physiological state at these two bioreactor scales. Given the limited availability of genomic data for baby hamster kidney (BHK) cells, a Chinese hamster ovary (CHO)-based microarray was used following a feasibility assessment of cross-species hybridization. A heat shock experiment was performed using both BHK and CHO cells, and the resulting DNA microarray data were analyzed using a filtering criterion of perfect match (PM)/single base mismatch (MM) > 1.5 and PM-MM > 50 to exclude probes with low specificity or sensitivity for cross-species hybridizations. For BHK cells, 8910 probe sets (39 %) passed the cutoff criteria, whereas 12,961 probe sets (56 %) passed the cutoff criteria for CHO cells. Yet, the data from BHK cells allowed distinct clustering of heat shock and control samples as well as identification of biologically relevant genes as being differentially expressed, indicating the utility of cross-species hybridization. Subsequently, DNA microarray analysis was performed on time course samples from laboratory- and manufacturing-scale perfusion bioreactors that were operated under the same conditions. A majority of the variability (37 %) was associated with the first principal component (PC-1). Although PC-1 changed monotonically with culture duration, the trends were very similar in both the laboratory and manufacturing-scale bioreactors. Therefore, despite time-related changes to the cell physiological state, transcriptomic fingerprints were similar across the two bioreactor scales at any given instance in culture. Multiple genes were identified with time-course expression profiles that were very highly correlated (> 0.9) with bioprocess variables of interest. 
Although the current incomplete annotation limits the biological interpretation of these observations, their full potential may be realized in due course when richer genomic data become available. By taking a pragmatic approach of transcriptome fingerprinting, we have demonstrated the utility of systems biology to support the comparability of laboratory and manufacturing-scale perfusion systems. Scale-down model qualification is the first step in process characterization and hence is an integral component of robust regulatory filings. Augmenting the current paradigm, which relies primarily on cell culture and product quality information, with gene expression data can help make a substantially stronger case for similarity. With continued advances in systems biology approaches, we expect them to be seamlessly integrated into bioprocess development, which can translate into more robust and high yielding processes that can ultimately reduce cost of care for patients.
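
    The probe filtering criterion quoted above (PM/MM > 1.5 and PM − MM > 50) is a simple element-wise mask over the two intensity arrays. A sketch with made-up intensity values:

```python
import numpy as np

def probe_mask(pm, mm, ratio_cutoff=1.5, diff_cutoff=50.0):
    """Keep probes whose perfect-match (PM) intensity is sufficiently above
    the single-base mismatch (MM) intensity: PM/MM > 1.5 and PM - MM > 50."""
    pm = np.asarray(pm, dtype=float)
    mm = np.asarray(mm, dtype=float)
    return (pm / mm > ratio_cutoff) & (pm - mm > diff_cutoff)

# Hypothetical intensities for four probe sets
pm = np.array([300.0, 120.0, 500.0, 90.0])
mm = np.array([100.0, 100.0, 400.0, 30.0])
keep = probe_mask(pm, mm)
```

Both conditions must hold: the third probe has a large absolute difference (100) but a weak ratio (1.25), so it is excluded, which is exactly the kind of low-specificity cross-species hybridization the cutoff is meant to remove.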

  1. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-09-14

    Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining the strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP), funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steels (3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure; the volume fraction, size and morphology of phase constituents; and the stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME strategy was therefore chosen in this project.
Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996, which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large-scale tests to multi-scale experiments that provide material models with validation at different length scales. In subsequent years, industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of multi-scale modeling. Among these are the reduction of product development time, by alleviating costly trial-and-error iterations, as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large-scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focused on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed, and the parameter identification of the individual material models at different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.
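The two-stage identification idea (calibrate a fine-scale model, then fit the parameters of a cheaper homogenized model against the fine-scale response) can be illustrated with toy one-parameter models; everything below is a hypothetical stand-in for the actual CP and SV material models driven by LS-OPT:

```python
# Hedged illustration of multi-scale parameter identification. Both "models"
# here are toy stress-strain curves; the real workflow fits LS-DYNA material
# models of different length scales via LS-OPT optimization runs.

def fine_scale_stress(strain, k=200.0, n=0.3):
    """Toy crystal-plasticity-like power-law hardening curve (illustrative)."""
    return k * strain ** n

def homogenized_stress(strain, h):
    """Toy one-parameter homogenized model: linear hardening."""
    return h * strain

# "Reference" points produced by the fine-scale model.
strains = [0.01 * i for i in range(1, 11)]
target = [fine_scale_stress(e) for e in strains]

# Identify h by least squares (closed form for a linear-in-parameter model).
num = sum(e * s for e, s in zip(strains, target))
den = sum(e * e for e in strains)
h_opt = num / den
print(round(h_opt, 1))
```

In practice the coarse model is nonlinear in its parameters, so LS-OPT performs the minimization iteratively rather than in closed form.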

  2. Photonic crystal ring resonator based optical filters for photonic integrated circuits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, S., E-mail: mail2robinson@gmail.com

    In this paper, two-dimensional (2D) Photonic Crystal Ring Resonator (PCRR) based optical filters, namely an Add Drop Filter, a Bandpass Filter, and a Bandstop Filter, are designed for Photonic Integrated Circuits (PICs). The normalized output response of the filters is obtained using the 2D Finite Difference Time Domain (FDTD) method, and the band diagrams of the periodic and non-periodic structures are obtained by the Plane Wave Expansion (PWE) method. The size of the device is reduced from a scale of a few tens of millimeters to the order of micrometers. The overall size of the filters is around 11.4 μm × 11.4 μm, which is highly suitable for photonic integrated circuits.

  3. Fabrication of nano-scale Cu bond pads with seal design in 3D integration applications.

    PubMed

    Chen, K N; Tsang, C K; Wu, W W; Lee, S H; Lu, J Q

    2011-04-01

    A method to fabricate nano-scale Cu bond pads for improving bonding quality in 3D integration applications is reported. The effect of Cu bonding quality on inter-level via structural reliability for 3D integration applications is investigated. We developed a Cu nano-scale-height bond pad structure and fabrication process for improved bonding quality by recessing oxides using a combination of a SiO2 CMP process and dilute HF wet etching. In addition, in order to achieve improved wafer-level bonding, we introduced a seal design concept that prevents corrosion and provides extra mechanical support. Demonstrations of these concepts and processes establish the feasibility of reliable nano-scale 3D integration applications.

  4. Integrating continental-scale ecological data into university courses: Developing NEON's Online Learning Portal

    NASA Astrophysics Data System (ADS)

    Wasser, L. A.; Gram, W.; Lunch, C. K.; Petroy, S. B.; Elmendorf, S.

    2013-12-01

    'Big Data' are becoming increasingly common in many fields. The National Ecological Observatory Network (NEON) will be collecting data over the next 30 years, using consistent, standardized methods across the United States. Similar efforts are underway in other parts of the globe (e.g. Australia's Terrestrial Ecosystem Research Network, TERN). These freely available new data provide an opportunity for increased understanding of continental- and global-scale processes such as changes in vegetation structure and condition, biodiversity and land use. However, while 'big data' are becoming more accessible and available, integrating big data into university courses is challenging. New and potentially unfamiliar data types, and the associated processing methods required to work with a growing diversity of available data, may demand time and resources that present a barrier to classroom integration. Analysis of these big datasets may further present a challenge given large file sizes and uncertainty regarding the best methods to properly statistically summarize and analyze results. Finally, teaching resources, in the form of demonstrative illustrations and other supporting media that might help teach key data concepts, take time to find and more time to develop. Available resources are often spread widely across multiple online spaces. This presentation will overview the development of NEON's collaborative university-focused online education portal. Portal content will include 1) interactive, online multi-media content that explains key concepts related to NEON's data products, including collection methods, key metadata to consider, and consideration of potential error and uncertainty surrounding data analysis; and 2) packaged 'lab' activities that include supporting data to be used in an ecology, biology or earth science classroom. To facilitate broad use in classrooms, lab activities will take advantage of freely and commonly available processing tools, techniques and scripts.
All NEON materials are being developed in collaboration with labs and organizations across the globe. Integrating data analysis and processing techniques early in students' careers will support and facilitate student advancement in the sciences - contributing to a larger body of knowledge and understanding of continental- and global-scale issues. Facilitating understanding of data use, and empowering young ecologists with the tools required to process the data, is thus as integral to the observatory as the data itself. In this presentation, we discuss the integral role of freely available education materials that demonstrate the use of big data to address ecological questions and concepts. We also review gaps in existing educational resources related to big data and associated tools. Further, we address the great potential for including big data in both existing ecological, physical and environmental science courses and self-paced learning models through engaging and interactive multi-media presentations. Finally, we present beta versions of the interactive, multi-media modules and results from feedback following early piloting and review.

  5. Integrated omics for the identification of key functionalities in biological wastewater treatment microbial communities.

    PubMed

    Narayanasamy, Shaman; Muller, Emilie E L; Sheik, Abdul R; Wilmes, Paul

    2015-05-01

    Biological wastewater treatment plants harbour diverse and complex microbial communities which prominently serve as models for microbial ecology and mixed culture biotechnological processes. Integrated omic analyses (combined metagenomics, metatranscriptomics, metaproteomics and metabolomics) are currently gaining momentum towards providing enhanced understanding of community structure, function and dynamics in situ as well as offering the potential to discover novel biological functionalities within the framework of Eco-Systems Biology. The integration of information from genome to metabolome allows the establishment of associations between genetic potential and final phenotype, a feature not realizable by only considering single 'omes'. Therefore, in our opinion, integrated omics will become the future standard for large-scale characterization of microbial consortia including those underpinning biological wastewater treatment processes. Systematically obtained time- and space-resolved omic datasets will allow deconvolution of structure-function relationships by identifying key members and functions. Such knowledge will form the foundation for discovering novel genes on a much larger scale compared with previous efforts. In general, these insights will allow us to optimize microbial biotechnological processes either through better control of mixed culture processes or by use of more efficient enzymes in bioengineering applications. © 2015 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  6. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning

    PubMed Central

    Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.

    2015-01-01

    We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
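As a hedged illustration of the information-integration perspective (not the paper's actual Weighted Sum Model specification), a torque-style weighted sum of the weight and distance cues on each side of the balance scale can be pushed through a logistic response function:

```python
import math

# Generic information-integration sketch for the balance-scale task.
# This is a torque-style illustration only, NOT the published Weighted Sum
# Model; beta and the item values are hypothetical.

def p_left_down(w_left, d_left, w_right, d_right, beta=1.0):
    """Probability of responding 'left side goes down'."""
    evidence = w_left * d_left - w_right * d_right  # signed torque difference
    return 1.0 / (1.0 + math.exp(-beta * evidence))

# Item: 3 weights at distance 2 on the left vs. 2 weights at distance 4 on the right.
p = p_left_down(3, 2, 2, 4)
print(round(p, 3))  # → 0.119, i.e. the model favors 'right side goes down'
```

A rule-based model, by contrast, would predict a discrete response pattern per latent class rather than a graded function of the quantitative cues.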

  7. Integration and application of optical chemical sensors in microbioreactors.

    PubMed

    Gruber, Pia; Marques, Marco P C; Szita, Nicolas; Mayr, Torsten

    2017-08-08

    The quantification of key variables such as oxygen, pH, carbon dioxide, glucose, and temperature provides essential information for biological and biotechnological applications and their development. Microfluidic devices offer an opportunity to accelerate research and development in these areas due to their small scale and the fine control over the microenvironment, provided that these key variables can be measured. Optical sensors are well-suited for this task. They offer non-invasive and non-destructive monitoring of the mentioned variables, and the establishment of time-course profiles without the need for sampling from the microfluidic devices. They can also be implemented in larger systems, facilitating cross-scale comparison of analytical data. This tutorial review presents an overview of the optical sensors and their technology, with a view to supporting current and potential new users in microfluidics and biotechnology in the implementation of such sensors. It introduces the benefits and challenges of sensor integration, including their application in microbioreactors. Sensor formats, integration methods, device bonding options, and monitoring options are explained. Luminescent sensors for oxygen, pH, carbon dioxide, glucose and temperature are showcased. Areas where further development is needed are highlighted with the intent to guide future development efforts towards analytes for which reliable, stable, or easily integrated detection methods are not yet available.

  8. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the Cray T3D parallel supercomputer. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  9. AUTOPLAN: A PC-based automated mission planning tool

    NASA Technical Reports Server (NTRS)

    Paterra, Frank C.; Allen, Marc S.; Lawrence, George F.

    1987-01-01

    A PC-based automated mission and resource planning tool, AUTOPLAN, is described, with application to small-scale planning and scheduling systems in the Space Station program. The input is a proposed mission profile, including mission duration, number of allowable slip periods, and requirement profiles for one or more resources as a function of time. A corresponding availability profile is also entered for each resource over the whole time interval under study. AUTOPLAN determines all integrated schedules which do not require more than the available resources.
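The core feasibility test AUTOPLAN performs can be sketched as follows. This is an illustrative reconstruction, not the actual AUTOPLAN code, and the profiles are invented: a mission's resource-requirement profile is slid across the study interval, and every start time at which no resource is over-subscribed is kept.

```python
# Illustrative sketch (not the actual AUTOPLAN tool) of schedule feasibility:
# find all start times at which a mission's per-step resource requirements
# fit within the per-step resource availability profile.

def feasible_starts(requirement, availability):
    """requirement: resource need per time step; availability: supply per step.
    Returns all start indices where the mission fits within availability."""
    n, m = len(availability), len(requirement)
    return [s for s in range(n - m + 1)
            if all(requirement[t] <= availability[s + t] for t in range(m))]

power_need = [2, 3, 2]                   # hypothetical 3-step mission profile
power_avail = [3, 2, 3, 3, 2, 4, 3, 3]  # hypothetical availability profile
print(feasible_starts(power_need, power_avail))  # → [1, 2, 4, 5]
```

A multi-resource version would intersect the feasible start sets computed per resource; slip periods correspond to choosing among the surviving start times.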

  10. High-Fidelity, Computational Modeling of Non-Equilibrium Discharges for Combustion Applications

    DTIC Science & Technology

    2013-10-01

    [Fragmentary briefing-chart text] The recoverable content describes: fourth-order Runge-Kutta time integration with gradient reconstruction and domain-decomposition parallelism; a methane-air plasma chemistry mechanism covering species and pathways relevant to the plasma time scale (~10's of ns), with 26 species including E, O, N2, O2, H, N2+, O2+, N4+, O4+, ...; and photoionization treated with a three-term Helmholtz equation model.

  11. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    PubMed

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
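The trade-off the simulator exploits can be seen in a minimal single-neuron example: between input events a leaky membrane evolves analytically, so an event-driven update jumps straight to the next event in one step, while a time-driven scheme advances in many small fixed steps. The parameters below are illustrative, not those of the cerebellar model:

```python
import math

# Event-driven vs. time-driven update for a leaky membrane dV/dt = -V/tau
# with no input between events. All parameters are illustrative.

tau = 20.0    # membrane time constant (ms)
v0 = 1.0      # potential just after the last event
t_next = 5.0  # time until the next input event (ms)

# Event-driven: one exact analytic update at the event time.
v_event = v0 * math.exp(-t_next / tau)

# Time-driven: thousands of small explicit Euler steps over the same interval.
dt = 0.001
v_time = v0
for _ in range(int(round(t_next / dt))):
    v_time += dt * (-v_time / tau)

print(v_event, v_time)  # nearly equal; the event-driven path used 1 update
```

This is why low-activity network regions favor event-driven CPU simulation, whereas dense, highly active layers amortize the fixed-step cost well on GPUs.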

  12. Integrated lung tissue mechanics one piece at a time: Computational modeling across the scales of biology.

    PubMed

    Burrowes, Kelly S; Iravani, Amin; Kang, Wendy

    2018-01-12

    The lung is a delicately balanced and highly integrated mechanical system. Lung tissue is continuously exposed to the environment via the air we breathe, making it susceptible to damage. As a consequence, respiratory diseases present a huge burden on society and their prevalence continues to rise. Emergent function is produced not only by the sum of the function of its individual components but also by the complex feedback and interactions occurring across the biological scales - from genes to proteins, cells, tissue and whole organ - and back again. Computational modeling provides the necessary framework for pulling apart and putting back together the pieces of the body and organ systems so that we can fully understand how they function in both health and disease. In this review, we discuss models of lung tissue mechanics spanning from the protein level (the extracellular matrix) through to the level of cells, tissue and whole organ, many of which have been developed in isolation. This is a vital step in the process but to understand the emergent behavior of the lung, we must work towards integrating these component parts and accounting for feedback across the scales, such as mechanotransduction. These interactions will be key to unlocking the mechanisms occurring in disease and in seeking new pharmacological targets and improving personalized healthcare. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornreich, Drew E; Vaidya, Rajendra U; Ammerman, Curtt N

    Integrated Computational Materials Engineering (ICME) is a novel overarching approach to bridge length and time scales in computational materials science and engineering. This approach integrates all elements of multi-scale modeling (including various empirical and science-based models) with materials informatics to provide users the opportunity to tailor material selections based on stringent application needs. Typically, materials engineering has focused on structural requirements (stress, strain, modulus, fracture toughness, etc.), while multi-scale modeling has been science focused (mechanical threshold strength models, grain-size models, solid-solution strengthening models, etc.). Materials informatics (mechanical property inventories), on the other hand, is extensively data focused. All of these elements are combined within the framework of ICME to create an architecture for the development, selection and design of new composite materials for challenging environments. We propose development of the foundations for applying ICME to composite materials development for nuclear and high-radiation environments (including nuclear-fusion energy reactors, nuclear-fission reactors, and accelerators). We expect to combine all elements of current material models (including thermo-mechanical and finite-element models) into the ICME framework. This will be accomplished through the use of various mathematical modeling constructs. These constructs will allow the integration of constituent models, which in turn would allow us to exploit the adaptive strengths of a combinatorial scheme (fabrication and computational) for creating new composite materials. A sample problem where these concepts are used is provided in this summary.

  14. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    PubMed

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  15. A small-scale turbulence model

    NASA Technical Reports Server (NTRS)

    Lundgren, T. S.

    1993-01-01

    A previously derived analytical model for the small-scale structure of turbulence is reformulated in such a way that the energy spectrum may be computed. The model is an ensemble of two-dimensional (2D) vortices with internal spiral structure, each stretched by an axially symmetric strain flow. Stretching and differential rotation produce an energy cascade to smaller scales in which the stretching represents the effect of instabilities and the spiral structure is the source of dissipation at the end of the cascade. The energy spectrum of the resulting flow may be expressed as a time integration involving only the enstrophy spectrum of the time-evolving 2D cross-section flow, which may be obtained numerically. Examples are given in which a k^(-5/3) spectrum is obtained by this method. The k^(-5/3) inertial-range spectrum is shown to be related to the existence of a self-similar enstrophy-preserving range in the 2D enstrophy spectrum. The results are found to be insensitive to time dependence of the strain rate, including even intermittent on-or-off strains.
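The -5/3 inertial-range spectrum discussed above appears as a straight line of slope -5/3 on log-log axes. A minimal numerical check of that signature (with the spectrum amplitude normalized to 1, an assumption of this sketch) is:

```python
import math

# Verify that a model spectrum E(k) ~ k^(-5/3) has log-log slope -5/3,
# the inertial-range signature checked in the paper's examples.
# Amplitude (Kolmogorov constant x dissipation factor) is normalized to 1.

ks = [2.0 ** i for i in range(1, 11)]  # wavenumbers in the inertial range
E = [k ** (-5.0 / 3.0) for k in ks]    # model spectrum

# Least-squares slope of log E vs. log k.
lx = [math.log(k) for k in ks]
ly = [math.log(e) for e in E]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
print(round(slope, 3))  # → -1.667
```

The same fit applied to a computed spectrum is a quick diagnostic for whether a simulated cascade has reached the inertial-range regime.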

  16. Ultra-low switching energy and scaling in electric-field-controlled nanoscale magnetic tunnel junctions with high resistance-area product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grezes, C.; Alzate, J. G.; Cai, X.

    2016-01-04

    We report electric-field-induced switching with write energies down to 6 fJ/bit for switching times of 0.5 ns, in nanoscale perpendicular magnetic tunnel junctions (MTJs) with high resistance-area product and diameters down to 50 nm. The ultra-low switching energy is made possible by a thick MgO barrier that ensures negligible spin-transfer torque contributions, along with a reduction of the Ohmic dissipation. We find that the switching voltage and time are insensitive to the junction diameter for high-resistance MTJs, a result accounted for by a macrospin model of purely voltage-induced switching. The measured performance enables integration with same-size CMOS transistors in compact memory and logic integrated circuits.

  17. Materials Integration and Doping of Carbon Nanotube-based Logic Circuits

    NASA Astrophysics Data System (ADS)

    Geier, Michael

    Over the last 20 years, extensive research into the structure and properties of single-walled carbon nanotubes (SWCNTs) has elucidated many of the exceptional qualities possessed by SWCNTs, including record-setting tensile strength, excellent chemical stability, distinctive optoelectronic features, and outstanding electronic transport characteristics. In order to exploit these remarkable qualities, many application-specific hurdles must be overcome before the material can be implemented in commercial products. For electronic applications, recent advances in sorting SWCNTs by electronic type have enabled significant progress towards SWCNT-based integrated circuits. Despite these advances, demonstrations of SWCNT-based devices with suitable characteristics for large-scale integrated circuits have been limited. The processing methodologies, materials integration, and mechanistic understanding of electronic properties developed in this dissertation have enabled unprecedented scales of SWCNT-based transistor fabrication and integrated circuit demonstrations. Innovative materials selection and processing methods are at the core of this work, and these advances have led to transistors with the transport properties required for modern circuit integration. First, extensive collaborations with other research groups allowed for the exploration of SWCNT thin-film transistors (TFTs) using a wide variety of materials and processing methods such as new dielectric materials, hybrid semiconductor materials systems, and solution-based printing of SWCNT TFTs. These materials were integrated into circuit demonstrations such as NOR and NAND logic gates, voltage-controlled ring oscillators, and D-flip-flops using both rigid and flexible substrates.
This dissertation explores strategies for implementing complementary SWCNT-based circuits, which were developed by using local metal gate structures that achieve enhancement-mode p-type and n-type SWCNT TFTs with widely separated and symmetric threshold voltages. Additionally, a novel n-type doping procedure for SWCNT TFTs was also developed utilizing a solution-processed organometallic small molecule to demonstrate the first network top-gated n-type SWCNT TFTs. Lastly, new doping and encapsulation layers were incorporated to stabilize both p-type and n-type SWCNT TFT electronic properties, which enabled the fabrication of large-scale memory circuits. Employing these materials and processing advances has addressed many application-specific barriers to commercialization. For instance, the first thin-film SWCNT complementary metal-oxide-semiconductor (CMOS) logic devices are demonstrated with sub-nanowatt static power consumption and full rail-to-rail voltage transfer characteristics. With the introduction of a new n-type Rh-based molecular dopant, the first SWCNT TFTs are fabricated in top-gate geometries over large areas with high yield. Then by utilizing robust encapsulation methods, stable and uniform electronic performance of both p-type and n-type SWCNT TFTs has been achieved. Based on these complementary SWCNT TFTs, it is possible to simulate, design, and fabricate arrays of low-power static random access memory (SRAM) circuits, achieving large-scale integration for the first time based on solution-processed semiconductors. Together, this work provides a direct pathway for solution-processable, large-scale, power-efficient advanced integrated logic circuits and systems.

  18. Time- and polarity-dependent proteomic changes associated with homeostatic scaling at central synapses

    PubMed Central

    Schanzenbächer, Christoph T

    2018-01-01

    In homeostatic scaling at central synapses, the depth and breadth of cellular mechanisms that detect the offset from the set-point, detect the duration of the offset and implement a cellular response are not well understood. To understand the time-dependent scaling dynamics, we treated cultured rat hippocampal cells with either TTX or bicuculline for 2 hr to induce the process of up- or down-scaling, respectively. During the activity manipulation we metabolically labeled newly synthesized proteins using BONCAT. We identified 168 newly synthesized proteins that exhibited significant changes in expression. To obtain a temporal trajectory of the response, we compared the proteins synthesized within 2 hr or 24 hr of the activity manipulation. Surprisingly, there was little overlap in the significantly regulated newly synthesized proteins identified in the early- and integrated late-response datasets. There was, however, overlap in the functional categories that are modulated early and late. These data indicate that within protein function groups, different proteomic choices can be made to effect early and late homeostatic responses that detect the duration and polarity of the activity manipulation. PMID:29447110

  19. Classification of Animal Movement Behavior through Residence in Space and Time.

    PubMed

    Torres, Leigh G; Orben, Rachael A; Tolkova, Irina; Thompson, David R

    2017-01-01

    Identification and classification of behavior states in animal movement data can be complex, temporally biased, time-intensive, scale-dependent, and unstandardized across studies and taxa. Large movement datasets are increasingly common and there is a need for efficient methods of data exploration that adjust to the individual variability of each track. We present the Residence in Space and Time (RST) method to classify behavior patterns in movement data based on the concept that behavior states can be partitioned by the amount of space and time occupied in an area of constant scale. Using normalized values of Residence Time and Residence Distance within a constant search radius, RST is able to differentiate behavior patterns that are time-intensive (e.g., rest), time & distance-intensive (e.g., area restricted search), and transit (short time and distance). We use grey-headed albatross (Thalassarche chrysostoma) GPS tracks to demonstrate RST's ability to classify behavior patterns and adjust to the inherent scale and individuality of each track. Next, we evaluate RST's ability to discriminate between behavior states relative to other classical movement metrics. We then temporally sub-sample albatross track data to illustrate RST's response to less resolved data. Finally, we evaluate RST's performance using datasets from four taxa with diverse ecology, functional scales, ecosystems, and data-types. We conclude that RST is a robust, rapid, and flexible method for detailed exploratory analysis and meta-analyses of behavioral states in animal movement data based on its ability to integrate distance and time measurements into one descriptive metric of behavior groupings. Given the increasing amount of animal movement data collected, it is timely and useful to implement a consistent metric of behavior classification to enable efficient and comparative analyses. 
Overall, the application of RST to objectively explore and compare behavior patterns in movement data can enhance our fine- and broad-scale understanding of animal movement ecology.
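A much-simplified decision rule in the spirit of RST (not the published algorithm, which normalizes residence time and residence distance per track within a constant search radius) might look like:

```python
# Simplified RST-style classification sketch. Inputs are assumed to be
# normalized residence time and residence distance in [0, 1] for each track
# point; the 0.5 cutoff is illustrative, not from the paper.

def classify(res_time: float, res_dist: float, cutoff: float = 0.5) -> str:
    if res_time >= cutoff and res_dist < cutoff:
        return "rest"                      # time-intensive only
    if res_time >= cutoff and res_dist >= cutoff:
        return "area-restricted search"    # time- and distance-intensive
    return "transit"                       # short time and distance

points = [(0.9, 0.1), (0.8, 0.7), (0.1, 0.2)]  # invented (time, dist) pairs
print([classify(t, d) for t, d in points])
# → ['rest', 'area-restricted search', 'transit']
```

The published method derives its groupings from the sign structure of the normalized residence values rather than a fixed threshold, which is what lets it adapt to each track's individual variability.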

  20. Critical reflexivity in financial markets: a Hawkes process analysis

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bercot, Nicolas; Bouchaud, Jean-Philippe

    2013-10-01

    We model the arrival of mid-price changes in the E-mini S&P futures contract as a self-exciting Hawkes process. Using several estimation methods, we find that the Hawkes kernel is power-law with a decay exponent close to -1.15 at short times, less than ≈10^3 s, and crosses over to a second power-law regime with a larger decay exponent ≈-1.45 for longer time scales in the range [10^3, 10^6] seconds. More importantly, we find that the Hawkes kernel integrates to unity independently of the analysed period, from 1998 to 2011. This suggests that markets are, and have always been, close to criticality, challenging a recent study which indicates that reflexivity (endogeneity) has increased in recent years as a result of increased automation of trading. However, we note that the scale over which market events are correlated has decreased steadily over time with the emergence of higher-frequency trading.
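The criticality claim rests on the branching ratio n, the integral of the Hawkes kernel phi(t), being close to 1. The sketch below numerically evaluates n for a two-regime power-law kernel with the exponents reported above; the amplitude c and the short-time cutoff are hypothetical choices made only to illustrate the computation, not estimates from the data:

```python
# Branching ratio n = integral of the Hawkes kernel phi(t); n ≈ 1 signals a
# near-critical process. Exponents (-1.15, -1.45 beyond ~10^3 s) follow the
# abstract; amplitude c and cutoff t_min are hypothetical.

def powerlaw_integral(a, lo, hi):
    """Integral of t**(-a) from lo to hi (requires a != 1)."""
    return (hi ** (1 - a) - lo ** (1 - a)) / (1 - a)

c = 0.0582                          # hypothetical kernel amplitude
t_min, t_mid, t_max = 1e-3, 1e3, 1e6
c2 = c * t_mid ** (1.45 - 1.15)     # match the two regimes at t_mid

n = (c * powerlaw_integral(1.15, t_min, t_mid)
     + c2 * powerlaw_integral(1.45, t_mid, t_max))
print(round(n, 2))  # → 1.0, i.e. the near-critical regime reported above
```

In the paper the kernel itself is estimated from event timestamps; computing its integral then tests criticality without assuming a parametric form.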
