Sample records for variable integration time

  1. Time-Integral Correlations of Multiple Variables With the Relativistic-Electron Flux at Geosynchronous Orbit: The Strong Roles of Substorm-Injected Electrons and the Ion Plasma Sheet

    NASA Astrophysics Data System (ADS)

    Borovsky, Joseph E.

    2017-12-01

    Time-integral correlations are examined between the geosynchronous relativistic electron flux index Fe1.2 and 31 variables of the solar wind and magnetosphere. An "evolutionary algorithm" is used to maximize correlations. Time integrations (into the past) of the variables are found to be superior to time-lagged variables for maximizing correlations with the radiation belt. Physical arguments are given as to why. Dominant correlations are found for the substorm-injected electron flux at geosynchronous orbit and for the pressure of the ion plasma sheet. Different sets of variables are constructed and correlated with Fe1.2: some sets maximize the correlations, and some sets are based on purely solar wind variables. Examining known physical mechanisms that act on the radiation belt, sets of correlations are constructed (1) using magnetospheric variables that control those physical mechanisms and (2) using the solar wind variables that control those magnetospheric variables. Fe1.2-increasing intervals are correlated separately from Fe1.2-decreasing intervals, and the introduction of autoregression into the time-integral correlations is explored. A great impediment to discerning physical cause and effect from the correlations is the fact that all solar wind variables are intercorrelated and carry much of the same information about the time sequence of the solar wind that drives the time sequence of the magnetosphere.
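The central idea, correlating a response with a time integral (into the past) of a driver rather than with a time-lagged copy, can be illustrated with a toy sketch. This uses synthetic data and a simple leaky-integrator response standing in for the radiation-belt flux; it is not the paper's evolutionary-algorithm analysis, and all parameter values are illustrative.

```python
# Sketch: why a time integral (into the past) of a driver can correlate better
# with a slowly responding quantity than a time-lagged copy of the driver does.
import math
import random

random.seed(1)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Driver: white noise. Response: leaky integrator of the driver, mimicking a
# quantity that accumulates past driving (time constant ~50 steps).
T = 2000
driver = [random.gauss(0.0, 1.0) for _ in range(T)]
resp = [0.0]
for t in range(1, T):
    resp.append(0.98 * resp[-1] + 0.02 * driver[t])

# Candidate predictors: a lagged copy vs a trailing integral over window W.
lag, W = 10, 100
r_lag = pearson(driver[:-lag], resp[lag:])
r_int = pearson([sum(driver[t - W:t]) for t in range(W, T)], resp[W:])
print(r_lag, r_int)  # the integrated predictor correlates far better
```

Because the response is itself an integral of past driving, the trailing integral shares far more variance with it than any single lagged sample can.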

  2. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  3. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  4. Subwavelength grating enabled on-chip ultra-compact optical true time delay line

    PubMed Central

    Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R.

    2016-01-01

    An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth. PMID:27457024

  5. Subwavelength grating enabled on-chip ultra-compact optical true time delay line.

    PubMed

    Wang, Junjia; Ashrafi, Reza; Adams, Rhys; Glesk, Ivan; Gasulla, Ivana; Capmany, José; Chen, Lawrence R

    2016-07-26

    An optical true time delay line (OTTDL) is a basic photonic building block that enables many microwave photonic and optical processing operations. The conventional design for an integrated OTTDL that is based on spatial diversity uses a length-variable waveguide array to create the optical time delays, which can introduce complexities in the integrated circuit design. Here we report the first ever demonstration of an integrated index-variable OTTDL that exploits spatial diversity in an equal length waveguide array. The approach uses subwavelength grating waveguides in silicon-on-insulator (SOI), which enables the realization of OTTDLs having a simple geometry and that occupy a compact chip area. Moreover, compared to conventional wavelength-variable delay lines with a few THz operation bandwidth, our index-variable OTTDL has an extremely broad operation bandwidth practically exceeding several tens of THz, which supports operation for various input optical signals with broad ranges of central wavelength and bandwidth.

  6. Parareal algorithms with local time-integrators for time fractional differential equations

    NASA Astrophysics Data System (ADS)

    Wu, Shu-Lin; Zhou, Tao

    2018-04-01

    It is challenging to design parareal algorithms for time-fractional differential equations due to the history (memory) effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
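For readers unfamiliar with parareal, a minimal classical parareal iteration for a plain ODE (not the fractional case treated in this paper) can be sketched as follows. The coarse and fine propagators and the correction formula are the standard ones; problem and parameters are illustrative.

```python
# Minimal parareal iteration for dy/dt = -y on [0, 1]: a cheap coarse
# propagator predicts, expensive fine propagators (parallel in practice)
# correct via U_{n+1}^{k+1} = C(U_n^{k+1}) + F(U_n^k) - C(U_n^k).
import math

lam = -1.0
N = 10                      # number of coarse time slices
dt = 1.0 / N

def coarse(y, dt):          # one backward-Euler step
    return y / (1.0 - lam * dt)

def fine(y, dt, m=100):     # m small backward-Euler substeps
    h = dt / m
    for _ in range(m):
        y = y / (1.0 - lam * h)
    return y

# Initial serial coarse sweep.
U = [1.0]
for n in range(N):
    U.append(coarse(U[-1], dt))

# Parareal corrections.
for k in range(5):
    F = [fine(U[n], dt) for n in range(N)]        # independent per slice
    C_old = [coarse(U[n], dt) for n in range(N)]
    U_new = [1.0]
    for n in range(N):
        U_new.append(coarse(U_new[-1], dt) + F[n] - C_old[n])
    U = U_new

print(abs(U[-1] - math.exp(lam)))  # close to the exact solution e^{-1}
```

The fractional case discussed in the abstract is harder precisely because the fine propagator on each slice would need the full solution history, which is what the localizing auxiliary variables avoid.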

  7. On-chip optical phase locking of single growth monolithically integrated Slotted Fabry Perot lasers.

    PubMed

    Morrissey, P E; Cotter, W; Goulding, D; Kelleher, B; Osborne, S; Yang, H; O'Callaghan, J; Roycroft, B; Corbett, B; Peters, F H

    2013-07-15

    This work investigates the optical phase locking performance of Slotted Fabry Perot (SFP) lasers and develops, for the first time to our knowledge, an integrated variable phase-locked system on chip using these lasers. Stable phase locking is demonstrated between two SFP lasers coupled on chip via a variable gain waveguide section. The two lasers are biased differently: one just above the threshold current of the device, the other at three times this value. The coupling between the lasers can be controlled using the variable gain section, which can act as a variable optical attenuator or amplifier depending on bias. Using this, the width of the stable phase locking region on chip is shown to be variable.

  8. Microphysical Timescales in Clouds and their Application in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Simpson, Joanne

    2007-01-01

    Independent prognostic variables in cloud-resolving modeling are chosen on the basis of the analysis of microphysical timescales in clouds versus the time step for numerical integration. Two of them are the moist entropy and the total mixing ratio of airborne water with no contributions from precipitating particles. As a result, temperature can be diagnosed easily from those prognostic variables, and cloud microphysics can be separated (or modularized) from moist thermodynamics. Numerical comparison experiments show that those prognostic variables work well even when a large time step (e.g., 10 s) is used for numerical integration.

  9. Analysis of real-time numerical integration methods applied to dynamic clamp experiments.

    PubMed

    Butera, Robert J; McCarthy, Maeve L

    2004-12-01

    Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
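The two first-order updates the abstract compares can be written down directly for a single gating variable obeying dx/dt = (x_inf - x)/tau. Exponential Euler is exact when x_inf and tau are held constant over the step, which is why it is popular for gating variables; the parameter values below are hypothetical.

```python
# Euler vs exponential Euler (EE) for dx/dt = (x_inf - x) / tau.
import math

x_inf, tau = 0.8, 5.0   # steady state and time constant (ms), illustrative
dt = 0.5                # time step (ms)

def euler_step(x):
    return x + dt * (x_inf - x) / tau

def exp_euler_step(x):
    # exact for x_inf, tau constant over the step
    return x_inf + (x - x_inf) * math.exp(-dt / tau)

x_e = x_ee = 0.0
for _ in range(20):
    x_e = euler_step(x_e)
    x_ee = exp_euler_step(x_ee)

x_exact = x_inf * (1.0 - math.exp(-20 * dt / tau))
print(x_e, x_ee, x_exact)  # EE reproduces the exact relaxation here
```

In a dynamic-clamp setting the coefficients are voltage dependent and corrupted by measurement noise, which is where the error bounds derived in the paper, in terms of dt/tau and voltage error over the activation slope factor, come into play.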

  10. Integrating Cost as an Independent Variable Analysis with Evolutionary Acquisition - A Multiattribute Design Evaluation Approach

    DTIC Science & Technology

    2003-03-01

    within the Automated Cost Estimating Integrated Tools (ACEIT) software suite (version 5.x). With this capability, one can set cost targets or time...not allow the user to vary more than one decision variable. This limitation of the ACEIT approach thus hinders a holistic view when attempting to

  11. Integrable equations of the infinite nonlinear Schrödinger equation hierarchy with time variable coefficients.

    PubMed

    Kedziora, D J; Ankiewicz, A; Chowdury, A; Akhmediev, N

    2015-10-01

    We present an infinite nonlinear Schrödinger equation hierarchy of integrable equations, together with the recurrence relations defining it. To demonstrate integrability, we present the Lax pairs for the whole hierarchy, specify its Darboux transformations and provide several examples of solutions. These resulting wavefunctions are given in exact analytical form. We then show that the Lax pair and Darboux transformation formalisms still apply in this scheme when the coefficients in the hierarchy depend on the propagation variable (e.g., time). This extension thus allows for the construction of complicated solutions within a greatly diversified domain of generalised nonlinear systems.

  12. Rain rate and modeled fade distributions at 20 GHz and 30 GHz derived from five years of network rain gauge measurements

    NASA Technical Reports Server (NTRS)

    Goldhirsh, Julius; Krichevsky, Vladimir; Gebo, Norman

    1992-01-01

    Five years of rain rate and modeled slant path attenuation distributions at 20 GHz and 30 GHz derived from a network of 10 tipping bucket rain gages was examined. The rain gage network is located within a grid 70 km north-south and 47 km east-west in the Mid-Atlantic coast of the United States in the vicinity of Wallops Island, Virginia. Distributions were derived from the variable integration time data and from one minute averages. It was demonstrated that for realistic fade margins, the variable integration time results are adequate to estimate slant path attenuations at frequencies above 20 GHz using models which require one minute averages. An accurate empirical formula was developed to convert the variable integration time rain rates to one minute averages. Fade distributions at 20 GHz and 30 GHz were derived employing Crane's Global model because it was demonstrated to exhibit excellent accuracy with measured COMSTAR fades at 28.56 GHz.

  13. Importance of the cutoff value in the quadratic adaptive integrate-and-fire model.

    PubMed

    Touboul, Jonathan

    2009-08-01

    The quadratic adaptive integrate-and-fire model (Izhikevich, 2003 , 2007 ) is able to reproduce various firing patterns of cortical neurons and is widely used in large-scale simulations of neural networks. This model describes the dynamics of the membrane potential by a differential equation that is quadratic in the voltage, coupled to a second equation for adaptation. Integration is stopped during the rise phase of a spike at a voltage cutoff value V(c) or when it blows up. Subsequently the membrane potential is reset, and the adaptation variable is increased by a fixed amount. We show in this note that in the absence of a cutoff value, not only the voltage but also the adaptation variable diverges in finite time during spike generation in the quadratic model. The divergence of the adaptation variable makes the system very sensitive to the cutoff: changing V(c) can dramatically alter the spike patterns. Furthermore, from a computational viewpoint, the divergence of the adaptation variable implies that the time steps for numerical simulation need to be small and adaptive. However, divergence of the adaptation variable does not occur for the quartic model (Touboul, 2008 ) and the adaptive exponential integrate-and-fire model (Brette & Gerstner, 2005 ). Hence, these models are robust to changes in the cutoff value.
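The cutoff sensitivity described in this note can be reproduced with a few lines of forward-Euler integration of a quadratic-adaptive model. The parameter values below are hypothetical and chosen only so the system has no fixed point (so the voltage always blows up); the point is that the adaptation value reached at the cutoff keeps growing as the cutoff is raised.

```python
# Quadratic adaptive integrate-and-fire rise phase: v' = v^2 - w + I,
# w' = a*(b*v - w). Integrate until v crosses a cutoff Vc and report w there.

def adaptation_at_cutoff(Vc, dt=1e-4):
    v, w = 0.0, 0.0
    a, b, I = 1.0, 1.5, 2.0          # illustrative parameters, no fixed point
    while v < Vc:
        dv = v * v - w + I
        dw = a * (b * v - w)
        v, w = v + dt * dv, w + dt * dw
    return w                          # adaptation value when v reaches Vc

w_small = adaptation_at_cutoff(Vc=50.0)
w_large = adaptation_at_cutoff(Vc=500.0)
print(w_small, w_large)  # w keeps growing as the cutoff is raised
```

Since the post-spike reset adds a fixed increment to this cutoff-dependent value, the subsequent trajectory, and hence the spike pattern, inherits the sensitivity to Vc that the note analyzes.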

  14. A comparative assessment of R. M. Young and tipping bucket rain gauges

    NASA Technical Reports Server (NTRS)

    Goldhirsh, Julius; Gebo, Norman E.

    1992-01-01

    Rain rates as derived from standard tipping bucket rain gauges have variable integration times corresponding to the interval between bucket tips. For example, the integration time for the Weathertronics rain gauge is given by delta(T) = 15.24/R (min), where R is the rain rate expressed in mm/h and delta(T) is the time between tips expressed in minutes. It is apparent that a rain rate of 1 mm/h has an integration time in excess of 15 minutes. Rain rates larger than 15.24 mm/h will have integration times smaller than 1 minute. The integration time is dictated by the time it takes to fill a small tipping bucket where each tip gives rise to 0.254 mm of rainfall. Hence, a uniform rain rate of 1 mm/h over a 15 minute period will give rise to the same rain rate as 0 mm/h rainfall over the first 14 minutes and 15 mm/h between 14 to 15 minutes from the reference tip. Hence, the rain intensity fluctuations may not be captured with the tipping bucket rain gauge for highly variable rates encompassing lower and higher values over a given integration time. The objective of this effort is to provide an assessment of the features of the R. M. Young capacitive gauge and to compare these features with those of the standard tipping bucket rain gauge. A number of rain rate-time series derived from measurements with approximately co-located gauges are examined.
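The tip-interval relation quoted above follows from the bucket volume: at rain rate R mm/h, a 0.254 mm bucket fills in 0.254/R hours, i.e. 15.24/R minutes. A one-line helper makes the variable integration time explicit:

```python
# Time between tips for a standard 0.254 mm tipping bucket:
# delta_T (minutes) = 0.254 mm / (R mm/h) * 60 = 15.24 / R.

def tip_interval_minutes(rain_rate_mm_per_h):
    """Integration time (minutes) between bucket tips at the given rain rate."""
    return 15.24 / rain_rate_mm_per_h

print(tip_interval_minutes(1.0))    # 15.24 minutes at 1 mm/h
print(tip_interval_minutes(15.24))  # exactly 1 minute
print(tip_interval_minutes(60.0))   # 0.254 minutes at 60 mm/h
```

This is why light rain is integrated over tens of minutes while heavy rain is resolved at sub-minute scales, the asymmetry the comparison with the capacitive gauge addresses.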

  15. Variable Structure PID Control to Prevent Integrator Windup

    NASA Technical Reports Server (NTRS)

    Hall, C. E.; Hodel, A. S.; Hung, J. Y.

    1999-01-01

    PID controllers are frequently used to control systems requiring zero steady-state error while maintaining requirements for settling time and robustness (gain/phase margins). PID controllers suffer significant loss of performance due to short-term integrator wind-up when used in systems with actuator saturation. We examine several existing and proposed methods for the prevention of integrator wind-up in both continuous and discrete time implementations.
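One of the simpler anti-windup schemes the paper's survey covers can be sketched as a discrete PID with conditional integration: the integrator is frozen whenever the actuator is saturated and the error would drive it further into saturation. This is a generic sketch, not the variable-structure design of the paper, and all gains and limits are illustrative.

```python
# Discrete PID with conditional-integration anti-windup.

class PID:
    def __init__(self, kp, ki, kd, u_min, u_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max, self.dt = u_min, u_max, dt
        self.integral = 0.0
        self.e_prev = 0.0

    def step(self, e):
        deriv = (e - self.e_prev) / self.dt
        self.e_prev = e
        u = self.kp * e + self.integral + self.kd * deriv
        u_sat = min(self.u_max, max(self.u_min, u))
        # freeze the integrator while saturated and being pushed deeper in
        pushing_deeper = (u > self.u_max and e > 0) or (u < self.u_min and e < 0)
        if not pushing_deeper:
            self.integral += self.ki * e * self.dt
        return u_sat

# Demo: integrator plant x' = u with tight actuator limits, setpoint 1.
pid = PID(kp=2.0, ki=1.0, kd=0.0, u_min=-0.2, u_max=0.2, dt=0.01)
x = 0.0
for _ in range(3000):
    x += pid.step(1.0 - x) * 0.01
print(x)  # settles at the setpoint without windup-induced overshoot
```

Without the freeze, the integral term accumulated during the long saturated ramp would carry the output well past the setpoint before unwinding.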

  16. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, and the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it advances the solutions in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
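The point-implicit idea, solve implicitly only for the variable of interest in its own equation while using old values for everything else, can be shown on a toy stiff/slow linear pair. This is an illustrative system, not the paper's flow equations.

```python
# Point-implicit update for y' = -k*y + c*z (stiff) coupled to z' = -0.1*z
# (slow): y is implicit in its own decay term, z and the coupling are explicit,
# so no iteration or matrix solve is needed.

k, c = 1000.0, 1.0    # stiff decay rate for y, weak coupling from z
dt = 0.1              # far above the explicit stability limit 2/k = 0.002

def point_implicit_step(y, z):
    z_new = z + dt * (-0.1 * z)                 # slow variable: explicit
    y_new = (y + dt * c * z) / (1.0 + dt * k)   # stiff variable: implicit
    return y_new, z_new

y, z = 1.0, 1.0
for _ in range(100):
    y, z = point_implicit_step(y, z)
print(y, z)  # bounded and smooth despite dt*k = 100
```

A fully explicit step with this dt would blow up immediately on the stiff term, while a fully implicit step would require solving the coupled system; the point-implicit step gets stability at explicit cost.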

  17. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.


  18. An integration time adaptive control method for atmospheric composition detection of occultation

    NASA Astrophysics Data System (ADS)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

    When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as rapid as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. Taking the distribution of gray values in the image as the reference variable and applying the concept of velocity-form integral PID control, the method solves the integration time adaptive control problem for high-frequency imaging and achieves automatic integration time control over the large dynamic range encountered during occultation.
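A stripped-down version of the exposure-control loop can be simulated with a toy sensor model: drive the integration time multiplicatively from the mean gray error while the illumination sweeps over three orders of magnitude. All values (target gray level, gain, decay rate) are hypothetical; the paper's controller uses the full gray-value distribution, not just the mean.

```python
# Integral-style exposure control tracking a target mean gray value while the
# scene illumination falls ~1000x, mimicking an occultation pass.

target = 120.0        # desired mean gray value (8-bit scale), illustrative
t_int = 1e-3          # integration time, seconds, illustrative start value

def mean_gray(illum, t_int):
    # toy sensor model: mean gray proportional to collected light, clipped
    return min(255.0, illum * t_int)

illum = 1e5
for _ in range(400):              # illumination falls ~1000x during the pass
    illum *= 0.983
    g = mean_gray(illum, t_int)
    t_int *= 1.0 + 0.05 * (target - g) / target   # multiplicative update
g_sweep = mean_gray(illum, t_int)

for _ in range(200):              # illumination now steady: controller settles
    g = mean_gray(illum, t_int)
    t_int *= 1.0 + 0.05 * (target - g) / target
g_final = mean_gray(illum, t_int)
print(g_sweep, g_final)
```

With this gain the controller lags a few tens of gray levels during the fastest part of the sweep and converges to the target once the illumination steadies, which is the trade-off a velocity-form gain schedule is meant to improve.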

  19. Notion de temps d'apprentissage et son evaluation en situation d'enseignement (The Idea of Learning Time and Its Evaluation in Teaching Situations).

    ERIC Educational Resources Information Center

    Brunelle, Jean; And Others

    1983-01-01

    The article explains how the time that students devote to learning was identified as a variable in instruction effectiveness studies and shows how the variable was integrated into research on the effectiveness of physical education instruction. The article describes a French version of the "ALT-PE" system for estimating learning time. (SB)

  20. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
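The "fractionally integrated" part of ARFIMA comes from the operator (1-B)^d, whose binomial expansion has slowly (hyperbolically) decaying weights, which is what models long memory. The weights obey a simple recursion, sketched below; this is a standard identity, not code from the paper.

```python
# Weights of the fractional differencing operator (1-B)^d:
# w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.

def frac_diff_weights(d, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

w = frac_diff_weights(d=0.4, n=6)
print(w)  # successive ratios approach 1: hyperbolic, not geometric, decay
```

For 0 < d < 0.5 the weights decay like k^(-1-d), so distant past values retain non-negligible influence; a GARCH model on the residuals then captures the time-varying conditional variance the abstract mentions.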

  1. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': time-integrated activity coefficient, time of maximum activity, maximum activity, and effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration 131I effective half-life in the thyroid and measurement noise, the optimal 1-point time for time-integrated activity coefficient is a measurement 1 week following the tracer dose. Additional measurements give only a slight improvement in accuracy.

  2. Entropy-Based Analysis and Bioinformatics-Inspired Integration of Global Economic Information Transfer

    PubMed Central

    An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh

    2013-01-01

    The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis. PMID:23300959

  3. Entropy-based analysis and bioinformatics-inspired integration of global economic information transfer.

    PubMed

    Kim, Jinkyu; Kim, Gunn; An, Sungbae; Kwon, Young-Kyun; Yoon, Sungroh

    2013-01-01

    The assessment of information transfer in the global economic network helps to understand the current environment and the outlook of an economy. Most approaches on global networks extract information transfer based mainly on a single variable. This paper establishes an entirely new bioinformatics-inspired approach to integrating information transfer derived from multiple variables and develops an international economic network accordingly. In the proposed methodology, we first construct the transfer entropies (TEs) between various intra- and inter-country pairs of economic time series variables, test their significances, and then use a weighted sum approach to aggregate information captured in each TE. Through a simulation study, the new method is shown to deliver better information integration compared to existing integration methods in that it can be applied even when intra-country variables are correlated. Empirical investigation with the real world data reveals that Western countries are more influential in the global economic network and that Japan has become less influential following the Asian currency crisis.

  4. Investigation of the marked and long-standing spatial inhomogeneity of the Hungarian suicide rate: a spatial regression approach.

    PubMed

    Balint, Lajos; Dome, Peter; Daroczi, Gergely; Gonda, Xenia; Rihmer, Zoltan

    2014-02-01

    In the last century Hungary had astonishingly high suicide rates characterized by marked regional within-country inequalities, a spatial pattern which has been quite stable over time. Our aim was to explain the above phenomenon at the level of micro-regions (n=175) in the period between 2005 and 2011. Our dependent variable was the age- and gender-standardized mortality ratio (SMR) for suicide, while explanatory variables were factors supposed to influence suicide risk, such as measures of religious and political integration, travel time accessibility of psychiatric services, alcohol consumption, unemployment and disability pensionery. When applying the ordinary least squares regression model, the residuals were found to be spatially autocorrelated, which violates the assumption of independent error terms and accordingly necessitates a spatial autoregressive (SAR) model to handle this problem. According to our calculations the SARlag model addressed the problem of spatial autocorrelation better than the SARerr model, and its substantive interpretation is more natural. SMR was significantly associated with the "political integration" variable in a negative and with the "lack of religious integration" and "disability pensionery" variables in a positive manner. Associations were not significant for the remaining explanatory variables. Several important psychiatric variables were not available at the level of micro-regions. We conducted our analysis on aggregate data. Our results may draw attention to the relevance and abiding validity of the classic Durkheimian suicide risk factors - such as lack of social integration - apropos of the spatial pattern of Hungarian suicides. © 2013 Published by Elsevier B.V.

  5. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. 
Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  6. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, research on validating this hypothesis for large-scale systems and on understanding what classes of problems particularly benefit from the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying the large-scale behavior of laboratory and astrophysical plasmas. In many problems the numerical solution of the MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both variable and constant time step exponential methods of EPIRK type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study the performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as the Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large-scale stiff systems of differential equations such as MHD.
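
    The core idea can be sketched on a scalar stiff problem. Below is a minimal exponential-Euler integrator, a far simpler relative of the EPIRK methods discussed above; the test equation, splitting, and step size are illustrative choices and not from the paper. At this step size a plain explicit Euler update would be unstable, yet the exponential update remains accurate.

```python
import math

# Exponential Euler for the stiff scalar ODE y' = a*y + g(t), a toy
# analogue of the stiff systems above (EPIRK methods are higher order;
# this only illustrates the exponential-integrator idea).
a = -1000.0                                        # stiff linear part
def g(t):                                          # nonstiff forcing
    return 1000.0 * math.cos(t) - math.sin(t)      # exact solution: y = cos(t)

def phi1(z):
    # phi_1(z) = (e^z - 1)/z, the basic exponential-integrator function
    return (math.exp(z) - 1.0) / z if z != 0.0 else 1.0

def exp_euler(y, t0, t1, h):
    t = t0
    while t < t1 - 1e-12:
        # treat g as constant over the step, propagate the linear part exactly
        y = math.exp(h * a) * y + h * phi1(h * a) * g(t)
        t += h
    return y

h = 0.01             # |1 + h*a| = 9: explicit Euler would blow up at this h
y_exp = exp_euler(1.0, 0.0, 1.0, h)
print(abs(y_exp - math.cos(1.0)))                  # small error at t = 1
```

    Despite a step size ten times larger than the explicit stability limit, the error at t = 1 remains on the order of 1e-2, which is the behavior that motivates exponential methods for stiff systems.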

  7. Simple satellite orbit propagator

    NASA Astrophysics Data System (ADS)

    Gurfil, P.

    2008-06-01

    An increasing number of space missions require on-board autonomous orbit determination. The purpose of this paper is to develop a simple orbit propagator (SOP) for such missions. Since most satellites are limited by the available processing power, it is important to develop an orbit propagator that uses limited computational and memory resources. In this work, we show how to choose state variables for propagation using the simplest numerical integration scheme available: the explicit Euler integrator. The new state variables are derived by the following rationale: apply a variation of parameters not on the gravity-affected orbit, but rather on the gravity-free orbit, and treat gravity as a generalized force. This ultimately leads to a state vector comprising the inertial velocity and a modified position vector, wherein the product of velocity and time is subtracted from the inertial position. It is shown that the explicit Euler integrator, applied to the new state variables, becomes a symplectic integrator, preserving the Hamiltonian and the angular momentum (or a component thereof in the case of oblateness perturbations). The main application of the proposed propagator is the estimation of mean orbital elements. It is shown that the SOP is capable of estimating the mean elements with an accuracy comparable to a high-order integrator that consumes an order of magnitude more computational time than the SOP.
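
    The transformed-state idea can be sketched in a few lines: propagating the velocity together with the modified position rt = r - v*t using a plain explicit Euler step reproduces a symplectic kick-drift update, so the energy error stays bounded and the angular momentum is conserved. The planar two-body setup and units below are toy choices for illustration, not from the paper.

```python
import math

# Explicit Euler on the transformed state (v, rt), rt = r - v*t, with
# gravity treated as a generalized force; algebraically this equals a
# symplectic kick-drift update on (r, v). Toy two-body problem, mu = 1.
mu = 1.0

def gravity(r):
    d = math.hypot(r[0], r[1])
    return (-mu * r[0] / d**3, -mu * r[1] / d**3)

def invariants(rt, v, t):
    r = (rt[0] + v[0] * t, rt[1] + v[1] * t)   # recover inertial position
    E = 0.5 * (v[0]**2 + v[1]**2) - mu / math.hypot(r[0], r[1])
    L = r[0] * v[1] - r[1] * v[0]              # angular momentum (z component)
    return E, L

t, h = 0.0, 0.001
v, rt = [0.0, 1.0], [1.0, 0.0]                 # circular orbit at t = 0
E0, L0 = invariants(rt, v, t)
for _ in range(10000):                         # roughly 1.6 orbits
    r = (rt[0] + v[0] * t, rt[1] + v[1] * t)
    gx, gy = gravity(r)
    rt = [rt[0] - h * t * gx, rt[1] - h * t * gy]   # d(rt)/dt = -t*g(r)
    v = [v[0] + h * gx, v[1] + h * gy]              # dv/dt = g(r)
    t += h
E1, L1 = invariants(rt, v, t)
print(abs(E1 - E0), abs(L1 - L0))              # bounded energy error, L conserved
```

    A naive explicit Euler step on (r, v) would show secular energy drift; on the transformed variables the same integrator keeps both invariants essentially fixed.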

  8. Quality-by-Design (QbD): An integrated process analytical technology (PAT) approach for a dynamic pharmaceutical co-precipitation process characterization and process design space development.

    PubMed

    Wu, Huiquan; White, Maury; Khan, Mansoor A

    2011-02-28

    The aim of this work was to develop an integrated process analytical technology (PAT) approach for dynamic pharmaceutical co-precipitation process characterization and design space development. A dynamic co-precipitation process, in which water was gradually introduced to the ternary system of naproxen-Eudragit L100-alcohol, was monitored in real time in situ via Lasentec FBRM and PVM. A 3D map of count-time-chord length revealed three distinguishable process stages: incubation, transition, and steady state. The effects of high-risk process variables (slurry temperature, stirring rate, and water addition rate) on both the derived co-precipitation process rates and the final chord length distribution were evaluated systematically using a 3(3) full factorial design. Critical process variables were identified via ANOVA for both the transition and steady states. General linear models (GLM) were then used for parameter estimation for each critical variable. Clear trends in the effect of each critical variable during transition and steady state were found by GLM and were interpreted using fundamental process principles and Nyvlt's transfer model. Neural network models were able to link process variables with response variables at transition and steady state with R(2) of 0.88-0.98. PVM images evidenced nucleation and crystal growth. Contour plots illustrated the design space via the ranges of the critical process variables. This work demonstrates the utility of an integrated PAT approach for QbD development. Published by Elsevier B.V.

  9. Time-of-flight depth image enhancement using variable integration time

    NASA Astrophysics Data System (ADS)

    Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong

    2013-03-01

    Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate a distance based on a large amount of infrared light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme originally proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
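
    The depth-transfer-function step can be illustrated on synthetic data: bin the short-integration depths and, for each bin, average the corresponding long-integration depths, which is one simple way to read a mapping off the joint histogram. The noise level, bias, and bin width below are invented for illustration; the paper estimates the mapping from real image pairs before wavelet fusion.

```python
import random

# Toy joint-histogram estimate of the depth transfer function between a
# noisy short-integration depth image and a biased long-integration one.
# Noise level (5 cm), bias (+3 cm), and bin width (10 cm) are invented.
random.seed(1)
true_depth = [random.uniform(0.5, 4.0) for _ in range(5000)]
short = [d + random.gauss(0.0, 0.05) for d in true_depth]  # noisy, short
long_ = [d + 0.03 for d in true_depth]                     # biased, long

bins = {}
for s, l in zip(short, long_):
    bins.setdefault(int(s / 0.1), []).append(l)            # 10 cm bins
transfer = {b: sum(v) / len(v) for b, v in bins.items()}   # per-bin mean

# the mapping for the 2.0-2.1 m bin should land near 2.05 + 0.03
print(transfer[20])
```

    Applying the per-bin mapping to the short-integration image would move its depths onto the long-integration scale before the two are fused.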

  10. Variable-spot ion beam figuring

    NASA Astrophysics Data System (ADS)

    Wu, Lixiang; Qiu, Keqiang; Fu, Shaojun

    2016-03-01

    This paper introduces a new scheme of ion beam figuring (IBF), variable-spot IBF, which is conducted at a constant scanning velocity with a variable-spot ion beam collimated by a variable diaphragm. It aims to improve the reachability and adaptability of the figuring process within the limits of machine dynamics by varying the ion beam spot size instead of the scanning velocity. In contrast to the dwell time algorithm of conventional IBF, variable-spot IBF adopts a new algorithm consisting of scan path programming and trajectory optimization using pattern search. In this algorithm, instead of the dwell time, a new concept, the integral etching time, is proposed to interpret the process of variable-spot IBF. We conducted simulations to verify its feasibility and practicality. The simulation results indicate that variable-spot IBF is a promising alternative to the conventional approach.

  11. Least-rattling feedback from strong time-scale separation

    NASA Astrophysics Data System (ADS)

    Chvykov, Pavel; England, Jeremy

    2018-03-01

    In most interacting many-body systems associated with some "emergent phenomena," we can identify subgroups of degrees of freedom that relax on dramatically different time scales. Time-scale separation of this kind is particularly helpful in nonequilibrium systems where only the fast variables are subjected to external driving; in such a case, it may be shown through elimination of fast variables that the slow coordinates effectively experience a thermal bath of spatially varying temperature. In this paper, we investigate how such a temperature landscape arises according to how the slow variables affect the character of the driven quasisteady state reached by the fast variables. Brownian motion in the presence of spatial temperature gradients is known to lead to the accumulation of probability density in low-temperature regions. Here, we focus on the implications of attraction to low effective temperature for the long-term evolution of slow variables. After quantitatively deriving the temperature landscape for a general class of overdamped systems using a path-integral technique, we then illustrate in a simple dynamical system how the attraction to low effective temperature has a fine-tuning effect on the slow variable, selecting configurations that bring about exceptionally low force fluctuation in the fast-variable steady state. We furthermore demonstrate that a particularly strong effect of this kind can take place when the slow variable is tuned to bring about orderly, integrable motion in the fast dynamics that avoids thermalizing energy absorbed from the drive. We thus point to a potentially general feedback mechanism in multi-time-scale active systems, that leads to the exploration of slow variable space, as if in search of fine tuning for a "least-rattling" response in the fast coordinates.

  12. Explaining Changing Suicide Rates in Norway 1948-2004: The Role of Social Integration

    ERIC Educational Resources Information Center

    Barstad, Anders

    2008-01-01

    Using Norway 1948-2004 as a case, I test whether changes in variables related to social integration can explain changes in suicide rates. The method is the Box-Jenkins approach to time-series analysis. Different aspects of family integration contribute significantly to the explanation of Norwegian suicide rates in this period. The estimated effect…

  13. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is carried out in a general-purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
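
    The first of the two implicit schemes, the trapezoidal rule with Newton-Raphson iterations, can be sketched on a scalar ODE. The toy relaxation law y' = -y**3 below merely stands in for a real state-variable viscoplastic model, which couples stress, drag, and back-stress variables.

```python
# Trapezoidal rule with Newton-Raphson iterations on y' = f(y) = -y**3.
# Each step solves x - y_n - (h/2)*(f(y_n) + f(x)) = 0 for x = y_{n+1}.
f = lambda y: -y**3
df = lambda y: -3.0 * y**2
exact = lambda t: (1.0 + 2.0 * t) ** -0.5    # closed form for y(0) = 1

def trapezoidal_step(y, h, tol=1e-12, max_iter=20):
    x = y + h * f(y)                         # explicit Euler predictor
    for _ in range(max_iter):
        F = x - y - 0.5 * h * (f(y) + f(x))  # residual of the implicit rule
        dF = 1.0 - 0.5 * h * df(x)           # Newton-Raphson Jacobian
        dx = F / dF
        x -= dx
        if abs(dx) < tol:
            break
    return x

y, h = 1.0, 0.1
for _ in range(10):
    y = trapezoidal_step(y, h)
print(abs(y - exact(1.0)))                   # second-order accurate
```

    In a finite element setting the same Newton loop runs at every integration point, with the scalar Jacobian replaced by the consistent tangent of the constitutive model.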

  14. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
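
    A minimal instance of the root-finding idea: with one exponential synaptic current and tau_m = 2*tau_s, the threshold condition becomes a quadratic in x = exp(-t/tau_m), so the next spike time has a closed form. All numerical values below are illustrative; the paper handles general rational time-constant ratios with polynomial root finding.

```python
import math

# Leaky integrate-and-fire neuron with one exponential current and
# tau_m = 2*tau_s, so exp(-t/tau_s) = x**2 with x = exp(-t/tau_m).
tau_m, tau_s = 20.0, 10.0          # ms (illustrative values)
V0, I0, theta = 0.0, 5.0, 1.0      # initial voltage, current, threshold

# Between spikes: V(t) = (V0 + I0)*exp(-t/tau_m) - I0*exp(-t/tau_s)
def V(t):
    return (V0 + I0) * math.exp(-t / tau_m) - I0 * math.exp(-t / tau_s)

# Threshold crossing: -I0*x**2 + (V0 + I0)*x - theta = 0
a, b, c = -I0, V0 + I0, -theta
disc = b * b - 4.0 * a * c
if disc < 0.0:
    t_spike = None                  # the neuron never reaches threshold
else:
    # the largest admissible root in (0, 1] is the earliest crossing time
    x = max(r for r in [(-b + math.sqrt(disc)) / (2 * a),
                        (-b - math.sqrt(disc)) / (2 * a)] if 0.0 < r <= 1.0)
    t_spike = -tau_m * math.log(x)
print(t_spike, V(t_spike))          # crossing time; V there equals theta
```

    The event-driven simulator jumps directly to t_spike instead of stepping through the interval, which is what makes the strategy exact.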

  15. Research on Zheng Classification Fusing Pulse Parameters in Coronary Heart Disease

    PubMed Central

    Guo, Rui; Wang, Yi-Qin; Xu, Jin; Yan, Hai-Xia; Yan, Jian-Jun; Li, Fu-Feng; Xu, Zhao-Xia; Xu, Wen-Jie

    2013-01-01

    This study was conducted to illustrate that nonlinear dynamic variables of the Traditional Chinese Medicine (TCM) pulse can improve the performance of TCM Zheng classification models. Pulse recordings of 334 coronary heart disease (CHD) patients and 117 normal subjects were collected in this study. Recurrence quantification analysis (RQA) was employed to acquire nonlinear dynamic variables of the pulse. TCM Zheng models in CHD were constructed, and predictions were carried out using a novel multilabel learning algorithm on different datasets. The datasets were designed as follows: dataset1, TCM inquiry information including inspection information; dataset2, time-domain variables of pulse plus dataset1; dataset3, RQA variables of pulse plus dataset1; and dataset4, major principal components of the RQA variables plus dataset1. The performances of the different models for Zheng differentiation were compared. The model for Zheng differentiation based on RQA variables integrated with inquiry information had the best performance, whereas that based only on inquiry had the worst; the model based on time-domain variables of pulse integrated with inquiry fell between the two. This result shows that RQA variables of the pulse can be used to construct models of TCM Zheng and improve the performance of Zheng differentiation models. PMID:23737839
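
    A minimal sketch of what RQA measures: from the recurrence matrix of a scalar series one can read off the recurrence rate and a simple determinism proxy, which separates a deterministic signal from noise. Real RQA uses delay embedding and full diagonal-line statistics; both are omitted here for brevity, and the signals are synthetic.

```python
import math, random

# Simplified recurrence quantification: recurrence rate and a determinism
# proxy (fraction of recurrent points whose diagonal neighbour also recurs).
def rqa(x, eps):
    n = len(x)
    R = [[abs(x[i] - x[j]) < eps for j in range(n)] for i in range(n)]
    rec = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    det = sum(R[i][j] and R[i + 1][j + 1]
              for i in range(n - 1) for j in range(n - 1) if i != j)
    return rec / (n * n - n), det / max(rec, 1)

random.seed(0)
periodic = [math.sin(0.3 * k) for k in range(200)]      # deterministic
noise = [random.uniform(-1.0, 1.0) for _ in range(200)]  # stochastic
rr_p, det_p = rqa(periodic, 0.1)
rr_n, det_n = rqa(noise, 0.1)
print(det_p, det_n)   # the deterministic signal shows higher determinism
```

    Variables of this kind (recurrence rate, determinism, and related line-based measures) are the nonlinear pulse features referred to above.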

  16. The NASA Lewis integrated propulsion and flight control simulator

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.

    1991-01-01

    A new flight simulation facility has been developed at NASA Lewis to allow integrated propulsion-control and flight-control algorithm development and evaluation in real time. As a preliminary check of the simulator facility and the correct integration of its components, the control design and physics models for an STOVL fighter aircraft model have been demonstrated, with their associated system integration and architecture, pilot vehicle interfaces, and display symbology. The results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated systems and testing of control design methodologies and cockpit mechanizations.

  17. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time.
The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
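
    The recurrence-relation idea behind the differentiation arithmetic can be shown on the simplest nonlinear example, y' = y**2, whose Taylor coefficients follow from Cauchy products; the series order and step size below are arbitrary illustrative choices, not SNAP's.

```python
# Taylor-series integration of y' = y**2 (exact: y = 1/(1 - t) for y(0)=1).
# If y = sum a_k t^k, then the coefficients of y**2 are Cauchy products
# c_k = sum_j a_j*a_{k-j}, and y' = y**2 gives a_{k+1} = c_k/(k+1).
def taylor_step(y0, h, order=15):
    a = [y0]                                   # local Taylor coefficients
    for k in range(order):
        ck = sum(a[j] * a[k - j] for j in range(k + 1))  # coeffs of y**2
        a.append(ck / (k + 1))                 # recurrence from the ODE
    s = 0.0
    for coef in reversed(a):                   # Horner evaluation at t = h
        s = s * h + coef
    return s

y, t, h = 1.0, 0.0, 0.05
while t < 0.5 - 1e-12:
    y = taylor_step(y, h)
    t += h
print(y)                                       # close to 1/(1 - 0.5) = 2
```

    Because the coefficients come from cheap recurrences rather than symbolic differentiation, very high orders (and hence very large steps) cost little, which is the source of the speedups reported above.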

  18. Langevin dynamics for vector variables driven by multiplicative white noise: A functional formalism

    NASA Astrophysics Data System (ADS)

    Moreno, Miguel Vera; Arenas, Zochil González; Barci, Daniel G.

    2015-04-01

    We discuss general multidimensional stochastic processes driven by a system of Langevin equations with multiplicative white noise. In particular, we address the problem of how the time reversal of diffusion processes is affected by the variety of conventions available for dealing with stochastic integrals. We present a functional formalism to build up the generating functional of correlation functions without any type of discretization of the Langevin equations at any intermediate step. The generating functional is characterized by a functional integration over two sets of commuting variables, as well as Grassmann variables. In this representation, the time-reversal transformation becomes a linear transformation in the extended variables, thereby simplifying the complexity introduced by the mixture of prescriptions and the associated calculus rules. The stochastic calculus is codified in our formalism in the structure of the Grassmann algebra. We study some examples, such as higher-order derivative Langevin equations and the functional representation of the micromagnetic stochastic Landau-Lifshitz-Gilbert equation.

  19. Taylor Series Trajectory Calculations Including Oblateness Effects and Variable Atmospheric Density

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2011-01-01

    Taylor series integration is implemented in NASA Glenn's Spacecraft N-body Analysis Program and compared head-to-head with the code's existing 8th-order Runge-Kutta-Fehlberg time integration scheme. This paper focuses on trajectory problems that include oblateness and/or variable atmospheric density. Taylor series is shown to be significantly faster and more accurate for oblateness problems up through a 4x4 field, with speedups ranging from a factor of 2 to 13. For problems with variable atmospheric density, speedups average 24 for atmospheric density alone, and average 1.6 to 8.2 when density and oblateness are combined.

  20. Accurate and efficient integration for molecular dynamics simulations at constant temperature and pressure

    NASA Astrophysics Data System (ADS)

    Lippert, Ross A.; Predescu, Cristian; Ierardi, Douglas J.; Mackenzie, Kenneth M.; Eastwood, Michael P.; Dror, Ron O.; Shaw, David E.

    2013-10-01

    In molecular dynamics simulations, control over temperature and pressure is typically achieved by augmenting the original system with additional dynamical variables to create a thermostat and a barostat, respectively. These variables generally evolve on timescales much longer than those of particle motion, but typical integrator implementations update the additional variables along with the particle positions and momenta at each time step. We present a framework that replaces the traditional integration procedure with separate barostat, thermostat, and Newtonian particle motion updates, allowing thermostat and barostat updates to be applied infrequently. Such infrequent updates provide a particularly substantial performance advantage for simulations parallelized across many computer processors, because thermostat and barostat updates typically require communication among all processors. Infrequent updates can also improve accuracy by alleviating certain sources of error associated with limited-precision arithmetic. In addition, separating the barostat, thermostat, and particle motion update steps reduces certain truncation errors, bringing the time-average pressure closer to its target value. Finally, this framework, which we have implemented on both general-purpose and special-purpose hardware, reduces software complexity and improves software modularity.

  1. Integrated time-lapse and single-cell transcription studies highlight the variable and dynamic nature of human hematopoietic cell fate commitment

    PubMed Central

    Moussy, Alice; Cosette, Jérémie; Parmentier, Romuald; da Silva, Cindy; Corre, Guillaume; Richard, Angélique; Gandrillon, Olivier; Stockholm, Daniel

    2017-01-01

    Individual cells take lineage commitment decisions in a way that is not necessarily uniform. We address this issue by characterising transcriptional changes in cord blood-derived CD34+ cells at the single-cell level and integrating data with cell division history and morphological changes determined by time-lapse microscopy. We show that major transcriptional changes leading to a multilineage-primed gene expression state occur very rapidly during the first cell cycle. One of the 2 stable lineage-primed patterns emerges gradually in each cell with variable timing. Some cells reach a stable morphology and molecular phenotype by the end of the first cell cycle and transmit it clonally. Others fluctuate between the 2 phenotypes over several cell cycles. Our analysis highlights the dynamic nature and variable timing of cell fate commitment in hematopoietic cells, links the gene expression pattern to cell morphology, and identifies a new category of cells with fluctuating phenotypic characteristics, demonstrating the complexity of the fate decision process (which is different from a simple binary switch between 2 options, as it is usually envisioned). PMID:28749943

  2. Efficient variable time-stepping scheme for intense field-atom interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, C.; Kosloff, R.

    1993-03-01

    The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schrödinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.

  3. Spectral analysis of temporal non-stationary rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2018-04-01

    This study treats the catchment as a black-box system, considering the rainfall input and runoff output to be stochastic processes. The temporal rainfall-runoff relationship at the catchment scale is described by a convolution integral on a continuous time scale. Using the Fourier-Stieltjes representation approach, a frequency-domain solution to the convolution integral is developed for the spectral analysis of runoff processes generated by temporally non-stationary rainfall events. It is shown that the characteristic time scale of the rainfall process increases the variability of the runoff discharge, while the catchment mean travel time constant acts to reduce it. Similar to groundwater aquifers, catchments act as a low-pass filter on the rainfall input signal in the frequency domain.
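
    The low-pass behavior can be checked with a toy linear-reservoir catchment, a common stand-in for an exponential transfer function with mean travel time T (the parameter values below are invented): driving it with sinusoidal rainfall at increasing frequency reproduces the gain 1/sqrt(1 + (w*T)**2).

```python
import math

# Linear reservoir dQ/dt = (R - Q)/T, i.e. convolution of the rainfall R
# with h(t) = exp(-t/T)/T; its frequency-domain gain is 1/sqrt(1+(w*T)^2).
def output_amplitude(w, T, dt=1e-3, t_end=200.0):
    Q, t, amp = 0.0, 0.0, 0.0
    while t < t_end:
        R = math.sin(w * t)                  # sinusoidal rainfall input
        Q += dt * (R - Q) / T                # forward Euler reservoir update
        t += dt
        if t > t_end / 2:                    # measure after transients decay
            amp = max(amp, abs(Q))
    return amp

T = 5.0
results = {}
for w in (0.1, 1.0, 10.0):
    results[w] = output_amplitude(w, T)
    print(w, results[w], 1.0 / math.sqrt(1.0 + (w * T) ** 2))
```

    High-frequency rainfall fluctuations are strongly damped while slow ones pass almost unchanged, which is the filtering role of the mean travel time constant described above.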

  4. The relationship between foot posture and plantar pressure during walking in adults: A systematic review.

    PubMed

    Buldt, Andrew K; Allan, Jamie J; Landorf, Karl B; Menz, Hylton B

    2018-02-23

    Foot posture is a risk factor for some lower limb injuries; however, the underlying mechanism is not well understood. Plantar pressure analysis is one technique to investigate the interaction between foot posture and biomechanical function of the lower limb. The aim of this review was to investigate the relationship between foot posture and plantar pressure during walking. A systematic database search was conducted using MEDLINE, CINAHL, SPORTDiscus and Embase to identify studies that have assessed the relationship between foot posture and plantar pressure during walking. Included studies were assessed for methodological quality. Meta-analysis was not conducted due to heterogeneity between studies; inconsistencies included foot posture classification techniques, gait analysis protocols, selection of plantar pressure parameters and statistical analysis approaches. Of the 4213 citations identified for title and abstract review, sixteen studies were included and underwent quality assessment; all were of moderate methodological quality. There was some evidence that planus feet display higher peak pressure, pressure-time integral, maximum force, force-time integral and contact area predominantly in the medial arch, central forefoot and hallux, while these variables are lower in the lateral and medial forefoot. In contrast, cavus feet display higher peak pressure and pressure-time integral in the heel and lateral forefoot, while pressure-time integral, maximum force, force-time integral and contact area are lower for the midfoot and hallux. Centre of pressure was more laterally deviated in cavus feet and more medially deviated in planus feet. Overall, effect sizes were moderate, but regression models could only explain a small amount of variance in plantar pressure variables.
Despite these significant findings, future research would benefit from greater methodological rigour, particularly in relation to the use of valid foot posture measurement techniques, gait analysis protocols, and standardised approaches for analysis and reporting of plantar pressure variables. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Does integrity of the lesser trochanter influence the surgical outcome of intertrochanteric fracture in elderly patients?

    PubMed

    Liu, Xiaohui; Liu, Yueju; Pan, Shuo; Cao, Huijian; Yu, Dahai

    2015-03-05

    Most surgeons do not fix the lesser trochanter when managing femoral intertrochanteric fractures with intramedullary nails. We have not found any published clinical studies on the relationship between the integrity of the lesser trochanter and surgical outcomes of intertrochanteric fractures treated with intramedullary nails. The purpose of this study was to evaluate the impact of the integrity of the lesser trochanter on the surgical outcome of intertrochanteric fractures. A retrospective review of 85 patients aged more than 60 years with femoral intertrochanteric fractures from January 2010 to July 2012 was performed. The patients were allocated to two groups: those with (n = 37) and without (n = 48) preoperative integrity of the lesser trochanter. Relevant patient variables and medical comorbidities were collected. Medical comorbidities were evaluated according to the American Society of Anesthesiologists classification and medical records were also reviewed for age, sex, time from injury to operation, intraoperative blood loss, volume of transfusion, operative time, length of stay, time to fracture union, Harris Hip Score 1 year postoperatively, and incidence of postoperative complications. Postoperative complications included deep infection (beneath the fascia lata), congestive heart failure, pulmonary embolus, cerebrovascular accident, pneumonia, cardiac arrhythmia, urinary tract infection, wound hematoma, pressure sores, delirium, and deep venous thrombosis. Variables were statistically compared between the two groups, with statistical significance at P<0.05. Patients with and without preoperative integrity of the lesser trochanter were comparable for all assessed clinical variables except fracture type (P < 0.05). 
    There were no statistically significant differences between these groups in time from injury to operation, volume of transfusion, length of stay, time to fracture union, Harris Hip Score at 1 year postoperatively, and incidence of postoperative complications (P > 0.05). The group with preoperative integrity of the lesser trochanter had significantly less blood loss (107.03 ± 49.21 mL) than the group without it (133.96 ± 58.08 mL) (P < 0.05), and the operative time was significantly shorter in the former (0.77 ± 0.07 hours) than in the latter (0.84 ± 0.11 hours) group (P < 0.05). The integrity of the lesser trochanter has no significant influence on the surgical outcome of intramedullary nail internal fixation of femoral intertrochanteric fractures.

  6. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    For nearly a century, global power systems have focused on three key functions: generating, transmitting, and distributing electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on time scales ranging from subsecond disturbances to multiyear trends. With the increasing role of variable generation from wind and solar, the retirement of fossil-fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  7. On coarse projective integration for atomic deposition in amorphous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Claire Y.; Sinno, Talid, E-mail: talid@seas.upenn.edu; Han, Sang M.

    2015-10-07

    Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the “equation-free” framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the “lifting” operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
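
    The projective loop described above (short fine-scale bursts that estimate coarse time derivatives, followed by large outer integration steps) can be sketched as follows. This is a minimal illustration in which a toy scalar relaxation law stands in for the atomistic simulator, and all step sizes are invented; the lifting step is trivial here because the coarse variable is the state itself.

```python
def fine_step(x, dt):
    """Stand-in for one step of the fine-scale (atomistic) simulator:
    a toy relaxation law dx/dt = -0.5 x."""
    return x + dt * (-0.5 * x)

def coarse_projective_integration(x0, burst_steps=20, dt_fine=0.01,
                                  dt_proj=0.5, n_proj=10):
    """Alternate short fine-scale bursts with large projective Euler steps.
    Lifting is trivial here because the coarse variable is the state itself."""
    x, t = x0, 0.0
    traj = [(t, x)]
    for _ in range(n_proj):
        # 1) Short burst of fine-scale steps.
        xs = [x]
        for _ in range(burst_steps):
            xs.append(fine_step(xs[-1], dt_fine))
        # 2) Estimate the coarse time derivative from the burst tail.
        dxdt = (xs[-1] - xs[-2]) / dt_fine
        # 3) Project forward over a much larger interval.
        x = xs[-1] + dt_proj * dxdt
        t += burst_steps * dt_fine + dt_proj
        traj.append((t, x))
    return traj
```

    The payoff is that each outer step advances the clock by dt_proj while paying only for burst_steps fine-scale steps.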

  8. Determining probability distribution of coherent integration time near 133 Hz and 1346 km in the Pacific Ocean.

    PubMed

    Spiesberger, John L

    2013-02-01

    The hypothesis tested is that internal gravity waves limit the coherent integration time of sound at 1346 km in the Pacific Ocean at 133 Hz and a pulse resolution of 0.06 s. Six months of continuous transmissions at about 18 min intervals are examined. The source and receiver are mounted on the bottom of the ocean with timing governed by atomic clocks, so measured variability is due only to fluctuations in the ocean. A model for the propagation of sound through fluctuating internal waves is run without any tuning to the data. Excellent agreement is found between the modeled and measured probability distributions of integration time up to five hours.

  9. Workplace-Based Practicum: Enabling Expansive Practices

    ERIC Educational Resources Information Center

    Pridham, Bruce A.; Deed, Craig; Cox, Peter

    2013-01-01

    Effective pre-service teacher education integrates theoretical and practical knowledge. One means of integration is practicum in a school workplace. In a time of variable approaches to, and models of, practicum, we outline an innovative model of school immersion as part of a teacher preparation program. We apply Fuller and Unwin's (2004) expansive…

  10. Ecosystem functioning is enveloped by hydrometeorological variability.

    PubMed

    Pappas, Christoforos; Mahecha, Miguel D; Frank, David C; Babst, Flurin; Koutsoyiannis, Demetris

    2017-09-01

    Terrestrial ecosystem processes, and the associated vegetation carbon dynamics, respond differently to hydrometeorological variability across timescales, and so does our scientific understanding of the underlying mechanisms. Long-term variability of the terrestrial carbon cycle is not yet well constrained and the resulting climate-biosphere feedbacks are highly uncertain. Here we present a comprehensive overview of hydrometeorological and ecosystem variability from hourly to decadal timescales integrating multiple in situ and remote-sensing datasets characterizing extra-tropical forest sites. We find that ecosystem variability at all sites is confined within a hydrometeorological envelope across sites and timescales. Furthermore, ecosystem variability demonstrates long-term persistence, highlighting ecological memory and slow ecosystem recovery rates after disturbances. However, simulation results with state-of-the-art process-based models do not reflect this long-term persistent behaviour in ecosystem functioning. Accordingly, we develop a cross-time-scale stochastic framework that captures hydrometeorological and ecosystem variability. Our analysis offers a perspective for terrestrial ecosystem modelling and paves the way for new model-data integration opportunities in Earth system sciences.

  11. Predicting the Persistence of Full-Time African-American Students Attending 4-Year Public Colleges: A Disaggregation of Financial Aid Packaging and Social and Academic Integration Variables

    ERIC Educational Resources Information Center

    Smith, Curt L.

    2010-01-01

    The purpose of the study was to investigate the extent to which demographic characteristics, high school experience, aspirations and achievement, college experience-academic integration, college experience-social integration, financial aid, and price influence the first-year persistence of African-American students attending 4-year public colleges.…

  12. Decay of Solutions of the Wave Equation in the Kerr Geometry

    NASA Astrophysics Data System (ADS)

    Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.

    2006-06-01

    We consider the Cauchy problem for the massless scalar wave equation in the Kerr geometry for smooth initial data compactly supported outside the event horizon. We prove that the solutions decay in time in L^∞_loc. The proof is based on a representation of the solution as an infinite sum over the angular momentum modes, each of which is an integral over the energy variable ω on the real line. This integral representation involves solutions of the radial and angular ODEs which arise in the separation of variables.

  13. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with time stepping forward differences in the time variable and central differences in spatial variables.
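
    The time-stepping scheme described (forward differences in the time variable, central differences in space) can be illustrated on a simple 1-D diffusion problem; this is a generic forward-time central-space sketch, not the tidal-equation software itself.

```python
import numpy as np

def ftcs_diffusion(u0, alpha, dx, dt, n_steps):
    """Forward difference in time, central difference in space, for the
    diffusion equation u_t = alpha u_xx with fixed (Dirichlet) boundaries;
    stable when alpha dt / dx**2 <= 1/2."""
    u = np.asarray(u0, float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "FTCS stability limit violated"
    for _ in range(n_steps):
        # Explicit update of interior points; boundaries stay fixed.
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u
```

    An initial spike spreads and flattens, as expected of diffusion; the stability assertion reflects the usual explicit-scheme step-size limit.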

  14. Cumulative lactate and hospital mortality in ICU patients

    PubMed Central

    2013-01-01

    Background Both hyperlactatemia and persistence of hyperlactatemia have been associated with bad outcome. We compared lactate and lactate-derived variables in outcome prediction. Methods Retrospective observational study. Case records from 2,251 consecutive intensive care unit (ICU) patients admitted between 2001 and 2007 were analyzed. Baseline characteristics, all lactate measurements, and in-hospital mortality were recorded. The time integral of arterial blood lactate levels above the upper normal threshold of 2.2 mmol/L (lactate-time-integral), maximum lactate (max-lactate), and time-to-first-normalization were calculated. Survivors and nonsurvivors were compared and receiver operating characteristic (ROC) analyses were applied. Results A total of 20,755 lactate measurements were analyzed. Data are shown as median [interquartile range]. In nonsurvivors (n = 405) lactate-time-integral (192 [0–1881] min·mmol/L) and time-to-first normalization (44.0 [0–427] min) were higher than in hospital survivors (n = 1846; 0 [0–134] min·mmol/L and 0 [0–75] min, respectively; all p < 0.001). Normalization of lactate <6 hours after ICU admission revealed better survival compared with normalization of lactate >6 hours (mortality 16.6% vs. 24.4%; p < 0.001). The AUC of ROC curves to predict in-hospital mortality was the largest for max-lactate, whereas it was not different among all other lactate-derived variables (all p > 0.05). The area under the ROC curves for admission lactate and lactate-time-integral was not different (p = 0.36). Conclusions Hyperlactatemia is associated with in-hospital mortality in a heterogeneous ICU population. In our patients, lactate peak values predicted in-hospital mortality equally well as lactate-time-integral of arterial blood lactate levels above the upper normal threshold. PMID:23446002
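
    A lactate-time-integral of the kind defined above (the time integral of arterial lactate above the 2.2 mmol/L threshold) can be computed from irregularly sampled measurements with the trapezoidal rule; a minimal sketch, with the threshold taken from the abstract:

```python
import numpy as np

def lactate_time_integral(times_min, lactate, threshold=2.2):
    """Time integral (min·mmol/L) of arterial lactate above the upper
    normal threshold, by the trapezoidal rule on the thresholded excess;
    measurement times are in minutes."""
    t = np.asarray(times_min, float)
    excess = np.clip(np.asarray(lactate, float) - threshold, 0.0, None)
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t)))
```

    For example, a constant lactate of 4.2 mmol/L over one hour yields 2.0 mmol/L × 60 min = 120 min·mmol/L, and values entirely below the threshold yield zero.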

  15. Natural Variability and Anthropogenic Trends in the Ocean Carbon Sink

    NASA Astrophysics Data System (ADS)

    McKinley, Galen A.; Fay, Amanda R.; Lovenduski, Nicole S.; Pilcher, Darren J.

    2017-01-01

    Since preindustrial times, the ocean has removed from the atmosphere 41% of the carbon emitted by human industrial activities. Despite significant uncertainties, the balance of evidence indicates that the globally integrated rate of ocean carbon uptake is increasing in response to increasing atmospheric CO2 concentrations. The El Niño-Southern Oscillation in the equatorial Pacific dominates interannual variability of the globally integrated sink. Modes of climate variability in high latitudes are correlated with variability in regional carbon sinks, but mechanistic understanding is incomplete. Regional sink variability, combined with sparse sampling, means that the growing oceanic sink cannot yet be directly detected from available surface data. Accurate and precise shipboard observations need to be continued and increasingly complemented with autonomous observations. These data, together with a variety of mechanistic and diagnostic models, are needed for better understanding, long-term monitoring, and future projections of this critical climate regulation service.

  16. A Time Integration Algorithm Based on the State Transition Matrix for Structures with Time Varying and Nonlinear Properties

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2003-01-01

    A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
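
    A minimal sketch of state-transition-matrix time stepping: for x' = A(t) x the per-step propagator is Phi = expm(A dt), which is exact when A is constant over the step. The truncated-Taylor matrix exponential and midpoint evaluation below are illustrative choices, not the paper's variable-order algorithm.

```python
import numpy as np

def expm_taylor(A, order=20):
    """Matrix exponential by truncated Taylor series; adequate for the
    well-scaled A*dt of a single small time step."""
    term = np.eye(A.shape[0])
    out = term.copy()
    for k in range(1, order + 1):
        term = term @ A / k
        out = out + term
    return out

def integrate_ltv(A_of_t, x0, t0, t1, n_steps):
    """March x' = A(t) x using the per-step state transition matrix
    Phi = expm(A(t_mid) dt), with A frozen at the midpoint of each step."""
    dt = (t1 - t0) / n_steps
    x, t = np.asarray(x0, float), t0
    for _ in range(n_steps):
        Phi = expm_taylor(A_of_t(t + 0.5 * dt) * dt)
        x = Phi @ x
        t += dt
    return x
```

    For a constant A (e.g. a harmonic oscillator) each step is exact up to the Taylor truncation, consistent with the nearly exact solutions reported above when the time variation is polynomial.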

  17. The NASA Lewis integrated propulsion and flight control simulator

    NASA Technical Reports Server (NTRS)

    Bright, Michelle M.; Simon, Donald L.

    1991-01-01

    A new flight simulation facility was developed at NASA-Lewis. The purpose of this flight simulator is to allow integrated propulsion control and flight control algorithm development and evaluation in real time. As a preliminary check of the simulator facility's capabilities and the correct integration of its components, the control design and physics models for a short take-off and vertical landing fighter aircraft model are described, with their associated system integration and architecture, pilot vehicle interfaces, and display symbology. The initial testing and evaluation results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated flight and propulsion control systems. Additionally, through the use of this flight simulator, various control design methodologies and cockpit mechanizations can be tested and evaluated in a real-time environment.

  18. Safety analytics for integrating crash frequency and real-time risk modeling for expressways.

    PubMed

    Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung

    2017-07-01

    To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until now, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and the real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked the two models and provided useful information to both. This study uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, aggregated from speed at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Can Comparison of Contrastive Examples Facilitate Graph Understanding?

    ERIC Educational Resources Information Center

    Smith, Linsey A.; Gentner, Dedre

    2011-01-01

    The authors explore the role of comparison in improving graph fluency. The ability to use graphs fluently is crucial for STEM achievement, but graphs are challenging to interpret and produce because they often involve integration of multiple variables, continuous change in variables over time, and omission of certain details in order to highlight…

  20. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. 
The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
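
    The integral-observation idea can be sketched in a discretized linear-Gaussian form: place a GP prior on the rate, treat each observation as a Riemann-sum integral of the rate plus noise, and read off the posterior mean of the rate directly. The kernel, amplitude, and length-scale below are invented, and the errors-in-variables temporal uncertainty and GIA correction of the actual model are omitted.

```python
import numpy as np

def se_kernel(t1, t2, amp=5.0, ell=20.0):
    """Squared-exponential covariance between time points (illustrative
    amplitude and length-scale, not fitted values)."""
    d = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (d / ell) ** 2)

def integrated_gp_rate(t_obs, y_obs, sigma_y, t_grid):
    """Posterior mean of the rate r(t) on t_grid, given observations
    y_i = integral_0^{t_i} r(s) ds + noise, with a GP prior on r and the
    integral discretized as a Riemann sum on the uniform grid."""
    t_obs = np.asarray(t_obs, float)
    y_obs = np.asarray(y_obs, float)
    t_grid = np.asarray(t_grid, float)
    dt = t_grid[1] - t_grid[0]
    K = se_kernel(t_grid, t_grid)                       # prior cov of rate
    H = (t_grid[None, :] <= t_obs[:, None]) * dt        # integral operator
    S = H @ K @ H.T + sigma_y**2 * np.eye(len(t_obs))   # obs covariance
    return K @ H.T @ np.linalg.solve(S, y_obs)          # posterior mean
```

    Because the observation operator H is linear, the posterior over the rate stays Gaussian, which is what makes direct estimation of the derivative tractable.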

  1. TEMPORAL VARIABILITY MEASUREMENT OF SPECIFIC VOLATILE ORGANIC COMPOUNDS

    EPA Science Inventory

    Methodology was developed to determine unambiguously trace levels of volatile organic compounds as they vary in concentration over a variety of time scales. This capability is important because volatile organic compounds (VOCs) are usually measured by time-integrative techniques th...

  2. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
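
    A toy version of such a model (linear subthreshold dynamics for the potential, a variable threshold, and one firing-induced current, with update rules applied at threshold crossing) might look as follows. All parameter values are invented, and plain forward-Euler stepping is used rather than the analytical between-firings solution the paper exploits.

```python
def glif_simulate(i_ext, dt=0.1, t_end=200.0):
    """Toy generalized leaky integrate-and-fire neuron: linear equations
    for the potential v, an adapting threshold th, and one spike-induced
    current i1, plus update rules applied when v crosses th."""
    v, th, i1 = 0.0, 1.0, 0.0
    spikes = []
    for k in range(int(t_end / dt)):
        dv = (-v + i_ext + i1) / 10.0   # membrane, tau = 10
        dth = (1.0 - th) / 50.0         # threshold relaxes back to 1
        di1 = -i1 / 20.0                # induced current decays
        v, th, i1 = v + dt * dv, th + dt * dth, i1 + dt * di1
        if v >= th:
            spikes.append(k * dt)
            v = 0.0        # reset rule for the potential
            th += 0.05     # threshold jump: spike-frequency adaptation
            i1 -= 0.02     # hyperpolarizing firing-induced current
    return spikes
```

    As the abstract notes, the richness comes from the update rules, not the (linear) differential equations: here the threshold jump and induced current produce a lengthening interspike interval, i.e. an adapting response.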

  3. Quality by design case study: an integrated multivariate approach to drug product and process development.

    PubMed

    Huang, Jun; Kaul, Goldi; Cai, Chunsheng; Chatlapalli, Ramarao; Hernandez-Abad, Pedro; Ghosh, Krishnendu; Nagi, Arwinder

    2009-12-01

    To facilitate an in-depth process understanding, and offer opportunities for developing control strategies to ensure product quality, a combination of experimental design, optimization and multivariate techniques was integrated into the process development of a drug product. A process DOE was used to evaluate effects of the design factors on manufacturability and final product CQAs, and establish design space to ensure desired CQAs. Two types of analyses were performed to extract maximal information, DOE effect & response surface analysis and multivariate analysis (PCA and PLS). The DOE effect analysis was used to evaluate the interactions and effects of three design factors (water amount, wet massing time and lubrication time), on response variables (blend flow, compressibility and tablet dissolution). The design space was established by the combined use of DOE, optimization and multivariate analysis to ensure desired CQAs. Multivariate analysis of all variables from the DOE batches was conducted to study relationships between the variables and to evaluate the impact of material attributes/process parameters on manufacturability and final product CQAs. The integrated multivariate approach exemplifies application of QbD principles and tools to drug product and process development.

  4. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  5. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
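
    The strategy in the last sentence can be sketched as: probe the expensive model at perturbed inputs, fit an explicit response surface, then evaluate the failure probability from the explicit form. For a linear surrogate with independent normal inputs the tail probability is available in closed form, which is the kind of cheap evaluation FPI exploits; the polynomial/FPI machinery of the paper itself is not reproduced here.

```python
import numpy as np
from math import erf, sqrt

def fit_linear_surrogate(model, x0, dx):
    """Run the (expensive) model at a base point and +/- perturbations of
    each input, then least-squares fit an explicit linear response
    surface y ~ a0 + a.x."""
    x0 = np.asarray(x0, float)
    dx = np.asarray(dx, float)
    pts = [x0]
    for i in range(len(x0)):
        for s in (-1.0, 1.0):
            p = x0.copy()
            p[i] += s * dx[i]
            pts.append(p)
    X = np.array(pts)
    y = np.array([model(p) for p in X])
    A = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1:]

def tail_probability(a0, a, mu, sig, y_crit):
    """P(Y > y_crit) for Y = a0 + a.X with independent normal inputs:
    the closed-form evaluation an explicit response surface enables."""
    a, mu, sig = (np.asarray(v, float) for v in (a, mu, sig))
    m = a0 + a @ mu
    s = np.sqrt(np.sum((a * sig) ** 2))
    z = (y_crit - m) / s
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

    The k perturbed runs are the only calls to the expensive model; all subsequent probability evaluations cost essentially nothing.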

  6. Integrated landscape/hydrologic modeling tool for semiarid watersheds

    Treesearch

    Mariano Hernandez; Scott N. Miller

    2000-01-01

    An integrated hydrologic modeling/watershed assessment tool is being developed to aid in determining the susceptibility of semiarid landscapes to natural and human-induced changes across a range of scales. Watershed processes are by definition spatially distributed and are highly variable through time, and this approach is designed to account for their spatial and...

  7. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coimbra, Carlos F. M.

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amounts of energy resources and reserves, as well as to provide operators a prediction of the generation fleet's behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1 year of intra-hour CPR forecasts for cluster centers, 1 year of smart re-forecasting CPR forecasts in real time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central-station (selected locations within the service region) PV generation.

  8. Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Harman, Richard

    2004-01-01

    There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.

  9. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
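
    The flavor of such a variable table (typed entries, standardized access functions, and integrity checks that cannot be bypassed) can be sketched in a few lines. This Python sketch only mirrors the C++ design described above; the usage counter is a crude stand-in for the paper's automatic reference counting.

```python
class Variable:
    """One typed, range-checked entry in a variable table; all access
    goes through get/set so integrity checks cannot be bypassed."""
    def __init__(self, name, vtype, value, lo=None, hi=None):
        self.name, self.vtype = name, vtype
        self.lo, self.hi = lo, hi
        self._refs = 0
        self.set(value)

    def set(self, value):
        if not isinstance(value, self.vtype):
            raise TypeError(f"{self.name}: expected {self.vtype.__name__}")
        if self.lo is not None and value < self.lo:
            raise ValueError(f"{self.name}: {value} < {self.lo}")
        if self.hi is not None and value > self.hi:
            raise ValueError(f"{self.name}: {value} > {self.hi}")
        self._value = value

    def get(self):
        self._refs += 1  # crude usage counter (stand-in for ref counting)
        return self._value

class VariableTable:
    """Groups variables and catalogs them by name."""
    def __init__(self):
        self._vars = {}
    def define(self, name, vtype, value, **kw):
        self._vars[name] = Variable(name, vtype, value, **kw)
    def get(self, name):
        return self._vars[name].get()
    def set(self, name, value):
        self._vars[name].set(value)
```

    Because each variable owns its own access methods, a lookup never has to re-validate type or range at the call site, echoing the design point about bypassing redundant table lookups while maintaining data integrity.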

  10. PBO Integrated Real-Time Observing Sites at Volcanic Sites

    NASA Astrophysics Data System (ADS)

    Mencin, D.; Jackson, M.; Borsa, A.; Feaux, K.; Smith, S.

    2009-05-01

    The Plate Boundary Observatory, an element of NSF's EarthScope program, has six integrated observatories in Yellowstone and four on Mt St Helens. These observatories consist of some combination of borehole strainmeters, borehole seismometers, GPS, tiltmeters, pore pressure, thermal measurements, and meteorological data. Data from all these instruments arrive at highly variable rates and in different formats, all synchronized to GPS time, and can cause significant congestion of precious communication resources. PBO has been experimenting with integrating these data streams, to both maximize efficiency and minimize latency, through the use of stream-combining software such as Antelope and of VPN technologies.

  11. Nonlinear stratospheric variability: multifractal de-trended fluctuation analysis and singularity spectra

    PubMed Central

    Domeisen, Daniela I. V.

    2016-01-01

    Characterizing the stratosphere as a turbulent system, temporal fluctuations often show different correlations for different time scales as well as intermittent behaviour that cannot be captured by a single scaling exponent. In this study, the different scaling laws in the long-term stratospheric variability are studied using multifractal de-trended fluctuation analysis (MF-DFA). The analysis is performed comparing four re-analysis products and different realizations of an idealized numerical model, isolating the role of topographic forcing and seasonal variability, as well as the absence of climate teleconnections and small-scale forcing. The Northern Hemisphere (NH) shows a transition of scaling exponents for time scales shorter than about 1 year, for which the variability is multifractal and scales in time with a power law corresponding to a red spectrum, to longer time scales, for which the variability is monofractal and scales in time with a power law corresponding to white noise. Southern Hemisphere (SH) variability also shows a transition at annual scales. The SH also shows a narrower dynamical range in multifractality than the NH, as seen in the generalized Hurst exponent and in the singularity spectra. The numerical integrations show that the models are able to reproduce the low-frequency variability but are not able to fully capture the shorter term variability of the stratosphere. PMID:27493560
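
    The MF-DFA procedure named above follows a standard recipe: build the cumulative profile, detrend within segments of size s, and form the q-th order fluctuation function F_q(s), whose log-log slope gives the generalized Hurst exponent h(q). A minimal sketch (segments taken from the start of the series only, for brevity):

```python
import numpy as np

def mfdfa(x, scales, qs=(-2.0, 2.0), order=1):
    """Minimal multifractal de-trended fluctuation analysis: returns the
    fluctuation functions F_q(s); the slope of log F_q vs log s estimates
    the generalized Hurst exponent h(q)."""
    x = np.asarray(x, float)
    profile = np.cumsum(x - x.mean())          # step 1: cumulative profile
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        rms = []
        t = np.arange(s)
        for v in range(len(profile) // s):     # step 2: detrend segments
            seg = profile[v * s:(v + 1) * s]
            fit = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - fit) ** 2)))
        rms = np.asarray(rms)
        for i, q in enumerate(qs):             # step 3: q-th order average
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms**2)))
            else:
                F[i, j] = np.mean(rms**q) ** (1.0 / q)
    return F
```

    A monofractal signal gives h(q) nearly independent of q, while a spread of h(q) across q, as reported for sub-annual Northern Hemisphere variability, indicates multifractality.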

  12. On Fitting a Multivariate Two-Part Latent Growth Model

    PubMed Central

    Xu, Shu; Blozis, Shelley A.; Vandewater, Elizabeth A.

    2017-01-01

    A 2-part latent growth model can be used to analyze semicontinuous data to simultaneously study change in the probability that an individual engages in a behavior, and if engaged, change in the behavior. This article uses a Monte Carlo (MC) integration algorithm to study the interrelationships between the growth factors of 2 variables measured longitudinally where each variable can follow a 2-part latent growth model. A SAS macro implementing Mplus is developed to estimate the model to take into account the sampling uncertainty of this simulation-based computational approach. A sample of time-use data is used to show how maximum likelihood estimates can be obtained using a rectangular numerical integration method and an MC integration method. PMID:29333054

  13. First-principles X-ray absorption dose calculation for time-dependent mass and optical density.

    PubMed

    Berejnov, Viatcheslav; Rubinstein, Boris; Melo, Lis G A; Hitchcock, Adam P

    2018-05-01

    A dose integral of time-dependent X-ray absorption under conditions of variable photon energy and changing sample mass is derived from first principles starting with the Beer-Lambert (BL) absorption model. For a given photon energy the BL dose integral D(e, t) reduces to the product of an effective time integral T(t) and a dose rate R(e). Two approximations of the time-dependent optical density, i.e. exponential A(t) = c + a exp(-bt) for first-order kinetics and hyperbolic A(t) = c + a/(b + t) for second-order kinetics, were considered for BL dose evaluation. For both models three methods of evaluating the effective time integral are considered: analytical integration, approximation by a function, and calculation of the asymptotic behaviour at large times. Data for poly(methyl methacrylate) and perfluorosulfonic acid polymers measured by scanning transmission soft X-ray microscopy were used to test the BL dose calculation. It was found that a previous method to calculate time-dependent dose underestimates the dose in mass loss situations, depending on the applied exposure time. All of these methods show that the BL dose is proportional to the exposure time, D(e, t) ≃ K(e)t.
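
The two optical-density models above have elementary closed-form time integrals that are easy to cross-check numerically. The sketch below assumes the effective time integral T(t) is simply the integral of A(t) from 0 to t; the full derivation in the paper may carry additional prefactors, so treat this as an illustration of the kinetic models only:

```python
import math

def A_exp(t, a, b, c):
    # first-order-kinetics optical density: A(t) = c + a*exp(-b t)
    return c + a * math.exp(-b * t)

def T_analytic_exp(t, a, b, c):
    # closed-form integral of A_exp from 0 to t
    return c * t + (a / b) * (1.0 - math.exp(-b * t))

def T_analytic_hyp(t, a, b, c):
    # closed-form integral of the hyperbolic model A(t) = c + a/(b + t)
    return c * t + a * math.log((b + t) / b)

def T_numeric(A, t, n=10000, **params):
    # trapezoidal-rule check of the analytic results
    h = t / n
    total = 0.5 * (A(0.0, **params) + A(t, **params))
    total += sum(A(i * h, **params) for i in range(1, n))
    return total * h
```

At large t both integrals grow like c*t, consistent with the asymptotic proportionality D(e, t) ≃ K(e)t stated in the abstract.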

  14. Comparison of Personal, Social and Academic Variables Related to University Drop-out and Persistence.

    PubMed

    Bernardo, Ana; Esteban, María; Fernández, Estrella; Cervero, Antonio; Tuero, Ellián; Solano, Paula

    2016-01-01

    Dropping out of university has serious consequences not only for the student who drops out but also for the institution and society as a whole. Although this phenomenon has been widely studied, there is a need for broader knowledge of the context in which it occurs. Yet research on the subject often focuses on variables that, although they affect drop-out rates, lie beyond a university's control. This makes it hard to come up with effective preventive measures. That is why a northern Spanish university has undertaken an ex post facto holistic research study on 1,311 freshmen (2008/9, 2009/10, and 2010/11 cohorts). The study falls within the framework of the ALFA-GUIA European Project and focuses on those drop-out factors where there is scope for taking remedial measures. This research explored the possible relationship between degree drop-out and different categories of variables: variables related to the educational stage prior to university entry (path to university entry and main reason for degree choice), variables related to integration and coexistence at university (social integration, academic integration, relationships with teachers/peers and value of the living environment), financial status, and performance during university studies (in terms of compliance with the program, time devoted to study, use of study techniques and class attendance). Descriptive, correlational and variance analyses were conducted to discover which of these variables really distinguish those students who drop out from their peers who complete their studies. Results highlight the influence of vocation as the main reason for degree choice, path to university entry, financial independence, social and academic adaptation, time devoted to study, use of study techniques and program compliance in the studied phenomenon.

  16. Exploring the Associations Among Nutrition, Science, and Mathematics Knowledge for an Integrative, Food-Based Curriculum.

    PubMed

    Stage, Virginia C; Kolasa, Kathryn M; Díaz, Sebastián R; Duffrin, Melani W

    2018-01-01

    Explore associations between nutrition, science, and mathematics knowledge to provide evidence that integrating food/nutrition education in the fourth-grade curriculum may support gains in academic knowledge. Secondary analysis of a quasi-experimental study. Sample included 438 students in 34 fourth-grade classrooms across North Carolina and Ohio; mean age 10 years; gender (I = 53.2% female; C = 51.6% female). Dependent variable = post-test nutrition knowledge; independent variables = baseline nutrition knowledge, and post-test science and mathematics knowledge. Analyses included descriptive statistics and multiple linear regression. The hypothesized model predicted post-nutrition knowledge (F(437) = 149.4, p < .001; adjusted R² = .51). All independent variables were significant predictors with positive association. Science and mathematics knowledge were predictive of nutrition knowledge, indicating use of an integrative science and mathematics curriculum to improve academic knowledge may also simultaneously improve nutrition knowledge among fourth-grade students. Teachers can benefit from integration by meeting multiple academic standards, efficiently using limited classroom time, and increasing nutrition education provided in the classroom. © 2018, American School Health Association.

  17. Using Conventional Hydropower to Help Alleviate Variable Resource Grid Integration Challenges in the Western U.S.

    NASA Astrophysics Data System (ADS)

    Veselka, T. D.; Poch, L.

    2011-12-01

    Integrating high penetration levels of wind and solar energy resources into the power grid is a formidable challenge in virtually all interconnected systems due to the fact that supply and demand must remain in balance at all times. Since large-scale electricity storage is currently not economically viable, generation must exactly match electricity demand plus energy losses in the system as time unfolds. Therefore, as generation from variable resources such as wind and solar fluctuates, production from generating resources that are easier to control and dispatch needs to compensate for these fluctuations while at the same time responding to both instantaneous changes in load and following daily load profiles. The grid in the Western U.S. is not exempt from grid integration challenges associated with variable resources. However, one advantage that the power system in the Western U.S. has over many other regional power systems is that its footprint contains an abundance of hydropower resources. Hydropower plants, especially those that have reservoir water storage, can physically change electricity production levels very quickly, both via a dispatcher and through automatic generation control. Since hydropower response time is typically much faster than other dispatchable resources such as steam or gas turbines, it is well suited to alleviate variable resource grid integration issues. However, despite an abundance of hydropower resources and the current low penetration of variable resources in the Western U.S., problems have already surfaced. This spring in the Pacific Northwest, wetter-than-normal hydropower conditions in combination with transmission constraints resulted in controversial wind resource shedding. This action was taken since water spilling would have increased dissolved oxygen levels downstream of dams, thereby significantly degrading fish habitats.
The extent to which hydropower resources will be able to contribute toward a stable and reliable Western grid is currently being studied. Typically these studies consider the inherent flexibility of hydropower technologies, but tend to fall short on details regarding grid operations, institutional arrangements, and hydropower environmental regulations. This presentation will focus on an analysis that Argonne National Laboratory is conducting in collaboration with the Western Area Power Administration (Western). The analysis evaluates the extent to which Western's hydropower resources may help with grid integration challenges via a proposed Energy Imbalance Market. This market encompasses most of the Western Electricity Coordinating Council footprint. It changes grid operations such that the real-time dispatch would be, in part, based on a 5-minute electricity market. The analysis includes many factors such as site-specific environmental considerations at each of its hydropower facilities, long-term firm purchase agreements, and hydropower operating objectives and goals. Results of the analysis indicate that site-specific details significantly affect the ability of hydropower plants to respond to grid needs in a future with a high penetration of variable resources.

  18. Wheels within Wheels: Hamiltonian Dynamics as a Hierarchy of Action Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, Rory J.; Bellan, Paul M.

    2010-09-17

    In systems where one coordinate undergoes periodic oscillation, the net displacement in any other coordinate over a single period is shown to be given by differentiation of the action integral associated with the oscillating coordinate. This result is then used to demonstrate that the action integral acts as a Hamiltonian for slow coordinates, provided time is scaled to the 'tick time' of the oscillating coordinate. Numerous examples, including charged particle drifts and relativistic motion, are supplied to illustrate the varied application of these results.

  19. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

    We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e. median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes defined by the asymptotic approach to zero of the distance decay function was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced for the first time a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
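
A logistic decay weight parameterized by the median and SD of population-to-provider distances can be sketched as below. This is a plausible reading of the abstract, not necessarily the exact functional form of the published iFCA method; here the inflection point sits at the median distance and the steepness scales with the SD:

```python
import math

def logistic_decay(d, median_d, sd_d):
    """Hypothetical variable logistic distance decay weight.

    Weight is 0.5 at the median distance, near 1 well inside it, and
    decays asymptotically toward 0 beyond it -- which is what creates
    an 'effective' variable catchment size without a hard cutoff.
    """
    return 1.0 / (1.0 + math.exp((d - median_d) / sd_d))
```

Because the parameters come from each location's own distance distribution, two population locations with different provider distributions automatically get differently shaped decay curves.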

  20. Combining information from 3 anatomic regions in the diagnosis of glaucoma with time-domain optical coherence tomography.

    PubMed

    Wang, Mingwu; Lu, Ake Tzu-Hui; Varma, Rohit; Schuman, Joel S; Greenfield, David S; Huang, David

    2014-03-01

    To improve the diagnosis of glaucoma by combining time-domain optical coherence tomography (TD-OCT) measurements of the optic disc, circumpapillary retinal nerve fiber layer (RNFL), and macular retinal thickness. Ninety-six age-matched normal and 96 perimetric glaucoma participants were included in this observational, cross-sectional study. Or-logic, support vector machine, relevance vector machine, and linear discrimination function were used to analyze the performances of combined TD-OCT diagnostic variables. The area under the receiver-operating curve (AROC) was used to evaluate the diagnostic accuracy and to compare the diagnostic performance of single and combined anatomic variables. The best RNFL thickness variables were the inferior (AROC=0.900), overall (AROC=0.892), and superior quadrants (AROC=0.850). The best optic disc variables were horizontal integrated rim width (AROC=0.909), vertical integrated rim area (AROC=0.908), and cup/disc vertical ratio (AROC=0.890). All macular retinal thickness variables had AROCs of 0.829 or less. Combining the top 3 RNFL and optic disc variables in optimizing glaucoma diagnosis, support vector machine had the highest AROC, 0.954, followed by or-logic (AROC=0.946), linear discrimination function (AROC=0.946), and relevance vector machine (AROC=0.943). All combination diagnostic variables had significantly larger AROCs than any single diagnostic variable. There are no significant differences among the combination diagnostic indices. With TD-OCT, RNFL and optic disc variables had better diagnostic accuracy than macular retinal variables. Combining top RNFL and optic disc variables significantly improved diagnostic performance. Clinically, or-logic classification was the most practical analytical tool with sufficient accuracy to diagnose early glaucoma.
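
The AROC values quoted above can be computed nonparametrically as a normalized Mann-Whitney statistic, and the or-logic combination is an "any variable abnormal" rule. The sketch below assumes scores where higher values indicate disease; for thickness-type variables (where thinner is worse) the sign would need flipping. Cutoffs and data are illustrative, not from the study:

```python
def aroc(neg_scores, pos_scores):
    """Area under the ROC curve via pairwise comparison (Mann-Whitney U).

    Counts the fraction of (negative, positive) pairs the score ranks
    correctly; ties contribute one half.
    """
    wins = 0.0
    for n in neg_scores:
        for p in pos_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(neg_scores) * len(pos_scores))

def or_logic(sample, cutoffs):
    # classify as diseased if ANY variable exceeds its cutoff
    return any(v > c for v, c in zip(sample, cutoffs))
```

Pairwise AUC is O(n*m) but transparent; rank-based implementations are preferable for large samples.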

  1. An Integrated Probabilistic-Fuzzy Assessment of Uncertainty Associated with Human Health Risk to MSW Landfill Leachate Contamination

    NASA Astrophysics Data System (ADS)

    Mishra, H.; Karmakar, S.; Kumar, R.

    2016-12-01

    Risk assessment is no longer simple when it involves multiple uncertain variables. Uncertainty in risk assessment results mainly from (1) lack of knowledge of the input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for simultaneous treatment of the random and non-random uncertainties associated with the input parameters of a health risk model. LandSim 2.5, a landfill simulator, has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. The LandSim-simulated concentrations of six heavy metals in groundwater have then been used in the health risk model. The water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables respectively, and uncertainty in non-carcinogenic human health risk is then estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the summation of hazard quotients, of the heavy metals Co, Cu, Mn, Ni, Zn and Fe for the male and female populations has been quantified and found to be high (HI>1) for all the considered time horizons, which evidently shows the possibility of adverse health effects on the population residing near the Turbhe landfill.
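
The hazard-quotient calculation underlying the Hazard Index, and the mixing of sampled (probabilistic) inputs with interval-valued (fuzzy, at a fixed alpha-cut) inputs, can be sketched as follows. Every parameter value, distribution, and interval bound here is an illustrative placeholder, not a value from the study:

```python
import random

def hazard_quotient(c, ir, ef, ed, bw, at, rfd):
    # standard non-carcinogenic hazard quotient: chronic daily intake / RfD
    # c: concentration, ir: intake rate, ef: exposure frequency,
    # ed: exposure duration, bw: body weight, at: averaging time
    return (c * ir * ef * ed) / (bw * at * rfd)

def mc_hazard_interval(n=10000, seed=42):
    """Monte Carlo loop returning a (lower, upper) mean-HQ interval.

    Probabilistic variables (concentration, body weight) are sampled;
    fuzzy variables are held at the bounds of a single alpha-cut
    interval, yielding an interval-valued risk estimate per draw.
    """
    rng = random.Random(seed)
    lo = hi = 0.0
    for _ in range(n):
        c = rng.uniform(0.2, 0.8)     # mg/L, probabilistic (illustrative)
        bw = rng.gauss(70.0, 10.0)    # kg, probabilistic (illustrative)
        # alpha-cut bounds for intake rate, frequency, duration
        lo += hazard_quotient(c, 1.5, 300, 20, bw, 365 * 20, 0.02)
        hi += hazard_quotient(c, 2.5, 365, 30, bw, 365 * 30, 0.02)
    return lo / n, hi / n
```

Summing such quotients over the six metals would give an interval-valued Hazard Index, whose upper bound exceeding 1 flags possible adverse effects.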

  2. The finite element method scheme for a solution of an evolution variational inequality with a nonlocal space operator

    NASA Astrophysics Data System (ADS)

    Glazyrina, O. V.; Pavlova, M. F.

    2016-11-01

    We consider a parabolic inequality whose space operator is monotone with respect to the gradient and depends on an integral (over the space variables) characteristic of the solution. We construct a two-layer difference scheme for this problem using the penalty method, semidiscretization with respect to the time variable, and the finite element method (FEM) with respect to the space variables. We prove convergence of the constructed method.

  3. Grid Modeling Tools | Grid Modernization | NREL

    Science.gov Websites

    integrates primary frequency response (turbine governor control) with secondary frequency response (automatic generation control). It simulates the power system dynamic response over the full time spectrum with variable time resolution. The model places special emphasis on electric power systems with high penetrations of renewables.

  4. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii_lc_extract) and from calculating the pixel-illuminated fraction (ii_light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii_light produces meaningful results, although the overall variance of the light curves is not preserved.

  5. Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT

    PubMed Central

    Wimmer, Klaus; Compte, Albert; Roxin, Alex; Peixoto, Diogo; Renart, Alfonso; de la Rocha, Jaime

    2015-01-01

    Neuronal variability in sensory cortex predicts perceptual decisions. This relationship, termed choice probability (CP), can arise from sensory variability biasing behaviour and from top-down signals reflecting behaviour. To investigate the interaction of these mechanisms during the decision-making process, we use a hierarchical network model composed of reciprocally connected sensory and integration circuits. Consistent with monkey behaviour in a fixed-duration motion discrimination task, the model integrates sensory evidence transiently, giving rise to a decaying bottom-up CP component. However, the dynamics of the hierarchical loop recruits a concurrently rising top-down component, resulting in sustained CP. We compute the CP time-course of neurons in the middle temporal area (MT) and find an early transient component and a separate late contribution reflecting decision build-up. The stability of individual CPs and the dynamics of noise correlations further support this decomposition. Our model provides a unified understanding of the circuit dynamics linking neural and behavioural variability. PMID:25649611

  6. Comparative study of flare control laws. [optimal control of b-737 aircraft approach and landing

    NASA Technical Reports Server (NTRS)

    Nadkarni, A. A.; Breedlove, W. J., Jr.

    1979-01-01

    A digital 3-D automatic control law was developed to achieve an optimal transition of a B-737 aircraft between various initial glide slope conditions and the desired final touchdown condition. A discrete, time-invariant, optimal, closed-loop control law, presented for a linear regulator problem, was extended to include a system acted upon by a constant disturbance. Two forms of control laws were derived to solve this problem. One method utilized feedback of appropriately defined integral states augmented with the original system equations. The second method formulated the problem as a control-variable constraint, and the control variables were augmented with the original system. The control-variable-constraint control law yielded better performance than the feedback control law with the chosen integral states.

  7. IGR J16318-4848: 7 Years of INTEGRAL Observations

    NASA Technical Reports Server (NTRS)

    Barragan, Laura; Wilms, Joern; Kreykenbohm, Ingo; Hanke, Manfred; Fuerst, Felix; Pottschmidt, Katja; Rothschild, Richard

    2011-01-01

    Since the discovery of IGR J16318-4848 in 2003 January, INTEGRAL has accumulated more than 5.8 Ms of IBIS/ISGRI data. We present the first extensive analysis of the archival INTEGRAL data (IBIS/ISGRI, and JEM-X when available) for this source, together with the observations carried out by XMM-Newton (twice in 2003, and twice in 2004) and Suzaku (2006). The source is very variable in the long term, with periods of low activity, where the source is barely detected, and flares with a luminosity approximately 10 times greater than its average value (5.4 cts/s). IGR J16318-4848 is an HMXB containing a sgB[e] star and a compact object (most probably a neutron star) deeply embedded in the stellar wind of the mass donor. The variability of the source (also in the short term) can be ascribed to the wind of the optical star being very clumpy. We study the variation of the spectral parameters on time scales of INTEGRAL revolutions. The photoelectric absorption is, with NH around 10(exp 24)/square cm, unusually high. During brighter phases the strong K-alpha iron line known from the XMM-Newton and Suzaku observations is also detectable with the JEM-X instrument.

  8. Automated Real-Time Behavioral and Physiological Data Acquisition and Display Integrated with Stimulus Presentation for fMRI

    PubMed Central

    Voyvodic, James T.; Glover, Gary H.; Greve, Douglas; Gadde, Syam

    2011-01-01

    Functional magnetic resonance imaging (fMRI) is based on correlating blood oxygen-level dependent (BOLD) signal fluctuations in the brain with other time-varying signals. Although the most common reference for correlation is the timing of a behavioral task performed during the scan, many other behavioral and physiological variables can also influence fMRI signals. Variations in cardiac and respiratory functions in particular are known to contribute significant BOLD signal fluctuations. Variables such as skin conduction, eye movements, and other measures that may be relevant to task performance can also be correlated with BOLD signals and can therefore be used in image analysis to differentiate multiple components in complex brain activity signals. Combining real-time recording and data management of multiple behavioral and physiological signals in a way that can be routinely used with any task stimulus paradigm is a non-trivial software design problem. Here we discuss software methods that allow users control of paradigm-specific audio–visual or other task stimuli combined with automated simultaneous recording of multi-channel behavioral and physiological response variables, all synchronized with sub-millisecond temporal accuracy. We also discuss the implementation and importance of real-time display feedback to ensure data quality of all recorded variables. Finally, we discuss standards and formats for storage of temporal covariate data and its integration into fMRI image analysis. These neuroinformatics methods have been adopted for behavioral task control at all sites in the Functional Biomedical Informatics Research Network (FBIRN) multi-center fMRI study. PMID:22232596

  9. Integrating space and time: A case for phenological context in grazing studies and management

    USDA-ARS?s Scientific Manuscript database

    In water-limited landscapes, patterns in primary production are highly variable across space and time. Livestock grazing is a common agricultural practice worldwide and a concern is localized overuse of specific pasture resources that can exacerbate grass losses and soil erosion. On a research ranch...

  10. A recurrent network mechanism of time integration in perceptual decisions.

    PubMed

    Wong, Kong-Fatt; Wang, Xiao-Jing

    2006-01-25

    Recent physiological studies using behaving monkeys revealed that, in a two-alternative forced-choice visual motion discrimination task, reaction time was correlated with ramping of spike activity of lateral intraparietal cortical neurons. The ramping activity appears to reflect temporal accumulation, on a timescale of hundreds of milliseconds, of sensory evidence before a decision is reached. To elucidate the cellular and circuit basis of such integration times, we developed and investigated a simplified two-variable version of a biophysically realistic cortical network model of decision making. In this model, slow time integration can be achieved robustly if excitatory reverberation is primarily mediated by NMDA receptors; our model with only fast AMPA receptors at recurrent synapses produces decision times that are not comparable with experimental observations. Moreover, we found two distinct modes of network behavior, in which decision computation by winner-take-all competition is instantiated with or without attractor states for working memory. Decision process is closely linked to the local dynamics, in the "decision space" of the system, in the vicinity of an unstable saddle steady state that separates the basins of attraction for the two alternative choices. This picture provides a rigorous and quantitative explanation for the dependence of performance and response time on the degree of task difficulty, and the reason for which reaction times are longer in error trials than in correct trials as observed in the monkey experiment. Our reduced two-variable neural model offers a simple yet biophysically plausible framework for studying perceptual decision making in general.
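
The competition motif at the heart of the reduced model above (two slow variables that excite themselves and inhibit each other, with a saddle separating the two choice attractors) can be caricatured in a few lines. This toy is NOT the published biophysically derived two-variable reduction; the sigmoidal rate function, weights, and time constant below are arbitrary illustrative choices that merely reproduce winner-take-all behaviour:

```python
import math

def simulate_decision(i1, i2, steps=4000, dt=0.5):
    """Forward-Euler integration of a toy two-variable competition circuit.

    s1, s2 are slow gating-like variables (cf. the NMDA-dominated slow
    reverberation in the abstract); self-excitation plus cross-inhibition
    drives the state toward one of two attractors depending on the
    inputs i1, i2.
    """
    def f(x):
        # sigmoidal rate function (hypothetical parameters)
        return 1.0 / (1.0 + math.exp(-(x - 1.0) / 0.2))

    s1 = s2 = 0.1
    tau = 100.0  # slow integration timescale (arbitrary units)
    w_self, w_cross = 1.2, 1.0
    for _ in range(steps):
        ds1 = (-s1 + f(i1 + w_self * s1 - w_cross * s2)) / tau
        ds2 = (-s2 + f(i2 + w_self * s2 - w_cross * s1)) / tau
        s1 += ds1 * dt
        s2 += ds2 * dt
    return s1, s2
```

With a modest input bias the winning unit saturates and the loser is suppressed; making the inputs more similar (harder "task difficulty") slows the divergence away from the saddle, which is the qualitative mechanism behind the reaction-time effects described in the abstract.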

  11. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on ordering of the data, as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
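
The first two stages of such a pipeline (imputation, then variable selection) can be sketched with stdlib Python. This is a generic illustration, not the paper's method: mean imputation stands in for the five candidate imputation methods, correlation ranking stands in for the factor-analysis-based selection, and the Random Forest stage is omitted (it would require an external library such as scikit-learn):

```python
def mean_impute(rows):
    # replace None entries with their column mean
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             sum(1 for v in c if v is not None) for c in cols]
    return [[m if v is None else v for v, m in zip(r, means)] for r in rows]

def pearson(xs, ys):
    # Pearson correlation; returns 0.0 for a constant column
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    if dx == 0 or dy == 0:
        return 0.0
    return num / (dx * dy)

def select_top_k(rows, target, k):
    # rank candidate predictor columns by |correlation| with the target
    cols = list(zip(*rows))
    ranked = sorted(range(len(cols)),
                    key=lambda i: -abs(pearson(list(cols[i]), target)))
    return ranked[:k]
```

Any regressor can then be trained on the selected columns; the paper's comparison is about whether this selection step improves each downstream forecaster.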

  12. Information transmission and detection thresholds in the vestibular nuclei: single neurons vs. population encoding

    PubMed Central

    Massot, Corentin; Chacron, Maurice J.

    2011-01-01

    Understanding how sensory neurons transmit information about relevant stimuli remains a major goal in neuroscience. Of particular relevance are the roles of neural variability and spike timing in neural coding. Peripheral vestibular afferents display differential variability that is correlated with the importance of spike timing; regular afferents display little variability and use a timing code to transmit information about sensory input. Irregular afferents, conversely, display greater variability and instead use a rate code. We studied how central neurons within the vestibular nuclei integrate information from both afferent classes by recording from a group of neurons termed vestibular only (VO) that are known to make contributions to vestibulospinal reflexes and project to higher-order centers. We found that, although individual central neurons had sensitivities that were greater than or equal to those of individual afferents, they transmitted less information. In addition, their velocity detection thresholds were significantly greater than those of individual afferents. This is because VO neurons display greater variability, which is detrimental to information transmission and signal detection. Combining activities from multiple VO neurons increased information transmission. However, the information rates were still much lower than those of equivalent afferent populations. Furthermore, combining responses from multiple VO neurons led to lower velocity detection threshold values approaching those measured from behavior (∼2.5 vs. 0.5–1°/s). Our results suggest that the detailed time course of vestibular stimuli encoded by afferents is not transmitted by VO neurons. Instead, they suggest that higher vestibular pathways must integrate information from central vestibular neuron populations to give rise to behaviorally observed detection thresholds. PMID:21307329

  13. A new interpolation method for gridded extensive variables with application in Lagrangian transport and dispersion models

    NASA Astrophysics Data System (ADS)

    Hittmeir, Sabine; Philipp, Anne; Seibert, Petra

    2017-04-01

    In discretised form, an extensive variable usually represents an integral over a 3-dimensional (x,y,z) grid cell. In the case of vertical fluxes, gridded values represent integrals over a horizontal (x,y) grid face. In meteorological models, fluxes (precipitation, turbulent fluxes, etc.) are usually written out as temporally integrated values, thus effectively forming 3D (x,y,t) integrals. Lagrangian transport models require interpolation of all relevant variables towards the location in 4D space of each of the computational particles. Trivial interpolation algorithms usually implicitly assume the integral value to be a point value valid at the grid centre. If the integral value were reconstructed from the interpolated point values, it would in general not be correct. If nonlinear interpolation methods are used, non-negativity cannot easily be ensured. This problem became obvious with respect to the interpolation of precipitation for the calculation of wet deposition in FLEXPART (http://flexpart.eu), which uses ECMWF model output or other gridded input data. The presently implemented method consists of a special preprocessing in the input preparation software and subsequent linear interpolation in the model. The interpolated values are positive but the criterion of cell-wise conservation of the integral property is violated; it is also not very accurate as it smoothes the field. A new interpolation algorithm was developed which introduces additional supporting grid points in each time interval, between which linear interpolation is applied in FLEXPART. It preserves the integral precipitation in each time interval, guarantees the continuity of the time series, and maintains non-negativity. The function values of the remapping algorithm at these subgrid points constitute the degrees of freedom, which can be prescribed in various ways. Combining the advantages of different approaches leads to a final algorithm respecting all the required conditions.
To improve the monotonicity behaviour we additionally derived a filter to restrict over- or undershooting. At the current stage, the algorithm is meant primarily for the temporal dimension. It can also be applied with operator-splitting to include the two horizontal dimensions. An extension to 2D appears feasible, while a fully 3D version would most likely not justify the effort compared to the operator-splitting approach.
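
    As an illustration of the cell-wise conservation requirement, consider one hypothetical variant of such a scheme: fix the interval endpoint values (shared with neighbouring intervals for continuity) and choose a single midpoint node so that the interval integral is reproduced exactly. This is a sketch of the idea only, not the algorithm actually implemented in FLEXPART.

```python
def midpoint_node(I, f0, f1, h):
    """Return the midpoint value fm such that the piecewise-linear
    profile f0 -> fm -> f1 over an interval of length h reproduces
    the prescribed interval integral I exactly.

    Two trapezoids of width h/2:
        I = (h/2)*(f0 + fm)/2 + (h/2)*(fm + f1)/2 = (h/4)*(f0 + 2*fm + f1)
    """
    return (4.0 * I / h - f0 - f1) / 2.0

# Hypothetical example: 2 mm of precipitation over a 1-hour interval with
# zero endpoint values (dry neighbours) requires a midpoint peak of 4 mm/h.
fm = midpoint_node(2.0, 0.0, 0.0, 1.0)
recovered = (1.0 / 4.0) * (0.0 + 2.0 * fm + 0.0)   # reconstructed integral
```

    Non-negativity then requires fm >= 0, which constrains the admissible endpoint values; handling that constraint is where the different prescriptions for the degrees of freedom come in.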

  14. Diagnosis of extratropical variability in seasonal integrations of the ECMWF model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferranti, L.; Molteni, F.; Brankovic, C.

    1994-06-01

Properties of the general circulation simulated by the ECMWF model are discussed using a set of seasonal integrations at T63 resolution. For each season over the period of 5 years, 1986-1990, three integrations initiated on consecutive days were run with prescribed observed sea surface temperature (SST). This paper presents a series of diagnostics of extratropical variability in the model, with particular emphasis on the northern winter. Time-filtered maps of variability indicate that in this season there is insufficient storm track activity penetrating into the Eurasian continent. Related to this, the maxima of lower-frequency variability for northern spring are more realistic. Blocking is defined objectively in terms of the geostrophic wind at 500 mb. Consistent with the low-frequency transience, in the Euro-Atlantic sector the position of maximum blocking in the model is displaced eastward. The composite structure of blocks over the Pacific is realistic, though their frequency is severely underestimated at all times of year. Shortcomings in the simulated wintertime general circulation were also revealed by studying the projection of 5-day mean fields onto empirical orthogonal functions (EOFs) of the observed flow. The largest differences were apparent for statistics of EOFs of the zonal mean flow. Analysis of weather regime activity, defined from the EOFs, suggested that regimes with positive PNA index were overpopulated, while the negative PNA regimes were underpopulated. A further comparison between observed and modeled low-frequency variance revealed that underestimation of low-frequency variability occurs along the same axes that explain most of the spatial structure of the error in the mean field, suggesting a common dynamical origin for these two aspects of the systematic error. 17 refs., 17 figs., 4 tabs.
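
    The EOF projection used in this kind of diagnosis can be sketched generically with numpy's SVD on synthetic anomaly fields; this is an illustration of the technique, not the ECMWF diagnostic code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 100 "5-day mean fields" on a 12-point grid, dominated
# by one spatial pattern plus weak noise.
pattern = np.sin(np.linspace(0, np.pi, 12))
amplitude = rng.normal(size=100)
fields = np.outer(amplitude, pattern) + 0.1 * rng.normal(size=(100, 12))

anomalies = fields - fields.mean(axis=0)          # remove the time mean
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = vt                                         # rows are spatial EOFs
variance_fraction = s**2 / np.sum(s**2)           # explained variance
pc1 = anomalies @ eofs[0]                         # projection onto EOF 1
corr = np.corrcoef(pc1, amplitude)[0, 1]          # recovers the forcing
```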

  15. Influence of Microphysical Variability on Stochastic Condensation in Turbulent Clouds

    NASA Astrophysics Data System (ADS)

    Desai, N.; Chandrakar, K. K.; Chang, K.; Glienke, S.; Cantrell, W. H.; Fugal, J. P.; Shaw, R. A.

    2017-12-01

We investigate the influence of variability in droplet number concentration and radius on the evolution of cloud droplet size distributions. Measurements are made on the centimeter scale using digital inline holography, both in a controlled laboratory setting and in the field using HOLODEC measurements from CSET. We created steady state cloud conditions in the laboratory Pi Chamber, in which a turbulent cloud can be sustained for long periods of time. Using holographic imaging, we directly observe the variations in local number concentration and droplet size distribution and, thereby, the integral radius. We interpret the measurements in the context of stochastic condensation theory to determine how fluctuations in integral radius contribute to droplet growth. We find that the variability in integral radius is primarily driven by variations in the droplet number concentration and not the droplet radius. This variability does not contribute significantly to the mean droplet growth rate, but contributes significantly to the rate of increase of the size distribution width. We compare these results with in-situ measurements and find evidence for microphysical signatures of stochastic condensation. The results suggest that supersaturation fluctuations lead to broader size distributions and allow droplets to reach the collision-coalescence stage.
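
    The stated dominance of number-concentration fluctuations can be illustrated with a first-order (delta-method) variance decomposition of the integral radius N * r-bar on synthetic samples; the parameter values below are illustrative, not the measured ones.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic holographic sample volumes: droplet counts fluctuate strongly,
# the mean radius only weakly (the regime reported in the abstract).
n_samples = 5000
counts = rng.poisson(100, size=n_samples)              # droplets per volume
mean_radius = 10.0 + 0.1 * rng.normal(size=n_samples)  # micrometres

integral_radius = counts * mean_radius                 # N * r-bar per volume

# First-order variance decomposition of Var(N * r-bar):
var_from_counts = np.mean(mean_radius) ** 2 * np.var(counts)
var_from_radius = np.mean(counts) ** 2 * np.var(mean_radius)
ratio = (var_from_counts + var_from_radius) / np.var(integral_radius)
```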

  16. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  17. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE PAGES

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    2017-10-17

    For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  18. On solitons: the biomolecular nonlinear transmission line models with constant and time variable coefficients

    NASA Astrophysics Data System (ADS)

    Raza, Nauman; Murtaza, Isma Ghulam; Sial, Sultan; Younis, Muhammad

    2018-07-01

The article studies the dynamics of solitons in an electrical microtubule model, which describes the propagation of waves in a nonlinear dynamical system. Microtubules are not only a passive support structure of a cell; they are also highly dynamic structures involved in cell motility, intracellular transport and signaling. The underlying model has been considered with both constant and time-dependent coefficients. The solitary wave ansatz has been applied successfully to extract these solitons. The corresponding integrability criteria, also known as constraint conditions, naturally emerge from the analysis of these models.

  19. Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology

    DTIC Science & Technology

    1996-01-01

feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g(i). A data packet or packet is data...loop depth, g(i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * ( g(k) * (N+I) + k-1

  20. The dynamics of multimodal integration: The averaging diffusion model.

    PubMed

Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-12-01

    We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
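
    The core contrast between the Averaging Diffusion Model and a classic summation diffusion can be shown in a toy simulation; the drift and noise values below are arbitrary, chosen only to make the qualitative difference visible.

```python
import numpy as np

rng = np.random.default_rng(2)
# Noisy evidence samples with a small positive drift.
n_trials, n_steps = 2000, 400
samples = 0.1 + rng.normal(size=(n_trials, n_steps))

summed = np.cumsum(samples, axis=1)               # classic diffusion: sum
averaged = summed / np.arange(1, n_steps + 1)     # averaging diffusion: mean

# The mean-based decision variable converges toward the drift,
# while the sum-based variable keeps spreading over time.
spread_sum_late = np.std(summed[:, -1])
spread_avg_late = np.std(averaged[:, -1])
mean_avg_late = np.mean(averaged[:, -1])
```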

  1. Incorporation of feedback during beat synchronization is an index of neural maturation and reading skills.

    PubMed

    Woodruff Carr, Kali; Fitzroy, Ahren B; Tierney, Adam; White-Schwoch, Travis; Kraus, Nina

    2017-01-01

    Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization during the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Within-and among-year germination in Sonoran Desert winter annuals: bet hedging and predictive germination in a variable environment.

    PubMed

    Gremer, Jennifer R; Kimball, Sarah; Venable, D Lawrence

    2016-10-01

    In variable environments, organisms must have strategies to ensure fitness as conditions change. For plants, germination can time emergence with favourable conditions for later growth and reproduction (predictive germination), spread the risk of unfavourable conditions (bet hedging) or both (integrated strategies). Here we explored the adaptive value of within- and among-year germination timing for 12 species of Sonoran Desert winter annual plants. We parameterised models with long-term demographic data to predict optimal germination fractions and compared them to observed germination. At both temporal scales we found that bet hedging is beneficial and that predicted optimal strategies corresponded well with observed germination. We also found substantial fitness benefits to varying germination timing, suggesting some degree of predictive germination in nature. However, predictive germination was imperfect, calling for some degree of bet hedging. Together, our results suggest that desert winter annuals have integrated strategies combining both predictive plasticity and bet hedging. © 2016 John Wiley & Sons Ltd/CNRS.
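
    The optimal-germination-fraction calculation can be illustrated with a classic Cohen-style bet-hedging model that maximizes long-term (geometric-mean) fitness; the yields and seed-survival rate below are hypothetical, not the authors' parameterisation from the demographic data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical yearly per-seed yield: mostly poor years, occasionally
# very good ones (a variable desert environment in caricature).
years = rng.choice([0.1, 10.0], size=20000, p=[0.7, 0.3])
seed_survival = 0.8                    # dormant-seed survival rate

def long_term_growth(g):
    """Geometric-mean fitness for germination fraction g."""
    return np.mean(np.log(g * years + (1.0 - g) * seed_survival))

fractions = np.linspace(0.01, 0.99, 99)
optimal_g = fractions[np.argmax([long_term_growth(g) for g in fractions])]
```

    An intermediate optimum (neither 0 nor 1) is the bet-hedging signature: keeping a dormant seed fraction buys insurance against the bad years.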

  3. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and a smooth control process. A model of nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses, such as small overshoot, fast settling time, short rise time and small steady state error. Moreover, it can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization algorithms constitute an effective and promising tuning method for complex greenhouse production. PMID:22163927
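
    The ITSE criterion mentioned above is simply the integral of t * e(t)^2. A minimal sketch of evaluating it for a PID loop on a toy first-order plant (not the paper's greenhouse model; plant and gains are illustrative):

```python
def itse_for_pid(kp, ki, kd, dt=0.001, t_end=5.0):
    """Integrated time square error, integral of t*e(t)^2 dt, for a PID
    controller driving a toy first-order plant dy/dt = -y + u toward a
    unit step setpoint, using forward-Euler integration."""
    n = int(t_end / dt)
    y, integral, prev_e, itse = 0.0, 0.0, 1.0, 0.0
    for k in range(n):
        t = k * dt
        e = 1.0 - y                          # setpoint = 1
        integral += e * dt
        derivative = (e - prev_e) / dt
        u = kp * e + ki * integral + kd * derivative
        y += dt * (-y + u)                   # first-order plant step
        itse += t * e * e * dt
        prev_e = e
    return itse

# On this plant, adding integral action removes the steady-state error
# and sharply reduces ITSE relative to a weak proportional-only loop.
loose = itse_for_pid(1.0, 0.0, 0.0)
tight = itse_for_pid(10.0, 1.0, 0.0)
```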

  4. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis, such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on logistics resource demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to this study, such as MTBF, do not follow a linear path; they normally fall within the distribution curves discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, translates into tremendous cost savings and improved availability all around.
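
    The report does not name a specific failure-density family here; as one common assumption for time-dependent failure behaviour, a Weibull distribution with shape > 1 models wear-out (a failure rate that rises with time), and its MTBF follows from the gamma function. An illustrative sketch:

```python
import math
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical wear-out behaviour: shape > 1 means the failure rate
# increases with time, so a constant-MTBF assumption would mislead.
shape, scale = 2.0, 1000.0                  # shape dimensionless, scale in hours
failures = scale * rng.weibull(shape, size=100_000)

mtbf_empirical = failures.mean()
# Analytic Weibull mean: scale * Gamma(1 + 1/shape)
mtbf_analytic = scale * math.gamma(1.0 + 1.0 / shape)
```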

  5. Optimizing Utilization of Detectors

    DTIC Science & Technology

    2016-03-01

provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset. This optimized expected time... allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by involving an integral and...determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables. This

  6. Spatial gradients and multidimensional dynamics in a neural integrator circuit

    PubMed Central

    Miri, Andrew; Daie, Kayvon; Arrenberg, Aristides B.; Baier, Herwig; Aksay, Emre; Tank, David W.

    2011-01-01

    In a neural integrator, the variability and topographical organization of neuronal firing rate persistence can provide information about the circuit’s functional architecture. Here we use optical recording to measure the time constant of decay of persistent firing (“persistence time”) across a population of neurons comprising the larval zebrafish oculomotor velocity-to-position neural integrator. We find extensive persistence time variation (10-fold; coefficients of variation 0.58–1.20) across cells within individual larvae. We also find that the similarity in firing between two neurons decreased as the distance between them increased and that a gradient in persistence time was mapped along the rostrocaudal and dorsoventral axes. This topography is consistent with the emergence of persistence time heterogeneity from a circuit architecture in which nearby neurons are more strongly interconnected than distant ones. Collectively, our results can be accounted for by integrator circuit models characterized by multiple dimensions of slow firing rate dynamics. PMID:21857656
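
    The persistence time described above is the decay constant of r(t) = r0 * exp(-t / tau). A generic sketch of estimating it from a firing-rate trace by log-linear least squares (synthetic data, not the zebrafish recordings):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic persistent firing: exponential decay with multiplicative noise.
tau_true, r0 = 2.0, 50.0                   # seconds, Hz (illustrative)
t = np.linspace(0.0, 4.0, 200)
rate = r0 * np.exp(-t / tau_true) * (1.0 + 0.02 * rng.normal(size=t.size))

# Log-linear fit: log r = log r0 - t / tau, so tau = -1 / slope.
slope, intercept = np.polyfit(t, np.log(rate), 1)
tau_est = -1.0 / slope
```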

  7. The partition function of the Bures ensemble as the τ-function of BKP and DKP hierarchies: continuous and discrete

    NASA Astrophysics Data System (ADS)

    Hu, Xing-Biao; Li, Shi-Hao

    2017-07-01

    The relationship between matrix integrals and integrable systems was revealed more than 20 years ago. As is known, matrix integrals over a Gaussian ensemble used in random matrix theory could act as the τ-function of several hierarchies of integrable systems. In this article, we will show that the time-dependent partition function of the Bures ensemble, whose measure has many interesting geometric properties, could act as the τ-function of BKP and DKP hierarchies. In addition, if discrete time variables are introduced, then this partition function could act as the τ-function of discrete BKP and DKP hierarchies. In particular, there are some links between the partition function of the Bures ensemble and Toda-type equations.

  8. 75 FR 4316 - Integration of Variable Energy Resources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-27

    ... their supply in the real-time energy markets (i.e., act as a price taker). Because day-ahead schedules... conditions and that day-ahead and real-time energy prices will converge under the scenario of increasing... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission 18 CFR Chapter I [Docket No. RM10-11-000...

  9. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitudes more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113
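
    The networks derived in the paper are built from leaky integrate-and-fire units with a specific voltage-as-prediction-error interpretation; as a minimal reminder of the single-unit dynamics underlying them (not the paper's derived network), here is a bare LIF simulation with arbitrary parameter values:

```python
import numpy as np

def lif_spikes(current, dt=0.001, tau=0.02, v_th=1.0):
    """Leaky integrate-and-fire neuron: tau dV/dt = -V + I(t); emit a
    spike and reset V to 0 when V crosses v_th. Returns spike times."""
    v, spikes = 0.0, []
    for k, i_t in enumerate(current):
        v += dt / tau * (-v + i_t)
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0
    return spikes

# Constant suprathreshold drive produces regular spiking whose rate
# grows with the input strength (1 second of simulated time each).
low = lif_spikes(np.full(1000, 1.5))
high = lif_spikes(np.full(1000, 3.0))
```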

  10. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods are divided into two categories: classical methods and artificial intelligence methods. The particle swarm optimization algorithm (PSO) is one of the artificial intelligence methods. Previously, researchers had integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on the two PSO optimizing parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. They are implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved the PSO-PID parameters by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method as a tuning method in the hydraulic positioning system.
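
    The PSO core referred to above (including the particle velocity limit that the DOE tunes) can be sketched minimally; the swarm parameters and the quadratic surrogate cost below are illustrative stand-ins, not the paper's hydraulic-system objective.

```python
import numpy as np

rng = np.random.default_rng(6)

def pso_minimize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: inertia w, cognitive pull c1
    toward personal bests, social pull c2 toward the global best, with
    velocities clamped to the search-range width (the 'velocity limit')."""
    lo, hi = bounds
    dim = lo.size
    x = lo + (hi - lo) * rng.random((n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -(hi - lo), hi - lo)       # particle velocity limit
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy surrogate cost: a quadratic bowl standing in for an ITSE-style
# objective over hypothetical (kp, ki) gains with optimum (3.0, 0.5).
best = pso_minimize(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.5) ** 2,
                    (np.zeros(2), np.array([10.0, 10.0])))
```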

  11. Variability of OH(3-1) and OH(6-2) emission altitude and volume emission rate from 2003 to 2011

    NASA Astrophysics Data System (ADS)

    Teiser, Georg; von Savigny, Christian

    2017-08-01

In this study we report on variability in emission rate and centroid emission altitude of the OH(3-1) and OH(6-2) Meinel bands in the terrestrial nightglow based on spaceborne nightglow measurements with the SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) instrument on the Envisat satellite. The SCIAMACHY observations cover the time period from August 2002 to April 2012 and the nighttime observations used in this study are performed at 10:00 p.m. local solar time. Characterizing variability in OH emission altitude - particularly potential long-term variations - is important for an appropriate interpretation of ground-based OH rotational temperature measurements, because simultaneous observations of the vertical OH volume emission rate profile are usually not available for these measurements. OH emission altitude and vertically integrated emission rate time series with daily resolution for the OH(3-1) band and monthly resolution for the OH(6-2) band were analyzed using a standard multilinear regression approach allowing for seasonal variations, QBO (Quasi-Biennial Oscillation) effects, solar cycle (SC) variability and a linear long-term trend. The analysis focuses on low latitudes, where SCIAMACHY nighttime observations are available all year. The dominant sources of variability for both OH emission rate and altitude are the semi-annual and annual variations, with emission rate and altitude being highly anti-correlated. There is some evidence for an 11-year solar cycle signature in the vertically integrated emission rate and in the centroid emission altitude of both the OH(3-1) and OH(6-2) bands.
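
    A multilinear regression of the kind described (seasonal harmonics plus a linear trend; the QBO and solar-cycle terms of the actual analysis are omitted here) can be sketched with ordinary least squares on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Monthly time axis in years and a synthetic emission-rate-like series
# with annual and semi-annual cycles plus a linear trend (made-up values).
t = np.arange(120) / 12.0
y = (100.0 + 2.0 * t
     + 5.0 * np.cos(2 * np.pi * t) + 3.0 * np.sin(2 * np.pi * t)
     + 2.0 * np.cos(4 * np.pi * t)
     + rng.normal(size=t.size))

# Design matrix: constant, linear trend, annual and semi-annual harmonics.
X = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
trend_per_year = coeffs[1]
annual_cos_amplitude = coeffs[2]
```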

  12. Differential effects of two types of formative assessment in predicting performance of first-year medical students.

    PubMed

    Krasne, Sally; Wimmers, Paul F; Relan, Anju; Drake, Thomas A

    2006-05-01

    Formative assessments are systematically designed instructional interventions to assess and provide feedback on students' strengths and weaknesses in the course of teaching and learning. Despite their known benefits to student attitudes and learning, medical school curricula have been slow to integrate such assessments into the curriculum. This study investigates how performance on two different modes of formative assessment relate to each other and to performance on summative assessments in an integrated, medical-school environment. Two types of formative assessment were administered to 146 first-year medical students each week over 8 weeks: a timed, closed-book component to assess factual recall and image recognition, and an un-timed, open-book component to assess higher order reasoning including the ability to identify and access appropriate resources and to integrate and apply knowledge. Analogous summative assessments were administered in the ninth week. Models relating formative and summative assessment performance were tested using Structural Equation Modeling. Two latent variables underlying achievement on formative and summative assessments could be identified; a "formative-assessment factor" and a "summative-assessment factor," with the former predicting the latter. A latent variable underlying achievement on open-book formative assessments was highly predictive of achievement on both open- and closed-book summative assessments, whereas a latent variable underlying closed-book assessments only predicted performance on the closed-book summative assessment. Formative assessments can be used as effective predictive tools of summative performance in medical school. Open-book, un-timed assessments of higher order processes appeared to be better predictors of overall summative performance than closed-book, timed assessments of factual recall and image recognition.

  13. Separation of variables in the special diagonal Hamilton-Jacobi equation: Application to the dynamical problem of a particle constrained on a moving surface

    NASA Technical Reports Server (NTRS)

    Blanchard, D. L.; Chan, F. K.

    1973-01-01

For a time-dependent, n-dimensional, special diagonal Hamilton-Jacobi equation, a necessary and sufficient condition for the separation of variables to yield a complete integral of the form was established by specifying the admissible forms in terms of arbitrary functions. A complete integral was then expressed in terms of these arbitrary functions and also the n irreducible constants. As an application of the results obtained for the two-dimensional Hamilton-Jacobi equation, an analysis was made of a comparatively wide class of dynamical problems involving a particle moving in Euclidean three-dimensional space under the action of external forces but constrained on a moving surface. All the possible cases in which this equation had a complete integral of the form were obtained, and these are tabulated for reference.
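
    The separable complete integral referred to can be illustrated generically; the following additive ansatz is a standard textbook form and only a stand-in for the paper's exact expression, which is not reproduced in the abstract.

```latex
% Standard additive separation ansatz for a time-dependent
% Hamilton-Jacobi equation in coordinates q_1, ..., q_n and time t
% (a generic stand-in, not necessarily the paper's exact form):
W(q_1, \dots, q_n, t) = \sum_{i=1}^{n} W_i(q_i) + T(t)
% Each W_i satisfies an ordinary differential equation in q_i alone,
% introducing the n irreducible separation constants
% \alpha_1, \dots, \alpha_n.
```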

  14. Variables affecting the academic and social integration of nursing students.

    PubMed

    Zeitlin-Ophir, Iris; Melitz, Osnat; Miller, Rina; Podoshin, Pia; Mesh, Gustavo

    2004-07-01

    This study attempted to analyze the variables that influence the academic integration of nursing students. The theoretical model presented by Leigler was adapted to the existing conditions in a school of nursing in northern Israel. The independent variables included the student's background; amount of support received in the course of studies; extent of outside family and social commitments; satisfaction with the school's facilities and services; and level of social integration. The dependent variable was the student's level of academic integration. The findings substantiated four central hypotheses, with the study model explaining approximately 45% of the variance in the dependent variable. Academic integration is influenced by a number of variables, the most prominent of which is the social integration of the student with colleagues and educational staff. Among the background variables, country of origin was found to be significant to both social and academic integration for two main groups in the sample: Israeli-born students (both Jewish and Arab) and immigrant students.

  15. Integrating Safety in Developing a Variable Speed Limit System

    DOT National Transportation Integrated Search

    2014-01-01

    Disaggregate safety studies benefit from the reliable surveillance systems which provide detailed real-time traffic and weather data. This information could help in capturing microlevel influences of the hazardous factors which might lead to a crash....

  16. Effects of Uncertainty in TRMM Precipitation Radar Path Integrated Attenuation on Interannual Variations of Tropical Oceanic Rainfall

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Fitzjarrald, Dan E.; Kummerow, Christian D.; Arnold, James E. (Technical Monitor)

    2002-01-01

    Considerable uncertainty surrounds the issue of whether precipitation over the tropical oceans (30 deg N/S) systematically changes with interannual sea-surface temperature (SST) anomalies that accompany El Nino (warm) and La Nina (cold) events. Time series of rainfall estimates from the Tropical Rainfall Measuring Mission (TRMM Precipitation Radar (PR) over the tropical oceans show marked differences with estimates from two TRMM Microwave Imager (TMI) passive microwave algorithms. We show that path-integrated attenuation derived from the effects of precipitation on the radar return from the ocean surface exhibits interannual variability that agrees closely with the TMI time series. Further analysis of the frequency distribution of PR (2A25 product) rain rates suggests that the algorithm incorporates the attenuation measurement in a very conservative fashion so as to optimize the instantaneous rain rates. Such an optimization appears to come at the expense of monitoring interannual climate variability.

  17. Some Exact Results for the Schroedinger Wave Equation with a Time Dependent Potential

    NASA Technical Reports Server (NTRS)

    Campbell, Joel

    2009-01-01

The time dependent Schroedinger equation with a time dependent delta function potential is solved exactly for many special cases. In all other cases the problem can be reduced to an integral equation of the Volterra type. It is shown that by knowing the wave function at the origin, one may derive the wave function everywhere. Thus, the problem is reduced from a PDE in two variables to an integral equation in one. These results are used to compare adiabatic versus sudden changes in the potential. It is shown that adiabatic changes in the potential lead to conservation of the normalization of the probability density.
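
    The Volterra reduction mentioned above can be complemented by a generic numerical sketch: a trapezoidal solver for a second-kind Volterra equation, checked against a known solution. This is not specific to the delta-potential problem.

```python
import numpy as np

def solve_volterra2(kernel, g, t_grid):
    """Trapezoidal solver for a Volterra equation of the second kind,
        f(t) = g(t) + integral_0^t K(t, s) f(s) ds,
    on a uniform grid; returns f at the grid points."""
    n = t_grid.size
    h = t_grid[1] - t_grid[0]
    f = np.empty(n)
    f[0] = g(t_grid[0])
    for i in range(1, n):
        t = t_grid[i]
        acc = 0.5 * h * kernel(t, t_grid[0]) * f[0]   # left endpoint weight
        for j in range(1, i):
            acc += h * kernel(t, t_grid[j]) * f[j]    # interior weights
        # Diagonal term handled implicitly: f_i = g_i + acc + (h/2) K(t,t) f_i
        f[i] = (g(t) + acc) / (1.0 - 0.5 * h * kernel(t, t))
    return f

# Check against f(t) = e^t, which solves f(t) = 1 + integral_0^t f(s) ds.
t = np.linspace(0.0, 1.0, 201)
f = solve_volterra2(lambda t, s: 1.0, lambda t: 1.0, t)
err = abs(f[-1] - np.exp(1.0))
```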

  18. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    PubMed Central

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

Background: The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods: In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results: The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion: Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682

  19. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.

    PubMed

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) models were used to fit univariate time series of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model outperformed the ARIMA model in modelling syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
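
    The decomposition step mentioned in both records above (seasonality plus long-term trend) can be sketched as a classical additive decomposition; the series below is synthetic and its parameters invented, so this is an illustration of the technique, not the authors' analysis.

```python
import numpy as np

def decompose_monthly(y, period=12):
    """Classical additive decomposition into trend (centered 2x12 moving
    average), seasonal (periodic means of the detrended series), and residual."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    kernel = np.r_[0.5, np.ones(period - 1), 0.5] / period  # 2x12 MA weights
    trend = np.full(n, np.nan)
    half = period // 2
    trend[half:n - half] = np.convolve(y, kernel, mode="valid")
    detrended = y - trend
    means = np.array([np.nanmean(detrended[i::period]) for i in range(period)])
    means -= means.mean()                        # seasonal effects sum to zero
    seasonal = np.tile(means, n // period + 1)[:n]
    return trend, seasonal, y - trend - seasonal

# Synthetic monthly incidence: rising trend plus an annual cycle (invented numbers).
months = np.arange(96)
series = 5.0 + 0.05 * months + 2.0 * np.sin(2.0 * np.pi * months / 12)
trend, seasonal, resid = decompose_monthly(series)
```

    An ARIMA(0,0,1)×(0,1,1) fit as used in the paper would then be applied to the (seasonally differenced) series with a time-series library.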

  20. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion of an infinitely differentiable function that has been integrated an arbitrary number of times, in terms of the coefficients of the original expansion of the function, is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that explicitly expresses the integrals, taken an arbitrary number of times, of ultraspherical polynomials of any degree in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.
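
    The objects discussed here — expansion coefficients of a function and of its antiderivative in an ultraspherical (Gegenbauer) basis — can be computed numerically. The sketch below uses a plain least-squares fit rather than Doha's closed-form formula, with an arbitrary test function; it only illustrates what the coefficients are, not the paper's analytical result.

```python
import numpy as np
from scipy.special import eval_gegenbauer

def gegenbauer_coeffs(f, degree, alpha=1.0, npts=400):
    """Least-squares coefficients of f in the ultraspherical basis C_n^(alpha)."""
    x = np.cos(np.linspace(0.0, np.pi, npts))   # points clustered toward ±1
    A = np.column_stack([eval_gegenbauer(n, alpha, x) for n in range(degree + 1)])
    coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return coeffs

f = np.exp                                      # arbitrary smooth test function
F = lambda x: np.exp(x) - np.exp(-1.0)          # its antiderivative with F(-1) = 0
c_f = gegenbauer_coeffs(f, 12)
c_F = gegenbauer_coeffs(F, 13)                  # integration raises the degree by one
```

    Doha's formula provides `c_F` directly from `c_f` without refitting, which is the point of the paper.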

  1. An Integrated Optimization Design Method Based on Surrogate Modeling Applied to Diverging Duct Design

    NASA Astrophysics Data System (ADS)

    Hanan, Lu; Qiushi, Li; Shaobin, Li

    2016-12-01

    This paper presents an integrated optimization design method in which uniform design, response surface methodology and genetic algorithm are used in combination. In detail, uniform design is used to select the experimental sampling points in the experimental domain, and the system performance is evaluated by means of computational fluid dynamics to construct a database. After that, response surface methodology is employed to generate a surrogate mathematical model relating the optimization objective and the design variables. Subsequently, a genetic algorithm is applied to the surrogate model to acquire the optimal solution subject to the constraints. The method has been applied to the optimization design of an axisymmetric diverging duct with three design variables, one qualitative and two quantitative. The modeling and optimization method performs well in improving the duct aerodynamic performance; it can also be applied to wider fields of mechanical design and serve as a useful tool for engineering designers by reducing design time and computational cost.

  2. Reducing the risk of rear-end collisions with infrastructure-to-vehicle (I2V) integration of variable speed limit control and adaptive cruise control system.

    PubMed

    Li, Ye; Wang, Hao; Wang, Wei; Liu, Shanwen; Xiang, Yun

    2016-08-17

    Adaptive cruise control (ACC) has been investigated recently to explore ways to increase traffic capacity, stabilize traffic flow, and improve traffic safety. However, researchers have seldom studied the integration of ACC with roadside control methods such as the variable speed limit (VSL) to improve safety. The primary objective of this study was to develop an infrastructure-to-vehicle (I2V) integrated system that incorporated both ACC and VSL to reduce rear-end collision risks on freeways. The intelligent driver model was first modified to simulate ACC behavior, and then the VSL strategy used in this article was introduced. Next, the I2V system was proposed to integrate the 2 advanced techniques, ACC and VSL. Four scenarios of no control, VSL only, ACC only, and the I2V system were tested in simulation experiments. Time exposed time to collision (TET) and time integrated time to collision (TIT), 2 surrogate safety measures derived from time to collision (TTC), were used to evaluate safety issues associated with rear-end collisions. The total travel times of each scenario were also compared. The simulation results indicated that both the VSL-only and ACC-only methods had a positive impact on reducing the TET and TIT values (reduced by 53.0 and 58.6% and 59.0 and 65.3%, respectively). The I2V system combined the advantages of both ACC and VSL to achieve the largest safety benefits (reductions of 71.5 and 77.3%, respectively). Sensitivity analysis of the TTC threshold also showed that the I2V system obtained the largest safety benefits across all TTC threshold values. The impact of different market penetration rates of ACC vehicles in the I2V system indicated that safety benefits increase with the ACC proportion. Compared to the VSL-only and ACC-only scenarios, the integrated I2V system is more effective in reducing rear-end collision risks. The findings of this study provide useful information for traffic agencies implementing novel techniques to improve safety on freeways.
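
    The surrogate measures used above have simple definitions: TET accumulates the time during which TTC sits below a threshold, and TIT additionally integrates how far TTC falls below that threshold. A minimal sketch, with a toy TTC trace and an illustrative threshold and sample interval (not values from the paper):

```python
import numpy as np

def tet_tit(ttc, threshold=3.0, dt=0.1):
    """Time Exposed TTC (TET) and Time Integrated TTC (TIT).
    TET sums the time spent with 0 < TTC < threshold; TIT additionally weights
    each unsafe step by how far TTC falls below the threshold."""
    ttc = np.asarray(ttc, dtype=float)
    unsafe = (ttc > 0.0) & (ttc < threshold)
    tet = float(unsafe.sum()) * dt
    tit = float(np.sum(threshold - ttc[unsafe])) * dt
    return tet, tit

# Toy TTC trace (seconds), sampled every 0.1 s.
tet, tit = tet_tit([5.0, 4.0, 2.5, 1.5, 2.0, 3.5, 6.0])
```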

  3. The NASA Energy and Water Cycle Extreme (NEWSE) Integration Project

    NASA Technical Reports Server (NTRS)

    House, P. R.; Lapenta, W.; Schiffer, R.

    2008-01-01

    Skillful predictions of water and energy cycle extremes (flood and drought) are elusive. To better understand the mechanisms responsible for water and energy extremes, and to make decisive progress in predicting these extremes, the collaborative NASA Energy and Water cycle Extremes (NEWSE) Integration Project, is studying these extremes in the U.S. Southern Great Plains (SGP) during 2006-2007, including their relationships with continental and global scale processes, and assessment of their predictability on multiple space and time scales. It is our hypothesis that an integrative analysis of observed extremes which reflects the current understanding of the role of SST and soil moisture variability influences on atmospheric heating and forcing of planetary waves, incorporating recently available global and regional hydrometeorological datasets (i.e., precipitation, water vapor, clouds, etc.) in conjunction with advances in data assimilation, can lead to new insights into the factors that lead to persistent drought and flooding. We will show initial results of this project, whose goals are to provide an improved definition, attribution and prediction on sub-seasonal to interannual time scales, improved understanding of the mechanisms of decadal drought and its predictability, including the impacts of SST variability and deep soil moisture variability, and improved monitoring/attributions, with transition to applications; a bridging of the gap between hydrological forecasts and stakeholders (utilization of probabilistic forecasts, education, forecast interpretation for different sectors, assessment of uncertainties for different sectors, etc.).

  4. Robust Adaptive Flight Control Design of Air-breathing Hypersonic Vehicles

    DTIC Science & Technology

    2016-12-07

    dynamic inversion controller design for a non-minimum phase hypersonic vehicle is derived by Kuipers et al. [2008]. Moreover, integrated guidance and...stabilization time for inner loop variables is less than for the intermediate loop variables because of the three-loop-control design methodology. The control...adaptive design. Control Engineering Practice, 2016. Michael A Bolender and David B Doman. A non-linear model for the longitudinal dynamics of a

  5. Advanced Ceramic Technology for Space Applications at NASA MSFC

    NASA Technical Reports Server (NTRS)

    Alim, Mohammad A.

    2003-01-01

    The ceramic processing technology using conventional methods is applied to the making of state-of-the-art ceramics known as smart ceramics, intelligent ceramics, or electroceramics. The sol-gel and wet chemical processing routes are excluded from this investigation on grounds of cost and the proportionate benefit of the resulting product. The use of ceramic ingredients in making coatings or devices employing a vacuum coating unit is also excluded. Based on the present information, it is anticipated that the conventional processing methods provide ceramics performing identically to those processed by the chemical routes. This is possible when sintering temperature, heating and cooling ramps, peak (sintering) temperature, soak-time (hold-time), etc., are treated as variable parameters. In addition, an optional calcination step prior to the sintering operation remains a vital variable parameter. These variable parameters constitute a sintering profile to obtain a sintered product, and identical products may be obtained from more than one sintering profile owing to the calcination step in conjunction with the variables of the sintering profile. Overall, the state-of-the-art ceramic technology is evaluated for potential applications in thermal and electrical insulation coatings, microelectronics and integrated circuits, discrete and integrated devices, etc., in the space program.

  6. Disentangling Global Warming, Multidecadal Variability, and El Niño in Pacific Temperatures

    NASA Astrophysics Data System (ADS)

    Wills, Robert C.; Schneider, Tapio; Wallace, John M.; Battisti, David S.; Hartmann, Dennis L.

    2018-03-01

    A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in time scale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea surface temperature variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño-Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different time scales, but are otherwise consistent with canonical definitions. We isolate the multidecadal variability of the PDO and find that it is confined to midlatitudes; tropical sea surface temperatures and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.
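
    The general idea of separating components by time scale can be illustrated, very loosely, on a synthetic index; the paper's actual method is a pattern-based statistical decomposition, not the simple moving-average low-pass split sketched below, and all series parameters here are invented.

```python
import numpy as np

def split_by_timescale(x, window):
    """Split a series into a slow part (moving average) and a fast residual."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")        # edge-pad to preserve length
    slow = np.convolve(padded, kernel, mode="valid")[:len(x)]
    return slow, x - slow

# Synthetic SST-like index: slow warming trend plus a fast ENSO-like oscillation.
t = np.arange(1200)                              # months
slow_true = 0.001 * t
fast_true = 0.5 * np.sin(2.0 * np.pi * t / 48)   # ~4-year period
slow, fast = split_by_timescale(slow_true + fast_true, window=121)
```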

  7. Robust H∞ cost guaranteed integral sliding mode control for the synchronization problem of nonlinear tele-operation system with variable time-delay.

    PubMed

    Al-Wais, Saba; Khoo, Suiyang; Lee, Tae Hee; Shanmugam, Lakshmanan; Nahavandi, Saeid

    2018-01-01

    This paper is devoted to the synchronization problem of tele-operation systems with time-varying delay, disturbances, and uncertainty. Delay-dependent sufficient conditions for the existence of integral sliding surfaces are given in the form of Linear Matrix Inequalities (LMIs). This guarantees the global stability of the tele-operation system with known upper bounds of the time-varying delays. Unlike previous work, in this paper, the controller gains are designed but not chosen, which increases the degree of freedom of the design. Moreover, the Wirtinger-based integral inequality and reciprocally convex combination techniques used in the constructed Lyapunov-Krasovskii Functional (LKF) give a less conservative stability condition for the system. Furthermore, to free the analysis from any assumptions regarding the dynamics of the environment and human operator forces, the H∞ design method is used to involve the dynamics of these forces and ensure the stability of the system against these admissible forces in the H∞ sense. This design scheme combines the strong robustness of sliding mode control with the H∞ design method for tele-operation systems that are coupled through state feedback controllers and inherit variable time delays in their communication channels. Simulation examples are given to show the effectiveness of the proposed method. Copyright © 2017 ISA. All rights reserved.

  8. Documentation of a numerical code for the simulation of variable density ground-water flow in three dimensions

    USGS Publications Warehouse

    Kuiper, L.K.

    1985-01-01

    A numerical code is documented for the simulation of variable density time dependent groundwater flow in three dimensions. The groundwater density, although variable with distance, is assumed to be constant in time. The Integrated Finite Difference grid elements in the code follow the geologic strata in the modeled area. If appropriate, the determination of hydraulic head in confining beds can be deleted to decrease computation time. The strongly implicit procedure (SIP), successive over-relaxation (SOR), and eight different preconditioned conjugate gradient (PCG) methods are used to solve the approximating equations. The use of the computer program that performs the calculations in the numerical code is emphasized. Detailed instructions are given for using the computer program, including input data formats. An example simulation and the Fortran listing of the program are included. (USGS)

  9. An approximate method for solution to variable moment of inertia problems

    NASA Technical Reports Server (NTRS)

    Beans, E. W.

    1981-01-01

    An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time of the nonlinear system was found to match that of the equivalent constant-inertia system provided the rotating speed is 4 times greater than the system's minimum natural frequency.
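
    A plausible reading of the "integrated average of the moment of inertia" is a cycle average over one revolution; a small numerical sketch with a hypothetical inertia profile (the abstract gives no formula, so this is an assumption):

```python
import numpy as np

def averaged_inertia(inertia, n=4096):
    """Cycle-averaged moment of inertia, I_avg = (1/2π) ∫ I(θ) dθ over one
    revolution, evaluated as a mean over a uniform periodic grid."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.mean(inertia(theta)))

# Hypothetical inertia varying with yaw angle: I(θ) = I0 + I1·cos(2θ).
I_avg = averaged_inertia(lambda th: 2.0 + 0.5 * np.cos(2.0 * th))
```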

  10. Darboux transformation and solitons for an integrable nonautonomous nonlinear integro-differential Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Yong, Xuelin; Fan, Yajing; Huang, Yehui; Ma, Wen-Xiu; Tian, Jing

    2017-10-01

    By modifying the scheme for an isospectral problem, the non-isospectral Ablowitz-Kaup-Newell-Segur (AKNS) hierarchy is constructed via allowing the time varying spectrum. In this paper, we consider an integrable nonautonomous nonlinear integro-differential Schrödinger equation discussed before in “Multi-soliton management by the integrable nonautonomous nonlinear integro-differential Schrödinger equation” [Y. J. Zhang, D. Zhao and H. G. Luo, Ann. Phys. 350 (2014) 112]. We first analyze the integrability conditions and identify the model. Second, we modify the existing Darboux transformation (DT) for such a non-isospectral problem. Third, the nonautonomous soliton solutions are obtained via the resulting DT and basic properties of these solutions in the inhomogeneous media are discussed graphically to illustrate the influences of the variable coefficients. In the process, a technique by selecting appropriate spectral parameters instead of the variable inhomogeneities is employed to realize a different type of one-soliton management. Several novel optical solitons are constructed and their features are shown by some specific figures. In addition, four kinds of the special localized two-soliton solutions are obtained. The solitonic excitations localized both in space and time, which exhibit the feature of the so-called rogue waves but with a zero background, are discussed.

  11. On Coarse Projective Integration for Atomic Deposition in Amorphous Systems

    DOE PAGES

    Chuang, Claire Y.; Han, Sang M.; Zepeda-Ruiz, Luis A.; ...

    2015-10-02

    Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of timescales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity and computational efficiency. Coarse projective integration, an example application of the 'equation-free' framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute gradients of slowly-evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the 'lifting' operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO 2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO 2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
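
    The coarse projective integration loop described above (lift, short fine burst, restrict, extrapolate over a large step) can be sketched on a toy fast-slow system; everything below is illustrative and has nothing to do with the paper's Ge/SiO2 model.

```python
import numpy as np

def projective_integration(step_fine, lift, restrict, u0, dt_fine, n_fine, dt_proj, n_proj):
    """Coarse projective integration: a short burst of fine steps heals the fast
    variables and yields the coarse time derivative, which is then extrapolated
    (forward-Euler style) over a much larger projective step."""
    u = u0
    for _ in range(n_proj):
        state = lift(u)                        # coarse -> fine ("lifting")
        for _ in range(n_fine):                # healing burst
            state = step_fine(state, dt_fine)
        u_before = restrict(state)
        state = step_fine(state, dt_fine)      # one extra step to estimate the slope
        u_after = restrict(state)
        slope = (u_after - u_before) / dt_fine
        u = u_after + dt_proj * slope          # projective leap
    return u

# Toy fast-slow system: slow u decays, fast v relaxes quickly onto v = u.
def step_fine(state, dt):
    u, v = state
    return np.array([u - dt * u, v - dt * (v - u) / 0.01])

u_end = projective_integration(step_fine,
                               lift=lambda u: np.array([u, 0.0]),  # naive lifting
                               restrict=lambda s: s[0],
                               u0=1.0, dt_fine=0.001, n_fine=50,
                               dt_proj=0.05, n_proj=20)
```

    Each cycle advances time by 51 fine steps plus one projective leap, so the slow decay is tracked at a fraction of the fine-step cost.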

  12. Integration of Fixed and Flexible Route Public Transportation Systems, Phase I

    DOT National Transportation Integrated Search

    2012-01-01

    To provide efficient public transportation services in areas with high demand variability over time, it may be desirable to switch vehicles between conventional services (with fixed routes and schedules) during peak periods and flexible route ser...

  13. Comparing Resource Adequacy Metrics and Their Influence on Capacity Value: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibanez, E.; Milligan, M.

    2014-04-01

    Traditional probabilistic methods have been used to evaluate resource adequacy. The increasing presence of variable renewable generation in power systems presents a challenge to these methods because, unlike thermal units, variable renewable generation levels change over time as they are driven by meteorological events. Thus, capacity value calculations for these resources are often performed using simple rules of thumb. This paper follows the recommendations of the North American Electric Reliability Corporation's Integration of Variable Generation Task Force to include variable generation in the calculation of resource adequacy and compares different reliability metrics. Examples are provided using the Western Interconnection footprint under different variable generation penetrations.
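
    A standard probabilistic resource-adequacy calculation of the kind referred to here builds a capacity-outage probability table by convolving unit outages, then sums loss-of-load probabilities over load periods. The unit data and loads below are invented; this is a generic sketch, not the paper's Western Interconnection study.

```python
import numpy as np

def outage_distribution(units):
    """Capacity-outage probability table via convolution.
    `units` is a list of (capacity_mw, forced_outage_rate) pairs."""
    total = sum(cap for cap, _ in units)
    dist = np.zeros(total + 1)
    dist[0] = 1.0
    for cap, q in units:
        new = dist * (1.0 - q)                # unit available
        new[cap:] += dist[:-cap] * q          # unit out: shift by its capacity
        dist = new
    return dist                               # dist[k] = P(k MW on outage)

def lole(units, loads):
    """Loss-of-load expectation: expected number of periods with unserved load."""
    dist = outage_distribution(units)
    total = len(dist) - 1
    tail = np.cumsum(dist[::-1])[::-1]        # tail[k] = P(outage >= k MW)
    expectation = 0.0
    for load in loads:
        k = total - int(load) + 1             # smallest outage causing a shortfall
        if k <= 0:
            expectation += 1.0                # load exceeds installed capacity
        elif k <= total:
            expectation += tail[k]
    return expectation

# Four 50 MW units with 5% forced outage rate; three demand periods (invented).
lole_value = lole([(50, 0.05)] * 4, loads=[120, 150, 80])
```

    Capacity value (e.g., ELCC) is then the extra load the system can carry with variable generation added while holding such a metric constant.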

  14. Integrating Ecosystem Carbon Dynamics into State-and-Transition Simulation Models of Land Use/Land Cover Change

    NASA Astrophysics Data System (ADS)

    Sleeter, B. M.; Daniel, C.; Frid, L.; Fortin, M. J.

    2016-12-01

    State-and-transition simulation models (STSMs) provide a general approach for incorporating uncertainty into forecasts of landscape change. Using a Monte Carlo approach, STSMs generate spatially-explicit projections of the state of a landscape based upon probabilistic transitions defined between states. While STSMs are based on the basic principles of Markov chains, they have additional properties that make them applicable to a wide range of questions and types of landscapes. A current limitation of STSMs is that they are only able to track the fate of discrete state variables, such as land use/land cover (LULC) classes. There are some landscape modelling questions, however, for which continuous state variables - for example carbon biomass - are also required. Here we present a new approach for integrating continuous state variables into spatially-explicit STSMs. Specifically we allow any number of continuous state variables to be defined for each spatial cell in our simulations; the value of each continuous variable is then simulated forward in discrete time as a stochastic process based upon defined rates of change between variables. These rates can be defined as a function of the realized states and transitions of each cell in the STSM, thus providing a connection between the continuous variables and the dynamics of the landscape. We demonstrate this new approach by (1) developing a simple IPCC Tier 3 compliant model of ecosystem carbon biomass, where the continuous state variables are defined as terrestrial carbon biomass pools and the rates of change as carbon fluxes between pools, and (2) integrating this carbon model with an existing LULC change model for the state of Hawaii, USA.
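
    The proposed coupling of discrete states with continuous state variables can be sketched as a per-cell Markov transition step followed by a flux update on a continuous carbon stock. The transition probabilities, growth fluxes, and conversion-loss fraction below are hypothetical illustrations, not values from the Hawaii model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two LULC states: 0 = forest, 1 = agriculture. Annual transition matrix (hypothetical).
P = np.array([[0.98, 0.02],
              [0.01, 0.99]])

GROWTH = np.array([2.0, 0.5])   # hypothetical carbon accumulation (t C/ha/yr) per state
CONVERSION_LOSS = 0.4           # hypothetical stock fraction emitted on forest -> agriculture

def step(states, carbon):
    """One annual step: stochastic state transitions, then carbon-flux update per cell."""
    u = rng.random(states.size)
    stay = u < P[states, states]               # probability of remaining in current state
    new_states = np.where(stay, states, 1 - states)
    converted = (states == 0) & (new_states == 1)
    carbon = np.where(converted, carbon * (1.0 - CONVERSION_LOSS), carbon)
    return new_states, carbon + GROWTH[new_states]

states = np.zeros(1000, dtype=int)             # every cell starts as forest
carbon = np.full(1000, 100.0)                  # initial stock, t C/ha
for _ in range(50):                            # one 50-year Monte Carlo realization
    states, carbon = step(states, carbon)
```

    In an STSM the rates themselves would be functions of the realized states and transitions, which is exactly the connection the abstract describes.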

  15. Integrating real-time and manual monitored data to predict hillslope soil moisture dynamics with high spatio-temporal resolution using linear and non-linear models

    USDA-ARS?s Scientific Manuscript database

    Spatio-temporal variability of soil moisture (θ) is a challenge that remains to be better understood. A trade-off exists between spatial coverage and temporal resolution when using the manual and real-time θ monitoring methods. This restricted the comprehensive and intensive examination of θ dynamic...

  16. The precision of locomotor odometry in humans.

    PubMed

    Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody

    2009-03-01

    Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law that variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production for similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable suggesting that step integration could be the basis for non-visual human odometry.
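
    The Weber's-law signature reported above (coefficient of variation roughly constant across distances) falls out of a step-integration model in which the dominant error is a trial-wide miscalibration of step length, while independent per-step noise averages out. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def produce_distance(target, step_mean=0.75, gain_cv=0.08, step_cv=0.05):
    """Walk until an internal step-integrated estimate reaches `target` (metres).
    A trial-wide gain error models a miscalibrated internal step length; per-step
    noise is independent and averages out over many steps."""
    gain = rng.normal(1.0, gain_cv)             # shared miscalibration for this trial
    walked, estimated = 0.0, 0.0
    while estimated < target:
        step = step_mean * (1.0 + rng.normal(0.0, step_cv))
        walked += step                          # distance actually covered
        estimated += step_mean * gain           # the walker's internal odometer
    return walked

targets = [5.0, 20.0, 80.0]
cvs = []
for d in targets:
    produced = np.array([produce_distance(d) for _ in range(400)])
    cvs.append(float(produced.std() / produced.mean()))
```

    The simulated coefficient of variation stays near `gain_cv` at every target distance, the proportional-variability pattern the experiments observed.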

  17. High-efficiency non-uniformity correction for wide dynamic linear infrared radiometry system

    NASA Astrophysics Data System (ADS)

    Li, Zhou; Yu, Yi; Tian, Qi-Jie; Chang, Song-Tao; He, Feng-Yun; Yin, Yan-He; Qiao, Yan-Feng

    2017-09-01

    Several different integration times are always set for a wide-dynamic-range, linear, continuously variable integration time infrared radiometry system; traditional calibration-based non-uniformity corrections (NUC) are therefore usually conducted one integration time at a time, and several calibration sources are required, which makes calibration and NUC processing time-consuming. In this paper, the differences in NUC coefficients between integration times are discussed, and a novel method called high-efficiency NUC, which builds on traditional calibration-based NUC, is proposed. It obtains the correction coefficients for all integration times in the whole linear dynamic range by recording only three different images of a standard blackbody. Firstly, the mathematical procedure of the proposed non-uniformity correction method is validated, and then its performance is demonstrated on a 400 mm diameter ground-based infrared radiometry system. Experimental results show that the mean value of Normalized Root Mean Square (NRMS) is reduced from 3.78% to 0.24% by the proposed method. In addition, the results at 4 ms and 70 °C prove that this method has a higher accuracy compared with traditional calibration-based NUC, while at other integration times and temperatures there is still a good correction effect. Moreover, it greatly reduces the number of correction times and temperature sampling points, and is characterized by good real-time performance, making it suitable for field measurement.
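
    The traditional calibration-based NUC that this method builds on is, in its simplest two-point form, a per-pixel gain/offset fit between two uniform blackbody frames. The simulated detector below is purely illustrative (linear, noiseless response with invented fixed-pattern parameters).

```python
import numpy as np

rng = np.random.default_rng(2)

def two_point_nuc(img_low, img_high, target_low, target_high):
    """Per-pixel gain and offset from two uniform blackbody frames."""
    gain = (target_high - target_low) / (img_high - img_low)
    offset = target_low - gain * img_low
    return gain, offset

# Simulated detector with fixed-pattern non-uniformity (linear response per pixel).
shape = (64, 64)
pix_gain = rng.normal(1.0, 0.05, shape)
pix_offset = rng.normal(0.0, 20.0, shape)
raw = lambda flux: pix_gain * flux + pix_offset   # noiseless raw frame at uniform flux

img_low, img_high = raw(1000.0), raw(3000.0)      # two blackbody calibration frames
gain, offset = two_point_nuc(img_low, img_high, 1000.0, 3000.0)
corrected = gain * raw(2000.0) + offset           # corrected mid-range frame
```

    The paper's contribution is obtaining such coefficients for every integration time from only three blackbody images, rather than repeating this calibration per integration time.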

  18. Searching for Variables in one of the WHAT Fields

    NASA Astrophysics Data System (ADS)

    Shporer, A.; Mazeh, T.; Moran, A.; Bakos, G.; Kovacs, G.

    2007-07-01

    We present preliminary results on a single field observed by WHAT, a small-aperture, short focal length automated telescope with an 8.2° × 8.2° field of view, located at the Wise Observatory. The system is similar to the members of HATNet (http://cfa-www.harvard.edu/~gbakos/HAT/) and is aimed at searching for transiting extrasolar planets and variable objects. With 5 min integration time, the telescope achieved a precision of a few mmag for the brightest objects. We detect variables with amplitudes less than 0.01 mag. All 152 periodic variables are presented at http://wise-obs.tau.ac.il/~amit/236/.

  19. The INTEGRAL long monitoring of persistent ultra compact X-ray bursters

    NASA Astrophysics Data System (ADS)

    Fiocchi, M.; Bazzano, A.; Ubertini, P.; Bird, A. J.; Natalucci, L.; Sguera, V.

    2008-12-01

    Context: The combination of compact objects, short period variability and peculiar chemical composition of the ultra compact X-ray binaries make up a very interesting laboratory to study accretion processes and thermonuclear burning on the neutron star surface. Improved large optical telescopes and more sensitive X-ray satellites have increased the number of known ultra compact X-ray binaries allowing their study with unprecedented detail. Aims: We analyze the average properties common to all ultra compact bursters observed by INTEGRAL from 0.2 keV to 150 keV. Methods: We have performed a systematic analysis of the INTEGRAL public data and Key-Program proprietary observations of a sample of the ultra compact X-ray binaries. In order to study their average properties in a very broad energy band, we combined INTEGRAL with BeppoSAX and SWIFT data whenever possible. For sources not showing any significant flux variations along the INTEGRAL monitoring, we build the average spectrum by combining all available data; in the case of variable fluxes, we use simultaneous INTEGRAL and SWIFT observations when available. Otherwise we compared IBIS and PDS data to check the variability and combine BeppoSAX with INTEGRAL/IBIS data. Results: All spectra are well represented by a two component model consisting of a disk-blackbody and Comptonised emission. The majority of these compact sources spend most of the time in a canonical low/hard state, with a dominating Comptonised component and accretion rate Ṁ lower than 10⁻⁹ M⊙/yr, not depending on the model used to fit the data. INTEGRAL is an ESA project with instruments and Science Data Center funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with the participation of Russia and the USA.

  20. MILP model for integrated balancing and sequencing mixed-model two-sided assembly line with variable launching interval and assignment restrictions

    NASA Astrophysics Data System (ADS)

    Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.

    2017-09-01

    This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL which are line balancing and model sequencing. In previous studies, many researchers considered these problems separately and only few studied them simultaneously for one-sided line. However in this study, these two problems are solved simultaneously to obtain more efficient solution. The Mixed Integer Linear Programming (MILP) model with objectives of minimizing total utility work and idle time is generated by considering variable launching interval and assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiment was conducted by using General Algebraic Modelling System (GAMS) with the solver CPLEX. Experimental results indicate that integrating the problems of model sequencing and line balancing help to minimise the proposed objectives function.

  1. First Results on the Variability of Mid- and High-Latitude Ionospheric Electric Fields at 1-Second Time Scales

    NASA Astrophysics Data System (ADS)

    Ruohoniemi, J. M.; Greenwald, R. A.; Oksavik, K.; Baker, J. B.

    2007-12-01

    The electric fields at high latitudes are often modeled as a static pattern in the absence of variation in solar wind parameters or geomagnetic disturbance. However, temporal variability in the local electric fields on time scales of minutes for stable conditions has been reported and characterized statistically as an intrinsic property amounting to turbulence. We describe the results of applying a new technique to SuperDARN HF radar observations of ionospheric plasma convection at middle and high latitudes that gives views of the variability of the electric fields at sub-second time scales. We address the question of whether there is a limit to the temporal scale of the electric field variability and consider whether the turbulence on minute time scales is due to organized but unresolved behavior. The basis of the measurements is the ability to record raw samples from the individual multipulse sequences that are transmitted during the standard 3 or 6-second SuperDARN integration period; a backscattering volume is then effectively sampled at a cadence of 200 ms. The returns from the individual sequences are often sufficiently well-ordered to permit a sequence-by-sequence characterization of the electric field and backscattered power. We attempt a statistical characterization of the variability at these heretofore inaccessible time scales and consider how variability is influenced by solar wind and magnetospheric factors.

  2. Packetized Video On MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1987-07-01

    Theoretical analysis of the integrated local area network model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up during video and voice calls during periods of little movement in the images and periods of silence in the speech signals could be utilized efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice, and data traffic flows. Protocols supporting variable-bandwidth, fixed-quality packetized video transport are described in detail.

  3. Modeling Framework and Validation of a Smart Grid and Demand Response System for Wind Power Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broeer, Torsten; Fuller, Jason C.; Tuffner, Francis K.

    2014-01-31

    Electricity generation from wind power and other renewable energy sources is increasing, and their variability introduces new challenges to the power system. The emergence of smart grid technologies in recent years has seen a paradigm shift in redefining the electrical system of the future, in which controlled response of the demand side is used to balance fluctuations and intermittencies from the generation side. This paper presents a modeling framework for an integrated electricity system where loads become an additional resource. The agent-based model represents a smart grid power system integrating generators, transmission, distribution, loads and market. The model incorporates generator and load controllers, allowing suppliers and demanders to bid into a Real-Time Pricing (RTP) electricity market. The modeling framework is applied to represent a physical demonstration project conducted on the Olympic Peninsula, Washington, USA, and validation simulations are performed using actual dynamic data. Wind power is then introduced into the power generation mix illustrating the potential of demand response to mitigate the impact of wind power variability, primarily through thermostatically controlled loads. The results also indicate that effective implementation of Demand Response (DR) to assist integration of variable renewable energy resources requires a diversity of loads to ensure functionality of the overall system.

  4. Integration and application of optical chemical sensors in microbioreactors.

    PubMed

    Gruber, Pia; Marques, Marco P C; Szita, Nicolas; Mayr, Torsten

    2017-08-08

    The quantification of key variables such as oxygen, pH, carbon dioxide, glucose, and temperature provides essential information for biological and biotechnological applications and their development. Microfluidic devices offer an opportunity to accelerate research and development in these areas due to their small scale and the fine control over the microenvironment, provided that these key variables can be measured. Optical sensors are well-suited for this task. They offer non-invasive and non-destructive monitoring of the mentioned variables and the establishment of time-course profiles without the need for sampling from the microfluidic devices. They can also be implemented in larger systems, facilitating cross-scale comparison of analytical data. This tutorial review presents an overview of optical sensors and their technology, with a view to supporting current and potential new users in microfluidics and biotechnology in the implementation of such sensors. It introduces the benefits and challenges of sensor integration, including their application to microbioreactors. Sensor formats, integration methods, device bonding options, and monitoring options are explained. Luminescent sensors for oxygen, pH, carbon dioxide, glucose, and temperature are showcased. Areas where further development is needed are highlighted with the intent to guide future development efforts towards analytes for which reliable, stable, or easily integrated detection methods are not yet available.

  5. From Chains for Mean Value Inequalities to Mitrinovic's Problem II

    ERIC Educational Resources Information Center

    Zhu, Ling

    2005-01-01

    In this note, an integrated form of some significant means with two variables is provided, and some chains for mean value inequalities are obtained. At the same time, a concise family of algebraic functions appears, which satisfy Mitrinovic's requirements.

  6. Mean convergence theorems and weak laws of large numbers for weighted sums of random variables under a condition of weighted integrability

    NASA Astrophysics Data System (ADS)

    Ordóñez Cabrera, Manuel; Volodin, Andrei I.

    2005-05-01

    From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
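
For readers unfamiliar with the notion, the h-integrability condition described in this record can be written out explicitly. The following is a plausible reconstruction based on the cited literature; the paper's exact formulation may differ in detail:

```latex
% h-integrability of an array {X_{ni}} with respect to an array of
% nonnegative constants {a_{ni}} (reconstruction): h is a positive,
% increasing function with h(n) -> infinity, and the two conditions are
\sup_{n}\ \sum_{i} a_{ni}\,E|X_{ni}| < \infty
\qquad\text{and}\qquad
\lim_{n\to\infty}\ \sum_{i} a_{ni}\,
  E\!\left[\,|X_{ni}|\,\mathbf{1}\{|X_{ni}|>h(n)\}\,\right] = 0 .
```

Uniform integrability corresponds to the special case of a single sequence with equal weights, which is why the array notion is strictly weaker.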

  7. Role of Smarter Grids in Variable Renewable Resource Integration (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.

    2012-07-01

    This presentation discusses the role of smarter grids in variable renewable resource integration and references material from a forthcoming ISGAN issue paper: Smart Grid Contributions to Variable Renewable Resource Integration, co-written by the presenter and currently in review.

  8. Effects of temporal averaging on short-term irradiance variability under mixed sky conditions

    NASA Astrophysics Data System (ADS)

    Lohmann, Gerald M.; Monahan, Adam H.

    2018-05-01

    Characterizations of short-term variability in solar radiation are required to successfully integrate large numbers of photovoltaic power systems into the electrical grid. Previous studies have used ground-based irradiance observations with a range of different temporal resolutions, and a systematic analysis of the effects of temporal averaging on the representation of variability is lacking. Using high-resolution surface irradiance data with original temporal resolutions between 0.01 and 1 s from six different locations in the Northern Hemisphere, we characterize the changes in the representation of temporal variability resulting from time averaging. In this analysis, we condition all data to states of mixed skies, which are the most potentially problematic in terms of local PV power volatility. Statistics of the clear-sky index k* and its increments Δk*τ (i.e., normalized surface irradiance and changes therein over specified intervals of time) are considered separately. Our results indicate that a temporal averaging time scale of around 1 s marks a transition in representing single-point irradiance variability, such that longer averages result in substantial underestimates of variability. Higher-resolution data increase the complexity of data management and quality control without appreciably improving the representation of variability. The results do not show any substantial discrepancies between locations or seasons.
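
The averaging effect described in this record can be illustrated with a small sketch. The jump-diffusion model of the clear-sky index below is entirely synthetic (it is not the authors' data or method); it simply shows how block averaging over a fixed window damps the spread of increments at the same physical lag:

```python
import random
import statistics

def block_average(x, n):
    """Average consecutive, non-overlapping blocks of n samples."""
    return [sum(x[i:i + n]) / n for i in range(0, len(x) - n + 1, n)]

def increments(k, lag):
    """Increments of the clear-sky index over `lag` samples (Delta k*_tau)."""
    return [k[i + lag] - k[i] for i in range(len(k) - lag)]

# Synthetic 1 Hz clear-sky index: small noise plus occasional cloud-edge
# jumps of +/-0.5, clipped to a physically plausible range (mixed skies).
random.seed(1)
k = [1.0]
for _ in range(14399):
    jump = random.choice([0.0] * 49 + [-0.5, 0.5])
    k.append(min(1.2, max(0.2, k[-1] + jump + random.gauss(0.0, 0.01))))

# Spread of increments at the same physical lag (60 s), measured on the
# raw 1 s data and on 60 s block averages of the same series.
sigma_raw = statistics.pstdev(increments(k, 60))
sigma_avg = statistics.pstdev(increments(block_average(k, 60), 1))
print(sigma_raw, sigma_avg)
```

With this toy process the averaged series shows a visibly smaller increment spread, mirroring the paper's point that coarse averaging underestimates single-point variability.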

  9. Integration of real time kinematic satellite navigation with ground-penetrating radar surveys

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture, environmental mapping, and construction benefit from subsurface imaging by revealing the spatial variability of underground features. Surveyed features of agricultural interest include bedrock depth, soil horizon thicknesses, and buried-object features such as drainage pipes. For t...

  10. Instance-Based Learning: Integrating Sampling and Repeated Decisions from Experience

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Dutt, Varun

    2011-01-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or…

  11. A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability.

    PubMed

    Shaffer, Fred; McCraty, Rollin; Zerr, Christopher L

    2014-01-01

    Heart rate variability (HRV), the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operate on different time scales to adapt to challenges and achieve optimal performance. This article briefly reviews neural regulation of the heart, and its basic anatomy, the cardiac cycle, and the sinoatrial and atrioventricular pacemakers. The cardiovascular regulation center in the medulla integrates sensory information and input from higher brain centers, and afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. This article reviews sympathetic and parasympathetic influences on the heart, and examines the interpretation of HRV and the association between reduced HRV, risk of disease and mortality, and the loss of regulatory capacity. This article also discusses the intrinsic cardiac nervous system and the heart-brain connection, through which afferent information can influence activity in the subcortical and frontocortical areas, and motor cortex. It also considers new perspectives on the putative underlying physiological mechanisms and properties of the ultra-low-frequency (ULF), very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) bands. Additionally, it reviews the most common time and frequency domain measurements as well as standardized data collection protocols. In its final section, this article integrates Porges' polyvagal theory, Thayer and colleagues' neurovisceral integration model, Lehrer et al.'s resonance frequency model, and the Institute of HeartMath's coherence model. The authors conclude that a coherent heart is not a metronome because its rhythms are characterized by both complexity and stability over longer time scales. Future research should expand understanding of how the heart and its intrinsic nervous system influence the brain.

  12. A healthy heart is not a metronome: an integrative review of the heart's anatomy and heart rate variability

    PubMed Central

    Shaffer, Fred; McCraty, Rollin; Zerr, Christopher L.

    2014-01-01

    Heart rate variability (HRV), the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operate on different time scales to adapt to challenges and achieve optimal performance. This article briefly reviews neural regulation of the heart, and its basic anatomy, the cardiac cycle, and the sinoatrial and atrioventricular pacemakers. The cardiovascular regulation center in the medulla integrates sensory information and input from higher brain centers, and afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. This article reviews sympathetic and parasympathetic influences on the heart, and examines the interpretation of HRV and the association between reduced HRV, risk of disease and mortality, and the loss of regulatory capacity. This article also discusses the intrinsic cardiac nervous system and the heart-brain connection, through which afferent information can influence activity in the subcortical and frontocortical areas, and motor cortex. It also considers new perspectives on the putative underlying physiological mechanisms and properties of the ultra-low-frequency (ULF), very-low-frequency (VLF), low-frequency (LF), and high-frequency (HF) bands. Additionally, it reviews the most common time and frequency domain measurements as well as standardized data collection protocols. In its final section, this article integrates Porges' polyvagal theory, Thayer and colleagues' neurovisceral integration model, Lehrer et al.'s resonance frequency model, and the Institute of HeartMath's coherence model. The authors conclude that a coherent heart is not a metronome because its rhythms are characterized by both complexity and stability over longer time scales. Future research should expand understanding of how the heart and its intrinsic nervous system influence the brain. PMID:25324790

  13. Pulmonary venous flow determinants of left atrial pressure under different loading conditions in a chronic animal model with mitral regurgitation

    NASA Technical Reports Server (NTRS)

    Yang, Hua; Jones, Michael; Shiota, Takahiro; Qin, Jian Xin; Kim, Yong Jin; Popovic, Zoran B.; Pu, Min; Greenberg, Neil L.; Cardon, Lisa A.; Eto, Yoko; hide

    2002-01-01

    BACKGROUND: The aim of our study was to quantitatively compare the changes and correlations between pulmonary venous flow variables and mean left atrial pressure (mLAP) under different loading conditions in animals with chronic mitral regurgitation (MR) and without MR. METHODS: A total of 85 hemodynamic conditions were studied in 22 sheep, 12 without MR as control (NO-MR group) and 10 with MR (MR group). We obtained pulmonary venous flow systolic velocity (Sv) and diastolic velocity (Dv), Sv and Dv time integrals, their ratios (Sv/Dv and Sv/Dv time integral), mLAP, left ventricular end-diastolic pressure, and MR stroke volume. We also measured left atrial a, x, v, and y pressures and calculated the difference between v and y pressures. RESULTS: Average MR stroke volume was 10.6 +/- 4.3 mL/beat. There were good correlations between Sv (r = -0.64 and r = -0.59, P <.01), Sv/Dv (r = -0.62 and r = -0.74, P <.01), and mLAP in the MR and NO-MR groups, respectively. Correlations were also observed between Dv time integral (r = 0.61 and r = 0.57, P <.01) and left ventricular end-diastolic pressure in the MR and NO-MR groups. In velocity variables, Sv (r = -0.79, P <.001) was the best predictor of mLAP in both groups. The sensitivity and specificity of Sv = 0 in predicting mLAP 15 mm Hg or greater were 86% and 85%, respectively. CONCLUSION: Pulmonary venous flow variables correlated well with mLAP under altered loading conditions in the MR and NO-MR groups. They may be applied clinically as substitutes for invasively acquired indexes of mLAP to assess left atrial and left ventricular functional status.
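
The sensitivity and specificity figures quoted above follow from the usual contingency-table definitions. The sketch below uses hypothetical binary labels (not the study's animal data) chosen only so the arithmetic is visible:

```python
def sensitivity_specificity(predictions, truths):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    tp = sum(1 for p, t in zip(predictions, truths) if p and t)
    tn = sum(1 for p, t in zip(predictions, truths) if not p and not t)
    fn = sum(1 for p, t in zip(predictions, truths) if not p and t)
    fp = sum(1 for p, t in zip(predictions, truths) if p and not t)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical classifications: does the marker (e.g. Sv = 0) predict
# mLAP >= 15 mm Hg in each animal? 1 = yes, 0 = no.
predicted_high = [1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
actually_high  = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(predicted_high, actually_high)
print(sens, spec)  # one missed case and one false alarm in 13 animals
```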

  14. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes do not have an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
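
The slower convergence of time averages under correlation can be demonstrated numerically. The sketch below (an illustration, not the paper's method) compares the probability that a windowed time average exceeds a threshold for iid Gaussian variables versus a short-memory AR(1) process with the same stationary variance:

```python
import math
import random

def sample_mean_exceedance(rho, n, a=0.3, trials=4000, seed=0):
    """Estimate P(|time average over n steps| > a) for an AR(1) process
    x_t = rho * x_{t-1} + e_t with unit stationary variance.
    rho = 0 reduces to iid standard Gaussian variables."""
    rng = random.Random(seed)
    sigma_e = math.sqrt(1.0 - rho * rho)   # keeps Var(x) = 1 for any rho
    hits = 0
    for _ in range(trials):
        x, total = rng.gauss(0.0, 1.0), 0.0
        for _ in range(n):
            total += x
            x = rho * x + rng.gauss(0.0, sigma_e)
        if abs(total / n) > a:
            hits += 1
    return hits / trials

p_iid = sample_mean_exceedance(rho=0.0, n=100)
p_corr = sample_mean_exceedance(rho=0.9, n=100)
print(p_iid, p_corr)  # correlation makes large deviations far more likely
```

For long-memory (fractionally integrated) processes the effect is stronger still, which is the sub-exponential decay the abstract refers to.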

  15. Alternative bi-Hamiltonian structures for WDVV equations of associativity

    NASA Astrophysics Data System (ADS)

    Kalayci, J.; Nutku, Y.

    1998-01-01

    The WDVV equations of associativity in two-dimensional topological field theory are completely integrable third-order Monge-Ampère equations which admit bi-Hamiltonian structure. The time variable plays a distinguished role in the discussion of Hamiltonian structure, whereas in the theory of WDVV equations none of the independent variables merits such a distinction. WDVV equations admit very different alternative Hamiltonian structures under different possible choices of the time variable, but all these various Hamiltonian formulations can be brought together in the framework of the covariant theory of symplectic structure. They can be identified as different components of the covariant Witten-Zuckerman symplectic 2-form current density where a variational formulation of the WDVV equation that leads to the Hamiltonian operator through the Dirac bracket is available.

  16. Simple and Double Alfven Waves: Hamiltonian Aspects

    NASA Astrophysics Data System (ADS)

    Webb, G. M.; Zank, G. P.; Hu, Q.; le Roux, J. A.; Dasgupta, B.

    2011-12-01

    We discuss the nature of simple and double Alfvén waves. Simple waves depend on a single phase variable φ, but double waves depend on two independent phase variables φ1 and φ2. The phase variables depend on the space and time coordinates x and t. Simple and double Alfvén waves have the same integrals, namely, the entropy, density, magnetic pressure, and group velocity (the sum of the Alfvén and fluid velocities) are constant throughout the flow. We present examples of both simple and double Alfvén waves, and discuss Hamiltonian formulations of the waves.

  17. The Effect of Experimental Variables on Industrial X-Ray Micro-Computed Tomography Sensitivity

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, Richard W.

    2014-01-01

    A study was performed on the effect of experimental variables on radiographic sensitivity (image quality) in x-ray micro-computed tomography images for a high density thin wall metallic cylinder containing micro-EDM holes. Image quality was evaluated in terms of signal-to-noise ratio, flaw detectability, and feature sharpness. The variables included: day-to-day reproducibility, current, integration time, voltage, filtering, number of frame averages, number of projection views, beam width, effective object radius, binning, orientation of sample, acquisition angle range (180deg to 360deg), and directional versus transmission tube.

  18. Stress and Fracture Analyses Under Elastic-plastic and Creep Conditions: Some Basic Developments and Computational Approaches

    NASA Technical Reports Server (NTRS)

    Reed, K. W.; Stonesifer, R. B.; Atluri, S. N.

    1983-01-01

    A new hybrid-stress finite element algorithm, suitable for analyses of large quasi-static deformations of inelastic solids, is presented. Principal variables in the formulation are the nominal stress-rate and spin. As such, a consistent reformulation of the constitutive equation is necessary, and is discussed. The finite element equations give rise to an initial value problem. Time integration has been accomplished by Euler and Runge-Kutta schemes, and the superior accuracy of the higher-order schemes is noted. In the course of integration of stress in time, it has been demonstrated that classical schemes such as Euler's and Runge-Kutta's may lead to strong frame-dependence. As a remedy, modified integration schemes are proposed, and the potential of the new schemes for suppressing frame dependence of numerically integrated stress is demonstrated. The topic of the development of valid creep fracture criteria is also addressed.
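
The accuracy gap between Euler and higher-order Runge-Kutta integration noted in this record is easy to see on a scalar test problem. The sketch below (a generic illustration, not the paper's stress-integration scheme) integrates dx/dt = -x over one unit of time with both methods and compares errors against the exact solution:

```python
import math

def euler(f, x0, h, n):
    """Forward Euler: first-order accurate, error O(h) per unit time."""
    x = x0
    for _ in range(n):
        x += h * f(x)
    return x

def rk4(f, x0, h, n):
    """Classical fourth-order Runge-Kutta: error O(h^4) per unit time."""
    x = x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

f = lambda x: -x              # linear test rate equation
exact = math.exp(-1.0)        # x(1) for x(0) = 1
err_euler = abs(euler(f, 1.0, 0.1, 10) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.1, 10) - exact)
print(err_euler, err_rk4)     # RK4 error is orders of magnitude smaller
```

The frame-dependence issue the abstract raises is a separate, tensorial matter (objective stress rates under rotation), which a scalar example cannot capture; the comparison here only illustrates the order-of-accuracy point.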

  19. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
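
The core of the time decomposition idea (many independent running Green-Kubo integrals, an ensemble average, and a standard-deviation-based criterion for where to stop trusting the integral) can be sketched on synthetic data. The model of a running integral below is a stand-in, not real MD output, and the cutoff rule is simplified to a relative-std threshold; the paper's actual estimator fits a weighted double-exponential, which one would typically do with a least-squares routine such as `scipy.optimize.curve_fit`:

```python
import math
import random
import statistics

def synthetic_running_integral(t_max, a=1.0, tau=5.0, noise=0.05, rng=None):
    """One synthetic Green-Kubo running integral: rises to plateau `a` with
    relaxation time `tau`, plus random-walk noise that grows with time, as
    the tail of a real pressure-tensor autocorrelation integral does."""
    rng = rng or random.Random()
    out, drift = [], 0.0
    for t in range(1, t_max + 1):
        drift += rng.gauss(0.0, noise)
        out.append(a * (1.0 - math.exp(-t / tau)) + drift)
    return out

rng = random.Random(7)
runs = [synthetic_running_integral(200, rng=rng) for _ in range(40)]

# Ensemble average and standard deviation of the running integrals.
mean_run = [statistics.fmean(col) for col in zip(*runs)]
std_run = [statistics.pstdev(col) for col in zip(*runs)]

# Cutoff t_cut: stop where the ensemble std exceeds 40% of the mean value
# (past the initial rise), then read the estimate off the averaged plateau.
t_cut = next((t for t, (m, s) in enumerate(zip(mean_run, std_run))
              if t > 20 and s > 0.4 * abs(m)), len(mean_run) - 1)
eta_est = mean_run[t_cut]
print(t_cut, eta_est)
```

The point of the construction is that averaging independent trajectories tames the noisy tail, and the growing ensemble std supplies an objective stopping rule instead of eyeballing a plateau.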

  20. On the fractional Eulerian numbers and equivalence of maps with long term power-law memory (integral Volterra equations of the second kind) to Grünvald-Letnikov fractional difference (differential) equations.

    PubMed

    Edelman, Mark

    2015-07-01

    In this paper, we consider a simple general form of a deterministic system with power-law memory whose state can be described by one variable and evolution by a generating function. A new value of the system's variable is a total (a convolution) of the generating functions of all previous values of the variable with weights, which are powers of the time passed. In discrete cases, these systems can be described by difference equations in which a fractional difference on the left hand side is equal to a total (also a convolution) of the generating functions of all previous values of the system's variable with the fractional Eulerian number weights on the right hand side. In the continuous limit, the considered systems can be described by the Grünvald-Letnikov fractional differential equations, which are equivalent to the Volterra integral equations of the second kind. New properties of the fractional Eulerian numbers and possible applications of the results are discussed.
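
The general form described above (each new value is a convolution of a generating function over the whole history, with power-of-elapsed-time weights) can be sketched directly. The generating function G below is a hypothetical linear choice for illustration only; the paper treats the general case:

```python
def power_law_memory_map(x0, alpha, n_steps, h=0.1):
    """Evolve x_n = x0 + h * sum_{k<n} G(x_k) * (n-k)^(alpha-1):
    every new value is a weighted total of the generating function applied
    to ALL previous values, with weights that are powers of the time passed.
    G is a hypothetical choice (linear damping) for illustration."""
    def G(x):
        return -x
    xs = [x0]
    for n in range(1, n_steps + 1):
        memory = sum(G(xs[k]) * (n - k) ** (alpha - 1) for k in range(n))
        xs.append(x0 + h * memory)
    return xs

# alpha < 1 gives slowly decaying power-law weights: old states still matter.
traj = power_law_memory_map(1.0, alpha=0.5, n_steps=50)
print(traj[:3], traj[-1])
```

Note the contrast with an ordinary (memoryless) map, where x_n would depend on x_{n-1} alone; here the cost of each step grows with the history length, which is exactly the computational signature of power-law memory.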

  1. Forecasting Lightning Threat Using WRF Proxy Fields

    NASA Technical Reports Server (NTRS)

    McCaul, E. W., Jr.

    2010-01-01

    Objectives: Given that high-resolution WRF forecasts can capture the character of convective outbreaks, we seek to: 1. Create WRF forecasts of LTG threat (1-24 h), based on 2 proxy fields from explicitly simulated convection: - graupel flux near -15 C (captures LTG time variability) - vertically integrated ice (captures LTG threat area). 2. Calibrate each threat to yield accurate quantitative peak flash rate densities. 3. Also evaluate threats for areal coverage, time variability. 4. Blend threats to optimize results. 5. Examine sensitivity to model mesh, microphysics. Methods: 1. Use high-resolution 2-km WRF simulations to prognose convection for a diverse series of selected case studies. 2. Evaluate graupel fluxes; vertically integrated ice (VII). 3. Calibrate WRF LTG proxies using peak total LTG flash rate densities from NALMA; relationships look linear, with regression line passing through origin. 4. Truncate low threat values to make threat areal coverage match NALMA flash extent density obs. 5. Blend proxies to achieve optimal performance 6. Study CAPS 4-km ensembles to evaluate sensitivities.
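
Step 3 of the method calibrates each proxy with a regression line through the origin. That fit has a closed form, sketched below with made-up (proxy, observed peak flash-rate density) pairs; the numbers are not NALMA data:

```python
def calibrate_through_origin(proxy, observed):
    """Least-squares slope with the intercept fixed at zero:
    k = sum(x*y) / sum(x*x), the stated form of the proxy calibration."""
    return (sum(p * o for p, o in zip(proxy, observed))
            / sum(p * p for p in proxy))

# Hypothetical calibration pairs: WRF proxy value vs observed flash rate
proxy = [0.5, 1.0, 2.0, 4.0]
obs = [1.1, 1.9, 4.2, 7.8]
k = calibrate_through_origin(proxy, obs)
print(k)  # calibrated threat = k * proxy
```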

  2. The Trapping Index: How to integrate the Eulerian and the Lagrangian approach for the computation of the transport time scales of semi-enclosed basins.

    PubMed

    Cucco, Andrea; Umgiesser, Georg

    2015-09-15

    In this work, we investigated if the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed for both simplified test cases and a realistic domain: the Venice Lagoon. The results confirmed the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both the TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful to avoid any misleading interpretation due to the evaluation of the basin renewal features just from an Eulerian only or from a Lagrangian only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Time-Warp–Invariant Neuronal Processing

    PubMed Central

    Gütig, Robert; Sompolinsky, Haim

    2009-01-01

    Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities. PMID:19582146

  4. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers' Adaptations.

    PubMed

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers' technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers' technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008-2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4-66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers' practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations.
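
The decomposition of each vulnerability variable into a level, a trend, and residual variability can be illustrated with a stripped-down version of the analysis. The sketch below fits a per-farm ordinary least-squares line (a simplification of the paper's random-slope linear mixed model, which pools farms) to hypothetical efficiency series; the data are invented for illustration:

```python
import statistics

def trend_and_residuals(values):
    """OLS of a vulnerability variable against year for one farm:
    returns (mean level, slope, residual std). A per-farm simplification
    of the random-slope linear mixed model described in the record."""
    xs = range(len(values))
    xbar, ybar = statistics.fmean(xs), statistics.fmean(values)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, values))
             / sum((x - xbar) ** 2 for x in xs))
    resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(xs, values)]
    return ybar, slope, statistics.pstdev(resid)

# Hypothetical economic-efficiency series (%) for two farms, 2008-2013
stable = trend_and_residuals([52, 53, 55, 54, 56, 57])
erratic = trend_and_residuals([60, 41, 66, 38, 63, 40])
print(stable)   # high level, rising trend, small residuals: less vulnerable
print(erratic)  # similar level, but large residuals: more vulnerable
```

In the paper's terms, low vulnerability corresponds to a high level, a stable or increasing slope, and small residuals, which is exactly what the first farm exhibits and the second does not.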

  5. An Integrated Method to Analyze Farm Vulnerability to Climatic and Economic Variability According to Farm Configurations and Farmers’ Adaptations

    PubMed Central

    Martin, Guillaume; Magne, Marie-Angélina; Cristobal, Magali San

    2017-01-01

    The need to adapt to decrease farm vulnerability to adverse contextual events has been extensively discussed on a theoretical basis. We developed an integrated and operational method to assess farm vulnerability to multiple and interacting contextual changes and explain how this vulnerability can best be reduced according to farm configurations and farmers’ technical adaptations over time. Our method considers farm vulnerability as a function of the raw measurements of vulnerability variables (e.g., economic efficiency of production), the slope of the linear regression of these measurements over time, and the residuals of this linear regression. The last two are extracted from linear mixed models considering a random regression coefficient (an intercept common to all farms), a global trend (a slope common to all farms), a random deviation from the general mean for each farm, and a random deviation from the general trend for each farm. Among all possible combinations, the lowest farm vulnerability is obtained through a combination of high values of measurements, a stable or increasing trend and low variability for all vulnerability variables considered. Our method enables relating the measurements, trends and residuals of vulnerability variables to explanatory variables that illustrate farm exposure to climatic and economic variability, initial farm configurations and farmers’ technical adaptations over time. We applied our method to 19 cattle (beef, dairy, and mixed) farms over the period 2008–2013. Selected vulnerability variables, i.e., farm productivity and economic efficiency, varied greatly among cattle farms and across years, with means ranging from 43.0 to 270.0 kg protein/ha and 29.4–66.0% efficiency, respectively. No farm had a high level, stable or increasing trend and low residuals for both farm productivity and economic efficiency of production. 
Thus, the least vulnerable farms represented a compromise among measurement value, trend, and variability of both performances. No specific combination of farmers’ practices emerged for reducing cattle farm vulnerability to climatic and economic variability. In the least vulnerable farms, the practices implemented (stocking rate, input use…) were more consistent with the objective of developing the properties targeted (efficiency, robustness…). Our method can be used to support farmers with sector-specific and local insights about most promising farm adaptations. PMID:28900435
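The decomposition the method describes (measurement level, trend, residual variability) can be sketched in simplified form. The snippet below fits an ordinary per-farm least-squares trend rather than the full linear mixed model of the paper, and the two farm series are invented for illustration.

```python
# Simplified sketch of the vulnerability decomposition: per-farm ordinary
# least squares instead of the paper's linear mixed model; data are invented.

def trend_and_residuals(years, values):
    """Least-squares slope and residual spread of one farm's performance series."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    slope = (sum((y - my) * (v - mv) for y, v in zip(years, values))
             / sum((y - my) ** 2 for y in years))
    intercept = mv - slope * my
    residuals = [v - (intercept + slope * y) for y, v in zip(years, values)]
    spread = (sum(r * r for r in residuals) / n) ** 0.5
    return slope, spread

years = [2008, 2009, 2010, 2011, 2012, 2013]
# Hypothetical economic-efficiency series (%) for two farms.
farm_a = [55.0, 56.1, 57.3, 58.0, 59.2, 60.1]   # high, increasing, stable
farm_b = [48.0, 39.5, 55.0, 41.0, 52.5, 40.0]   # lower, erratic

slope_a, spread_a = trend_and_residuals(years, farm_a)
slope_b, spread_b = trend_and_residuals(years, farm_b)
# Farm A matches the least-vulnerable profile: increasing trend, low variability.
print(slope_a > 0, spread_a < spread_b)
```

Here a "low-vulnerability" farm is simply one with a positive slope and a small residual spread; the mixed model in the paper additionally pools information across farms.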

  6. A hybrid disturbance rejection control solution for variable valve timing system of gasoline engines.

    PubMed

    Xie, Hui; Song, Kang; He, Yu

    2014-07-01

A novel solution for the electro-hydraulic variable valve timing (VVT) system of gasoline engines is proposed, based on the concept of active disturbance rejection control (ADRC). Disturbances, such as oil pressure and engine speed variations, are all estimated and mitigated in real time. A feed-forward controller, based on a simple static first-principles model, was added to enhance the performance of the system, forming a hybrid disturbance rejection control (HDRC) strategy. HDRC was validated by experimentation and compared with an existing manually tuned proportional-integral (PI) controller. The results show that HDRC provided a faster response and better tolerance of engine speed and oil pressure variations.
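The core idea of ADRC, an extended state observer (ESO) that estimates the lumped disturbance in real time so the control law can cancel it, can be illustrated with a minimal first-order simulation. The plant, gains, and disturbance below are hypothetical and are not the VVT model from this paper.

```python
# Minimal linear ADRC sketch on a hypothetical first-order plant
#   dx/dt = u + d(t), with an unknown constant disturbance d = 0.5.
# An extended state observer (ESO) tracks the state (z1) and the lumped
# disturbance (z2); the control law subtracts z2 to cancel it in real time.

dt, T = 0.001, 5.0
b0 = 1.0              # nominal input gain
wo = 20.0             # observer bandwidth
l1, l2 = 2 * wo, wo ** 2
kp = 5.0              # proportional gain on the observed state
r = 1.0               # setpoint (e.g., a target cam phase, arbitrary units)

x = 0.0               # true plant state
z1 = z2 = 0.0         # ESO estimates: state and lumped disturbance
for _ in range(int(T / dt)):
    u = (kp * (r - z1) - z2) / b0          # disturbance-rejecting control law
    e = x - z1
    z1 += dt * (z2 + b0 * u + l1 * e)      # ESO update (forward Euler)
    z2 += dt * (l2 * e)
    x += dt * (u + 0.5)                    # true plant with hidden disturbance

print(round(x, 3))    # settles near the setpoint despite the disturbance
```

At equilibrium the observer's disturbance estimate z2 converges to the true value 0.5, so the steady-state tracking error vanishes without any integral term.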

  7. Extending the trans-contextual model in physical education and leisure-time contexts: examining the role of basic psychological need satisfaction.

    PubMed

    Barkoukis, Vassilis; Hagger, Martin S; Lambropoulos, George; Tsorbatzoudis, Haralambos

    2010-12-01

The trans-contextual model (TCM) is an integrated model of motivation that aims to explain the processes by which agentic support for autonomous motivation in physical education promotes autonomous motivation and physical activity in a leisure-time context. It is proposed that perceived support for autonomous motivation in physical education is related to autonomous motivation in physical education and leisure-time contexts. Furthermore, relations between autonomous motivation and the immediate antecedents of intentions to engage in physical activity behaviour and actual behaviour are hypothesized. The purpose of the present study was to incorporate the constructs of basic psychological need satisfaction in the TCM to provide a more comprehensive explanation of motivation and demonstrate the robustness of the findings of previous tests of the model that have not incorporated these constructs. Participants were students (N=274) from Greek secondary schools. They completed self-report measures of perceived autonomy support, autonomous motivation, and basic psychological need satisfaction in physical education. Follow-up measures of these variables were taken in a leisure-time context along with measures of attitudes, subjective norms, perceived behavioural control (PBC), and intentions from the theory of planned behaviour 1 week later. Self-reported physical activity behaviour was measured 4 weeks later. Results supported TCM hypotheses. Basic psychological need satisfaction variables uniquely predicted autonomous motivation in physical education and leisure time as well as the antecedents of intention, namely, attitudes, and PBC. The basic psychological need satisfaction variables also mediated the effects of perceived autonomy support on autonomous motivation in physical education. Findings support the TCM and provide further information of the mechanisms in the model and integrated theories of motivation in physical education and leisure time.

  8. Integration of Systems with Varying Levels of Autonomy (Integration de Systemes a Niveau d’Autonomie Variable)

    DTIC Science & Technology

    2008-09-01

SCI-144 Integration of Systems with Varying Levels of Autonomy (Intégration de systèmes à niveau d’autonomie variable). This Report was prepared by Task Group SCI-144 on “System-Level Integration of Control plus Automation” and has been sponsored by the Systems Concepts and Integration (SCI) Panel.

  10. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    NASA Astrophysics Data System (ADS)

    García-Fernández, Ignacio; Pla-Castells, Marta; Martínez-Durá, Rafael J.

    2007-09-01

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.
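A variable-coefficient wave equation of the kind used in this cable model can be advanced at interactive rates with an explicit leapfrog scheme. The sketch below is a generic 1D illustration with an invented wave-speed profile, not the authors' coupled spring-cable formulation.

```python
import numpy as np

# Generic leapfrog integration of u_tt = c(x)^2 u_xx on [0, 1] with clamped
# ends; c(x) is an invented, spatially varying wave speed along the "cable".
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c = 1.0 + x                        # wave speed varies from 1 to 2
dt = 0.25 * dx / c.max()           # CFL-stable time step with a safety margin

u = np.exp(-200.0 * (x - 0.5) ** 2)    # initial Gaussian displacement
u_old = u.copy()                        # zero initial velocity
c2 = c ** 2
for _ in range(500):
    lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    u_new = u.copy()
    u_new[1:-1] = 2 * u[1:-1] - u_old[1:-1] + dt ** 2 * c2[1:-1] * lap
    u_old, u = u, u_new                 # boundary nodes stay clamped at zero

print(np.isfinite(u).all(), float(np.abs(u).max()))
```

With the CFL condition satisfied, the pulse propagates and reflects without blowing up, which is what makes such explicit updates usable in real-time graphics loops.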

  11. Cortical Action Potential Backpropagation Explains Spike Threshold Variability and Rapid-Onset Kinetics

    PubMed Central

    Yu, Yuguo; Shu, Yousheng; McCormick, David A.

    2008-01-01

Neocortical action potential responses in vivo are characterized by considerable threshold variability, and thus timing and rate variability, even under seemingly identical conditions. This finding suggests that cortical ensembles are required for accurate sensorimotor integration and processing. Intracellularly, trial-to-trial variability results not only from variation in synaptic activities, but also in the transformation of these into patterns of action potentials. Through simultaneous axonal and somatic recordings and computational simulations, we demonstrate that the initiation of action potentials in the axon initial segment followed by backpropagation of these spikes throughout the neuron results in a distortion of the relationship between the timing of synaptic and action potential events. In addition, this backpropagation also results in an unusually high rate of rise of membrane potential at the foot of the action potential. The distortion of the relationship between the amplitude time course of synaptic inputs and action potential output caused by spike backpropagation results in the appearance of high spike threshold variability at the level of the soma. At the point of spike initiation, the axon initial segment, threshold variability is considerably less. Our results indicate that spike generation in cortical neurons is largely as expected by Hodgkin-Huxley theory and is more precise than previously thought. PMID:18632930

  12. Heart Rate Variability: New Perspectives on Physiological Mechanisms, Assessment of Self-regulatory Capacity, and Health risk

    PubMed Central

    Shaffer, Fred

    2015-01-01

    Heart rate variability, the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operates on different time scales to adapt to environmental and psychological challenges. This article briefly reviews neural regulation of the heart and offers some new perspectives on mechanisms underlying the very low frequency rhythm of heart rate variability. Interpretation of heart rate variability rhythms in the context of health risk and physiological and psychological self-regulatory capacity assessment is discussed. The cardiovascular regulatory centers in the spinal cord and medulla integrate inputs from higher brain centers with afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. We also discuss the intrinsic cardiac nervous system and the heart-brain connection pathways, through which afferent information can influence activity in the subcortical, frontocortical, and motor cortex areas. In addition, the use of real-time HRV feedback to increase self-regulatory capacity is reviewed. We conclude that the heart's rhythms are characterized by both complexity and stability over longer time scales that reflect both physiological and psychological functional status of these internal self-regulatory systems. PMID:25694852
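Time-domain heart rate variability, as discussed here, is computed from the intervals between adjacent heartbeats. The snippet below computes two standard metrics, SDNN (overall variability) and RMSSD (beat-to-beat variability), from a short, invented RR-interval series.

```python
# Standard time-domain HRV metrics from RR intervals in ms; data are invented.
rr = [812, 845, 790, 860, 830, 805, 870, 825, 795, 850]

n = len(rr)
mean_rr = sum(rr) / n
# SDNN: standard deviation of all intervals (overall variability).
sdnn = (sum((r - mean_rr) ** 2 for r in rr) / (n - 1)) ** 0.5
# RMSSD: root mean square of successive differences (short-term, vagally
# mediated variability).
diffs = [b - a for a, b in zip(rr, rr[1:])]
rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5

print(round(sdnn, 1), round(rmssd, 1))
```

Real-time HRV biofeedback systems of the kind reviewed here track metrics like these over a sliding window of recent beats.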

  13. Heart Rate Variability: New Perspectives on Physiological Mechanisms, Assessment of Self-regulatory Capacity, and Health risk.

    PubMed

    McCraty, Rollin; Shaffer, Fred

    2015-01-01

    Heart rate variability, the change in the time intervals between adjacent heartbeats, is an emergent property of interdependent regulatory systems that operates on different time scales to adapt to environmental and psychological challenges. This article briefly reviews neural regulation of the heart and offers some new perspectives on mechanisms underlying the very low frequency rhythm of heart rate variability. Interpretation of heart rate variability rhythms in the context of health risk and physiological and psychological self-regulatory capacity assessment is discussed. The cardiovascular regulatory centers in the spinal cord and medulla integrate inputs from higher brain centers with afferent cardiovascular system inputs to adjust heart rate and blood pressure via sympathetic and parasympathetic efferent pathways. We also discuss the intrinsic cardiac nervous system and the heart-brain connection pathways, through which afferent information can influence activity in the subcortical, frontocortical, and motor cortex areas. In addition, the use of real-time HRV feedback to increase self-regulatory capacity is reviewed. We conclude that the heart's rhythms are characterized by both complexity and stability over longer time scales that reflect both physiological and psychological functional status of these internal self-regulatory systems.

  14. An integrated, indicator framework for assessing large-scale variations and change in seasonal timing and phenology (Invited)

    NASA Astrophysics Data System (ADS)

    Betancourt, J. L.; Weltzin, J. F.

    2013-12-01

As part of an effort to develop an Indicator System for the National Climate Assessment (NCA), the Seasonality and Phenology Indicators Technical Team (SPITT) proposed an integrated, continental-scale framework for understanding and tracking seasonal timing in physical and biological systems. The framework shares several metrics with the EPA's National Climate Change Indicators. The SPITT framework includes a comprehensive suite of national indicators to track conditions, anticipate vulnerabilities, and facilitate intervention or adaptation to the extent possible. Observed, modeled, and forecasted seasonal timing metrics can inform a wide spectrum of decisions on federal, state, and private lands in the U.S., and will be pivotal for international mitigation and adaptation efforts. Humans use calendars both to understand the natural world and to plan their lives. Although the seasons are familiar concepts, we lack a comprehensive understanding of how variability arises in the timing of seasonal transitions in the atmosphere, and how variability and change translate and propagate through hydrological, ecological and human systems. For example, the contributions of greenhouse warming and natural variability to secular trends in seasonal timing are difficult to disentangle, including earlier spring transitions from winter (strong westerlies) to summer (weak easterlies) patterns of atmospheric circulation; shifts in annual phasing of daily temperature means and extremes; advanced timing of snow and ice melt and soil thaw at higher latitudes and elevations; and earlier start and longer duration of the growing and fire seasons. The SPITT framework aims to relate spatiotemporal variability in surface climate to (1) large-scale modes of natural climate variability and greenhouse gas-driven climatic change, and (2) spatiotemporal variability in hydrological, ecological and human responses and impacts. 
The hierarchical framework relies on ground and satellite observations, and includes metrics of surface climate seasonality, seasonality of snow and ice, land surface phenology, ecosystem disturbance seasonality, and organismal phenology. Recommended metrics met the following requirements: (a) they are easily measured by day-of-year, number of days, or, in the case of species migrations, by the latitude of the observation on a given date; (b) they are observed or can be calculated across a high density of locations in many different regions of the U.S.; and (c) they unambiguously describe both spatial and temporal variability and trends in seasonal timing that are climatically driven. The SPITT framework is meant to provide climatic and strategic guidance for the growth and application of seasonal timing and phenological monitoring efforts. The hope is that additional national indicators based on observed phenology, or evidence-based algorithms calibrated with observational data, will evolve with sustained and broad-scale monitoring of climatically sensitive species and ecological processes.

  15. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    DOE PAGES

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-02

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. However, how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer 'how far is far enough,' we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25–2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. 
For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.
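The filter-then-correlate procedure can be sketched with synthetic data: two sites share a slow, large-scale signal but have independent fast fluctuations, so removing the shared slow component with a simple running-mean high-pass filter drives their correlation down, as the abstract describes for short time scales. The series and filter window below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
t = np.arange(n)

# Two sites share a slow "synoptic-scale" signal; fast fluctuations are
# independent at each site (all series are synthetic).
slow = np.sin(2 * np.pi * t / 2000.0)
site1 = slow + 0.3 * rng.standard_normal(n)
site2 = slow + 0.3 * rng.standard_normal(n)

def high_pass(v, window):
    """Crude high-pass filter: subtract a running mean of the given window."""
    kernel = np.ones(window) / window
    return v - np.convolve(v, kernel, mode="same")

corr_raw = np.corrcoef(site1, site2)[0, 1]
hp1, hp2 = high_pass(site1, 200), high_pass(site2, 200)
corr_hp = np.corrcoef(hp1, hp2)[0, 1]

# The shared slow signal dominates the raw correlation; after high-pass
# filtering, only the independent fast fluctuations remain.
print(round(corr_raw, 2), round(corr_hp, 2))
```

Repeating this over many filter cut-offs and many station pairs at different separations, as the study does, yields the correlation length as a function of time scale.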

  16. Variability of interconnected wind plants: correlation length and its dependence on variability time scale

    NASA Astrophysics Data System (ADS)

    St. Martin, Clara M.; Lundquist, Julie K.; Handschy, Mark A.

    2015-04-01

    The variability in wind-generated electricity complicates the integration of this electricity into the electrical grid. This challenge steepens as the percentage of renewably-generated electricity on the grid grows, but variability can be reduced by exploiting geographic diversity: correlations between wind farms decrease as the separation between wind farms increases. But how far is far enough to reduce variability? Grid management requires balancing production on various timescales, and so consideration of correlations reflective of those timescales can guide the appropriate spatial scales of geographic diversity grid integration. To answer ‘how far is far enough,’ we investigate the universal behavior of geographic diversity by exploring wind-speed correlations using three extensive datasets spanning continents, durations and time resolution. First, one year of five-minute wind power generation data from 29 wind farms span 1270 km across Southeastern Australia (Australian Energy Market Operator). Second, 45 years of hourly 10 m wind-speeds from 117 stations span 5000 km across Canada (National Climate Data Archive of Environment Canada). Finally, four years of five-minute wind-speeds from 14 meteorological towers span 350 km of the Northwestern US (Bonneville Power Administration). After removing diurnal cycles and seasonal trends from all datasets, we investigate dependence of correlation length on time scale by digitally high-pass filtering the data on 0.25-2000 h timescales and calculating correlations between sites for each high-pass filter cut-off. Correlations fall to zero with increasing station separation distance, but the characteristic correlation length varies with the high-pass filter applied: the higher the cut-off frequency, the smaller the station separation required to achieve de-correlation. Remarkable similarities between these three datasets reveal behavior that, if universal, could be particularly useful for grid management. 
For high-pass filter time constants shorter than about τ = 38 h, all datasets exhibit a correlation length ξ that falls at least as fast as τ⁻¹. Since the inter-site separation needed for statistical independence falls for shorter time scales, higher-rate fluctuations can be effectively smoothed by aggregating wind plants over areas smaller than otherwise estimated.

  17. Integrating Software Modules For Robot Control

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.

    1993-01-01

    Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor realtime computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.

  18. Construction of optimum controls and trajectories of motion of the center of masses of a spacecraft equipped with the solar sail and low-thrust engine, using quaternions and Kustaanheimo-Stiefel variables

    NASA Astrophysics Data System (ADS)

    Sapunkov, Ya. G.; Chelnokov, Yu. N.

    2014-11-01

    The problem of optimum rendezvous of a controllable spacecraft (SC) with an uncontrollable spacecraft, moving over a Keplerian elliptic orbit in the gravitational field of the Sun, is considered. Control of the SC is performed using a solar sail and low-thrust engine. For solving the problem, the regular quaternion equations of the two-body problem with the Kustaanheimo-Stiefel variables and the Pontryagin maximum principle are used. The combined integral quality functional, which characterizes energy consumption for controllable SC transition from an initial to final state and the time spent for this transition, is used as a minimized functional. The differential boundary-value optimization problems are formulated, and their first integrals are found. Examples of numerical solution of problems are presented. The paper develops the application [1-6] of quaternion regular equations with the Kustaanheimo-Stiefel variables in the space flight mechanics.
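The Kustaanheimo-Stiefel (KS) regularization used here maps a 4-vector u to the Cartesian position x through a quadratic transformation whose defining property is |x| = |u|². A minimal numerical check of that property (note that sign conventions for the map vary across the literature):

```python
# Kustaanheimo-Stiefel map from the 4-vector u to the position x.
# Sign conventions differ between references; this is one common choice.

def ks_position(u1, u2, u3, u4):
    x1 = u1 ** 2 - u2 ** 2 - u3 ** 2 + u4 ** 2
    x2 = 2.0 * (u1 * u2 - u3 * u4)
    x3 = 2.0 * (u1 * u3 + u2 * u4)
    return x1, x2, x3

u = (0.3, -1.2, 0.7, 2.1)            # arbitrary test point
x = ks_position(*u)
r = sum(xi ** 2 for xi in x) ** 0.5  # |x|, the orbital radius
u_sq = sum(ui ** 2 for ui in u)      # |u|^2

# Defining property of the KS map: the orbital radius equals |u|^2.
print(abs(r - u_sq) < 1e-9)
```

It is this property, together with a fictitious-time change of variable, that turns the singular Kepler problem into regular (oscillator-like) equations suitable for the boundary-value optimization described above.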

  19. When Can Information from Ordinal Scale Variables Be Integrated?

    ERIC Educational Resources Information Center

    Kemp, Simon; Grace, Randolph C.

    2010-01-01

    Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…

  20. Functional entropy variables: A new methodology for deriving thermodynamically consistent algorithms for complex fluids, with particular reference to the isothermal Navier–Stokes–Korteweg equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.

    2013-09-01

We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
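The verification strategy mentioned, manufactured solutions plus comparison of coarse and refined solutions, can be illustrated generically. The snippet below checks second-order temporal accuracy of an implicit midpoint step on a scalar model problem; it is a sketch of the technique, not the authors' Navier–Stokes–Korteweg scheme.

```python
import math

# Method-of-manufactured-solutions check of a second-order time integrator
# (implicit midpoint) on du/dt = -u + g(t), with g manufactured so that
# u_exact(t) = sin(t):  g(t) = cos(t) + sin(t).

def solve(dt, t_end=1.0):
    u, t = 0.0, 0.0
    for _ in range(round(t_end / dt)):
        tm = t + 0.5 * dt
        g = math.cos(tm) + math.sin(tm)
        # Implicit midpoint: u1 = u + dt*(-(u + u1)/2 + g), solved for u1.
        u = ((1.0 - 0.5 * dt) * u + dt * g) / (1.0 + 0.5 * dt)
        t += dt
    return u

err1 = abs(solve(0.02) - math.sin(1.0))   # coarse step
err2 = abs(solve(0.01) - math.sin(1.0))   # refined step
ratio = err1 / err2
print(round(ratio, 2))   # ≈ 4, confirming second-order accuracy
```

Halving the step size reduces the error by a factor of about four, which is the signature of a second-order scheme; the paper applies the same logic in space and time against manufactured and overkill solutions.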

  1. Lateral orbitofrontal cortex anticipates choices and integrates prior with current information

    PubMed Central

    Nogueira, Ramon; Abolafia, Juan M.; Drugowitsch, Jan; Balaguer-Ballester, Emili; Sanchez-Vives, Maria V.; Moreno-Bote, Rubén

    2017-01-01

    Adaptive behavior requires integrating prior with current information to anticipate upcoming events. Brain structures related to this computation should bring relevant signals from the recent past into the present. Here we report that rats can integrate the most recent prior information with sensory information, thereby improving behavior on a perceptual decision-making task with outcome-dependent past trial history. We find that anticipatory signals in the orbitofrontal cortex about upcoming choice increase over time and are even present before stimulus onset. These neuronal signals also represent the stimulus and relevant second-order combinations of past state variables. The encoding of choice, stimulus and second-order past state variables resides, up to movement onset, in overlapping populations. The neuronal representation of choice before stimulus onset and its build-up once the stimulus is presented suggest that orbitofrontal cortex plays a role in transforming immediate prior and stimulus information into choices using a compact state-space representation. PMID:28337990

  2. An Integrated Theatre Production for School Nutrition Promotion Program

    PubMed Central

    Bush, Robert; Box, Selina; McCallum, David; Khalil, Stephanie

    2018-01-01

    In the context of stubbornly high childhood obesity rates, health promotion activities in schools provide a potential avenue to improve children’s nutritional behaviours. Theatre production has a rich history as a health behaviour promotion strategy but lacks sound, outcome-based evaluation. This study evaluated the effect of an integrated, two-part, place-based theatre performance program with 212 students in five schools in a regional urban and semi-rural area. The program included a theatre performance and a healthy eating competition. A brief survey assessed student healthy eating knowledge and attitudes at three time points. Nutrition behaviour was measured by scoring the contents of children’s lunch boxes before, during and up to six weeks after the intervention. Statistical analysis tested change over time on five variables (Knowledge, Attitude, Sometimes foods, Everyday foods, Overall lunch box score). Results showed that both components of the integrated program improved nutrition knowledge and that the theatre performance improved children’s healthy eating attitudes. All three lunch box scores peaked after the integrated program and remained significantly higher than baseline at 4–6 weeks follow-up. Interaction effects were identified for school catchment area on four of the five dependent variables. Evaluation of this integrated theatre production program indicates the potential benefit of taking a “super-setting” approach. It demonstrates an effect from students taking home information they had learned and incorporating it into lunch box preparation. It also showed consistent effects for school geographical catchment. This study suggests that, with careful, theory-based design, theatre productions in schools can improve student nutritional activities. PMID:29498690

  3. An Integrated Theatre Production for School Nutrition Promotion Program.

    PubMed

    Bush, Robert; Capra, Sandra; Box, Selina; McCallum, David; Khalil, Stephanie; Ostini, Remo

    2018-03-02

    In the context of stubbornly high childhood obesity rates, health promotion activities in schools provide a potential avenue to improve children's nutritional behaviours. Theatre production has a rich history as a health behaviour promotion strategy but lacks sound, outcome-based evaluation. This study evaluated the effect of an integrated, two-part, place-based theatre performance program with 212 students in five schools in a regional urban and semi-rural area. The program included a theatre performance and a healthy eating competition. A brief survey assessed student healthy eating knowledge and attitudes at three time points. Nutrition behaviour was measured by scoring the contents of children's lunch boxes before, during and up to six weeks after the intervention. Statistical analysis tested change over time on five variables (Knowledge, Attitude, Sometimes foods, Everyday foods, Overall lunch box score). Results showed that both components of the integrated program improved nutrition knowledge and that the theatre performance improved children's healthy eating attitudes. All three lunch box scores peaked after the integrated program and remained significantly higher than baseline at 4-6 weeks follow-up. Interaction effects were identified for school catchment area on four of the five dependent variables. Evaluation of this integrated theatre production program indicates the potential benefit of taking a "super-setting" approach. It demonstrates an effect from students taking home information they had learned and incorporating it into lunch box preparation. It also showed consistent effects for school geographical catchment. This study suggests that, with careful, theory-based design, theatre productions in schools can improve student nutritional activities.

  4. Volitional Aspects of Multimedia Learning

    ERIC Educational Resources Information Center

    Deimann, Markus; Keller, John M.

    2006-01-01

    Research on multimedia learning has produced a vast body of findings which, however, are not yet being integrated into a comprehensive framework of reference. For a considerable time, cognitive centered approaches have dominated the literature. Although motivational variables are now being taken into account, there is still a large gap in regard…

  5. A Mixed Model for Real-Time, Interactive Simulation of a Cable Passing Through Several Pulleys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Fernandez, Ignacio; Pla-Castells, Marta; Martinez-Dura, Rafael J.

    2007-09-01

    A model of a cable and pulleys is presented that can be used in Real Time Computer Graphics applications. The model is formulated by the coupling of a damped spring and a variable coefficient wave equation, and can be integrated in more complex mechanical models of lift systems, such as cranes, elevators, etc. with a high degree of interactivity.

  6. Application of Different Statistical Techniques in Integrated Logistics Support of the International Space Station Alpha

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

The process used to predict the values of maintenance time-dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle cost and spares calculations. A minor deviation in the values of maintenance time-dependent variable parameters such as MTBF over time will have a significant impact on logistics resource demands, International Space Station availability, and maintenance support costs. It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.
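The sensitivity of spares calculations to the MTBF parameter can be illustrated with a standard Poisson spares model; the operating hours, MTBF values, and confidence level below are invented and are not ISS data.

```python
import math

# Standard Poisson spares model: expected failures lam = operating_hours/MTBF;
# stock the smallest s such that P(failures <= s) >= the target confidence.
def spares_needed(operating_hours, mtbf, confidence=0.95):
    lam = operating_hours / mtbf
    cum, k, term = 0.0, 0, math.exp(-lam)
    while True:
        cum += term                 # accumulate the Poisson pmf
        if cum >= confidence:
            return k
        k += 1
        term *= lam / k

# A deviation in the predicted MTBF directly changes the spares demand:
print(spares_needed(20000, 10000))  # expected failures lam = 2
print(spares_needed(20000, 5000))   # expected failures lam = 4
```

Halving the assumed MTBF here raises the 95%-confidence spares level from 5 to 8 units, which is the kind of uncontrolled deviation in ILS outputs the abstract warns about.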

  7. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

Dynamic optimisation problems with characteristic times arise widely in many areas and are an active frontier of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived; it results in a linear combination of the state at the LG points in the subdomain, which avoids the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and the results are compared in detail with methods reported in the literature. The results show the effectiveness of the proposed method.
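The terminal-state formula described above, a linear combination over the LG points that avoids explicit nonlinear integration, reduces in the simplest case to Gauss-Legendre quadrature of the state derivative. A minimal illustration on a known integrand (this is a sketch of the quadrature idea, not the paper's full collocation scheme):

```python
import numpy as np

# Terminal state as a weighted combination over Legendre-Gauss points:
#   x(tf) = x(t0) + (tf - t0)/2 * sum_i w_i * f(tau_i),
# the Gauss quadrature underlying LG pseudospectral methods. Here the
# dynamics are dx/dt = cos(t) with x(t0) = 0, so exactly x(tf) = sin(tf).
t0, tf = 0.0, 2.0
nodes, weights = np.polynomial.legendre.leggauss(5)   # points on [-1, 1]
t = 0.5 * (tf - t0) * nodes + 0.5 * (tf + t0)         # map onto [t0, tf]
x_tf = 0.0 + 0.5 * (tf - t0) * np.sum(weights * np.cos(t))

print(abs(x_tf - np.sin(tf)) < 1e-8)
```

With only five collocation points the terminal state is recovered to near machine precision for this smooth integrand, which is the spectral accuracy that motivates pseudospectral methods.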

  8. Sleep Variability in Adolescence is Associated with Altered Brain Development

    PubMed Central

    Telzer, Eva H.; Goldenberg, Diane; Fuligni, Andrew J.; Lieberman, Matthew D.; Galvan, Adriana

    2015-01-01

Despite the known importance of sleep for brain development, and the sharp increase in poor sleep during adolescence, we know relatively little about how sleep impacts the developing brain. We present the first longitudinal study to examine how sleep during adolescence is associated with white matter integrity. We find that greater variability in sleep duration one year prior to a DTI scan is associated with lower white matter integrity, above and beyond the effects of sleep duration and variability in bedtime, whereas sleep variability a few months prior to the scan is not associated with white matter integrity. Thus, variability in sleep duration during adolescence may produce long-term impairments in the developing brain. White matter integrity should be increasing during adolescence, and so the effects of sleep variability are directly at odds with normative developmental trends. PMID:26093368

  9. A Search for Giant Planet Companions to T Tauri Stars

    DTIC Science & Technology

    2012-12-20

    yielded a spectral resolving power of R ≡ (λ/Δλ) ≈ 60,000. Integration times were typically 1800 s (depending on conditions) and typical seeing was ∼2″ ... wavelength regions. This suggests different physical mechanisms underlying the optical and the K-band variability. Key words: planets and satellites ...

  10. Integrated modelling of crop production and nitrate leaching with the Daisy model.

    PubMed

    Manevski, Kiril; Børgesen, Christen D; Li, Xiaoxin; Andersen, Mathias N; Abrahamsen, Per; Hu, Chunsheng; Hansen, Søren

    2016-01-01

    An integrated modelling strategy was designed and applied to the Soil-Vegetation-Atmosphere Transfer model Daisy for simulation of crop production and nitrate leaching under a pedo-climatic and agronomic environment different from that of the model's original parameterisation. The points of significance and caution in the strategy are: • Model preparation should include detailed field data, owing to the high complexity of the soil and crop processes simulated with a process-based model, and should reflect the study objectives. Including interactions between parameters in a sensitivity analysis better accounts for the impact of measured variables on outputs. • Model evaluation on several independent data sets increases robustness, at least on coarser time scales such as month or year, and produces a valuable platform for adapting the model to new crops or improving the existing parameter set. On a daily time scale, validation of highly dynamic variables such as soil water transport remains challenging. • Model application is demonstrated with relevance for scientists and regional managers. The integrated modelling strategy is applicable to other process-based models similar to Daisy. It is envisaged that the strategy establishes model capability as a useful research and decision-making tool, and that it increases knowledge transferability, reproducibility and traceability.

  11. Bayesian Maximum Entropy Integration of Ozone Observations and Model Predictions: A National Application.

    PubMed

    Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William

    2016-04-19

    To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce a novel Regionalized Air Quality Model Performance (RAMP) approach that integrates chemical transport model (CTM) predictions with available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and non-homoscedastic relation between air pollution observations and CTM predictions and, for the first time, accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario (ozone observations only), the RAMP scenario, and a Constant Air Quality Model Performance (CAMP) scenario. We show that, by accounting for the spatial and temporal variability in model performance, the RAMP approach extracts more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the percentage increase in R² is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.
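
The validation metric can be made concrete with a minimal coefficient-of-determination sketch; the ozone values below are hypothetical, not the study's data:

```python
import numpy as np

def r_squared(obs, est):
    """Coefficient of determination between observations and re-estimates:
    1 - SS_residual / SS_total."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    ss_res = np.sum((obs - est) ** 2)
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical DM8A ozone values (ppb): perfect re-estimates give R^2 = 1,
# noisy ones give R^2 < 1.
obs = np.array([41.0, 55.0, 63.0, 48.0, 70.0])
r2_perfect = r_squared(obs, obs)
r2_noisy = r_squared(obs, obs + np.array([3.0, -2.0, 4.0, -1.0, 2.0]))
```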

  12. Integrating Mediators and Moderators in Research Design

    ERIC Educational Resources Information Center

    MacKinnon, David P.

    2011-01-01

    The purpose of this article is to describe mediating variables and moderating variables and provide reasons for integrating them in outcome studies. Separate sections describe examples of moderating and mediating variables and the simplest statistical model for investigating each variable. The strengths and limitations of incorporating mediating…

  13. Force-Sensing Enhanced Simulation Environment (ForSense) for laparoscopic surgery training and assessment.

    PubMed

    Cundy, Thomas P; Thangaraj, Evelyn; Rafii-Tari, Hedyeh; Payne, Christopher J; Azzie, Georges; Sodergren, Mikael H; Yang, Guang-Zhong; Darzi, Ara

    2015-04-01

    Excessive or inappropriate tissue interaction force during laparoscopic surgery is a recognized contributor to surgical error, especially for robotic surgery. Measurement of force at the tool-tissue interface is, therefore, a clinically relevant skill assessment variable that may improve effectiveness of surgical simulation. Popular box trainer simulators lack the necessary technology to measure force. The aim of this study was to develop a force sensing unit that may be integrated easily with existing box trainer simulators and to (1) validate multiple force variables as objective measurements of laparoscopic skill, and (2) determine concurrent validity of a revised scoring metric. A base plate unit sensitized to a force transducer was retrofitted to a box trainer. Participants of 3 different levels of operative experience performed 5 repetitions of a peg transfer and suture task. Multiple outcome variables of force were assessed as well as a revised scoring metric that incorporated a penalty for force error. Mean, maximum, and overall magnitudes of force were significantly different among the 3 levels of experience, as well as force error. Experts were found to exert the least force and fastest task completion times, and vice versa for novices. Overall magnitude of force was the variable most correlated with experience level and task completion time. The revised scoring metric had similar predictive strength for experience level compared with the standard scoring metric. Current box trainer simulators can be adapted for enhanced objective measurements of skill involving force sensing. These outcomes are significantly influenced by level of expertise and are relevant to operative safety in laparoscopic surgery. Conventional proficiency standards that focus predominantly on task completion time may be integrated with force-based outcomes to be more accurately reflective of skill quality. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. An energy-stable method for solving the incompressible Navier-Stokes equations with non-slip boundary condition

    NASA Astrophysics Data System (ADS)

    Lee, Byungjoon; Min, Chohong

    2018-05-01

    We introduce a stable method for solving the incompressible Navier-Stokes equations with variable density and viscosity. Our method is stable in the sense that it does not increase the total energy of the dynamics, that is, the sum of kinetic and potential energy. Instead of velocity, a new state variable is chosen so that the kinetic energy is given by the L2 norm of the new variable. The Navier-Stokes equations are rephrased with respect to the new variable, and a stable time discretization for the rephrased equations is presented. Taking the incompressibility constraint into account on the Marker-And-Cell (MAC) grid, we present a modified Lax-Friedrichs method that is L2 stable. Utilizing discrete integration by parts on the MAC grid together with the modified Lax-Friedrichs method, the time discretization is made fully discrete. An explicit CFL condition for the stability of the full discretization is given and mathematically proved.
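
The L2-stability property can be sketched with the classical Lax-Friedrichs scheme on a 1-D periodic advection problem, a much simpler stand-in for the paper's modified MAC-grid scheme: under the CFL condition the discrete norm never grows.

```python
import numpy as np

def lax_friedrichs_step(u, c, dt, dx):
    """One Lax-Friedrichs step for u_t + c u_x = 0 on a periodic grid.
    The averaging of neighbors makes the update L2-stable whenever
    the CFL number |c| dt / dx is at most 1."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - 0.5 * c * dt / dx * (up - um)

n, c = 128, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                      # CFL number 0.5
x = np.arange(n) * dx
u = np.sin(2 * np.pi * x)
norms = [np.sqrt(dx) * np.linalg.norm(u)]
for _ in range(100):
    u = lax_friedrichs_step(u, c, dt, dx)
    norms.append(np.sqrt(dx) * np.linalg.norm(u))
```

In Fourier terms the amplification factor satisfies |g|² = cos²(kΔx) + ν² sin²(kΔx) ≤ 1 for CFL number ν ≤ 1, so every mode, and hence the L2 norm, is non-increasing.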

  15. Decadal variability of the Tropical Atlantic Ocean Surface Temperature in shipboard measurements and in a Global Ocean-Atmosphere model

    NASA Technical Reports Server (NTRS)

    Mehta, Vikram M.; Delworth, Thomas

    1995-01-01

    Sea surface temperature (SST) variability was investigated in a 200-yr integration of a global model of the coupled oceanic and atmospheric general circulations developed at the Geophysical Fluid Dynamics Laboratory (GFDL). The second 100 yr of SST in the coupled model's tropical Atlantic region were analyzed with a variety of techniques. Analyses of SST time series, averaged over approximately the same subregions as the Global Ocean Surface Temperature Atlas (GOSTA) time series, showed that the GFDL SST anomalies also undergo pronounced quasi-oscillatory decadal and multidecadal variability but at somewhat shorter timescales than the GOSTA SST anomalies. Further analyses of the horizontal structures of the decadal timescale variability in the GFDL coupled model showed the existence of two types of variability in general agreement with results of the GOSTA SST time series analyses. One type, characterized by timescales between 8 and 11 yr, has high spatial coherence within each hemisphere but not between the two hemispheres of the tropical Atlantic. A second type, characterized by timescales between 12 and 20 yr, has high spatial coherence between the two hemispheres. The second type of variability is considerably weaker than the first. As in the GOSTA time series, the multidecadal variability in the GFDL SST time series has approximately opposite phases between the tropical North and South Atlantic Oceans. Empirical orthogonal function analyses of the tropical Atlantic SST anomalies revealed a north-south bipolar pattern as the dominant pattern of decadal variability. It is suggested that the bipolar pattern can be interpreted as decadal variability of the interhemispheric gradient of SST anomalies. 
The decadal and multidecadal variability of the tropical Atlantic SST, both in the observations and in the GFDL model, stands out significantly above the background 'red noise' and is coherent within each of the time series, suggesting that specific sets of processes may be responsible for the selection of the decadal and multidecadal timescales. Finally, it must be emphasized that the GFDL coupled ocean-atmosphere model generates the decadal and multidecadal variability without any externally applied forcing, solar or lunar, at those timescales.
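
The empirical orthogonal function (EOF) step can be sketched with an SVD on a synthetic anomaly field (illustrative data only, not GFDL output): the leading right singular vector recovers a prescribed north-south bipolar pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SST anomalies (time x space): a north-south bipolar pattern
# modulated by a slow oscillation, plus weak noise.
nt, nx = 200, 40
pattern = np.concatenate([np.ones(nx // 2), -np.ones(nx // 2)])  # bipolar
t = np.arange(nt)
field = np.outer(np.sin(2 * np.pi * t / 50.0), pattern)
field += 0.1 * rng.standard_normal((nt, nx))

# EOFs are the right singular vectors of the anomaly matrix.
anoms = field - field.mean(axis=0)
_, s, vt = np.linalg.svd(anoms, full_matrices=False)
eof1 = vt[0]                               # leading spatial pattern
explained = s[0] ** 2 / np.sum(s ** 2)     # variance fraction of EOF 1
```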

  16. Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR

    USGS Publications Warehouse

    Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan

    2010-01-01

    Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.
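
The proposed per-lot standard curve can be sketched as a linear fit of Ct against log10 concentration, with the slope converted to amplification efficiency; the dilution-series values below are hypothetical:

```python
import numpy as np

def standard_curve(log10_conc, ct):
    """Fit Ct = slope * log10(concentration) + intercept and report the
    amplification efficiency E = 10**(-1/slope) - 1 (E = 1 means 100%,
    i.e. template doubles every cycle)."""
    slope, intercept = np.polyfit(log10_conc, ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

# Hypothetical dilution series for one reagent lot: a perfectly efficient
# assay has slope -3.32 (one cycle gained per 10-fold dilution is 3.32 cycles).
logc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ct = -3.32 * logc + 38.0
slope, intercept, eff = standard_curve(logc, ct)
```

Comparing slope and efficiency across reagent lots quantifies the lot-to-lot variation the study recommends tracking.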

  17. Variable solar irradiance as a plausible agent for multidecadal variations in the Arctic-wide surface air temperature record of the past 130 years

    NASA Astrophysics Data System (ADS)

    Soon, Willie W.-H.

    2005-08-01

    This letter offers new evidence motivating a more serious consideration of the potential Arctic temperature responses as a consequence of the decadal, multidecadal and longer-term persistent forcing by the ever-changing solar irradiance both in terms of total solar irradiance (TSI, i.e., integrated over all wavelengths) and the related UV irradiance. The support for such a solar modulator can be minimally derived from the large (>75%) explained variance for the decadally-smoothed Arctic surface air temperatures (SATs) by TSI and from the time-frequency structures of the TSI and Arctic SAT variability as examined by wavelet analyses. The reconstructed Arctic SAT time series based on the inverse wavelet transform, which includes decadal (5-15 years) and multidecadal (40-80 years) variations and a longer-term trend, contains nonstationary but persistent features that are highly correlated with the Sun's intrinsic magnetic variability especially on multidecadal time scales.
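
The decadal-smoothing-plus-correlation step can be sketched on synthetic series standing in for TSI and Arctic SAT; the >75% explained variance quoted above refers to the real records, not to this toy:

```python
import numpy as np

def decadal_smooth(x, window=11):
    """Centered moving average (window in years), a simple stand-in for
    the decadal smoothing applied before correlating the two series."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode='valid')

rng = np.random.default_rng(1)
years = np.arange(1870, 2000)
tsi = np.sin(2 * np.pi * years / 60.0)               # synthetic multidecadal forcing
sat = tsi + 0.5 * rng.standard_normal(years.size)    # forced response + weather noise

r = np.corrcoef(decadal_smooth(tsi), decadal_smooth(sat))[0, 1]
explained_variance = r ** 2
```

Smoothing suppresses the interannual noise far more than the multidecadal signal, which is why the decadally smoothed correlation is high even when the raw series are noisy.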

  18. Integrated Response Time Evaluation Methodology for the Nuclear Safety Instrumentation System

    NASA Astrophysics Data System (ADS)

    Lee, Chang Jae; Yun, Jae Hee

    2017-06-01

    Safety analysis for a nuclear power plant establishes not only an analytical limit (AL) in terms of a measured or calculated variable but also an analytical response time (ART) required to complete protective action after the AL is reached. If the two constraints are met, the safety limit selected to maintain the integrity of physical barriers used for preventing uncontrolled radioactivity release will not be exceeded during anticipated operational occurrences and postulated accidents. Setpoint determination methodologies have actively been developed to ensure that the protective action is initiated before the process conditions reach the AL. However, regarding the ART for a nuclear safety instrumentation system, an integrated evaluation methodology considering the whole design process has not been systematically studied. In order to assure the safety of nuclear power plants, this paper proposes a systematic and integrated response time evaluation methodology that covers safety analyses, system designs, response time analyses, and response time tests. This methodology is applied to safety instrumentation systems for the advanced power reactor 1400 and the optimized power reactor 1000 nuclear power plants in South Korea. The quantitative evaluation results are provided herein. The evaluation results using the proposed methodology demonstrate that the nuclear safety instrumentation systems fully satisfy corresponding requirements of the ART.
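
The response-time side of the evaluation can be illustrated with a toy budget check; the delay names and the ART value below are assumptions for illustration only, not figures from the APR1400 or OPR1000 analyses:

```python
# Hypothetical response-time budget: the summed delays through the
# instrumentation chain, from the process variable reaching the analytical
# limit (AL) to completion of the protective action, must not exceed the
# analytical response time (ART) from the safety analysis.

def total_response_time(delays):
    """Total delay through the instrumentation chain, in seconds."""
    return sum(delays.values())

budget = {
    'sensor_and_transmitter': 0.35,   # all values assumed for illustration
    'signal_processing':      0.10,
    'trip_logic_and_voting':  0.05,
    'breaker_and_actuation':  0.60,
}
art = 1.2                             # assumed analytical response time, seconds
margin = art - total_response_time(budget)
```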

  19. Industrial implementation of spatial variability control by real-time SPC

    NASA Astrophysics Data System (ADS)

    Roule, O.; Pasqualini, F.; Borde, M.

    2016-10-01

    Advanced technology nodes require more and more information to set up wafer processes correctly. The critical dimensions of components shrink following Moore's law, but the intra-wafer dispersion linked to the spatial non-uniformity of tool processes does not decrease in the same proportion. APC (Advanced Process Control) systems are being developed in the wafer fab to automatically adjust and tune wafer processing based on extensive process context information. They can generate and monitor complex intra-wafer process profile corrections between different process steps. This leads us to put spatial variability under control, in real time, with our SPC (Statistical Process Control) system. This paper outlines the architecture of an integrated process control system for 3D shape monitoring implemented in the wafer fab.
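
A minimal sketch of a real-time SPC check on a spatial-variability statistic, using Shewhart-style limits as a generic stand-in for the fab's actual SPC rules; the critical-dimension ranges are hypothetical:

```python
import numpy as np

def control_limits(samples, k=3.0):
    """Shewhart-style limits (mean +/- k sigma) for an intra-wafer
    spatial-variability statistic, e.g. the CD range across a wafer."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return mu - k * sigma, mu + k * sigma

# Hypothetical per-wafer CD ranges (nm) from a stable baseline period:
baseline = np.array([1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0, 1.9, 2.1, 2.0])
lcl, ucl = control_limits(baseline)

# A new wafer's spatial range is checked against the limits in real time:
out_of_control = 3.5 > ucl
```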

  20. An Integrated Approach to Winds, Jets, and State Transitions

    NASA Astrophysics Data System (ADS)

    Neilsen, Joseph

    2017-09-01

    We propose a large multiwavelength campaign (120 ks Chandra HETGS, NuSTAR, INTEGRAL, JVLA/ATCA, Swift, XMM, Gemini) on a black hole transient to study the influence of ionized winds on relativistic jets and state transitions. With a reimagined observing strategy based on new results on integrated RMS variability and a decade of radio/X-ray monitoring, we will search for winds during and after the state transition to test their influence on and track their coevolution with the disk and the jet over the next 2-3 months. Our spectral and timing constraints will provide precise probes of the accretion geometry and accretion/ejection physics.
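
Integrated RMS variability, as used in the observing strategy above, can be sketched in the time domain on a hypothetical light curve; by Parseval's theorem this is equivalent to integrating the power spectrum over frequency:

```python
import numpy as np

def fractional_rms(flux):
    """Integrated fractional RMS variability of a light curve:
    standard deviation divided by the mean flux."""
    flux = np.asarray(flux, float)
    return np.std(flux) / np.mean(flux)

# Hypothetical X-ray light curve: a constant level with a sinusoidal
# modulation of amplitude 10% of the mean.
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
flux = 100.0 + 10.0 * np.sin(2 * np.pi * t)
rms = fractional_rms(flux)   # amplitude/sqrt(2) over the mean
```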

  1. Jovian Northern Ethane Aurora and the Solar Cycle

    NASA Technical Reports Server (NTRS)

    Kostiuk, T.; Livengood, T.; Fast, K.; Buhl, D.; Goldstein, J.; Hewagama, T.

    1999-01-01

    Thermal infrared auroral spectra from Jupiter's north polar region have been collected from 1979 to 1998 in a continuing study of long-term variability in the northern thermal-IR aurora, using C2H6 emission lines near 12 microns as a probe. Data from Voyager 1 and 2 IRIS measurements and ground-based spectral measurements were analyzed using the same model atmosphere to provide a consistent relative comparison. A retrieved equivalent mole fraction was used to compare the observed integrated emission. Short-term (days), medium-term (months) and long-term (years) variability in the ethane emission was observed. The variability of C2H6 emission intensities was compared to Jupiter's seasonal cycle and the solar activity cycle. A positive correlation appears to exist, with significantly greater emission and short-term variability during solar maxima. Observations at 60°N latitude during increased solar activity in 1979, 1989, and most recently in 1998 show up to 5 times brighter integrated line emission of C2H6 near the north polar "hot spot" (150°-210° longitude) than from the north quiescent region. Significantly lower enhancement was observed during periods of lower solar activity in 1982, 1983, 1993, and 1995. Possible sources and mechanisms for the enhancement and variability are discussed.

  2. Enhancing Access to and Use of NASA Earth Sciences Data via CUAHSI-HIS (Hydrologic Information System) and Other Hydrologic Community Tools

    NASA Astrophysics Data System (ADS)

    Rui, H.; Strub, R.; Teng, W. L.; Vollmer, B.; Mocko, D. M.; Maidment, D. R.; Whiteaker, T. L.

    2013-12-01

    The way NASA earth sciences data are typically archived (by time steps, one step per file, often containing multiple variables) is not optimal for their access by the hydrologic community, particularly if the data volume and/or number of data files are large. To enhance the access to and use of these NASA data, the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) adopted two approaches, in a project supported by the NASA ACCESS Program. The first is to optimally reorganize two large hydrological data sets for more efficient access, as time series, and to integrate the time series data (aka 'data rods') into hydrologic community tools, such as CUAHSI-HIS, EPA-BASINS, and Esri-ArcGIS. This effort has thus far resulted in the reorganization and archive (as data rods) of the following variables from the North American and Global Land Data Assimilation Systems (NLDAS and GLDAS, respectively): precipitation, soil moisture, evapotranspiration, runoff, near-surface specific humidity, potential evaporation, soil temperature, near surface air temperature, and near-surface wind. The second approach is to leverage the NASA Simple Subset Wizard (SSW), which was developed to unite data search and subsetters at various NASA EOSDIS data centers into a single, simple, seamless process. Data accessed via SSW are converted to time series before being made available via Web service. Leveraging SSW makes all data accessible via SSW potentially available to HIS users, which increases the number of data sets available as time series beyond those available as data rods. Thus far, a set of selected variables from the NASA Modern Era-Retrospective Analysis for Research and Applications Land Surface (MERRA-Land) data set has been integrated into CUAHSI-HIS, including evaporation, land surface temperature, runoff, soil moisture, soil temperature, precipitation, and transpiration. 
All data integration into these tools has been conducted in collaboration with their respective communities. Specifically, the GES DISC worked closely with the University of Texas (also part of the NASA ACCESS project) to seamlessly integrate these hydrology-related variables into CUAHSI-HIS. With NLDAS, GLDAS, and MERRA data integrated into CUAHSI-HIS, the data can be accessed via HydroDesktop (a windows-based GIS application) along with other existing HIS data, and analyzed with the built-in functions for water-cycle-related applications, research, and data validation. Case studies will be presented on the access to and use of NLDAS, GLDAS, and MERRA data for drought monitoring, Probable Maximum Precipitation (PMP), hurricane rainfall effects on soil moisture and runoff, as well as data inter-comparison. An example of GLDAS in ArcGIS Online, World Soil Moisture, will also be given. Featured with the long time series of GLDAS soil moisture data and powered by Esri-ArcGIS, the World Soil Moisture server allows users to click on any location in the world to view its soil moisture in ASCII or as a time series plot. Full records of the NLDAS, GLDAS, and MERRA data are accessible from NASA GES DISC via Mirador (http://mirador.gsfc.nasa.gov/), SSW (http://disc.sci.gsfc.nasa.gov/SSW/), Giovanni (http://disc.sci.gsfc.nasa.gov/giovanni/overview), OPeNDAP/GDS (http://disc.sci.gsfc.nasa.gov/services), as well as direct FTP.
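
The "data rods" reorganization amounts to pivoting time-major records into variable-major time series; a minimal sketch with illustrative field names and values (not the actual NLDAS/GLDAS file layout):

```python
# Time-step-oriented records: one entry per time, many variables per entry,
# mirroring the original one-file-per-time-step archive layout.
time_step_files = [
    {'time': '2013-07-01', 'precipitation': 1.2, 'soil_moisture': 0.31},
    {'time': '2013-07-02', 'precipitation': 0.0, 'soil_moisture': 0.29},
    {'time': '2013-07-03', 'precipitation': 4.6, 'soil_moisture': 0.33},
]

def to_data_rods(records):
    """Pivot time-major records into variable-major time series, so a
    single variable's full history can be read in one access."""
    rods = {}
    for rec in records:
        for var, value in rec.items():
            if var == 'time':
                continue
            rods.setdefault(var, []).append((rec['time'], value))
    return rods

rods = to_data_rods(time_step_files)
```

The payoff is access cost: a per-point time series read touches one "rod" instead of one file per time step.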

  3. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
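
The Markov-chain Monte Carlo ingredient, with the positivity constraint enforced by rejection, can be sketched on a toy one-parameter posterior; this is an illustration of the sampling step only, not the authors' functional-integral model:

```python
import numpy as np

def metropolis_positive(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler that enforces positivity by
    rejecting proposals at or below zero, the kind of nonanalytic
    constraint that rules out closed-form posterior evaluation."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        if prop > 0:
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy log-posterior: a positive rate parameter with a Gaussian likelihood
# centered at 2.0 (standard deviation 0.3).
samples = metropolis_positive(lambda x: -0.5 * ((x - 2.0) / 0.3) ** 2, x0=1.0)
posterior_mean = samples[1000:].mean()   # discard burn-in
```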

  4. Vector analogues of the Maggi-Rubinowicz theory of edge diffraction

    NASA Astrophysics Data System (ADS)

    Meneghini, R.; Shu, P.; Bay, J.

    1980-10-01

    The Maggi-Rubinowicz technique for scalar and electromagnetic fields is interpreted as a transformation of an integral over an open surface to a line integral around its rim. Maggi-Rubinowicz analogues are found for several vector physical optics representations. For diffraction from a circular aperture, a numerical comparison between these formulations shows the two methods are in agreement. To circumvent certain convergence difficulties in the Maggi-Rubinowicz integrals that occur as the observer approaches the shadow boundary, a variable mesh integration is used. For the examples considered, where the ratio of the aperture diameter to wavelength is about ten, the Maggi-Rubinowicz formulation yields an 8 to 10 fold decrease in computation time relative to the physical optics formulation.
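
The surface-to-rim reduction can be illustrated in a scalar setting with Green's theorem, a simple analogue of (not a substitute for) the vector Maggi-Rubinowicz transformation: a 2-D surface integral over a disk collapses to a 1-D integral around its rim, which is the source of the reported reduction in computation time.

```python
import numpy as np

def rim_area(radius, n=400):
    """Area of a disk computed as the line integral (1/2) * sum of
    (x dy - y dx) around its rim (Green's theorem), discretized with
    n points on the boundary circle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = radius * np.cos(theta), radius * np.sin(theta)
    dxdt = -radius * np.sin(theta)   # tangent components along the rim
    dydt = radius * np.cos(theta)
    return 0.5 * np.sum(x * dydt - y * dxdt) * (2.0 * np.pi / n)

area = rim_area(1.0)   # should match pi * radius**2
```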

  5. Vector analogues of the Maggi-Rubinowicz theory of edge diffraction

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Shu, P.; Bay, J.

    1980-01-01

    The Maggi-Rubinowicz technique for scalar and electromagnetic fields is interpreted as a transformation of an integral over an open surface to a line integral around its rim. Maggi-Rubinowicz analogues are found for several vector physical optics representations. For diffraction from a circular aperture, a numerical comparison between these formulations shows the two methods are in agreement. To circumvent certain convergence difficulties in the Maggi-Rubinowicz integrals that occur as the observer approaches the shadow boundary, a variable mesh integration is used. For the examples considered, where the ratio of the aperture diameter to wavelength is about ten, the Maggi-Rubinowicz formulation yields an 8 to 10 fold decrease in computation time relative to the physical optics formulation.

  6. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, several specific methods were optimized for PDEs when the Approximate Matrix Factorization (AMF) splitting is used to define the approximate Jacobian matrix (W ≈ fy(yn)), and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one with the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods that imposes an additional order condition [7] to keep order three for parabolic PDE problems [6], at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
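
The underlying scheme referred to above is a three-stage, order-three explicit Runge-Kutta method. A standard instance (Kutta's classical method, shown as a generic illustration rather than the specific optimized choice in [1]) together with an empirical order check:

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classical three-stage, third-order explicit
    Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y + h * (-k1 + 2.0 * k2))
    return y + h / 6.0 * (k1 + 4.0 * k2 + k3)

def final_error(h):
    """Global error at t = 1 for y' = -y, y(0) = 1."""
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk3_step(lambda t, y: -y, t, y, h)
        t += h
    return abs(y - np.exp(-1.0))

# Halving h should cut the global error by roughly 2**3 for an order-3 method.
order = np.log2(final_error(0.1) / final_error(0.05))
```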

  7. Sleep variability in adolescence is associated with altered brain development.

    PubMed

    Telzer, Eva H; Goldenberg, Diane; Fuligni, Andrew J; Lieberman, Matthew D; Gálvan, Adriana

    2015-08-01

    Despite the known importance of sleep for brain development, and the sharp increase in poor sleep during adolescence, we know relatively little about how sleep impacts the developing brain. We present the first longitudinal study to examine how sleep during adolescence is associated with white matter integrity. We find that greater variability in sleep duration one year prior to a DTI scan is associated with lower white matter integrity above and beyond the effects of sleep duration, and variability in bedtime, whereas sleep variability a few months prior to the scan is not associated with white matter integrity. Thus, variability in sleep duration during adolescence may have long-term impairments on the developing brain. White matter integrity should be increasing during adolescence, and so sleep variability is directly at odds with normative developmental trends. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) the computational capabilities of Fup basis functions with compact support, able to resolve all spatial and temporal scales; 2) multiresolution representation of the heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties that increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Each variable is therefore analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy but also describes subsurface processes in close relation to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoids classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines with only a spatially adaptive procedure, temporal approximation has rarely been treated as multiscale. We therefore develop a novel adaptive implicit Fup integration scheme that resolves all time scales within each global time step, meaning that the algorithm uses smaller time steps only in lines where solution changes are intensive.
Application of Fup basis functions enables continuous time approximation, simple interpolation across different temporal lines, and local time-stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. While the common approach for wavelets and splines uses a finite-difference operator, we develop here a collocation operator that includes solution values and the differential operator. In this way, the new algorithm is adaptive in space and time, enabling accurate solution of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between the collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  9. Merging National Forest and National Forest Health Inventories to Obtain an Integrated Forest Resource Inventory – Experiences from Bavaria, Slovenia and Sweden

    PubMed Central

    Kovač, Marko; Bauer, Arthur; Ståhl, Göran

    2014-01-01

    Backgrounds, Material and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables’ variations are low (s%<80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. 
Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
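    The required-sample-size calculation mentioned in the abstract can be sketched as follows. The formula n = (t · CV% / e%)² with t ≈ 2 for 95% confidence is the standard textbook form for non-stratified independent samples and an assumption here; the function names are illustrative, not taken from the paper.

```python
import math

def relative_standard_error(values):
    """Relative standard error (%) of the sample mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    se = math.sqrt(var / n)
    return 100.0 * se / mean

def required_sample_size(cv_percent, allowable_error_percent, t=2.0):
    """General formula for non-stratified independent samples:
    n = (t * CV% / e%)^2, with t ~ 2 for ~95% confidence."""
    return math.ceil((t * cv_percent / allowable_error_percent) ** 2)
```

    For a variable with CV of 80% (the abstract's low-variation threshold) and a 5% allowable error, this yields a required sample size of 1024 plots.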

  10. HIV promoter integration site primarily modulates transcriptional burst size rather than frequency.

    PubMed

    Skupsky, Ron; Burnett, John C; Foley, Jonathan E; Schaffer, David V; Arkin, Adam P

    2010-09-30

    Mammalian gene expression patterns, and their variability across populations of cells, are regulated by factors specific to each gene in concert with its surrounding cellular and genomic environment. Lentiviruses such as HIV integrate their genomes into semi-random genomic locations in the cells they infect, and the resulting viral gene expression provides a natural system for dissecting the contributions of genomic environment to transcriptional regulation. Previously, we showed that expression heterogeneity and its modulation by specific host factors at HIV integration sites are key determinants of infected-cell fate and a possible source of latent infections. Here, we assess the integration context dependence of expression heterogeneity from diverse single integrations of an HIV-promoter/GFP-reporter cassette in Jurkat T-cells. Systematically fitting a stochastic model of gene expression to our data reveals an underlying transcriptional dynamic in which multiple transcripts are produced during short, infrequent bursts, which quantitatively accounts for the wide, highly skewed protein expression distributions observed in each of our clonal cell populations. Interestingly, we find that the size of transcriptional bursts is the primary systematic covariate over integration sites, varying from a few to tens of transcripts across integration sites and correlating well with mean expression. In contrast, burst frequencies are scattered about a typical value of several per cell-division time and show little correlation with the clonal means. This pattern of modulation generates consistently noisy distributions over the sampled integration positions, with large expression variability relative to the mean maintained even for the most productive integrations, and could contribute to specifying heterogeneous, integration-site-dependent viral production patterns in HIV-infected cells.
Genomic environment thus emerges as a significant control parameter for gene expression variation that may contribute to structuring mammalian genomes, as well as be exploited for survival by integrating viruses.
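    The burst-size/burst-frequency picture described above can be illustrated with a minimal Gillespie-style simulation of bursty transcription. All rate constants, the geometric burst-size distribution, and the function name below are illustrative assumptions, not the authors' fitted model; the qualitative point is that the long-run mean transcript count scales as burst frequency × burst size / decay rate.

```python
import random

def simulate_bursty_mrna(burst_freq, mean_burst_size, decay_rate, t_end, seed=1):
    """Gillespie-style simulation of bursty transcription (illustrative):
    bursts arrive as a Poisson process; each burst produces a
    geometrically distributed number of transcripts; each transcript
    decays independently at rate decay_rate. Returns the average
    transcript count over sampled events."""
    rng = random.Random(seed)
    t, m = 0.0, 0
    samples = []
    while t < t_end:
        total = burst_freq + decay_rate * m
        t += rng.expovariate(total)
        if rng.random() < burst_freq / total:
            # geometric burst size with the requested mean
            p = 1.0 / mean_burst_size
            k = 1
            while rng.random() > p:
                k += 1
            m += k
        else:
            m -= 1  # one transcript decays
        samples.append(m)
    return sum(samples) / len(samples)
```

    With one burst per unit time, ten transcripts per burst, and a decay rate of 0.1, the mean settles near the 100 transcripts predicted by f·b/δ.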

  11. Relationship between the Arctic oscillation and surface air temperature in multi-decadal time-scale

    NASA Astrophysics Data System (ADS)

    Tanaka, Hiroshi L.; Tamura, Mina

    2016-09-01

    In this study, a simple energy balance model (EBM) was integrated in time with a hypothetical long-term variability in ice-albedo feedback mimicking the observed multi-decadal temperature variability. A natural variability was superimposed on a linear warming trend due to the increasing radiative forcing of CO2. The result demonstrates that the superposition of the natural variability and the background linear trend can offset each other to produce a warming hiatus for some period. It is also stressed that the rapid warming during 1970-2000 can be explained by the superposition of the natural variability and the background linear trend, at least within the simple model. The key process of the fluctuating planetary albedo on a multi-decadal time scale is investigated using the JRA-55 reanalysis data. It is found that the planetary albedo increased during 1958-1970, decreased during 1970-2000, and increased during 2000-2012, as expected from the simple EBM experiments. The multi-decadal variability in the planetary albedo is compared with the time series of the AO mode and the Barents Sea mode of surface air temperature. It is shown that the recent negative AO pattern, with a warm Arctic and cold mid-latitudes, is in good agreement with the planetary albedo change, which indicates a negative anomaly at high latitudes and a positive anomaly at mid-latitudes. Moreover, the Barents Sea mode, with a warm Barents Sea and cold mid-latitudes, shows long-term variability similar to the planetary albedo change. Although further studies are needed, the natural variability of both the AO mode and the Barents Sea mode indicates a possible link to the planetary albedo, as suggested by the simple EBM, as a cause of the warming hiatus in recent years.
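    A zero-dimensional EBM of the kind described can be integrated in time with a few lines of code. The heat capacity, effective emissivity, and forward-Euler scheme below are illustrative assumptions rather than the authors' configuration; the albedo and forcing are passed in as functions of time so that a prescribed multi-decadal albedo variability can be superimposed on a linear CO2 trend.

```python
def integrate_ebm(years, albedo_fn, forcing_fn, dt=0.01):
    """Forward-Euler integration of a zero-dimensional EBM:
    C dT/dt = S(1 - a(t))/4 - eps*sigma*T^4 + F(t).
    Constants are illustrative assumptions."""
    S = 1361.0            # solar constant, W m^-2
    sigma = 5.67e-8       # Stefan-Boltzmann constant
    eps = 0.61            # effective emissivity (assumed)
    C = 2.0e8             # mixed-layer heat capacity, J m^-2 K^-1 (assumed)
    sec_per_year = 3.15e7
    T = 288.0             # initial global-mean temperature, K
    out = []
    t = 0.0
    while t < years:
        dTdt = (S * (1.0 - albedo_fn(t)) / 4.0
                - eps * sigma * T ** 4
                + forcing_fn(t)) / C
        T += dTdt * dt * sec_per_year
        t += dt
        out.append(T)
    return out
```

    With a constant albedo of 0.3 and no forcing, the model sits near its ~288 K equilibrium; lowering the albedo function over a few decades reproduces a warming segment like the 1970-2000 period in the abstract.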

  12. Integration of active pauses and pattern of muscular activity during computer work.

    PubMed

    St-Onge, Nancy; Samani, Afshin; Madeleine, Pascal

    2017-09-01

    Submaximal isometric muscle contractions have been reported to increase variability of muscle activation during computer work; however, other types of active contractions may be more beneficial. Our objective was to determine which type of active pause, compared with rest, is more effective in changing the pattern of muscle activity during a computer task. Asymptomatic regular computer users performed a standardised 20-min computer task four times, each integrating a different type of pause: sub-maximal isometric contraction, dynamic contraction, postural exercise and rest. Surface electromyographic (SEMG) activity was recorded bilaterally from five neck/shoulder muscles. Root-mean-square amplitude decreased with isometric pauses in the cervical paraspinals, upper trapezius and middle trapezius, whereas it increased with rest. Variability in the pattern of muscular activity was not affected by any type of pause. Overall, no detrimental effects on the level of SEMG during active pauses were found, suggesting that they could be implemented without a cost in activation level or variability. Practitioner Summary: We aimed to determine which type of active pause, compared with rest, is most effective in changing the pattern of muscle activity during a computer task. Asymptomatic computer users performed a standardised computer task integrating different types of pauses. Muscle activation decreased with isometric pauses in neck/shoulder muscles, suggesting their implementation during computer work.
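    The root-mean-square amplitude reported for the SEMG recordings is conventionally computed over consecutive windows of the signal; a minimal sketch (window length and function name are assumptions, not the study's processing pipeline):

```python
import math

def window_rms(signal, window):
    """Root-mean-square amplitude of a signal (e.g. SEMG)
    over consecutive non-overlapping windows."""
    out = []
    for i in range(0, len(signal) - window + 1, window):
        seg = signal[i:i + window]
        out.append(math.sqrt(sum(x * x for x in seg) / window))
    return out
```

    The per-window RMS values can then be averaged per condition, or their coefficient of variation used as a simple variability measure.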

  13. The interface between population and development models, plans and policies.

    PubMed

    Cohen, S I

    1989-01-01

    Scant attention has been given to integrating policy issues in population economics and development economics into more general frameworks. Reviewing the state of the art, this paper examines problems in incorporating population economics variables in development planning. Specifically, conceptual issues in defining population economics variables, modelling relationships between them, and operationalizing frameworks for decision making are explored with the hope of yielding tentative solutions. Several controversial policy issues affecting the development process are also examined in the closing section. Two of these issues are the social efficiency of interventions in fertility and of resource allocations to human development. The effective combination of agriculture and industry in promoting and equitably distributing income growth among earning population groups is a third issue of consideration. Finally, the paper looks at the optimal combination of transfer payments and provisions in kind in guaranteeing minimum consumption needs for poverty groups. Overall, the paper finds significant obstacles to refining the integration of population economics and development policy: integrating time and place dimensions in classifying people by activity, operationalizing population economics models to meet the practical situations of planning and programs, and assessing conflicts and complementarities between alternative policies all pose problems. Two scholarly comments follow the main body of the paper.

  14. Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.

    PubMed

    Hele, Timothy J H; Ananth, Nandini

    2016-12-22

    We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.

  15. Electronic Thermometer Readings

    NASA Technical Reports Server (NTRS)

    2001-01-01

    NASA Stennis' adaptive predictive algorithm for electronic thermometers uses sample readings taken during the initial rise in temperature and applies an algorithm that accurately and rapidly predicts the steady-state temperature. The final steady-state temperature of an object can be calculated from the second-order logarithm of the temperature signals acquired by the sensor and from predetermined variables derived from the sensor characteristics. These variables are calculated during tests of the sensor. Once the variables are determined, the algorithm requires relatively little data acquisition and processing time to provide an accurate approximation of the final temperature. This reduces the delay in the steady-state response time of a temperature sensor. The algorithm can be implemented in existing software or in hardware with an erasable programmable read-only memory (EPROM). The capability for easy integration eliminates the expense of developing a whole new system to obtain the benefits provided by NASA Stennis' technology.
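    For illustration, a first-order version of such a predictive scheme can recover the steady-state temperature in closed form from three equally spaced readings on the rising curve. The NASA algorithm is described as second-order, so this sketch is a simplified stand-in under an assumed exponential model T(t) = Tss − A·exp(−t/τ), not the patented method.

```python
def predict_steady_state(t1, t2, t3):
    """Predict the final temperature from three equally spaced readings,
    assuming a first-order exponential approach T(t) = Tss - A*exp(-t/tau).
    For such a curve, (t1*t3 - t2^2) / (t1 + t3 - 2*t2) equals Tss exactly."""
    denom = t1 + t3 - 2.0 * t2
    if abs(denom) < 1e-12:
        raise ValueError("readings do not show an exponential approach")
    return (t1 * t3 - t2 * t2) / denom
```

    For example, readings of 20, 28.5 and 32.75 degrees at equal intervals (a curve approaching 37 with each step halving the remaining gap) predict a final temperature of 37 degrees long before the sensor settles.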

  16. On the comparison of the strength of morphological integration across morphometric datasets.

    PubMed

    Adams, Dean C; Collyer, Michael L

    2016-11-01

    Evolutionary morphologists frequently wish to understand the extent to which organisms are integrated, and whether the strength of morphological integration among subsets of phenotypic variables differs among taxa or other groups. However, comparisons of the strength of integration across datasets are difficult, in part because the summary measures that characterize these patterns (the RV coefficient and r_PLS) are dependent both on sample size and on the number of variables. As a solution to this issue, we propose a standardized test statistic (a z-score) for measuring the degree of morphological integration between sets of variables. The approach is based on a partial least squares analysis of trait covariation, and its permutation-based sampling distribution. Under the null hypothesis of a random association of variables, the method displays a constant expected value and confidence intervals for datasets of differing sample sizes and numbers of variables, thereby providing a consistent measure of integration suitable for comparisons across datasets. A two-sample test is also proposed to statistically determine whether levels of integration differ between datasets, and an empirical example examining cranial shape integration in Mediterranean wall lizards illustrates its use. Some extensions of the procedure are also discussed. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
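    The permutation-based z-score idea can be sketched for one-dimensional trait sets, using |Pearson r| as a stand-in for the multivariate PLS correlation r_PLS: the observed statistic is standardized against the mean and spread of its permutation distribution. The function and its defaults are illustrative assumptions, not the authors' implementation.

```python
import random

def integration_z_score(xs, ys, n_perm=999, seed=0):
    """Permutation-based effect size (z-score) for the association
    between two trait sets; |Pearson r| of 1-D traits stands in
    for the PLS correlation r_PLS of the multivariate case."""
    def stat(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return abs(cov / (va * vb) ** 0.5)

    rng = random.Random(seed)
    obs = stat(xs, ys)
    ys_perm = list(ys)
    perms = []
    for _ in range(n_perm):
        rng.shuffle(ys_perm)          # break the pairing under the null
        perms.append(stat(xs, ys_perm))
    mu = sum(perms) / n_perm
    sd = (sum((p - mu) ** 2 for p in perms) / (n_perm - 1)) ** 0.5
    return (obs - mu) / sd
```

    Because the statistic is standardized against its own null distribution, the resulting z-scores are comparable across datasets of different sizes, which is the point of the paper's proposal.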

  17. Temperature variability is integrated by a spatially embedded decision-making center to break dormancy in Arabidopsis seeds.

    PubMed

    Topham, Alexander T; Taylor, Rachel E; Yan, Dawei; Nambara, Eiji; Johnston, Iain G; Bassel, George W

    2017-06-20

    Plants perceive and integrate information from the environment to time critical transitions in their life cycle. Some mechanisms underlying this quantitative signal processing have been described, whereas others await discovery. Seeds have evolved a mechanism to integrate environmental information by regulating the abundance of the antagonistically acting hormones abscisic acid (ABA) and gibberellin (GA). Here, we show that hormone metabolic interactions and their feedbacks are sufficient to create a bistable developmental fate switch in Arabidopsis seeds. A digital single-cell atlas mapping the distribution of hormone metabolic and response components revealed their enrichment within the embryonic radicle, identifying the presence of a decision-making center within dormant seeds. The responses to both GA and ABA were found to occur within distinct cell types, suggesting cross-talk occurs at the level of hormone transport between these signaling centers. We describe theoretically, and demonstrate experimentally, that this spatial separation within the decision-making center is required to process variable temperature inputs from the environment to promote the breaking of dormancy. In contrast to other noise-filtering systems, including human neurons, the functional role of this spatial embedding is to leverage variability in temperature to transduce a fate-switching signal within this biological system. Fluctuating inputs therefore act as an instructive signal for seeds, enhancing the accuracy with which plants are established in ecosystems, and distributed computation within the radicle underlies this signal integration mechanism.

  18. Temperature variability is integrated by a spatially embedded decision-making center to break dormancy in Arabidopsis seeds

    PubMed Central

    Topham, Alexander T.; Taylor, Rachel E.; Yan, Dawei; Nambara, Eiji; Johnston, Iain G.

    2017-01-01

    Plants perceive and integrate information from the environment to time critical transitions in their life cycle. Some mechanisms underlying this quantitative signal processing have been described, whereas others await discovery. Seeds have evolved a mechanism to integrate environmental information by regulating the abundance of the antagonistically acting hormones abscisic acid (ABA) and gibberellin (GA). Here, we show that hormone metabolic interactions and their feedbacks are sufficient to create a bistable developmental fate switch in Arabidopsis seeds. A digital single-cell atlas mapping the distribution of hormone metabolic and response components revealed their enrichment within the embryonic radicle, identifying the presence of a decision-making center within dormant seeds. The responses to both GA and ABA were found to occur within distinct cell types, suggesting cross-talk occurs at the level of hormone transport between these signaling centers. We describe theoretically, and demonstrate experimentally, that this spatial separation within the decision-making center is required to process variable temperature inputs from the environment to promote the breaking of dormancy. In contrast to other noise-filtering systems, including human neurons, the functional role of this spatial embedding is to leverage variability in temperature to transduce a fate-switching signal within this biological system. Fluctuating inputs therefore act as an instructive signal for seeds, enhancing the accuracy with which plants are established in ecosystems, and distributed computation within the radicle underlies this signal integration mechanism. PMID:28584126

  19. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop numerical integrators that exploit their computational power to the full extent while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to acceleration factors on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to support Milankar's so-called "pipelined predictor corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.

  20. An optimal policy for a single-vendor and a single-buyer integrated system with setup cost reduction and process-quality improvement

    NASA Astrophysics Data System (ADS)

    Shu, Hui; Zhou, Xideng

    2014-05-01

    The single-vendor single-buyer integrated production inventory system has been an object of study for a long time, but little is known about the effect of investing in setup cost reduction and process-quality improvement for an integrated inventory system in which the products are sold with a free minimal-repair warranty. The purpose of this article is to minimise the integrated cost by simultaneously optimising the number of shipments, the shipment quantity, the setup cost, and the process quality. An efficient algorithmic procedure is proposed for determining the optimal decision variables. A numerical example is presented to illustrate the results of the proposed models graphically. Sensitivity analysis of the model with respect to key parameters of the system is carried out. The paper shows that the proposed integrated model can result in significant savings in the integrated cost.

  1. Ocean time-series near Bermuda: Hydrostation S and the US JGOFS Bermuda Atlantic time-series study

    NASA Technical Reports Server (NTRS)

    Michaels, Anthony F.; Knap, Anthony H.

    1992-01-01

    Bermuda is the site of two ocean time-series programs. At Hydrostation S, the ongoing biweekly profiles of temperature, salinity and oxygen now span 37 years. This is one of the longest open-ocean time-series data sets and provides a view of decadal scale variability in ocean processes. In 1988, the U.S. JGOFS Bermuda Atlantic Time-series Study began a wide range of measurements at a frequency of 14-18 cruises each year to understand temporal variability in ocean biogeochemistry. On each cruise, the data range from chemical analyses of discrete water samples to data from electronic packages of hydrographic and optics sensors. In addition, a range of biological and geochemical rate measurements are conducted that integrate over time-periods of minutes to days. This sampling strategy yields a reasonable resolution of the major seasonal patterns and of decadal scale variability. The Sargasso Sea also has a variety of episodic production events on scales of days to weeks and these are only poorly resolved. In addition, there is a substantial amount of mesoscale variability in this region and some of the perceived temporal patterns are caused by the intersection of the biweekly sampling with the natural spatial variability. In the Bermuda time-series programs, we have added a series of additional cruises to begin to assess these other sources of variation and their impacts on the interpretation of the main time-series record. However, the adequate resolution of higher frequency temporal patterns will probably require the introduction of new sampling strategies and some emerging technologies such as biogeochemical moorings and autonomous underwater vehicles.

  2. Distinguishing the Forest from the Trees: Synthesizing IHRMP Research

    Treesearch

    Gregory B. Greenwood

    1991-01-01

    A conceptual model of hardwood rangelands as a multi-output resource system is developed and used to achieve a synthesis of Integrated Hardwood Range Management Program (IHRMP) research. The model requires the definition of state variables which characterize the system at any time, processes that move the system to different states, outputs...

  3. Estimating the Growth of Internal Evidence Guiding Perceptual Decisions

    ERIC Educational Resources Information Center

    Ludwig, Casimir J. H.; Davies, J. Rhys

    2011-01-01

    Perceptual decision-making is thought to involve a gradual accrual of noisy evidence. Temporal integration of the evidence reduces the relative contribution of dynamic internal noise to the decision variable, thereby boosting its signal-to-noise ratio. We aimed to estimate the internal evidence guiding perceptual decisions over time, using a novel…
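    The claim that temporal integration boosts the signal-to-noise ratio of the decision variable can be checked with a small simulation (all parameter values and names below are assumptions for illustration): summing n noisy samples grows the mean of the decision variable as n while its noise grows only as sqrt(n), so SNR improves roughly as sqrt(n).

```python
import random

def decision_snr(signal, noise_sd, n_samples, n_trials=4000, seed=0):
    """Accumulate n_samples of noisy evidence per trial and report the
    signal-to-noise ratio (mean / s.d.) of the summed decision variable."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n_trials):
        s = sum(signal + rng.gauss(0.0, noise_sd) for _ in range(n_samples))
        sums.append(s)
    mu = sum(sums) / n_trials
    var = sum((x - mu) ** 2 for x in sums) / (n_trials - 1)
    return mu / var ** 0.5
```

    Quadrupling the integration time should roughly double the SNR, which is the internal-noise-averaging effect the abstract describes.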

  4. Integrating remote sensing, GIS and dynamic models for landscape-level simulation of forest insect disturbance

    USDA-ARS?s Scientific Manuscript database

    Cellular automata (CA) is a powerful tool in modeling the evolution of macroscopic scale phenomena as it couples time, space, and variable together while remaining in a simplified form. However, such application has remained challenging in landscape-level chronic forest insect epidemics due to the h...

  5. Impact of environmental chemicals, sociodemographic variables, depression, and clinical indicators of health and nutrition on self-reported health status

    EPA Science Inventory

    Public health researchers ideally integrate social, environmental, and clinical measures to identify predictors of poor health. Chemicals measured in human tissues are often evaluated in relation to intangible or rare health outcomes, or are studied one chemical at a time. Using ...

  6. Continuous-time ΣΔ ADC with implicit variable gain amplifier for CMOS image sensor.

    PubMed

    Tang, Fang; Bermak, Amine; Abbes, Amira; Benammar, Mohieddine Amor

    2014-01-01

    This paper presents a column-parallel continuous-time sigma-delta (CTSD) ADC for mega-pixel resolution CMOS image sensors (CIS). The sigma-delta modulator is implemented with a 2nd-order resistor/capacitor-based loop filter. The first integrator uses a conventional operational transconductance amplifier (OTA), chosen for its high power-supply noise rejection. The second integrator is realized with a single-ended inverter-based amplifier instead of a standard OTA. As a result, the power consumption is reduced without sacrificing noise performance. Moreover, the variable gain amplifier of the traditional column-parallel read-out circuit is merged into the front-end of the CTSD modulator. By programming the input resistance, the amplitude range of the input current can be tuned over 8 scales, which is equivalent to a traditional 2-bit preamplification function without consuming extra power or chip area. The test chip prototype is fabricated in a 0.18 μm CMOS process, and measurements show an ADC power consumption below 63.5 μW under a 1.4 V power supply and a 50 MHz clock frequency.

  7. On the energy integral formulation of gravitational potential differences from satellite-to-satellite tracking

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Shang, K.; Jekeli, C.; Shum, C. K.

    2015-04-01

    Two approaches have been formulated to compute gravitational potential differences from low-low satellite-to-satellite tracking data based on the energy integral: one in the geocentric inertial reference system, and the other in the terrestrial reference system. The focus of this work is on the approach in the geocentric inertial reference system, where a potential rotation term appears in addition to the potential term. In former formulations, the contribution of the time-variable components of the gravitational potential to the potential term was included, but their contribution to the potential rotation term was neglected. In this work, an improvement to the former formulations is made by reformulating the potential rotation term to include the contribution of the time-variable components of the gravitational potential. A simulation shows that our more accurate formulation of the potential rotation term is necessary to achieve the accuracy required for recovering the temporal variation of the Earth's gravity field, such as when applying this approach to Gravity Recovery And Climate Experiment (GRACE) observation data.

  8. Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, G.; Chopra, A.K.; Severson, C.D.

    1997-12-01

    Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.
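    The indicator transform at the core of this approach is simple to state in code. The sketch below is a 1-D toy version of the 3-D analysis described in the abstract; the threshold convention and function names are assumptions for illustration.

```python
def indicator_transform(values, threshold):
    """Indicator coding used in indicator geostatistics:
    1 where the attribute is at or below the threshold
    (e.g. gas or shale present), else 0."""
    return [1 if v <= threshold else 0 for v in values]

def indicator_variogram(ind, lag):
    """Experimental indicator (semi)variogram for a 1-D series:
    gamma(h) = mean of 0.5 * (i(x) - i(x+h))^2 over all pairs at lag h."""
    pairs = [(ind[i], ind[i + lag]) for i in range(len(ind) - lag)]
    return sum(0.5 * (a - b) ** 2 for a, b in pairs) / len(pairs)
```

    Fitting a model to such experimental variograms at several lags is what supplies the spatial-correlation input for the conditional simulations mentioned above.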

  9. Climate variability and human impact on the environment in South America during the last 2000 years: synthesis and perspectives

    NASA Astrophysics Data System (ADS)

    Flantua, S. G. A.; Hooghiemstra, H.; Vuille, M.; Behling, H.; Carson, J. F.; Gosling, W. D.; Hoyos, I.; Ledru, M. P.; Montoya, E.; Mayle, F.; Maldonado, A.; Rull, V.; Tonello, M. S.; Whitney, B. S.; González-Arango, C.

    2015-07-01

    An improved understanding of present-day climate variability and change relies on high-quality data sets from the past two millennia. Global efforts to reconstruct regional climate modes are in the process of validating and integrating paleo-proxies. For South America, however, the full potential of vegetation records for evaluating and improving climate models has hitherto not been sufficiently acknowledged due to its unknown spatial and temporal coverage. This paper therefore serves as a guide to high-quality pollen records that capture environmental variability during the last two millennia. We identify the pollen records with the required temporal characteristics for PAGES 2k climate modelling and we discuss their sensitivity to the spatial signature of climate modes throughout the continent. Diverse patterns of vegetation response to climate change are observed, with more similar patterns of change in the lowlands and varying intensity and direction of responses in the highlands. Pollen records display local-scale responses to climate modes, so it is necessary to understand how vegetation-climate interactions might diverge under variable settings. Additionally, pollen is an excellent indicator of human impact through time. Evidence for human land use in pollen records is useful for archaeological hypothesis testing and important in distinguishing natural from anthropogenically driven vegetation change. We stress the need for the palynological community to be more familiar with climate variability patterns to correctly attribute the potential causes of observed vegetation dynamics. The LOTRED-SA 2k initiative provides the ideal framework for the integration of the various paleoclimatic sub-disciplines and paleo-science, thereby jumpstarting and fostering multi-disciplinary research into environmental change on centennial and millennial time scales.

  10. Heart-Rate Variability-More than Heart Beats?

    PubMed

    Ernst, Gernot

    2017-01-01

    Heart-rate variability (HRV) is frequently introduced as mirroring imbalances within the autonomic nervous system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator of probable disturbances in the autonomic system. Some perturbations trigger not reciprocal, but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered a surrogate parameter of the complex interaction between brain and cardiovascular system. Systems biology is an inter-disciplinary field of study focusing on complex interactions within biological systems like the cardiovascular system, with the help of computational models and time series analysis, among others. Time series are considered surrogates of the particular system, reflecting robustness or fragility. Increased variability is usually seen as associated with a good health condition, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed. According to them, HRV reflects the state of the brain as much as the state of the heart. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states and, subsequently, behaviors. Reduced variability will, according to this theory, not only be a surrogate but represent a fundamental homeostasis mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV among other indices by the HF index, can mirror the functional balance of the neural networks implicated in emotion-cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV.
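    Two of the standard time-domain HRV indices alluded to above (alongside frequency-domain measures such as the HF index) can be computed directly from a series of RR intervals; a minimal sketch, with function names assumed:

```python
def sdnn(rr_ms):
    """Standard deviation of RR intervals (SDNN),
    a global time-domain HRV index."""
    n = len(rr_ms)
    m = sum(rr_ms) / n
    return (sum((x - m) ** 2 for x in rr_ms) / (n - 1)) ** 0.5

def rmssd(rr_ms):
    """Root mean square of successive RR differences (RMSSD),
    commonly read as a vagally mediated HRV index."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```

    Lower values of either index on a sufficiently long recording are the kind of "reduced variability" the abstract associates with pathological states.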

  11. Altitude exposure in sports: the Athlete Biological Passport standpoint.

    PubMed

    Sanchis-Gomar, Fabian; Pareja-Galeano, Helios; Brioche, Thomas; Martinez-Bello, Vladimir; Lippi, Giuseppe

    2014-03-01

    The Athlete Biological Passport (ABP) is principally founded on monitoring an athlete's biological variables over time, to identify abnormal biases on a longitudinal basis. Several factors are known to influence the results of these markers. However, the manner in which the altitude factor is taken into account still needs to be standardized. Causal relationships between haematological variables should be correctly integrated into ABP software. In particular, modifications of haematological parameters during and after exposure to different altitudes/hypoxic protocols need to be properly included within detection models. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; and (3) with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
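    The CRPS used for the probabilistic validation has a convenient kernel (energy) form for a finite ensemble, CRPS = E|X − y| − 0.5·E|X − X'|; a minimal sketch (the paper's exact estimator and any bias corrections may differ):

```python
def ensemble_crps(members, obs):
    """Continuous ranked probability score of an ensemble forecast
    against a scalar observation, via the kernel form
    CRPS = E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * m * m)
    return term1 - term2
```

    For a deterministic forecast (one member) the CRPS reduces to the absolute error, which is why it is often described as a probabilistic generalization of the mean absolute error.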

  13. Ubiquitous time variability of integrated stellar populations.

    PubMed

    Conroy, Charlie; van Dokkum, Pieter G; Choi, Jieun

    2015-11-26

    Long-period variable stars arise in the final stages of the asymptotic giant branch phase of stellar evolution. They have periods of up to about 1,000 days and amplitudes that can exceed a factor of three in the I-band flux. These stars pulsate predominantly in their fundamental mode, which is a function of mass and radius, and so the pulsation periods are sensitive to the age of the underlying stellar population. The overall number of long-period variables in a population is directly related to their lifetimes, which is difficult to predict from first principles because of uncertainties associated with stellar mass-loss and convective mixing. The time variability of these stars has not previously been taken into account when modelling the spectral energy distributions of galaxies. Here we construct time-dependent stellar population models that include the effects of long-period variable stars, and report the ubiquitous detection of this expected 'pixel shimmer' in the massive metal-rich galaxy M87. The pixel light curves display a variety of behaviours. The observed variation of 0.1 to 1 per cent is very well matched to the predictions of our models. The data provide a strong constraint on the properties of variable stars in an old and metal-rich stellar population, and we infer that the lifetime of long-period variables in M87 is shorter by approximately 30 per cent compared to predictions from the latest stellar evolution models.

  14. Investigation of clinical pharmacokinetic variability of an opioid antagonist through physiologically based absorption modeling.

    PubMed

    Ding, Xuan; He, Minxia; Kulkarni, Rajesh; Patel, Nita; Zhang, Xiaoyu

    2013-08-01

    Identifying the source of inter- and/or intrasubject variability in pharmacokinetics (PK) provides fundamental information for understanding the pharmacokinetic-pharmacodynamic relationship of a drug and projecting its efficacy and safety in clinical populations. This identification process can be challenging given the large number of potential causes of PK variability. Here we present an integrated approach of physiologically based absorption modeling to investigate the root cause of the unexpectedly high PK variability of a Phase I clinical trial drug. LY2196044 exhibited high intersubject variability in the absorption phase of plasma concentration-time profiles in humans. This could not be explained by in vitro measurements of drug properties or by the excellent bioavailability with low variability observed in preclinical species. GastroPlus™ modeling suggested that the compound's optimal solubility and permeability characteristics would enable rapid and complete absorption in preclinical species and in humans. However, simulations of human plasma concentration-time profiles indicated that despite sufficient solubility and rapid dissolution of LY2196044 in humans, permeability and/or transit in the gastrointestinal (GI) tract may have been negatively affected. It was concluded that the clinical PK variability was potentially due to the drug's antagonism of opioid receptors, which affected its transit and absorption in the GI tract. Copyright © 2013 Wiley Periodicals, Inc.

  15. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.

  16. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.

  17. Effect of shortening the prefreezing equilibration time with glycerol on the quality of chamois (Rupicapra pyrenaica), ibex (Capra pyrenaica), mouflon (Ovis musimon) and aoudad (Ammotragus lervia) ejaculates.

    PubMed

    Pradiee, J; O'Brien, E; Esteso, M C; Castaño, C; Toledano-Díaz, A; Lopez-Sebastián, A; Marcos-Beltrán, J L; Vega, R S; Guillamón, F G; Martínez-Nevado, E; Guerra, R; Santiago-Moreno, J

    2016-08-01

    The present study reports the effect of shortening the prefreezing equilibration time with glycerol on the quality of frozen-thawed ejaculated sperm from four Mediterranean mountain ungulates: Cantabrian chamois (Rupicapra pyrenaica), Iberian ibex (Capra pyrenaica), mouflon (Ovis musimon) and aoudad (Ammotragus lervia). Ejaculated sperm from these species were divided into two aliquots. One was diluted with either a Tris-citric acid-glucose based medium (TCG-glycerol; for chamois and ibex sperm) or a Tris-TES-glucose-based medium (TTG-glycerol; for mouflon and aoudad sperm), and maintained at 5°C for 3h prior to freezing. The other aliquot was diluted with either TCG (chamois and ibex sperm) or TTG (mouflon and aoudad sperm) and maintained at 5°C for 1h before adding glycerol (final concentration 5%). After a 15min equilibration period in the presence of glycerol, the samples were frozen. For the ibex, there was enhanced (P<0.05) sperm viability and acrosome integrity after the 3h as compared with the 15min equilibration time. For the chamois, subjective sperm motility and cell membrane functional integrity were less (P<0.05) following 15min of equilibration. In the mouflon, progressive sperm motility and acrosome integrity was less (P<0.05) when the equilibration time was reduced to 15min. For the aoudad, the majority of sperm variables measured were more desirable after the 3h equilibration time. The freezing-thawing processes reduced the sperm head size in all the species studied; however, the equilibration time further affected the frozen-thawed sperm head variables in a species-dependent fashion. While the equilibration time for chamois sperm might be shortened, this appears not to be the case for all ungulates. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Flight-determined benefits of integrated flight-propulsion control systems

    NASA Technical Reports Server (NTRS)

    Stewart, James F.; Burcham, Frank W., Jr.; Gatlin, Donald H.

    1992-01-01

    Over the last two decades, NASA has conducted several experiments in integrated flight-propulsion control. Benefits have included improved maneuverability; increased thrust, range, and survivability; reduced fuel consumption; and reduced maintenance. This paper presents the basic concepts for control integration, examples of implementation, and benefits. The F-111E experiment integrated the engine and inlet control systems. The YF-12C incorporated an integral control system involving the inlet, autopilot, autothrottle, airdata, navigation, and stability augmentation systems. The F-15 research involved integration of the engine, flight, and inlet control systems. Further extension of the integration included real-time, onboard optimization of engine, inlet, and flight control variables; a self-repairing flight control system; and an engines-only control concept for emergency control. The F-18A aircraft incorporated thrust vectoring integrated with the flight control system to provide enhanced maneuvering at high angles of attack. The flight research programs and the resulting benefits of each program are described.

  19. Multilevel integrated flood management approach

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Rusjan, Simon

    2013-04-01

    The optimal solution for complex flood management is an integrated approach. The word "integration" is used very often when we try to put things together, but we should distinguish a fully integrated approach from integration by parts, in which only two variables are combined and analysed; in doing so, we lose the complexity of the phenomenon. On the other hand, if we try to combine all variables, the effort and time required grow so large that the job is never finished properly. The solution is a multiple integration that captures the essential factors, which differ from case to case (Brilly, 2000). Physical planning is one of the most important activities into which flood management should be integrated. Physical planning is crucial for vulnerability and its future development; on the other hand, structural measures must be incorporated in space and will very often dominate it. The best solution is for spatial development to proceed at the same time as the development of structural measures, and there are good examples of such an approach (Vienna, Belgrade, Zagreb, and Ljubljana). Problems start when we try to incorporate flood management into an already urbanised area, or wish to decrease the risk to some lower level. Looking at practice, we learn that Middle Ages practices were often better than today's. There is also "disaster by design", where the hazard increases as a consequence of upstream development or in-stream construction or remediation; in such situations we face risk in areas that were well protected in the past. Good preparation is essential for integration; otherwise we simply lose time that is essential for decision making and development. We should develop a clear picture of the physical characteristics of the phenomena and of possible solutions. We should develop not only flood maps; we should also know how fast the phenomena can develop, in an hour, a day or more. Do we need to analyse groundwater-surface water relations, for example where an area we would like to protect was later flooded by groundwater? Do we need to take care of sediment transport, a phenomenon closely related to floods: could the river bed rise or fall by several metres, or the river completely rearrange its morphology, and how would the inundated area then look? The flood hazard should be presented properly, with maps, uncertainty, and trends related to natural and anthropogenic impacts. We should look back in time at how our rivers looked in past centuries, at the water management plans for the future, and at which activities take place on the river. There is good practice in flood protection, hydropower development and physical planning (Vienna, Sava River).

  20. Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes

    NASA Astrophysics Data System (ADS)

    Panchal, Hitesh; Awasthi, Anuradha

    2017-06-01

    In the present research work, theoretical modeling of a single-slope, single-basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables like water temperature, inner glass cover temperature and distillate output have been computed from the theoretical model. The experimental setup was built from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with 0.04 m water depth over a 6-month period. The series of experiments showed a considerable increase in the average distillate output of the solar still when integrated with evacuated tubes, not only during the daytime but also at night. In all experimental cases, the correlation coefficient (r) and root mean square percentage deviation (e) between the theoretical model and the experimental study showed good agreement, with 0.97 < r < 0.98 and 10.22% < e < 38.4%, respectively.
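The two agreement metrics quoted above can be sketched as follows. The Pearson correlation coefficient is standard; the RMS percentage deviation is shown in one common form, which may differ in detail from the authors' definition, and the data values are hypothetical:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rms_percent_deviation(theoretical, experimental):
    """RMS percentage deviation, one common definition:
    e = sqrt( (1/N) * sum( ((X_th - X_exp) / X_th * 100)^2 ) )."""
    n = len(theoretical)
    return sqrt(sum(((t - e) / t * 100.0) ** 2
                    for t, e in zip(theoretical, experimental)) / n)

theory   = [0.42, 0.55, 0.61, 0.48]   # hypothetical hourly distillate (L)
measured = [0.40, 0.52, 0.64, 0.45]
print(round(pearson_r(theory, measured), 3))
print(round(rms_percent_deviation(theory, measured), 2))
```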

  1. Photonic content-addressable memory system that uses a parallel-readout optical disk

    NASA Astrophysics Data System (ADS)

    Krishnamoorthy, Ashok V.; Marchand, Philippe J.; Yayla, Gökçe; Esener, Sadik C.

    1995-11-01

    We describe a high-performance associative-memory system that can be implemented by means of an optical disk modified for parallel readout and a custom-designed silicon integrated circuit with parallel optical input. The system can achieve associative recall on 128 × 128 bit images and also on variable-size subimages. The system's behavior and performance are evaluated on the basis of experimental results on a motionless-head parallel-readout optical-disk system, logic simulations of the very-large-scale integrated chip, and a software emulation of the overall system.

  2. Climate Dynamics and Experimental Prediction (CDEP) and Regional Integrated Science Assessments (RISA) Programs at NOAA Office of Global Programs

    NASA Astrophysics Data System (ADS)

    Bamzai, A.

    2003-04-01

    This talk will highlight science and application activities of the CDEP and RISA programs at NOAA OGP. CDEP, through a set of Applied Research Centers (ARCs), supports NOAA's program of quantitative assessments and predictions of global climate variability and its regional implications on time scales of seasons to centuries. The RISA program consolidates results from ongoing disciplinary process research under an integrative framework. Examples of joint CDEP-RISA activities will be presented. Future directions and programmatic challenges will also be discussed.

  3. [Sedentary leisure time and food consumption among Brazilian adolescents: the Brazilian National School-Based Adolescent Health Survey (PeNSE), 2009].

    PubMed

    Camelo, Lidyane do Valle; Rodrigues, Jôsi Fernandes de Castro; Giatti, Luana; Barreto, Sandhi Maria

    2012-11-01

    The objective of this paper was to investigate whether sedentary leisure time was associated with increased regular consumption of unhealthy foods, independently of socio-demographic indicators and family context. The analysis included 59,809 students from the Brazilian National School-Based Adolescent Health Survey (PeNSE) in 2009. The response variable was sedentary leisure time, defined as watching more than two hours of TV daily. The target explanatory variables were regular consumption of soft drinks, sweets, cookies, and processed meat. Odds ratios (OR) and 95% confidence limits (95%CI) were obtained by multiple logistic regression. Prevalence of sedentary leisure time was 65%. Regular consumption of unhealthy foods was statistically higher among students reporting sedentary leisure time, before and after adjusting for sex, age, skin color, school administration (public versus private), household assets index, and household composition. The results indicate the need for integrated interventions to promote healthy leisure-time activities and healthy eating habits among young people.
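The odds ratios and 95% confidence limits reported above can be illustrated with a minimal sketch for a single 2×2 table using the Woolf (log-based) interval; the counts below are hypothetical and do not come from the PeNSE survey, and the actual study used multiple logistic regression rather than crude tables:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-based) 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: sedentary leisure time vs regular soft-drink intake.
or_, lo, hi = odds_ratio_ci(820, 1180, 450, 1550)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```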

  4. Finite element computation of a viscous compressible free shear flow governed by the time dependent Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.; Blanchard, D. K.

    1975-01-01

    A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
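The leapfrog time integration mentioned above advances the solution using the two previous time levels. A minimal sketch on a harmonic oscillator (x'' = -ω²x), not the Navier-Stokes system itself, shows the three-level update; the parameters are illustrative:

```python
import math

# Leapfrog (three-level) scheme: x_{n+1} = 2*x_n - x_{n-1} - (omega*dt)^2 * x_n.
# The new level is computed from the two previous levels.
omega, dt = 1.0, 0.01
steps = int(2 * math.pi / dt)        # march over one full oscillation period
x_prev = 1.0                         # x at t = 0
x_curr = math.cos(omega * dt)        # exact value at t = dt starts the scheme
for _ in range(steps - 1):
    x_next = 2 * x_curr - x_prev - (omega * dt) ** 2 * x_curr
    x_prev, x_curr = x_curr, x_next

# After one period the numerical solution should be close to the exact one.
print(abs(x_curr - math.cos(omega * steps * dt)))
```

The scheme is second-order accurate and conditionally stable (here ω·dt « 2), which is why the error after a full period remains small.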

  5. Environmental Variability and Plankton Community Dynamics in the English Channel

    NASA Astrophysics Data System (ADS)

    Barton, A.; Gonzalez, F.; Atkinson, A.; Stock, C. A.

    2016-02-01

    Temporal environmental variation plays a key role in shaping plankton community structure and dynamics. In some cases, these ecological changes may be abrupt and long-lived, and constitute a significant change in overall ecosystem structure and function. The "Double Integration Hypothesis", posed recently by Di Lorenzo and Ohman to help explain these complex biophysical linkages, holds that atmospheric variability is filtered first through the ocean surface before secondarily imprinting on plankton communities. In this perspective, physical properties of the surface ocean, such as sea surface temperature (SST), integrate atmospheric white noise, resulting in a time series that is smoother and has more low than high frequency variability (red noise). Secondarily, long-lived zooplankton integrate over oceanographic conditions and further redden the power spectra. We test the generality of this hypothesis with extensive environmental and ecological data from the L4 station in the Western English Channel (1988-present), calculating power spectral slopes from anomaly time series for atmospheric forcing (wind stress and net heat fluxes), surface ocean conditions (SST and macronutrients), and the biomasses of well over 100 phytoplankton and zooplankton taxa. As expected, we find that SST and macronutrient concentrations are redder in character than white noise atmospheric forcing. However, we find that power spectral slopes for phytoplankton and zooplankton are generally not significantly less than found for oceanographic conditions. Moreover, we find a considerable range in power spectral slopes within the phytoplankton and zooplankton, reflecting the diversity of body sizes, traits, life histories, and predator-prey interactions. 
We interpret these findings using an idealized trait-based model with a single phytoplankton prey and zooplankton predator, configured to capture essential oceanographic properties at the L4 station, and discuss how changes in power spectral slope seen in the L4 time series are linked to predator-prey body size and generation length differences.
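The spectral "reddening" at the heart of the Double Integration Hypothesis can be sketched numerically: integrating a white-noise series steepens its power-spectral slope, which is estimated here by a least-squares fit in log-log space. The sketch uses a naive O(n²) DFT for brevity and synthetic data, not the L4 time series:

```python
import cmath, math, random

def periodogram(x):
    """Naive DFT periodogram; power at positive frequencies only."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    power = []
    for k in range(1, n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(s) ** 2)
    return power

def spectral_slope(x):
    """Least-squares slope of log power vs log frequency index."""
    p = periodogram(x)
    lx = [math.log(k + 1) for k in range(len(p))]
    ly = [math.log(v) for v in p]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(1)
white = [random.gauss(0, 1) for _ in range(256)]
red, acc = [], 0.0
for w in white:                      # running integral of the white series
    acc += w
    red.append(acc)
print(spectral_slope(white) > spectral_slope(red))  # integration reddens
```

A second integration (e.g. long-lived zooplankton integrating over ocean conditions) would, by the same mechanism, redden the spectrum further.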

  6. The equations of motion of a secularly precessing elliptical orbit

    NASA Astrophysics Data System (ADS)

    Casotto, S.; Bardella, M.

    2013-01-01

    The equations of motion of a secularly precessing ellipse are developed using time as the independent variable. The equations are useful when integrating numerically the perturbations about a reference trajectory which is subject to secular perturbations in the node, the argument of pericentre and the mean motion. Usually this is done in connection with Encke's method to ensure minimal rectification frequency. Similar equations are already available in the literature, but they are either given based on the true anomaly as the independent variable or in mixed mode with respect to time through the use of a supporting equation to track the anomaly. The equations developed here form a complete and independent set of six equations in time. Reformulations both of Escobal's and Kyner and Bennett's equations are also provided which lead to a more concise form.

  7. Eating and rumination activity in 10 cows over 10 days.

    PubMed

    Braun, U; Zürcher, S; Hässig, M

    2015-08-01

    Eating and rumination activities were evaluated in 10 Brown Swiss cows over 10 days, and the coefficients of variation (CV) were calculated for the investigated variables. A pressure sensor integrated into the noseband of a halter recorded jaw movements during chewing, which allowed the recording of eating and rumination times and the number of regurgitated boluses. The mean CVs ranged from 5.9 to 12.7% and were smaller for rumination (chewing cycles per bolus, 5.9%; daily number of cuds, 8.4%; rumination time, 9.1%) than for eating (eating time, 12.0%; chewing cycles related to eating, 12.7%). We concluded that, of the eating and rumination variables examined, the number of chewing cycles per regurgitated bolus is the most robust, with little variation in individual cows. Copyright © 2015 Elsevier Ltd. All rights reserved.
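The coefficient of variation used throughout the study is simply the sample standard deviation expressed as a percentage of the mean; a minimal sketch with hypothetical daily values:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation over the mean."""
    return stdev(values) / mean(values) * 100.0

# Hypothetical daily rumination times (min) for one cow over 10 days.
rumination_min = [455, 480, 430, 470, 460, 445, 490, 465, 450, 475]
print(round(coefficient_of_variation(rumination_min), 1))
```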

  8. High-Performance Integrated Control of water quality and quantity in urban water reservoirs

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.; Goedbloed, A.

    2015-11-01

    This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).

  9. Variable self-powered light detection CMOS chip with real-time adaptive tracking digital output based on a novel on-chip sensor.

    PubMed

    Wang, HongYi; Fan, Youyou; Lu, Zhijian; Luo, Tao; Fu, Houqiang; Song, Hongjiang; Zhao, Yuji; Christen, Jennifer Blain

    2017-10-02

    This paper provides a solution for self-powered light direction detection with digitized output. Light direction sensors, energy-harvesting photodiodes, a real-time adaptive tracking digital output unit and other necessary circuits are integrated on a single chip based on a standard 0.18 µm CMOS process. The proposed light direction sensors have an accuracy of 1.8 degrees over a 120-degree range. To improve the accuracy, a compensation circuit is presented for the photodiodes' forward currents. The actual measured output precision is approximately 7 ENOB. In addition, an adaptive under-voltage protection circuit is designed for the variable supply power, which may fluctuate with temperature and process.

  10. Quantum information processing with a travelling wave of light

    NASA Astrophysics Data System (ADS)

    Serikawa, Takahiro; Shiozawa, Yu; Ogawa, Hisashi; Takanashi, Naoto; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We exploit quantum information processing on a traveling wave of light, expecting emancipation from thermal noise, easy coupling to fiber communication, and potentially high operation speed. Although optical memories are technically challenging, we have an alternative approach for applying multi-step operations to traveling light, namely continuous-variable one-way computation. So far our achievements include generation of a one-million-mode entangled chain in the time domain, mode engineering of nonlinear resource states, and real-time nonlinear feedforward. Although these are implemented with free-space optics, we are also investigating photonic integration and have performed quantum teleportation with a passive linear waveguide chip as a demonstration of entangling, measurement, and feedforward. We also suggest a loop-based architecture as another model of continuous-variable computing.

  11. A computer software system for the generation of global ocean tides including self-gravitation and crustal loading effects

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1977-01-01

    A computer software system is described which computes global numerical solutions of the integro-differential Laplace tidal equations, including dissipation terms and ocean loading and self-gravitation effects, for arbitrary diurnal and semidiurnal tidal constituents. The integration algorithm features a successive approximation scheme for the integro-differential system, with forward differences in the time variable and central differences in the spatial variables. Solutions for the M2, S2, N2, K2, K1, O1 and P1 tidal constituents neglecting the effects of ocean loading and self-gravitation, and a converged M2 solution including ocean loading and self-gravitation effects, are presented in the form of cotidal and corange maps.
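The forward-in-time, central-in-space differencing described above can be illustrated on a much simpler problem. The sketch below applies the FTCS stencil to 1-D diffusion with periodic boundaries, purely to show the stencil structure; it is not the Laplace tidal system, and the parameters are illustrative:

```python
# FTCS stencil on u_t = D * u_xx with periodic boundaries:
# forward difference in time, central difference in space.
D, dx, dt = 1.0, 1.0, 0.2
r = D * dt / dx**2                 # r = 0.2 <= 0.5, the FTCS stability limit
u = [0.0] * 32
u[16] = 1.0                        # initial spike of unit "mass"

for _ in range(100):
    u = [u[i] + r * (u[(i + 1) % 32] - 2 * u[i] + u[(i - 1) % 32])
         for i in range(32)]

print(round(sum(u), 6))            # the scheme conserves the total: 1.0
print(max(u) < 1.0)                # while the peak spreads out: True
```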

  12. Integrated Cox's model for predicting survival time of glioblastoma multiforme.

    PubMed

    Ai, Zhibing; Li, Longti; Fu, Rui; Lu, Jing-Min; He, Jing-Dong; Li, Sen

    2017-04-01

    Glioblastoma multiforme is the most common primary brain tumor and is highly lethal. This study aims to identify signatures for predicting the survival time of patients with glioblastoma multiforme. Clinical information, messenger RNA expression, microRNA expression, and single-nucleotide polymorphism array data of patients with glioblastoma multiforme were retrieved from The Cancer Genome Atlas. Patients were separated into two groups using 1 year as a cutoff, and a logistic regression model was used to find variables that could predict whether a patient would live longer than 1 year. Furthermore, a Cox model was used to find features correlated with survival time. Finally, a Cox model integrating the significant clinical variables, messenger RNA expression, microRNA expression, and single-nucleotide polymorphisms was built. Although the classification approach failed, signatures of clinical features, messenger RNA expression levels, and microRNA expression levels were identified using the Cox model; however, no single-nucleotide polymorphisms related to prognosis were found. The selected clinical features were age at initial diagnosis, Karnofsky score, and race, all of which had been suggested to correlate with survival time. Both significant microRNAs, microRNA-221 and microRNA-222, target the p27 Kip1 protein, which implies an important role of p27 Kip1 in the prognosis of glioblastoma multiforme patients. Our results suggest that survival modeling is more suitable than classification for identifying prognostic biomarkers for patients with glioblastoma multiforme. An integrated model containing clinical features, messenger RNA levels, and microRNA expression levels was built, which has the potential to be used in clinics and thus improve the survival status of glioblastoma multiforme patients.
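At the core of the Cox model is the partial likelihood, which compares each event's covariates against those of everyone still at risk. A minimal single-covariate sketch with hypothetical data (a coarse grid search stands in for the Newton iteration used by real survival packages):

```python
from math import exp, log

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for one covariate (no tied event times):
    sum over events i of [beta*x_i - log(sum_{j: t_j >= t_i} exp(beta*x_j))]."""
    ll = 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        if not d:
            continue  # censored observations contribute only via risk sets
        risk = sum(exp(beta * x[j]) for j in range(len(times)) if times[j] >= t)
        ll += beta * x[i] - log(risk)
    return ll

# Hypothetical data: covariate x = 1 (e.g. high microRNA-221 expression)
# tends to coincide with shorter survival.
times  = [2, 3, 5, 8, 11, 14, 20, 25]
events = [1, 1, 1, 1, 1, 1, 0, 1]     # 0 = censored
x      = [1, 1, 1, 0, 1, 0, 0, 0]

# Coarse grid search for the maximizing beta.
beta_hat = max((b / 10 for b in range(-50, 51)),
               key=lambda b: cox_partial_loglik(b, times, events, x))
print(beta_hat > 0)  # higher covariate associated with higher hazard: True
```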

  13. Integrating Near-Real Time Hydrologic-Response Monitoring and Modeling for Improved Assessments of Slope Stability Along the Coastal Bluffs of the Puget Sound Rail Corridor, Washington State

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.

    2015-12-01

    Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
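The infinite slope approximation mentioned above reduces the factor-of-safety calculation to a one-dimensional balance on a planar failure surface. A minimal sketch of the classic textbook form with positive pore pressure follows; the study's suction-stress formulation differs in detail, and all parameter values here are hypothetical:

```python
from math import tan, sin, cos, radians

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u):
    """Classic infinite-slope factor of safety with pore pressure u (kPa):
    FS = (c' + (gamma*z*cos^2(beta) - u)*tan(phi')) / (gamma*z*sin(beta)*cos(beta))
    c: cohesion (kPa), phi: friction angle, gamma: unit weight (kN/m^3),
    z: depth to failure plane (m), beta: slope angle (deg)."""
    b, p = radians(beta_deg), radians(phi_deg)
    resisting = c + (gamma * z * cos(b) ** 2 - u) * tan(p)
    driving = gamma * z * sin(b) * cos(b)
    return resisting / driving

# Hypothetical colluvium on a steep bluff: rising pore pressure lowers FS.
dry = infinite_slope_fs(c=4.0, phi_deg=33.0, gamma=19.0, z=1.5, beta_deg=40.0, u=0.0)
wet = infinite_slope_fs(c=4.0, phi_deg=33.0, gamma=19.0, z=1.5, beta_deg=40.0, u=8.0)
print(round(dry, 2), round(wet, 2), wet < dry)
```

Feeding simulated pore pressures from a variably saturated flow model through such a relation is what links the hydrologic monitoring to the stability assessment.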

  14. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes, yet they need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used for this, but the change of variables in every period cannot be accommodated by a traditional DW, which cannot handle variable changes via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW by managing metadata with a multiversion DW (MVDW), designed using the relational model. Other research has found that a nonrelational model in a NoSQL database offers faster read times than the relational model. Therefore, we propose managing metadata changes using NoSQL. This study proposes a DW model to manage change, and algorithms to retrieve data whose metadata have changed. Evaluation of the proposed model and algorithms shows that a database with the proposed design can retrieve data with metadata changes properly. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
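The core idea, documents that carry their own schema version plus a mapping layer that resolves a canonical variable across versions, can be sketched in a few lines. The field names, schema versions, and transform (a renamed, annualised income variable) are purely illustrative and are not the paper's actual design:

```python
# Document-oriented records, NoSQL-style: each survey document embeds its
# schema version; a variable map resolves renames/rescalings across years.
surveys = [
    {"year": 2016, "schema": 1, "data": {"income_monthly": 350}},
    {"year": 2017, "schema": 2, "data": {"income": 4300}},  # renamed + annualised
]

# Per-schema mapping of a canonical variable to (stored field, transform).
variable_map = {
    ("income_annual", 1): ("income_monthly", lambda v: v * 12),
    ("income_annual", 2): ("income", lambda v: v),
}

def query(canonical, docs):
    """Retrieve one canonical variable across metadata versions."""
    out = {}
    for doc in docs:
        field, f = variable_map[(canonical, doc["schema"])]
        out[doc["year"]] = f(doc["data"][field])
    return out

print(query("income_annual", surveys))  # {2016: 4200, 2017: 4300}
```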

  15. Mars dust storms - Interannual variability and chaos

    NASA Technical Reports Server (NTRS)

    Ingersoll, Andrew P.; Lyons, James R.

    1993-01-01

    The hypothesis is that the global climate system, consisting of atmospheric dust interacting with the circulation, produces its own interannual variability when forced at the annual frequency. The model has two time-dependent variables representing the amount of atmospheric dust in the northern and southern hemispheres, respectively. Absorption of sunlight by the dust drives a cross-equatorial Hadley cell that brings more dust into the heated hemisphere. The circulation decays when the dust storm covers the globe. Interannual variability manifests itself either as a periodic solution in which the period is a multiple of the Martian year, or as an aperiodic (chaotic) solution that never repeats. Both kinds of solution are found in the model, lending support to the idea that interannual variability is an intrinsic property of the global climate system. The next step is to develop a hierarchy of dust-circulation models capable of being integrated for many years.
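
    A two-variable system of this kind can be caricatured in a few lines (a toy model with invented coefficients, not the authors' equations): dust in each hemisphere grows under annual, mutually out-of-phase forcing, and growth shuts off as dust approaches global coverage.

```python
import math

def integrate_dust(years=10.0, dt=0.001):
    """Toy two-hemisphere dust model (invented coefficients, not the paper's).

    Each hemisphere's dust grows under annual, mutually out-of-phase forcing
    and decays; growth shuts off as total loading approaches global
    coverage (total -> 1).
    """
    north = south = 0.1
    history = []
    for i in range(round(years / dt)):
        t = i * dt                                             # time in Mars years
        f_n = 1.0 + 0.9 * math.sin(2 * math.pi * t)            # north forcing
        f_s = 1.0 + 0.9 * math.sin(2 * math.pi * t + math.pi)  # half-year lag
        total = north + south
        d_n = (2.5 * f_n * north * (1.0 - total) - 1.5 * north) * dt
        d_s = (2.5 * f_s * south * (1.0 - total) - 1.5 * south) * dt
        north = max(north + d_n, 1e-6)   # keep a trace of background dust
        south = max(south + d_s, 1e-6)
        history.append((t, north, south))
    return history

trajectory = integrate_dust(2.0)
```

    Whether a system of this form settles into a periodic orbit or chaos depends on the coefficients, which is exactly the sensitivity the abstract describes.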

  16. Evaluation of the Xeon phi processor as a technology for the acceleration of real-time control in high-order adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine

    2014-08-01

    We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute-intensive numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ code to be recompiled to run on the Xeon Phi, potentially making development, upgrades and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix-vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
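
    The core operation being benchmarked is straightforward; a pure-Python caricature of the timing experiment (toy sizes, no accelerator - the real benchmark runs optimized code on the Xeon Phi) looks like this:

```python
import random
import time

def mvm(matrix, vector):
    """Dense matrix-vector multiply, the core of AO wavefront reconstruction."""
    return [sum(a * x for a, x in zip(row, vector)) for row in matrix]

n_slopes, n_actuators = 64, 32   # toy sizes; real AO systems are far larger
random.seed(0)
M = [[random.random() for _ in range(n_slopes)] for _ in range(n_actuators)]
v = [random.random() for _ in range(n_slopes)]

timings = []
for _ in range(100):
    t0 = time.perf_counter()
    mvm(M, v)
    timings.append(time.perf_counter() - t0)

# In real-time control the jitter (spread of execution times) matters as
# much as the mean, since a late frame is a missed frame.
mean = sum(timings) / len(timings)
jitter = max(timings) - min(timings)
```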

  17. Weather variability and the incidence of cryptosporidiosis: comparison of time series poisson regression and SARIMA models.

    PubMed

    Hu, Wenbiao; Tong, Shilu; Mengersen, Kerrie; Connell, Des

    2007-09-01

    Few studies have examined the relationship between weather variables and cryptosporidiosis in Australia. This paper examines the potential impact of weather variability on the transmission of cryptosporidiosis and explores the possibility of developing an empirical forecast system. Data on weather variables, notified cryptosporidiosis cases, and population size in Brisbane were supplied by the Australian Bureau of Meteorology, Queensland Department of Health, and Australian Bureau of Statistics, respectively, for the period January 1, 1996 to December 31, 2004. Time series Poisson regression and seasonal autoregressive integrated moving average (SARIMA) models were fitted to examine the potential impact of weather variability on the transmission of cryptosporidiosis. Both the time series Poisson regression and SARIMA models show that seasonal and monthly maximum temperature, at a prior moving average of 1 and 3 months, were significantly associated with cryptosporidiosis. The models suggest that there may be about 50 more cases a year for each 1°C increase in average maximum temperature in Brisbane. Model assessments indicated that the SARIMA model had better predictive ability than the Poisson regression model (SARIMA: root mean square error (RMSE): 0.40, Akaike information criterion (AIC): -12.53; Poisson regression: RMSE: 0.54, AIC: -2.84). Furthermore, the analysis of residuals shows that the time series Poisson regression appeared to violate a modeling assumption, in that residual autocorrelation persisted. The results of this study suggest that weather variability (particularly maximum temperature) may have played a significant role in the transmission of cryptosporidiosis, and that a SARIMA model may be a better predictive model than a Poisson regression model for assessing the relationship between weather variability and the incidence of cryptosporidiosis.
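
    The two model-comparison statistics quoted above are standard and easy to compute; a small sketch with invented numbers shows how (the least-squares form of AIC, n·ln(SSE/n) + 2k, is defined up to an additive constant, so only differences between models are meaningful):

```python
import math

def rmse(observed, predicted):
    """Root mean square error of a fit."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def aic_gaussian(observed, predicted, k):
    """AIC for a Gaussian-error model with k parameters: n*ln(SSE/n) + 2k."""
    n = len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return n * math.log(sse / n) + 2 * k

obs = [3.0, 4.0, 6.0, 5.0]      # hypothetical monthly case counts (scaled)
pred_a = [2.9, 4.2, 5.8, 5.1]   # hypothetical tighter (SARIMA-style) fit
pred_b = [2.5, 4.5, 5.5, 5.5]   # hypothetical looser fit

# The model with lower RMSE and lower AIC is preferred, as in the abstract.
```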

  18. eClims: An Extensible and Dynamic Integration Framework for Biomedical Information Systems.

    PubMed

    Savonnet, Marinette; Leclercq, Eric; Naubourg, Pierre

    2016-11-01

    Biomedical information systems (BIS) require consideration of three types of variability: data variability induced by new high throughput technologies, schema or model variability induced by large scale studies or new fields of research, and knowledge variability resulting from new discoveries. Beyond data heterogeneity, managing variabilities in the context of BIS requires extensible and dynamic integration process. In this paper, we focus on data and schema variabilities and we propose an integration framework based on ontologies, master data, and semantic annotations. The framework addresses issues related to: 1) collaborative work through a dynamic integration process; 2) variability among studies using an annotation mechanism; and 3) quality control over data and semantic annotations. Our approach relies on two levels of knowledge: BIS-related knowledge is modeled using an application ontology coupled with UML models that allow controlling data completeness and consistency, and domain knowledge is described by a domain ontology, which ensures data coherence. A system build with the eClims framework has been implemented and evaluated in the context of a proteomic platform.

  19. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    NASA Astrophysics Data System (ADS)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as background gridded data for generating the integrated nowcasts. The seven forecast variables - temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate - are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
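
    The multi-categorical Heidke skill score used for this verification compares the proportion of correct forecasts with that expected by chance. A minimal implementation for a k-category contingency table (an illustrative sketch, not the SNOW-V10 code):

```python
def heidke_skill_score(table):
    """Multi-category Heidke skill score from a contingency table.

    table[i][j] = count of events forecast in category i, observed in j.
    Returns 1 for a perfect forecast, 0 for no skill beyond chance.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    pc = sum(table[i][i] for i in range(k)) / n          # proportion correct
    row_tot = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (pc - expected) / (1.0 - expected)
```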

  20. Review of Variable Generation Integration Charges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, K.; Fink, S.; Buckley, M.

    2013-03-01

    The growth of wind and solar generation in the United States, and the expectation of continued growth of these technologies, dictates that the future power system will be operated in a somewhat different manner because of increased variability and uncertainty. A small number of balancing authorities have attempted to determine an 'integration cost' to account for these changes to their current operating practices. Some balancing authorities directly charge wind and solar generators for integration, whereas others add integration charges to projected costs of wind and solar in integrated resource plans or in competitive solicitations for generation. This report reviews the balancing authorities that have calculated variable generation integration charges and broadly compares and contrasts the methodologies they used to determine their specific integration charges. The report also profiles each balancing authority and how it derived its wind and solar integration charges.

  1. Lagged correlations between the NAO and the 11-year solar cycle: forced response or internal variability?

    NASA Astrophysics Data System (ADS)

    Oehrlein, J.; Chiodo, G.; Polvani, L. M.; Smith, A. K.

    2017-12-01

    Recently, the North Atlantic Oscillation (NAO) has been suggested to respond to the 11-year solar cycle with a lag of a few years. The solar/NAO relationship provides a potential pathway for solar activity to modulate surface climate. However, a short observational record paired with the strong internal variability of the NAO raises questions about the robustness of the claimed solar/NAO relationship. For the first time, we investigate the robustness of the solar/NAO signal in four different reanalysis data sets and in long integrations from an ocean-coupled chemistry-climate model forced with the 11-year solar cycle. The signal appears to be robust across the different reanalysis data sets. We also show, for the first time, that many features of the observed signal, such as its amplitude, spatial pattern, and lag of 2-3 years, can be accurately reproduced in our model simulations. However, in both the reanalyses and the model simulations, we find that this signal is non-stationary. A lagged NAO/solar signal can also be reproduced in two sets of model integrations without the 11-year solar cycle. This suggests that the correlation found in observational data could be the result of internal decadal variability in the NAO rather than a response to the solar cycle. This has wide implications for the interpretation of solar signals in observational data.

  2. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationarity assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE because they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is used to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could benefit cost-benefit based non-stationary bivariate design flood estimation worldwide.
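
    The role of the copula can be illustrated with the Gumbel family, a common choice for positively dependent flood peak/volume pairs (the specific copula family and parameters used in the paper are not reproduced here; theta below is an illustrative dependence parameter):

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u,v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)).

    theta >= 1; theta = 1 reduces to independence, C(u, v) = u * v.
    """
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-(s ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) via the survival form 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)
```

    Stronger dependence (larger theta) raises the probability that both flood variables exceed their design quantiles simultaneously, which is why ignoring time-varying dependence biases the risk estimate.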

  3. Engineering challenges of BioNEMS: the integration of microfluidics, micro- and nanodevices, models and external control for systems biology.

    PubMed

    Wikswo, J P; Prokop, A; Baudenbacher, F; Cliffel, D; Csukas, B; Velkovsky, M

    2006-08-01

    Systems biology, i.e. quantitative, postgenomic, postproteomic, dynamic, multiscale physiology, addresses in an integrative, quantitative manner the shockwave of genetic and proteomic information using computer models that may eventually have 10(6) dynamic variables with non-linear interactions. Historically, single biological measurements are made over minutes, suggesting the challenge of specifying 10(6) model parameters. Except for fluorescence and micro-electrode recordings, most cellular measurements have inadequate bandwidth to discern the time course of critical intracellular biochemical events. Micro-array expression profiles of thousands of genes cannot determine quantitative dynamic cellular signalling and metabolic variables. Major gaps must be bridged between the computational vision and experimental reality. The analysis of cellular signalling dynamics and control requires, first, micro- and nano-instruments that measure simultaneously multiple extracellular and intracellular variables with sufficient bandwidth; secondly, the ability to open existing internal control and signalling loops; thirdly, external BioMEMS micro-actuators that provide high bandwidth feedback and externally addressable intracellular nano-actuators; and, fourthly, real-time, closed-loop, single-cell control algorithms. The unravelling of the nested and coupled nature of cellular control loops requires simultaneous recording of multiple single-cell signatures. Externally controlled nano-actuators, needed to effect changes in the biochemical, mechanical and electrical environment both outside and inside the cell, will provide a major impetus for nanoscience.

  4. Integration of GIS, Geostatistics, and 3-D Technology to Assess the Spatial Distribution of Soil Moisture

    NASA Technical Reports Server (NTRS)

    Betts, M.; Tsegaye, T.; Tadesse, W.; Coleman, T. L.; Fahsi, A.

    1998-01-01

    The spatial and temporal distribution of near-surface soil moisture is of fundamental importance to many physical, biological, biogeochemical, and hydrological processes. However, knowledge of these space-time dynamics and the processes which control them remains unclear. The integration of geographic information systems (GIS) and geostatistics promises a simple mechanism to evaluate and display the spatial and temporal distribution of this vital hydrologic and physical variable. Therefore, this research demonstrates the use of geostatistics and GIS to predict and display soil moisture distribution under vegetated and non-vegetated plots. The research was conducted at the Winfred Thomas Agricultural Experiment Station (WTAES), Hazel Green, Alabama. Soil moisture measurements were made on a 10 by 10 m grid from tall fescue grass (GR), alfalfa (AA), bare rough (BR), and bare smooth (BS) plots. Results indicated that the variance associated with soil moisture was higher for vegetated plots than for non-vegetated plots. The presence of vegetation in general contributed to the spatial variability of soil moisture. Integration of geostatistics and GIS can improve the productivity of farmlands and the precision of farming.
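
    The geostatistical step behind such maps is typically an empirical semivariogram estimated from the grid data before kriging. A sketch for a 1-D transect of a 10 m grid (the moisture values are hypothetical, not the WTAES data):

```python
def empirical_semivariogram(coords, values, lag, tol):
    """Empirical semivariance gamma(h) = (1 / 2N(h)) * sum (z_i - z_j)^2
    over all point pairs whose separation is within tol of the lag h."""
    sq_diffs = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            h = abs(coords[i] - coords[j])
            if abs(h - lag) <= tol:
                sq_diffs.append((values[i] - values[j]) ** 2)
    if not sq_diffs:
        return None   # no pairs at this lag
    return sum(sq_diffs) / (2.0 * len(sq_diffs))

coords = [0.0, 10.0, 20.0, 30.0]       # positions along a transect (m)
values = [0.30, 0.32, 0.35, 0.31]      # hypothetical volumetric water content
g10 = empirical_semivariogram(coords, values, lag=10.0, tol=0.5)
```

    Fitting a variogram model to such estimates at several lags is what lets kriging interpolate soil moisture between the grid points.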

  5. Humidity profiles over the ocean

    NASA Technical Reports Server (NTRS)

    Liu, W. T.; Tang, Wenqing; Niiler, Pearn P.

    1991-01-01

    The variability of atmospheric humidity profiles over the oceans on daily to interannual time scales was examined using nine years of daily and semidaily radiosonde soundings at island stations extending from the Arctic to the South Pacific. The relative humidity profiles were found to have considerable temporal and geographic variability, contrary to the prevalent assumption. Principal component analysis of the specific humidity profiles was used to examine the applicability of a relation between the surface-level humidity and the integrated water vapor; this relation has been used to estimate large-scale evaporation from satellite data. The first principal component was found to correlate almost perfectly with the integrated water vapor. The fractional variance represented by this mode increases with increasing period: it reaches approximately 90 percent at two weeks but decreases sharply below one week, down to approximately 60 percent at the daily period. At low frequencies, the integrated water vapor appeared to be an adequate estimator of the humidity profile and the surface-level humidity. At periods shorter than a week, more than one independent estimator is needed.
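
    The principal-component step can be sketched with a stdlib-only power iteration (the toy rank-1 "profiles" below stand in for the radiosonde data, which are not reproduced):

```python
import random

def first_principal_component(samples, iters=200):
    """Leading eigenvector of the sample covariance via power iteration.

    samples: list of equal-length profiles (rows = observations)."""
    n, d = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in samples]
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    random.seed(1)
    v = [random.random() for _ in range(d)]
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def scores(samples, component):
    """Projection of each centered profile onto the component."""
    n, d = len(samples), len(samples[0])
    means = [sum(row[j] for row in samples) / n for j in range(d)]
    return [sum((row[j] - means[j]) * component[j] for j in range(d))
            for row in samples]

profiles = [[t * 1.0, t * 2.0, t * 3.0] for t in range(4)]  # toy humidity profiles
pc1 = first_principal_component(profiles)
proj = scores(profiles, pc1)
```

    In the paper's setting, the `proj` values of the first mode are what correlate almost perfectly with the integrated water vapor.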

  6. Packetized video on MAGNET

    NASA Astrophysics Data System (ADS)

    Lazar, Aurel A.; White, John S.

    1986-11-01

    Theoretical analysis of an ILAN model of MAGNET, an integrated network testbed developed at Columbia University, shows that the bandwidth freed up by video and voice calls during periods of little movement in the images and silence periods in the speech signals can be used efficiently for graphics and data transmission. Based on these investigations, an architecture supporting adaptive protocols that are dynamically controlled by the requirements of a fluctuating load and a changing user environment has been advanced. To further analyze the behavior of the network, a real-time packetized video system has been implemented. This system is embedded in the real-time multimedia workstation EDDY, which integrates video, voice and data traffic flows. Protocols supporting variable-bandwidth, constant-quality packetized video transport are described in detail.

  7. [Impact of acquired brain injury towards the community integration: employment outcome, disability and dependence two years after injury].

    PubMed

    Luna-Lario, P; Ojeda, N; Tirapu-Ustarroz, J; Pena, J

    2016-06-16

    To analyze the impact of acquired brain injury on community integration (professional career, disability, and dependence) in a sample of people with acquired brain damage of vascular, traumatic, and tumor etiology over the two years following the injury, and to examine which sociodemographic variables and premorbid and injury-related clinical data predict the person's level of integration into the community. The sample comprised 106 adults with acquired brain injury, attended by the Neuropsychology and Neuropsychiatry Department at the Hospital of Navarra (Spain), whose main sequela was memory deficit. Differences among groups were analyzed using Student's t, chi-squared, and Mann-Whitney U tests. Of the participants who were actively working before the injury, 19% and 29% regained their previous status within one and two years, respectively. Of the total sample, 45% were recognized as disabled and 17% as dependent. No relationship was found between sociodemographic and clinical variables and the functional parameters observed. Acquired brain damage has a high-intensity impact on the affected person's life trajectory. Nevertheless, in Spain, its consequences for sociolaboral adjustment over the two years following the injury, assessed through functional parameters recorded in official governmental registers, had never before been studied in a sample of vascular, traumatic, and tumor etiology.

  8. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2015-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
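
    For linear problems the MacCormack method reduces to a compact predictor-corrector pair; the sketch below shows the scheme's structure on the linear advection equation u_t + a u_x = 0 with periodic boundaries (the paper applies the method to the quasi-1D nozzle equations, which are not reproduced here).

```python
import math

def maccormack_advection(u, a, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + a u_x = 0 (periodic domain)."""
    n = len(u)
    c = a * dt / dx                     # CFL number; stable for |c| <= 1
    for _ in range(steps):
        # Predictor: forward difference
        up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
        # Corrector: backward difference on predicted values, then average
        u = [0.5 * (u[i] + up[i] - c * (up[i] - up[(i - 1) % n]))
             for i in range(n)]
    return u

# Advect a sine wave exactly one period (distance a*dt*steps = domain length)
# and compare with the initial condition; the scheme is second-order accurate.
n = 100
u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
u1 = maccormack_advection(u0, a=1.0, dx=1.0 / n, dt=0.005, steps=200)
err = max(abs(x - y) for x, y in zip(u0, u1))
```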

  9. Computational Fluid Dynamics Modeling of a Supersonic Nozzle and Integration into a Variable Cycle Engine Model

    NASA Technical Reports Server (NTRS)

    Connolly, Joseph W.; Friedlander, David; Kopasakis, George

    2014-01-01

    This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.

  10. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
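
    The first of the three within-observation tests can be sketched in a few lines: the one-sample Kolmogorov-Smirnov statistic compares the empirical distribution of photon arrival times against the uniform distribution expected for a constant source (a simplified illustration, not the CSC pipeline; the Kuiper variant instead sums the largest deviations above and below the model CDF).

```python
def ks_statistic_uniform(event_times, t_start, t_stop):
    """Max |empirical CDF - model CDF| for a constant-rate (uniform) model.

    A large statistic flags variability within the observation."""
    times = sorted(event_times)
    n = len(times)
    d = 0.0
    for i, t in enumerate(times):
        model = (t - t_start) / (t_stop - t_start)   # uniform CDF
        d = max(d, abs((i + 1) / n - model), abs(i / n - model))
    return d

even = [i + 0.5 for i in range(10)]           # steady arrivals over [0, 10]
bursty = [0.1 * (i + 1) for i in range(10)]   # all arrivals in the first second
d_even = ks_statistic_uniform(even, 0.0, 10.0)
d_bursty = ks_statistic_uniform(bursty, 0.0, 10.0)
```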

  11. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  12. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2006-11-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  13. Functional Integration

    NASA Astrophysics Data System (ADS)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2010-06-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  14. Nash and integrated solutions in a just-in-time seller-buyer supply chain with buyer's ordering cost reductions

    NASA Astrophysics Data System (ADS)

    Lou, Kuo-Ren; Wang, Lu

    2016-05-01

    The seller frequently offers the buyer trade credit to settle the purchase amount. From the seller's perspective, granting trade credit increases not only the opportunity cost (i.e., the interest lost on the buyer's purchase amount during the credit period) but also the default risk (i.e., the rate at which the buyer will be unable to pay off his/her debt obligations). On the other hand, granting trade credit increases sales volume and revenue. Consequently, trade credit is an important strategy for increasing the seller's profitability. In this paper, we assume that the seller uses trade credit and the number of shipments in a production run as decision variables to maximise his/her profit, while the buyer determines his/her replenishment cycle time and capital investment as decision variables to reduce his/her ordering cost and achieve his/her maximum profit. We then derive the non-cooperative Nash solution and the cooperative integrated solution in a just-in-time inventory system, in which granting trade credit increases not only the demand but also the opportunity cost and default risk, and the relationship between the capital investment and the ordering-cost reduction is logarithmic. We then solve and compare these two distinct solutions numerically. Finally, we use sensitivity analysis to obtain some managerial insights.
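
    The structure of the comparison - each party optimizing alone (Nash) versus optimizing joint profit (integrated) - can be caricatured with a grid search. All functional forms and numbers below are invented stand-ins, including the logarithmic investment cost delta*ln(A0/A) for reducing the buyer's ordering cost from A0 down to A; the paper's actual profit functions and trade-credit terms are not reproduced.

```python
import math

A0, delta, demand, hold = 100.0, 50.0, 1000.0, 2.0   # illustrative parameters
margin_buyer, margin_seller = 3.0, 2.0

def buyer_profit(T, A):
    """Buyer chooses cycle time T and reduced ordering cost A <= A0."""
    invest = delta * math.log(A0 / A)          # logarithmic investment cost
    return margin_buyer * demand - A / T - hold * demand * T / 2 - invest

def seller_profit(T):
    """Seller's toy profit: margin minus a fixed setup cost per cycle."""
    return margin_seller * demand - 30.0 / T

grid_T = [0.05 * k for k in range(1, 41)]
grid_A = [5.0 * k for k in range(1, 21)]

# Nash: the buyer optimizes his/her own profit alone.
nash = max((buyer_profit(T, A), T, A) for T in grid_T for A in grid_A)
# Integrated: the joint (channel) profit is optimized.
joint = max((buyer_profit(T, A) + seller_profit(T), T, A)
            for T in grid_T for A in grid_A)
```

    By construction the integrated optimum never yields less total profit than the Nash outcome, which is the qualitative point the comparison in the paper makes precise.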

  15. Integrating Variable Renewable Energy - Russia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future. This is a Russian-language translation of Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid, originally published in English in May 2015.

  16. Integrating Variable Renewable Energy into the Grid: Key Issues, Greening the Grid (Spanish Version)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This is the Spanish version of 'Greening the Grid - Integrating Variable Renewable Energy into the Grid: Key Issues'. To foster sustainable, low-emission development, many countries are establishing ambitious renewable energy targets for their electricity supply. Because solar and wind tend to be more variable and uncertain than conventional sources, meeting these targets will involve changes to power system planning and operations. Grid integration is the practice of developing efficient ways to deliver variable renewable energy (VRE) to the grid. Good integration methods maximize the cost-effectiveness of incorporating VRE into the power system while maintaining or increasing system stability and reliability. When considering grid integration, policy makers, regulators, and system operators consider a variety of issues, which can be organized into four broad topics: New Renewable Energy Generation, New Transmission, Increased System Flexibility, and Planning for a High RE Future.

  17. A High Precision Prediction Model Using Hybrid Grey Dynamic Model

    ERIC Educational Resources Information Center

    Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro

    2008-01-01

    In this paper, we propose a new prediction analysis model that combines the first-order, one-variable grey differential equation model (abbreviated as the GM(1,1) model) from grey system theory with the autoregressive integrated moving average (ARIMA) time series model from statistics. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
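
    For readers unfamiliar with the grey component, the plain GM(1,1) fit-and-forecast procedure can be sketched as follows. This is the textbook model only, not the paper's hybrid ARGM(1,1) (the ARIMA residual correction is omitted), and the function name and all values are illustrative:

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to series x0 and forecast n_ahead steps.
    Illustrative sketch of the standard GM(1,1) procedure only."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # 1-AGO: accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean generating sequence
    # Least squares for the whitened equation x0[k] + a*z1[k] = b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO restores the original-scale series
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
```

    For a nearly exponential series the development coefficient a captures the growth rate almost exactly, which is why GM(1,1) is typically paired with ARIMA to model the residual, non-exponential structure.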

  18. An Integrated Theory of Attention and Decision Making in Visual Signal Detection

    ERIC Educational Resources Information Center

    Smith, Philip L.; Ratcliff, Roger

    2009-01-01

    The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in…

  19. A Rotating Space Interferometer with Variable Baselines and Low Power Consumption

    NASA Technical Reports Server (NTRS)

    Gezari, Daniel Y.

    1999-01-01

    A new concept is presented here for a large, rotating space interferometer that would achieve full u,v-plane coverage with reasonably uniform integration times, yet, once set in motion, would require no additional energy to change collector separations, maintain constant baseline rotation rates, or counteract centrifugal forces on the collectors.

  20. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain parts of the simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time step constraint imposed by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.

  1. Integrative Motivation: Changes during a Year-Long Intermediate-Level Language Course

    ERIC Educational Resources Information Center

    Gardner, R. C.; Masgoret, A. M.; Tennant, J.; Mihic, L.

    2004-01-01

    The socioeducational model of second language acquisition postulates that language learning is a dynamic process in which affective variables influence language achievement and achievement and experiences in language learning can influence some affective variables. Five classes of variable are emphasized: integrativeness, attitudes toward the…

  2. Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model

    PubMed Central

    Teka, Wondimu; Marinov, Toma M.; Santamaria, Fidel

    2014-01-01

    The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation. PMID:24675903
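
    One common discretization of such a fractional derivative is the Grünwald-Letnikov form, in which each voltage update includes a weighted sum over the entire voltage history (the memory trace). A minimal sketch with placeholder parameters, not the paper's model values:

```python
import numpy as np

def fractional_lif(alpha, I, T=200.0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Grünwald-Letnikov discretization of a fractional leaky
    integrate-and-fire neuron. Illustrative sketch; parameters are
    placeholders. Returns the voltage trace and spike times."""
    n_steps = int(T / dt)
    # GL memory weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = np.empty(n_steps + 1)
    w[0] = 1.0
    for k in range(1, n_steps + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    v = np.zeros(n_steps + 1)
    spikes = []
    for n in range(n_steps):
        drift = (-v[n] + I) / tau   # leak toward rest (0), drive by input I
        # Memory trace: weighted sum over the whole voltage history
        memory = np.dot(w[1:n + 2][::-1], v[:n + 1])
        v[n + 1] = dt**alpha * drift - memory
        if v[n + 1] >= v_th:
            v[n + 1] = v_reset
            spikes.append((n + 1) * dt)
    return v, spikes
```

    With alpha = 1 the weights reduce to w_0 = 1, w_1 = -1, and the rest vanish, so the scheme collapses to forward Euler for the ordinary leaky integrate-and-fire model; smaller exponents weight the voltage trajectory further into the past.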

  3. Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016.

    PubMed

    Wohland, Jan; Reyers, Mark; Märker, Carolin; Witthaut, Dirk

    2018-01-01

    Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management reached an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We therefore investigate the effect of natural climate variability on grid management costs. Focusing on redispatch as a main cost driver, we show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, variations in annual redispatch costs on the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system, arising from natural wind variability.

  4. Seamless variation of isometric and anisometric dynamical integrity measures in basins' erosion

    NASA Astrophysics Data System (ADS)

    Belardinelli, P.; Lenci, S.; Rega, G.

    2018-03-01

    Anisometric integrity measures defined as improvement and generalization of two existing measures (LIM, local integrity measure, and IF, integrity factor) of the extent and compactness of basins of attraction are introduced. Non-equidistant measures make it possible to account for inhomogeneous sensitivities of the state space variables to perturbations, thus permitting a more confident and targeted identification of the safe regions. All four measures are used for a global dynamics analysis of the twin-well Duffing oscillator, which is performed by considering a nearly continuous variation of a governing control parameter, thanks to the use of parallel computation allowing reasonable CPU time. This improves literature results based on finite (and commonly large) variations of the parameter, due to computational constraints. The seamless evolution of key integrity measures highlights the fine aspects of the erosion of the safe domain with respect to the increasing forcing amplitude.

  5. Packaging of Human Chromosome 19-Specific Adeno-Associated Virus (AAV) Integration Sites in AAV Virions during AAV Wild-Type and Recombinant AAV Vector Production

    PubMed Central

    Hüser, Daniela; Weger, Stefan; Heilbronn, Regine

    2003-01-01

    Adeno-associated virus type 2 (AAV-2) establishes latency by site-specific integration into a unique locus on human chromosome 19, called AAVS1. During the development of a sensitive real-time PCR assay for site-specific integration, AAV-AAVS1 junctions were reproducibly detected in highly purified AAV wild-type and recombinant AAV vector stocks. A series of controls documented that the junctions were packaged in AAV capsids and were newly generated during a single round of AAV production. Cloned junctions displayed variable AAV sequences fused to AAVS1. These data suggest that packaged junctions represent footprints of AAV integration during productive infection. Apparently, AAV latency established by site-specific integration and the helper virus-dependent, productive AAV cycle are more closely related than previously thought. PMID:12663794

  6. Social vulnerability and climate variability in southern Brazil: a TerraPop case study

    NASA Astrophysics Data System (ADS)

    Adamo, S. B.; Fitch, C. A.; Kugler, T.; Doxsey-Whitfield, E.

    2014-12-01

    Climate variability is an inherent characteristic of the Earth's climate, including but not limited to climate change. It affects and impacts human society in different ways, depending on the underlying socioeconomic vulnerability of specific places, social groups, households, and individuals. This differential vulnerability presents spatial and temporal variations and is rooted in historical patterns of development and relations between human and ecological systems. This study aims to assess the impact of climate variability on livelihoods and well-being, as well as their changes over time and across space, for both rural and urban populations. The geographic focus is Southern Brazil (the states of Paraná, Santa Catarina, and Rio Grande do Sul), and the objectives include (a) identifying and mapping critical areas or hotspots of exposure to climate variability (temperature and precipitation), and (b) identifying internal variation or differential vulnerability within these areas and its evolution over time (1980-2010), using newly available integrated data from the Terra Populus project. These data include geo-referenced climate and agricultural data, and data describing demographic and socioeconomic characteristics of individuals, households, and places.

  7. Data driven analysis of rain events: feature extraction, clustering, microphysical /macro physical relationship

    NASA Astrophysics Data System (ADS)

    Djallel Dilmi, Mohamed; Mallet, Cécile; Barthes, Laurent; Chazottes, Aymeric

    2017-04-01

    The study of rain time series records is mainly carried out using rainfall rate or rain accumulation parameters estimated over a fixed integration time (typically 1 min, 1 hour, or 1 day). In this study we use the concept of a rain event. In fact, the discrete and intermittent nature of rain processes makes some features inadequate when defined over a fixed duration. Long integration times (hour, day) mix rainy and clear-air periods in the same sample, while short integration times (seconds, minutes) lead to noisy data with high sensitivity to detector characteristics. Analyzing whole rain events instead of individual short samples of fixed duration clarifies relationships between features, in particular between macrophysical and microphysical ones. This approach suppresses the intra-event variability partly due to measurement uncertainties and allows focusing on physical processes. An algorithm based on a Genetic Algorithm (GA) and Self-Organizing Maps (SOM) is developed to obtain a parsimonious characterisation of rain events using a minimal set of variables. The use of a self-organizing map is justified by the fact that it maps a high-dimensional data space into a two-dimensional space while preserving the topology of the initial space as much as possible, in an unsupervised way. The obtained SOM reveals the dependencies between variables and consequently allows removing redundant variables, leading to a minimal subset of only five features (the event duration, the rain rate peak, the rain event depth, the standard deviation of the event rain rate, and the absolute rain rate variation of order 0.5). To confirm the relevance of the five selected features, the corresponding SOM is analyzed. This analysis clearly shows the existence of relationships between features. It also shows the independence of the inter-event time (IETp) feature and the weak dependence of the dry percentage in event (Dd%e) feature. This confirms that a rain time series can be considered as an alternation of independent rain events and rain-free periods. The five selected features are used to perform a hierarchical clustering of the events. The well-known division between stratiform and convective events appears clearly. This two-class classification is then refined into five fairly homogeneous subclasses. The data-driven analysis performed on whole rain events instead of fixed-length samples allows identifying strong relationships between macrophysical (rain rate based) and microphysical (raindrop based) features. We show that some of the five identified subclasses have specific microphysical characteristics. Obtaining information on the microphysical characteristics of rainfall events from rain gauge measurements has many implications for quantitative precipitation estimation (QPE) and for the improvement of rain rate retrieval algorithms in remote sensing contexts.
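
    The event segmentation and a few of the per-event features named above can be sketched roughly as follows. The dry-gap threshold, units, and feature names are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def rain_events(rate, dt_min=1.0, gap_min=30.0):
    """Split a rain-rate series (one sample per dt_min minutes) into
    events separated by dry gaps of at least gap_min minutes, then
    compute a few per-event features. Illustrative sketch only."""
    events, start, dry = [], None, 0
    for i, r in enumerate(rate):
        if r > 0:
            if start is None:
                start = i                       # event begins
            dry = 0
        elif start is not None:
            dry += 1
            if dry * dt_min >= gap_min:         # gap long enough: close event
                events.append((start, i - dry + 1))
                start, dry = None, 0
    if start is not None:                       # close a trailing event
        events.append((start, len(rate) - dry))
    feats = []
    for s, e in events:
        seg = np.asarray(rate[s:e], dtype=float)
        feats.append({
            "duration_min": (e - s) * dt_min,
            "peak": seg.max(),
            "depth_mm": seg.sum() * dt_min / 60.0,  # mm, if rate is in mm/h
            "rate_std": seg.std(),
        })
    return feats
```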

  8. Predicting phenology by integrating ecology, evolution and climate science

    USGS Publications Warehouse

    Pau, Stephanie; Wolkovich, Elizabeth M.; Cook, Benjamin I.; Davies, T. Jonathan; Kraft, Nathan J.B.; Bolmgren, Kjell; Betancourt, Julio L.; Cleland, Elsa E.

    2011-01-01

    Forecasting how species and ecosystems will respond to climate change has been a major aim of ecology in recent years. Much of this research has focused on phenology — the timing of life-history events. Phenology has well-demonstrated links to climate, from genetic to landscape scales; yet our ability to explain and predict variation in phenology across species, habitats and time remains poor. Here, we outline how merging approaches from ecology, climate science and evolutionary biology can advance research on phenological responses to climate variability. Using insight into seasonal and interannual climate variability combined with niche theory and community phylogenetics, we develop a predictive approach for species' responses to changing climate. Our approach predicts that species occupying higher latitudes or the early growing season should be most sensitive to climate and have the most phylogenetically conserved phenologies. We further predict that temperate species will respond to climate change by shifting in time, while tropical species will respond by shifting in space, or by evolving. Although we focus here on plant phenology, our approach is broadly applicable to ecological research on plant responses to climate variability.

  9. Shuttle program: Ground tracking data program document shuttle OFT launch/landing

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1977-01-01

    The equations for processing ground tracking data during a space shuttle ascent or entry, or any non-free-flight phase of a shuttle mission, are given. The resulting computer program processes data from up to three stations simultaneously: C-band station number 1, C-band station number 2, and an S-band station. The C-band data consist of range, azimuth, and elevation angle measurements. The S-band data consist of range, two angles, and integrated Doppler data in the form of cycle counts. A nineteen-element state vector is used in a Kalman filter to process the measurements. The acceleration components of the shuttle are taken to be independent exponentially correlated random variables. Nine elements of the state vector are the measurement bias errors associated with range and two angles for each tracking station. The biases are all modeled as exponentially correlated random variables with a typical time constant of 108 seconds. All time constants are taken to be the same for all nine state variables. This simplifies the logic in propagating the state error covariance matrix ahead in time.
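
    An exponentially correlated (first-order Gauss-Markov) random variable of the kind used here for the acceleration components and measurement biases can be simulated as follows. The function name and parameter values other than the 108 s time constant quoted above are illustrative:

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma, rng):
    """Simulate a first-order Gauss-Markov (exponentially correlated)
    process with correlation time tau and stationary std sigma.
    Illustrative sketch of the state model, not the shuttle program."""
    phi = np.exp(-dt / tau)               # discrete state transition factor
    q = sigma**2 * (1.0 - phi**2)         # keeps the variance stationary
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = phi * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    return x
```

    In the filter itself, the same phi appears on the diagonal of the state transition matrix and q enters the process noise covariance, so the bias states remain statistically stationary as the covariance is propagated ahead in time.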

  10. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure-preserving flows, the integration of magnetic field lines in a plasma, and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.
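
    A generic implicit midpoint (IM) step, solved by fixed-point iteration, can be sketched as follows. This illustrates the basic scheme discussed above for a user-supplied field f, not the paper's field-line integrator:

```python
import numpy as np

def implicit_midpoint_step(f, x, h, tol=1e-12, max_iter=50):
    """One implicit-midpoint step x_{n+1} = x_n + h*f((x_n + x_{n+1})/2),
    solved by fixed-point iteration (converges for h small enough).
    Generic sketch of the IM scheme."""
    x_new = x + h * f(x)                    # explicit Euler predictor
    for _ in range(max_iter):
        x_mid = 0.5 * (x + x_new)           # midpoint of the step
        x_next = x + h * f(x_mid)
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new
```

    For a linear rotation field the IM step is a Cayley transform and conserves the radius exactly, which is the discrete analogue of the invariant-preservation (KAM-tori) behavior discussed above.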

  11. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

    Background Combining different sources of information to improve the available biological knowledge is a current challenge in bioinformatics. Kernel-based methods are among the most powerful approaches for integrating heterogeneous data types. Kernel-based data integration consists of two basic steps: first, an appropriate kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
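
    The core of such a pipeline, kernel PCA on a single data source, can be sketched with an RBF kernel and an eigendecomposition of the doubly centered Gram matrix. This is a minimal illustration under assumed kernel and parameter choices, not the authors' implementation:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project the rows of X onto the leading kernel principal
    components of an RBF (Gaussian) kernel. Minimal sketch."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    # Double centering: center the data implicitly in feature space
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]     # largest eigenvalues first
    # Scale eigenvectors by sqrt(eigenvalue) to obtain sample projections
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

    Multiple data sources would then be handled by combining their kernels (e.g., a weighted sum of Gram matrices) before the centering and eigendecomposition steps.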

  12. Beyond these walls: Can psychosocial clubhouses promote the social integration of adults with serious mental illness in the community?

    PubMed

    Gumber, Shinakee; Stein, Catherine H

    2018-03-01

    The study examined factors associated with the community integration experiences of adults with serious mental illness who were members of psychosocial rehabilitation clubhouses in New York City. Ninety-two clubhouse members completed an online survey. The study examined the relative contribution of individual factors (self-reported psychiatric symptoms, self-esteem), community supports (self-reported employment status and perceived family support), and the clubhouse environment (self-reported time spent in the clubhouse, clubhouse supportiveness, and practical orientation) in accounting for variation in members' reports of social integration within the clubhouse and within the larger community. Hierarchical linear regression results suggest a differential pattern of variables associated with participants' experience of social integration within the clubhouse versus in the larger community outside it. Adults' reports of more time spent in the clubhouse and perceptions of the clubhouse environment as having a more practical orientation were associated with reports of greater social integration within the clubhouse. In contrast, greater self-esteem and being independently employed were associated with greater social integration outside the clubhouse. Perceived family support was associated with higher levels of social integration both within and outside the clubhouse setting. Conclusion and Implication for Practice: Greater social integration of clubhouse members both in and outside the clubhouse environment is essential to understanding community integration. Recommendations for the clubhouse model to improve the community integration experiences of its members are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Some new retarded nonlinear Volterra-Fredholm type integral inequalities with maxima in two variables and their applications.

    PubMed

    Xu, Run; Ma, Xiangting

    2017-01-01

    In this paper, we establish some new retarded nonlinear Volterra-Fredholm type integral inequalities with maxima in two independent variables, and we present the applications to research the boundedness of solutions to retarded nonlinear Volterra-Fredholm type integral equations.

  14. Determining Directional Dependency in Causal Associations

    PubMed Central

    Pornprasertmanit, Sunthud; Little, Todd D.

    2014-01-01

    Directional dependency is a method to determine the likely causal direction of effect between two variables. This article aims to critique and improve upon the use of directional dependency as a technique to infer causal associations. We comment on several issues raised by von Eye and DeShon (2012), including: encouraging the use of the signs of skewness and excess kurtosis of both variables, discouraging the use of D'Agostino's K2, and encouraging the use of directional dependency to compare variables only within time points. We offer improved steps for determining directional dependency that fix the problems we note. Next, we discuss how to integrate directional dependency into longitudinal data analysis with two variables. We also examine the accuracy of directional dependency evaluations when several regression assumptions are violated. Directional dependency can suggest the direction of a relation if (a) the regression error in the population is normal, (b) any unobserved explanatory variable correlates with the observed variables at .2 or less, (c) any curvilinear relation between the variables is not strong (standardized regression coefficient ≤ .2), (d) there are no bivariate outliers, and (e) both variables are continuous. PMID:24683282
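
    One classical heuristic behind directional dependency (in the spirit of Dodge and Rousson, not necessarily the exact procedure of this article) is that, under a linear model with a normal error, the response is less skewed than the cause, because skewness is attenuated by the cubed correlation. A toy sketch under those assumptions:

```python
import numpy as np

def skewness(v):
    """Sample skewness: third central moment over variance^(3/2)."""
    m = np.asarray(v, dtype=float) - np.mean(v)
    return np.mean(m**3) / np.mean(m**2) ** 1.5

def likely_direction(x, y):
    """Toy directional-dependency check: declare the more skewed
    variable the likely cause. Illustrative heuristic only; it assumes
    a normal error and breaks down when both variables are symmetric."""
    return "x->y" if abs(skewness(x)) > abs(skewness(y)) else "y->x"
```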

  15. Embroidered Electromyography: A Systematic Design Guide.

    PubMed

    Shafti, Ali; Ribas Manero, Roger B; Borg, Amanda M; Althoefer, Kaspar; Howard, Matthew J

    2017-09-01

    Muscle activity monitoring or electromyography (EMG) is a useful tool. However, EMG is typically invasive, expensive and difficult to use for untrained users. A possible solution is textile-based surface EMG (sEMG) integrated into clothing as a wearable device. This is, however, challenging due to 1) uncertainties in the electrical properties of conductive threads used for electrodes, 2) imprecise fabrication technologies (e.g., embroidery, sewing), and 3) lack of standardization in design variable selection. This paper, for the first time, provides a design guide for such sensors by performing a thorough examination of the effect of design variables on sEMG signal quality. Results show that imprecisions in digital embroidery lead to a trade-off between low electrode impedance and high manufacturing consistency. An optimum set of variables for this trade-off is identified and tested with sEMG during a variable force isometric grip exercise with n = 12 participants, compared with conventional gel-based electrodes. Results show that thread-based electrodes provide a similar level of sensitivity to force variation as gel-based electrodes with about 90% correlation to expected linear behavior. As proof of concept, jogging leggings with integrated embroidered sEMG are made and successfully tested for detection of muscle fatigue while running on different surfaces.

  16. Contact Hamiltonian systems and complete integrability

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    2017-12-01

    We summarize recent results on the integrability of Hamiltonian systems on contact manifolds. We explain how to extend the classical formulation of action-angle variables to contact integrable systems. Using the Jacobi brackets defined on contact manifolds, we discuss the commutativity of first integrals for contact Hamiltonian systems and present the construction of generalized contact action-angle variables. We illustrate the integrability in the contact geometry on the five-dimensional Sasaki-Einstein spaces T^{1,1} and Y^{p,q}.

  17. Numerical experiments on short-term meteorological effects on solar variability

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.; Hansen, J. E.; Stone, P. H.; Quirk, W. J.; Lacis, A. A.

    1975-01-01

    A set of numerical experiments was conducted to test the short-range sensitivity of a large atmospheric general circulation model to changes in solar constant and ozone amount. On the basis of the results of 12-day sets of integrations with very large variations in these parameters, it is concluded that realistic variations would produce insignificant meteorological effects. Any causal relationships between solar variability and weather, for time scales of two weeks or less, rely upon changes in parameters other than solar constant or ozone amounts, or upon mechanisms not yet incorporated in the model.

  18. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  19. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.

  20. A Potential Function Derivation of a Constitutive Equation for Inelastic Material Response

    NASA Technical Reports Server (NTRS)

    Stouffer, D. C.; Elfoutouh, N. A.

    1983-01-01

    Physical and thermodynamic concepts are used to develop a potential function for application to high temperature polycrystalline material response. Inherent in the formulation is a differential relationship between the potential function and constitutive equation in terms of the state variables. Integration of the differential relationship produces a state variable evolution equation that requires specification of the initial value of the state variable and its time derivative. It is shown that the initial loading rate, which is directly related to the initial hardening rate, can significantly influence subsequent material response. This effect is consistent with observed material behavior on the macroscopic and microscopic levels, and may explain the wide scatter in response often found in creep testing.

  1. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recordings reveal that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that, despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models provide a natural theoretical framework in which to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their network dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, the proposed method is highly efficient numerically. We envision that our algorithm is especially suited to studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
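
    The exact event-driven idea rests on the fact that, for a perfect (non-leaky) integrate-and-fire neuron driven by Brownian noise, the first-passage time to threshold is inverse-Gaussian distributed, so spike times can be sampled exactly without a time grid. A minimal single-neuron sketch follows; the paper's actual contribution, the causal ordering of delayed interactions in a recurrent network, is omitted here.

```python
import numpy as np

def event_driven_spikes(mu, sigma, theta, t_end, rng):
    """Event-driven simulation of one perfect (non-leaky) stochastic
    integrate-and-fire neuron, dV = mu*dt + sigma*dW, with spike-and-reset
    at V >= theta.  Between resets, the first-passage time to threshold is
    inverse-Gaussian distributed, so spikes are sampled exactly."""
    mean = theta / mu               # mean first-passage time
    shape = (theta / sigma) ** 2    # inverse-Gaussian shape parameter
    t, spikes = 0.0, []
    while True:
        t += rng.wald(mean, shape)  # exact first-passage-time draw
        if t > t_end:
            break
        spikes.append(t)
    return np.array(spikes)

rng = np.random.default_rng(0)
spikes = event_driven_spikes(mu=1.0, sigma=0.5, theta=1.0, t_end=1000.0, rng=rng)
```

    With a mean first-passage time of theta/mu = 1, roughly a thousand spikes are expected over this window, each placed at an exactly sampled time.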

  2. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors in achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity in an OCT setup. The reference intensity is automatically controlled by tilting the beam position with a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensities represented in color space using false-color mapping. The system then increases or decreases the reference intensity following the map data, optimizing it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, allowed the integration time to be changed without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. SNR and sensitivity could also be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.

  3. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, as the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual-spin satellite, is designed in terms of the slow equation.

  4. Using New Theory and Experimental Methods to Understand the Relative Controls of Storage, Antecedent Conditions and Precipitation Intensity on Transit Time Distributions through a Sloping Soil Lysimeter

    NASA Astrophysics Data System (ADS)

    Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.

    2014-12-01

    Transit time distributions (TTD) are an efficient way of characterizing transport through the complex flow dynamics of a hydrologic system, and can serve as a basis for spatially-integrated solute transport modeling. Recently there has been progress in the development of a theory of time-variable TTDs that captures the effect of temporal variability in the timing of fluxes as well as changes in flow pathways. Furthermore, a new formulation of this theory allows the essential transport properties of a system to be parameterized by a physically meaningful time-variable probability distribution, the Ω function. This distribution determines how the age distribution of water in storage is sampled by the outflow. The form of the Ω function varies if the flow pathways change, but is not determined by the timing of fluxes (unlike the TTD). In this study, we use this theory to characterize transport by transient flows through a homogeneously packed 1 m3 sloping soil lysimeter. The transit time distributions associated with each of four irrigation periods (repeated daily for 24 days) are compared to examine the significance of changes in the Ω function due to variations in total storage, antecedent conditions, and precipitation intensity. We observe both the time-variable TTD and the Ω function experimentally by applying the PERTH method (Harman and Kim, 2014, GRL, 41, 1567-1575). The method allows us to observe multiple overlapping time-variable TTDs in controlled experiments using only two conservative tracers. We hypothesize that both the TTD and the Ω function will vary in time, even at this small scale, because water will take different flow pathways depending on the initial state of the lysimeter and the irrigation intensity. However, based primarily on modeling, we conjecture that major variability in the Ω function will be limited to a period during and immediately after each irrigation.
We anticipate the Ω function is almost time-invariant (or scales simply with total storage) during the recession period because flow pathways are stable during this period. This is one of the first experimental studies of this type, and the results offer insights into solute transport in transient, variably-saturated systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.; Harrison, D. E. Jr.

    A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed over the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
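
    A common way to realize a variable time step with a single new force evaluation per step is to bound the maximum displacement per step. The sketch below applies this heuristic to a velocity-Verlet scheme; it illustrates the general idea under that assumption, not the specific algorithm of the record above.

```python
import numpy as np

def adaptive_verlet(x, v, force, m=1.0, dt_max=0.05, dx_max=0.01, n_steps=2000):
    """Velocity-Verlet with a variable time step: dt is capped so the particle
    moves at most dx_max per step, a common heuristic in collision-cascade
    codes where speeds span orders of magnitude.  Only one new force
    evaluation is made per step; the previous force is reused."""
    f = force(x)                        # single evaluation before the loop
    for _ in range(n_steps):
        dt = min(dt_max, dx_max / max(abs(v), 1e-12))
        x = x + v * dt + 0.5 * (f / m) * dt ** 2
        f_new = force(x)                # one force evaluation per step
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v

# 1-D harmonic oscillator as a stand-in potential with an exact solution.
x, v = adaptive_verlet(1.0, 0.0, force=lambda x: -x)
```

    Because dt shrinks when the particle is fast, the scheme takes small steps through close encounters and large steps elsewhere, which is the point of variable-step cascade integrators.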

  6. Heart Fibrillation and Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.

    1997-01-01

    The Luo and Rudy 3 cardiac cell mathematical model is implemented on the CRAY T3D parallel supercomputer. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics, and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.

  7. Integrating continuous stocks and flows into state-and-transition simulation models of landscape change

    USGS Publications Warehouse

    Daniel, Colin J.; Sleeter, Benjamin M.; Frid, Leonardo; Fortin, Marie-Josée

    2018-01-01

    State-and-transition simulation models (STSMs) provide a general framework for forecasting landscape dynamics, including projections of both vegetation and land-use/land-cover (LULC) change. The STSM method divides a landscape into spatially referenced cells and then simulates the state of each cell forward in time, as a discrete-time stochastic process using a Monte Carlo approach, in response to any number of possible transitions. A current limitation of the STSM method, however, is that all of the state variables must be discrete. Here we present a new approach for extending an STSM to account for continuous state variables, called a state-and-transition simulation model with stocks and flows (STSM-SF). The STSM-SF method allows any number of continuous stocks to be defined for every spatial cell in the STSM, along with a suite of continuous flows specifying the rates at which stock levels change over time. The change in the level of each stock is then simulated forward in time, for each spatial cell, as a discrete-time stochastic process. The method differs from the traditional systems dynamics approach to stock-flow modelling in that the stocks and flows can be spatially explicit, and the flows can be expressed as a function of the STSM states and transitions. We demonstrate the STSM-SF method by integrating a spatially explicit carbon (C) budget model with an STSM of LULC change for the state of Hawai'i, USA. In this example, continuous stocks are pools of terrestrial C, while the flows are the possible fluxes of C between these pools. Importantly, several of these C fluxes are triggered by corresponding LULC transitions in the STSM.
    Model outputs include changes in the spatial and temporal distribution of C pools and fluxes across the landscape in response to projected future changes in LULC over the next 50 years. The new STSM-SF method allows both discrete and continuous state variables to be integrated into an STSM, including interactions between them. With the addition of stocks and flows, STSMs provide a conceptually simple yet powerful approach for characterizing uncertainties in projections of a wide range of questions regarding landscape change.
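
    The per-cell coupling of discrete transitions and continuous stocks can be sketched as follows. The states, probabilities, and carbon flows are hypothetical illustration values, and the clearing rule is an assumption, not taken from the Hawai'i model.

```python
import random

# Hypothetical states, annual transition probabilities, and carbon flows.
P = {"forest": {"agriculture": 0.02}, "agriculture": {"forest": 0.01}}
GROWTH = {"forest": 2.0, "agriculture": 0.5}  # continuous flow into the stock
CLEARING_LOSS = 0.8  # fraction of the C stock released by a clearing transition

def step(cells, rng):
    """One annual step of a minimal STSM-SF: each cell is (state, carbon).
    Discrete transitions are sampled per cell; the forest->agriculture
    transition triggers a carbon flux, and every cell also receives a
    continuous growth flow, illustrating the stock-flow coupling."""
    out = []
    for state, carbon in cells:
        new_state = state
        for target, p in P.get(state, {}).items():
            if rng.random() < p:
                new_state = target
                break
        if state == "forest" and new_state == "agriculture":
            carbon *= 1.0 - CLEARING_LOSS      # transition-triggered flux
        carbon += GROWTH[new_state]            # continuous flow
        out.append((new_state, carbon))
    return out

rng = random.Random(42)
cells = [("forest", 100.0)] * 50
for _ in range(50):                            # a 50-year projection
    cells = step(cells, rng)
```

    Monte Carlo replicates of such a run give the distribution of C pools conditional on the simulated LULC trajectories, which is the kind of output the STSM-SF framework produces.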

  8. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2014-03-01

    The hybrid Monte Carlo algorithm (HMCA) is applied to Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd-order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model.
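
    For reference, a minimal hybrid Monte Carlo sampler with the conventional leapfrog integrator looks like the sketch below; the 2nd-order minimum-norm integrator mentioned above is a drop-in replacement for the leapfrog stage pattern and is not reproduced here. The Gaussian target is purely illustrative.

```python
import numpy as np

def hmc(logp, grad, x0, n_samples=2000, eps=0.2, n_leap=10, seed=1):
    """Minimal hybrid Monte Carlo: a leapfrog 'molecular dynamics' trajectory
    in an auxiliary momentum, followed by a Metropolis accept/reject test on
    the Hamiltonian error dH."""
    rng = np.random.default_rng(seed)
    x, out = np.asarray(x0, dtype=float), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)
        xn, pn = x.copy(), p.copy()
        pn += 0.5 * eps * grad(xn)          # leapfrog: half kick
        for _ in range(n_leap - 1):
            xn += eps * pn                  # drift
            pn += eps * grad(xn)            # full kick
        xn += eps * pn
        pn += 0.5 * eps * grad(xn)          # final half kick
        dH = (0.5 * pn @ pn - logp(xn)) - (0.5 * p @ p - logp(x))
        if rng.random() < np.exp(min(0.0, -dH)):   # Metropolis test
            x = xn
        out.append(x.copy())
    return np.array(out)

# Standard normal target: logp(x) = -x^2/2, grad = -x.
samples = hmc(lambda x: -0.5 * x @ x, lambda x: -x, np.zeros(1))
```

    A higher-order MD integrator reduces dH at a given step size, raising the acceptance rate, which is where the efficiency gain of the 2MNI comes from.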

  9. The Dirac equation and the normalization of its solutions in a closed Friedmann- Robertson-Walker universe

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Reintjes, Moritz

    2009-05-01

    We set up the Dirac equation in a Friedmann-Robertson-Walker geometry and separate the spatial and time variables. In the case of a closed universe, the spatial dependence is solved explicitly, giving rise to a discrete set of solutions. We compute the probability integral and analyze a spacetime normalization integral. This analysis allows us to introduce the fermionic projector in a closed Friedmann-Robertson-Walker geometry and to specify its global normalization as well as its local form. First author supported in part by the Deutsche Forschungsgemeinschaft.

  10. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    NASA Astrophysics Data System (ADS)

    Hirahara, K.; Ohtani, M.

    2017-12-01

    There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why there have been almost no simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull the block, which obeys an RSF law, at a constant rate. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of a smaller viscosity reduces the recurrence time to a minimum value. A smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time.
    Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer 40 km thick overriding a Maxwell viscoelastic half-space with a relaxation time of 5 yrs. In a test model where we set the fault at 30-40 km depths, the recurrence time of the EQ cycle is reduced by about 1 yr, from 27.92 yrs in the elastic case to 26.85 yrs. This smaller recurrence time is consistent with Kato (2002), but the effect of the viscoelasticity on the cycles should be larger for the dip-slip fault than for the strike-slip one.
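
    The key point, that a first-order memory variable replaces the hereditary integral, can be illustrated with the SLS element itself. The stiffness and viscosity values below are arbitrary illustration numbers, and forward Euler is used only for brevity.

```python
import numpy as np

# Standard linear solid: spring k1 in parallel with a Maxwell arm (k2, eta).
# The memory variable z (the dash-pot displacement) obeys a 1st-order ODE,
# so the stress requires no hereditary integral over the past strain history.
k1, k2, eta = 1.0, 2.0, 0.5
tau = eta / k2                       # relaxation time of the Maxwell arm

def stress_memory(strain, dt):
    """March the memory variable with forward Euler and return the stress."""
    z, sigma = 0.0, []
    for eps in strain:
        z += dt * (eps - z) / tau    # dz/dt = (eps - z)/tau
        sigma.append(k1 * eps + k2 * (eps - z))
    return np.array(sigma)

# Step strain: stress relaxes from about (k1 + k2)*eps0 toward k1*eps0.
dt, eps0 = 1e-3, 0.1
sigma = stress_memory(np.full(20000, eps0), dt)
```

    The cost per step is constant regardless of how long the history is, which is why the memory variable method matches elastic-case costs.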

  11. Long-term dynamic modeling of tethered spacecraft using nodal position finite element method and symplectic integration

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Zhu, Z. H.

    2015-12-01

    Dynamic modeling of tethered spacecraft with consideration of the elasticity of the tether is prone to numerical instability and error accumulation over long-term numerical integration. This paper addresses these challenges by proposing a globally stable numerical approach combining the nodal position finite element method (NPFEM) with implicit, symplectic, 2-stage, 4th-order Gauss-Legendre Runge-Kutta time integration. The NPFEM eliminates numerical error accumulation by using the position instead of the displacement of the tether as the state variable, while the symplectic integration enforces the energy and momentum conservation of the discretized finite element model to ensure the global stability of the numerical solution. The effectiveness and robustness of the proposed approach are assessed on an elastic pendulum problem, whose dynamic response resembles that of tethered spacecraft, in comparison with commonly used time integrators such as the classical 4th-order Runge-Kutta scheme and other families of non-symplectic Runge-Kutta schemes. Numerical results show that the proposed approach is accurate and that the energy of the corresponding numerical model is conserved over long-term numerical integration. Finally, the proposed approach is applied to the dynamic modeling of the deorbiting process of tethered spacecraft over a long period.
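
    A sketch of the 2-stage, 4th-order Gauss-Legendre step follows, with the implicit stage equations solved by fixed-point iteration (a simple choice for illustration; production codes typically use Newton iteration). The harmonic oscillator stands in for the discretized tether model.

```python
import numpy as np

# Butcher tableau of the 2-stage, 4th-order Gauss-Legendre Runge-Kutta method.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
b = np.array([0.5, 0.5])

def gl2_step(f, y, h, iters=20):
    """One implicit Gauss-Legendre step; the stage equations are solved by
    fixed-point iteration, which converges for sufficiently small h."""
    k = np.array([f(y), f(y)])
    for _ in range(iters):
        k = np.array([f(y + h * (A[i, 0] * k[0] + A[i, 1] * k[1]))
                      for i in range(2)])
    return y + h * (b[0] * k[0] + b[1] * k[1])

# Harmonic oscillator y = (x, v): energy is a quadratic invariant, which
# Gauss-Legendre methods conserve exactly (up to the iteration tolerance).
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
for _ in range(5000):
    y = gl2_step(f, y, 0.1)
```

    Over 5000 steps the energy stays at its initial value to roundoff, whereas a non-symplectic explicit scheme of the same order would show a slow drift, the behavior the paper exploits for long-term tether propagation.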

  12. The Integration of Continuous and Discrete Latent Variable Models: Potential Problems and Promising Opportunities

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Curran, Patrick J.

    2004-01-01

    Structural equation mixture modeling (SEMM) integrates continuous and discrete latent variable models. Drawing on prior research on the relationships between continuous and discrete latent variable models, the authors identify 3 conditions that may lead to the estimation of spurious latent classes in SEMM: misspecification of the structural model,…

  13. Integrated control-structure design

    NASA Technical Reports Server (NTRS)

    Hunziker, K. Scott; Kraft, Raymond H.; Bossi, Joseph A.

    1991-01-01

    A new approach for the design and control of flexible space structures is described. The approach integrates the structure and controller design processes thereby providing extra opportunities for avoiding some of the disastrous effects of control-structures interaction and for discovering new, unexpected avenues of future structural design. A control formulation based on Boyd's implementation of Youla parameterization is employed. Control design parameters are coupled with structural design variables to produce a set of integrated-design variables which are selected through optimization-based methodology. A performance index reflecting spacecraft mission goals and constraints is formulated and optimized with respect to the integrated design variables. Initial studies have been concerned with achieving mission requirements with a lighter, more flexible space structure. Details of the formulation of the integrated-design approach are presented and results are given from a study involving the integrated redesign of a flexible geostationary platform.

  14. Direct power control of DFIG wind turbine systems based on an intelligent proportional-integral sliding mode control.

    PubMed

    Li, Shanzhi; Wang, Haoping; Tian, Yang; Aitouch, Abdel; Klein, John

    2016-09-01

    This paper presents an intelligent proportional-integral sliding mode control (iPISMC) for direct power control of a variable-speed constant-frequency wind turbine system. This approach deals with optimal power production (in the maximum power point tracking sense) under several disturbance factors such as turbulent wind. The controller is made of two sub-components: (i) an intelligent proportional-integral module for online disturbance compensation and (ii) a sliding mode module for circumventing disturbance estimation errors. The iPISMC method has been tested on the FAST/Simulink platform of a 5 MW wind turbine system. The obtained results demonstrate that the proposed iPISMC method outperforms both classical PI and intelligent proportional-integral control (iPI) in terms of active power and response time. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Bacterial community variation in human body habitats across space and time.

    PubMed

    Costello, Elizabeth K; Lauber, Christian L; Hamady, Micah; Fierer, Noah; Gordon, Jeffrey I; Knight, Rob

    2009-12-18

    Elucidating the biogeography of bacterial communities on the human body is critical for establishing healthy baselines from which to detect differences associated with diseases. To obtain an integrated view of the spatial and temporal distribution of the human microbiota, we surveyed bacteria from up to 27 sites in seven to nine healthy adults on four occasions. We found that community composition was determined primarily by body habitat. Within habitats, interpersonal variability was high, whereas individuals exhibited minimal temporal variability. Several skin locations harbored more diverse communities than the gut and mouth, and skin locations differed in their community assembly patterns. These results indicate that our microbiota, although personalized, varies systematically across body habitats and time; such trends may ultimately reveal how microbiome changes cause or prevent disease.

  16. Harvesting model uncertainty for the simulation of interannual variability

    NASA Astrophysics Data System (ADS)

    Misra, Vasubandhu

    2009-08-01

    An innovative modeling strategy is introduced to account for uncertainty in the convective parameterization (CP) scheme of a coupled ocean-atmosphere model. The methodology involves calling the CP scheme several times at every given time step of the model integration to pick the most probable convective state. Each call of the CP scheme is unique in that one of its critical parameter values (which is unobserved but required by the scheme) is chosen randomly over a given range. This methodology is tested with the relaxed Arakawa-Schubert CP scheme in the Center for Ocean-Land-Atmosphere Studies (COLA) coupled general circulation model (CGCM). Relative to the control COLA CGCM, this methodology shows improvement in the El Niño-Southern Oscillation simulation and the Indian summer monsoon precipitation variability.
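
    The multiple-call strategy can be caricatured as follows. The toy parameterization and the median-based selection rule are both assumptions for illustration; the abstract does not specify how the most probable convective state is chosen, and the real scheme is the relaxed Arakawa-Schubert parameterization.

```python
import random

def toy_tendency(cape, crit):
    """Toy stand-in for a convective parameterization: the tendency depends
    on an unobserved critical parameter (hypothetical closure)."""
    return max(0.0, cape - crit)

def stochastic_convection(cape, n_calls=10, crit_range=(0.2, 0.8), rng=None):
    """Call the scheme n_calls times, each with the critical parameter drawn
    at random over its plausible range, then pick the most probable
    convective state -- here taken as the median tendency of the ensemble
    (one plausible selection rule, assumed for this sketch)."""
    rng = rng or random.Random()
    t = sorted(toy_tendency(cape, rng.uniform(*crit_range))
               for _ in range(n_calls))
    return t[len(t) // 2]

tendency = stochastic_convection(1.0, rng=random.Random(7))
```

    Repeating this at every model time step injects parameter uncertainty into the simulation while still returning a single, representative tendency to the dynamical core.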

  17. Bacterial Community Variation in Human Body Habitats Across Space and Time

    PubMed Central

    Costello, Elizabeth K.; Lauber, Christian L.; Hamady, Micah; Fierer, Noah; Gordon, Jeffrey I.; Knight, Rob

    2010-01-01

    Elucidating the biogeography of bacterial communities on the human body is critical for establishing healthy baselines from which to detect differences associated with diseases. To obtain an integrated view of the spatial and temporal distribution of the human microbiota, we surveyed bacteria from up to 27 sites in 7–9 healthy adults on four occasions. We found that community composition was determined primarily by body habitat. Within habitats, interpersonal variability was high, while individuals exhibited minimal temporal variability. Several skin locations harbored more diverse communities than the gut and mouth, and skin locations differed in their community assembly patterns. These results indicate that our microbiota, although personalized, varies systematically across body habitats and time: such trends may ultimately reveal how microbiome changes cause or prevent disease. PMID:19892944

  18. Efficient Trajectory Propagation for Orbit Determination Problems

    NASA Technical Reports Server (NTRS)

    Roa, Javier; Pelaez, Jesus

    2015-01-01

    Regularized formulations of orbital motion apply a series of techniques to improve the numerical integration of the orbit. Despite their advantages and potential applications, little attention has been paid to the propagation of the partial derivatives of the corresponding sets of elements or coordinates, which are required in many orbit-determination scenarios and optimization problems. This paper fills this gap by presenting the general procedure for integrating the state-transition matrix of the system together with the nominal trajectory using regularized formulations and different sets of elements. The main difficulty comes from introducing an independent variable different from time, because the solution needs to be synchronized. The correction of the time delay is treated from a generic perspective not focused on any particular formulation. Synchronization using time-elements is also discussed. Numerical examples include strongly perturbed orbits in the Pluto system, motivated by the recent flyby of the New Horizons spacecraft, together with a geocentric flyby of the NEAR spacecraft.
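
    The synchronization issue arises because regularized formulations replace time as the independent variable, for example with the fictitious time of a Sundman transformation, dt/ds = r, so physical time must be carried as an extra state and aligned afterwards. A generic Kepler-problem sketch of that idea (not any of the specific formulations used in the paper):

```python
import numpy as np

def rhs(w):
    """Kepler problem in fictitious time s with dt/ds = r (Sundman
    transformation).  State w = (x, y, vx, vy, t): every time derivative is
    multiplied by r, so fixed steps in s give small physical steps near
    perihelion, where the dynamics are fastest."""
    x, y, vx, vy, t = w
    r = np.hypot(x, y)
    return r * np.array([vx, vy, -x / r ** 3, -y / r ** 3, 1.0])

def rk4(w, h):
    k1 = rhs(w)
    k2 = rhs(w + 0.5 * h * k1)
    k3 = rhs(w + 0.5 * h * k2)
    k4 = rhs(w + h * k3)
    return w + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Eccentric orbit (a = 1, e = 0.9, mu = 1), started at apoapsis; physical
# time t rides along as a state variable and differs from s step to step.
w = np.array([1.9, 0.0, 0.0, np.sqrt(0.1 / 1.9), 0.0])
for _ in range(4000):
    w = rk4(w, 0.01)
```

    Because each fixed step in s corresponds to a different physical interval, two trajectories (or a trajectory and its partial derivatives) propagated in s reach different physical times, which is exactly the time-delay correction the paper treats for the state-transition matrix.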

  19. SURGNET: An Integrated Surgical Data Transmission System for Telesurgery.

    PubMed

    Natarajan, Sriram; Ganz, Aura

    2009-01-01

    Remote surgery requires quick and reliable transmission of information between the surgeon and patient sites. However, the networks that interconnect these sites are usually time-varying and lossy, which can cause packet loss and delay jitter. In this paper we propose SURGNET, a telesurgery system for which we developed the architecture and algorithms and which we implemented on a testbed. The algorithms include adaptive packet prediction and buffer time adjustment techniques, which reduce the negative effects caused by lossy and time-varying networks. To evaluate the proposed SURGNET system, at the therapist site we implemented a therapist panel which controls the force-feedback device movements and provides image analysis functionality. At the patient site we controlled a virtual reality applet built in Matlab. The varying network conditions were emulated using the NISTNet emulator. Our results show that even for severe packet loss and variable delay jitter, the proposed integrated synchronization techniques significantly improve SURGNET's performance.

  20. Investigating Variables Predicting Turkish Pre-service Teachers' Integration of ICT into Teaching Practices

    ERIC Educational Resources Information Center

    Aslan, Aydin; Zhu, Chang

    2017-01-01

    Pre-service teachers need to acquire information and communications technology (ICT) competency in order to integrate ICT into their teaching practices. This research was conducted to investigate to what extent ICT-related variables--such as perceived ICT competence, perceived competence in ICT integration, attitudes toward ICT, anxiety around ICT…

  1. 78 FR 72878 - Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-04

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM10-11-000] Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764 Electronic Compliance Filings Take notice of the following filing procedures with respect to compliance obligations in Integration of...

  2. Visual Analytics of integrated Data Systems for Space Weather Purposes

    NASA Astrophysics Data System (ADS)

    Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo

    Analysis of information from multiple data sources obtained through high-resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this novel representation approach, each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size, and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time expert analysis system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of a radio-burst automatic monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent, scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented.
    The prototype for visual analytics is implemented in the Compute Unified Device Architecture (CUDA), using Nvidia K20 graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al., doi:10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al., doi:10.1016/j.jastp.2010.09.030, 2011.
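
    For reference, the DFA estimator itself is compact; a first-order (DFA-1) sketch of the scaling-exponent computation used in such monitoring follows. The window scales and series length are illustrative choices.

```python
import numpy as np

def dfa1(x, scales):
    """Detrended fluctuation analysis (first order): integrate the series,
    split it into windows at each scale, remove a linear fit per window, and
    regress log fluctuation on log scale to obtain the exponent alpha."""
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for n in scales:
        n_win = len(y) // n
        segs = y[:n_win * n].reshape(n_win, n)
        t = np.arange(n)
        ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
              for s in segs]
        flucts.append(np.sqrt(np.mean(ms)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(3)
alpha = dfa1(rng.standard_normal(4096), [8, 16, 32, 64, 128, 256])
# White noise gives alpha near 0.5; persistent (long-memory) series give larger values.
```

    Tracking alpha over a sliding window, as described above, turns this single-number summary into a time-resolved indicator of changes in the variability pattern.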

  3. Effect of Computer-Delivered Testing on Achievement in a Mastery Learning Course of Study with Partial Scoring and Variable Pacing.

    ERIC Educational Resources Information Center

    Evans, Richard M.; Surkan, Alvin J.

    The recent arrival of portable computer systems with high-level language interpreters now makes it practical to rapidly develop complex testing and scoring programs. These programs permit undergraduates access, at arbitrary times, to testing as an integral part of a mastery learning strategy. Effects of introducing the computer were studied by…

  4. Automated Welding System

    NASA Technical Reports Server (NTRS)

    Bayless, E. O.; Lawless, K. G.; Kurgan, C.; Nunes, A. C.; Graham, B. F.; Hoffman, D.; Jones, C. S.; Shepard, R.

    1993-01-01

    Fully automated variable-polarity plasma arc (VPPA) welding system developed at Marshall Space Flight Center. System eliminates defects caused by human error. Integrates many sensors with mathematical model of the weld and computer-controlled welding equipment. Sensors provide real-time information on geometry of weld bead, location of weld joint, and wire-feed entry. Mathematical model relates geometry of weld to critical parameters of welding process.

  5. Ocean Variability Effects on Underwater Acoustic Communications

    DTIC Science & Technology

    2011-09-01

    ... schemes for accessing wide frequency bands. Compared with OFDM schemes, the multiband MIMO transmission combined with time reversal processing ... In such systems, or multiple-input/multiple-output (MIMO) systems, decision feedback equalization and interference cancellation schemes have been integrated ... The MIMO receiver also iterates channel estimation and symbol demodulation with ...

  6. The Rhizosphere Bacterial Microbiota of Vitis vinifera cv. Pinot Noir in an Integrated Pest Management Vineyard.

    PubMed

    Novello, Giorgia; Gamalero, Elisa; Bona, Elisa; Boatti, Lara; Mignone, Flavio; Massa, Nadia; Cesaro, Patrizia; Lingua, Guido; Berta, Graziella

    2017-01-01

    Microorganisms associated with Vitis vinifera (grapevine) can affect its growth, health and grape quality. The aim of this study was to unravel the biodiversity of the bacterial rhizosphere microbiota of grapevine in an integrated pest management vineyard located in Piedmont, Italy. Comparisons between the microbial community structures in the bulk and rhizosphere soil (variable: space) were performed. Moreover, the possible shifts of the bulk and rhizosphere soil microbiota according to two phenological stages, flowering and early fruit development (variable: time), were characterized. The grapevine microbiota was identified using metagenomics and next-generation sequencing. Biodiversity was higher in the rhizosphere than in the bulk soil, independent of the phenological stage. Actinobacteria were the dominant class, with frequencies ≥ 50% in all the soil samples, followed by Proteobacteria, Gemmatimonadetes, and Bacteroidetes. While Actinobacteria and Proteobacteria are well known to be dominant in soil, this is the first time the presence of Gemmatimonadetes has been observed in vineyard soils. Gaiella was the dominant genus of Actinobacteria in all the samples. Finally, the microbiota associated with grapevine differed from the bulk soil microbiota, and these variations were independent of the phenological stage of the plant.

  7. An improved genetic algorithm for multidimensional optimization of precedence-constrained production planning and scheduling

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-06-01

    Integration of production planning and scheduling is a class of problems commonly found in the manufacturing industry. This class of problems, with precedence constraints, has previously been modeled and optimized by the authors; it requires simultaneous optimization along several dimensions: what to make, how many to make, where to make, and in what order. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, a GA with new features in chromosome encoding, crossover, mutation, selection, and algorithm structure is developed herein. With this structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which its performance is compared with those of three commercial optimization solvers.
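
    The core mechanics described above, crossover and mutation operators that tolerate chromosomes of different lengths, can be sketched on a toy problem. Everything below (the gene values, costs, capacity, and fitness penalty) is a hypothetical stand-in, not the authors' production-planning model:

```python
import random

random.seed(42)

# Hypothetical toy problem: pick a variable-length sequence of options
# (genes 1..5) maximizing total value under a capacity of 20 cost units.
VALUES = {1: 3, 2: 5, 3: 7, 4: 9, 5: 11}
COSTS = {1: 2, 2: 3, 3: 5, 4: 7, 5: 9}
CAPACITY = 20

def fitness(chrom):
    value = sum(VALUES[g] for g in chrom)
    cost = sum(COSTS[g] for g in chrom)
    return value - 10 * max(0, cost - CAPACITY)  # heavy infeasibility penalty

def crossover(a, b):
    # Independent cut points, so offspring length can differ from both
    # parents -- one way to handle variable-size solutions.
    ca, cb = random.randint(0, len(a)), random.randint(0, len(b))
    return a[:ca] + b[cb:]

def mutate(chrom):
    chrom = chrom[:]
    op = random.random()
    if op < 0.3 and chrom:          # shrink
        chrom.pop(random.randrange(len(chrom)))
    elif op < 0.6:                  # grow
        chrom.insert(random.randrange(len(chrom) + 1), random.randint(1, 5))
    elif chrom:                     # point mutation
        chrom[random.randrange(len(chrom))] = random.randint(1, 5)
    return chrom

def run_ga(pop_size=40, generations=60):
    pop = [[random.randint(1, 5) for _ in range(random.randint(1, 6))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]  # truncation selection with elitism
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

    The grow/shrink mutations are what let the chromosome length adapt to the problem instance, mirroring the variable solution size the abstract emphasizes.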

  8. Predicting Seagrass Occurrence in a Changing Climate Using Random Forests

    NASA Astrophysics Data System (ADS)

    Aydin, O.; Butler, K. A.

    2017-12-01

    Seagrasses are marine plants that can quickly sequester vast amounts of carbon (up to 100 times more, and 12 times faster, than tropical forests). In this work, we present an integrated GIS and machine learning approach to build a data-driven model of seagrass presence-absence. We outline a random forest approach that avoids the prevalence bias in many ecological presence-absence models. One of our goals is to predict global seagrass occurrence from a spatially limited training sample. In addition, we conduct a sensitivity study that investigates the vulnerability of seagrass to changing climate conditions. We integrate multiple data sources, including fine-scale seagrass data from MarineCadastre.gov and the recently released, globally extensive, publicly available Ecological Marine Units (EMU) dataset. These data are used to train a model for seagrass occurrence along the U.S. coast. In situ ocean data are interpolated using Empirical Bayesian Kriging (EBK) to produce globally extensive prediction variables. A neural network is used to estimate probable future values of prediction variables such as ocean temperature to assess the impact of a warming climate on seagrass occurrence. The proposed workflow can be generalized to many presence-absence models.
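
    One common way to avoid the prevalence bias mentioned above is to train each tree of the ensemble on a class-balanced bootstrap, so the rare presence class is not swamped by absences. The sketch below is a deliberately minimal stand-in (single-feature threshold "stumps" on made-up temperature data, not the paper's random forest or dataset):

```python
import random

random.seed(0)

# Hypothetical 1-feature data: presences are rare and occur at warmer
# temperatures; absences dominate (the prevalence-bias situation).
presences = [(random.gauss(24.0, 1.0), 1) for _ in range(30)]
absences = [(random.gauss(18.0, 1.0), 0) for _ in range(300)]

def train_stump(sample):
    # Pick the threshold that best separates classes in this sample.
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in sample):
        acc = sum((x > t) == (y == 1) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def balanced_bootstrap(pos, neg):
    # Resample the SAME number from each class, regardless of prevalence.
    n = min(len(pos), len(neg))
    return ([random.choice(pos) for _ in range(n)]
            + [random.choice(neg) for _ in range(n)])

stumps = [train_stump(balanced_bootstrap(presences, absences))
          for _ in range(25)]

def predict(x):
    votes = sum(x > t for t in stumps)
    return 1 if votes > len(stumps) / 2 else 0
```

    A full random forest would split on many variables per tree, but the balanced-bootstrap idea carries over unchanged.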

  9. Data for first NASA Atmospheric Variability Experiment (AVE 1). Part 1: Data tabulation. [rawindsonde data for eastern United States

    NASA Technical Reports Server (NTRS)

    Scoggins, J. R.; Smith, O. E.

    1973-01-01

    A tabulation is given of rawinsonde data for NASA's first Atmospheric Variability Experiment (AVE 1), conducted during the period February 19-22, 1964. Methods of data handling and processing, and estimates of error magnitudes, are also given. Data taken on the AVE 1 project in 1964 enabled an analysis of a large sector of the eastern United States on a fine-resolution time scale. This experiment was run in February 1964, and data were collected as a wave developed in the East Gulf on a frontal system that extended through the eastern part of the United States. The primary objective of AVE 1 was to investigate the variability of parameters in space and over time intervals of three hours, and to integrate the results into NASA programs which require this type of information. The results presented are those from one approach, and represent only a portion of the total research effort that can be accomplished.

  10. A Unified Mathematical Framework for Coding Time, Space, and Sequences in the Hippocampal Region

    PubMed Central

    MacDonald, Christopher J.; Tiganj, Zoran; Shankar, Karthik H.; Du, Qian; Hasselmo, Michael E.; Eichenbaum, Howard

    2014-01-01

    The medial temporal lobe (MTL) is believed to support episodic memory, vivid recollection of a specific event situated in a particular place at a particular time. There is ample neurophysiological evidence that the MTL computes location in allocentric space and more recent evidence that the MTL also codes for time. Space and time represent a similar computational challenge; both are variables that cannot be simply calculated from the immediately available sensory information. We introduce a simple mathematical framework that computes functions of both spatial location and time as special cases of a more general computation. In this framework, experience unfolding in time is encoded via a set of leaky integrators. These leaky integrators encode the Laplace transform of their input. The information contained in the transform can be recovered using an approximation to the inverse Laplace transform. In the temporal domain, the resulting representation reconstructs the temporal history. By integrating movements, the equations give rise to a representation of the path taken to arrive at the present location. By modulating the transform with information about allocentric velocity, the equations code for position of a landmark. Simulated cells show a close correspondence to neurons observed in various regions for all three cases. In the temporal domain, novel secondary analyses of hippocampal time cells verified several qualitative predictions of the model. An integrated representation of spatiotemporal context can be computed by taking conjunctions of these elemental inputs, leading to a correspondence with conjunctive neural representations observed in dorsal CA1. PMID:24672015
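
    The leaky-integrator encoding described above can be sketched in a few lines: an integrator with decay rate s obeys dF/dt = -s·F + f(t), so its state holds the Laplace transform of the input's history at that s. A minimal forward-Euler illustration (step size and rate values are arbitrary, not from the paper):

```python
# Each leaky integrator with rate s encodes the Laplace transform of its
# input history: dF/dt = -s*F + f(t), stepped with forward Euler.
def integrate(f_values, s, dt):
    F = 0.0
    for f in f_values:
        F += dt * (-s * F + f)
    return F

dt = 0.001
constant_input = [1.0] * 20000  # 20 s of f(t) = 1
# For a constant input the transform saturates at f/s.
F_fast = integrate(constant_input, s=2.0, dt=dt)  # ≈ 0.5
F_slow = integrate(constant_input, s=0.5, dt=dt)  # ≈ 2.0
```

    A bank of such integrators over many s values gives the transform of the whole history; the paper's inverse-transform approximation then recovers the temporal record from it.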

  11. Analysis and Synthesis of Load Forecasting Data for Renewable Integration Studies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steckler, N.; Florita, A.; Zhang, J.

    2013-11-01

    As renewable energy constitutes greater portions of the generation fleet, the importance of modeling uncertainty as part of integration studies also increases. In pursuit of optimal system operations, it is important to capture not only the definitive behavior of power plants, but also the risks associated with systemwide interactions. This research examines the dependence of load forecast errors on external predictor variables such as temperature, day type, and time of day. The analysis was utilized to create statistically relevant instances of sequential load forecasts with only a time series of historic, measured load available. The creation of such load forecasts relies on Bayesian techniques for informing and updating the model, thus providing a basis for networked and adaptive load forecast models in future operational applications.
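
    The kind of Bayesian updating the report describes can be illustrated with the simplest conjugate case: a normal prior over the mean forecast error, tightened as errors are observed under similar conditions. All numbers below are hypothetical:

```python
# Normal-normal conjugate update of the mean load-forecast error (MW).
# As observations arrive the posterior mean moves toward the data and
# the posterior variance shrinks.
def update(prior_mean, prior_var, obs, obs_var):
    w = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

mean, var = 0.0, 100.0  # vague prior on the mean error
for err in [12.0, 9.0, 11.0, 10.0]:  # errors seen at a similar hour/temperature
    mean, var = update(mean, var, err, obs_var=4.0)
```

    Conditioning such updates on predictor variables (temperature, day type, hour) is what makes the resulting forecast model adaptive.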

  12. Assessing time-integrated dissolved concentrations and predicting toxicity of metals during diel cycling in streams

    USGS Publications Warehouse

    Balistrieri, Laurie S.; Nimick, David A.; Mebane, Christopher A.

    2012-01-01

    Evaluating water quality and the health of aquatic organisms is challenging in systems with systematic diel (24 hour) or less predictable runoff-induced changes in water composition. To advance our understanding of how to evaluate environmental health in these dynamic systems, field studies of diel cycling were conducted in two streams (Silver Bow Creek and High Ore Creek) affected by historical mining activities in southwestern Montana. A combination of sampling and modeling tools were used to assess the toxicity of metals in these systems. Diffusive Gradients in Thin Films (DGT) samplers were deployed at multiple time intervals during diel sampling to confirm that DGT integrates time-varying concentrations of dissolved metals. Thermodynamic speciation calculations using site specific water compositions, including time-integrated dissolved metal concentrations determined from DGT, and a competitive, multiple-metal biotic ligand model incorporated into the Windemere Humic Aqueous Model Version 6.0 (WHAM VI) were used to determine the chemical speciation of dissolved metals and biotic ligands. The model results were combined with previously collected toxicity data on cutthroat trout to derive a relationship that predicts the relative survivability of these fish at a given site. This integrative approach may prove useful for assessing water quality and toxicity of metals to aquatic organisms in dynamic systems and evaluating whether potential changes in environmental health of aquatic systems are due to anthropogenic activities or natural variability.
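
    The key property of a DGT sampler exploited above is that it accumulates mass continuously, so it reports a time-integrated mean rather than a snapshot. A simulated diel cycle (hypothetical zinc concentrations, not the study's data) shows how the integral smooths the swing that spot samples would see:

```python
import math

# Hypothetical diel metal cycle: 50 µg/L mean with a ±20 µg/L daily swing.
hours = list(range(25))  # 0..24 h
conc = [50 + 20 * math.sin(2 * math.pi * (h - 6) / 24) for h in hours]

def time_integrated_mean(t, c):
    # Trapezoidal integral of concentration over time, divided by duration.
    area = sum((c[i] + c[i + 1]) / 2 * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    return area / (t[-1] - t[0])

mean_conc = time_integrated_mean(hours, conc)  # ≈ 50, vs spot samples of 30-70
```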

  13. Time series analysis for psychological research: examining and forecasting change

    PubMed Central

    Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
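
    To give a flavor of the techniques surveyed (the paper's own tutorial is in R; this is an illustrative Python sketch on made-up data): first-differencing removes a trend, and a least-squares AR(1) fit to the differenced series gives a one-step-ahead forecast:

```python
# Toy non-stationary series (hypothetical weekly job-search counts).
series = [10.0, 12.0, 15.0, 15.0, 18.0, 21.0, 21.0, 24.0]

# Step 1: difference away the trend.
diffs = [b - a for a, b in zip(series, series[1:])]

# Step 2: lag-1 autocorrelation as the AR(1) coefficient estimate.
def ar1_coef(x):
    xbar = sum(x) / len(x)
    num = sum((x[i] - xbar) * (x[i - 1] - xbar) for i in range(1, len(x)))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

phi = ar1_coef(diffs)
mean_d = sum(diffs) / len(diffs)
next_diff = mean_d + phi * (diffs[-1] - mean_d)
forecast = series[-1] + next_diff  # ≈ 25.4
```

    Production work would use a library fit with diagnostics (e.g., R's `arima` or Python's statsmodels), but the differencing-then-model pipeline is the same.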

  14. Time series analysis for psychological research: examining and forecasting change.

    PubMed

    Jebb, Andrew T; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials.

  15. The magnetosphere as system

    NASA Astrophysics Data System (ADS)

    Siscoe, G. L.

    2012-12-01

    What is a system? A group of elements interacting with each other so as to create feedback loops. A system becomes complex as the number of feedback loops increases and as the feedback loops exhibit time delays. Positive and negative feedback loops with time delays can give a system intrinsic time dependence and emergent properties. A system generally has input and output flows of something (matter, energy, money), which, if time variable, add an extrinsic component to its behavior. The magnetosphere is a group of elements interacting through feedback loops, some with time delays, driven by energy and mass inflow from a variable solar wind and outflow into the atmosphere and solar wind. The magnetosphere is a complex system. With no solar wind, there is no behavior. With solar wind, there is behavior from intrinsic and extrinsic causes. As a contribution to taking a macroscopic view of magnetospheric dynamics, to treating the magnetosphere as a globally integrated, complex entity, I will discuss the magnetosphere as a system, its feedback loops, time delays, emergent behavior, and intrinsic and extrinsic behavior modes.
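
    The role of delayed feedback emphasized above can be shown with a toy delay equation (purely illustrative, not a magnetospheric model): a constant drive with delayed negative feedback overshoots and rings before settling, intrinsic dynamics that a delay-free loop would not produce:

```python
# dx/dt = drive - gain * x(t - delay), stepped with forward Euler.
# With delay = 1.0 and gain = 1.0 the response overshoots the
# equilibrium drive/gain = 1.0 and oscillates down to it.
def simulate(drive=1.0, gain=1.0, delay_steps=100, dt=0.01, steps=20000):
    x = [0.0] * steps
    for i in range(1, steps):
        delayed = x[i - delay_steps] if i >= delay_steps else 0.0
        x[i] = x[i - 1] + dt * (drive - gain * delayed)
    return x

trace = simulate()
peak = max(trace)    # overshoot above the equilibrium value 1.0
final = trace[-1]    # rings down toward 1.0
```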

  16. Intra-individual variability in information processing speed reflects white matter microstructure in multiple sclerosis.

    PubMed

    Mazerolle, Erin L; Wojtowicz, Magdalena A; Omisade, Antonina; Fisk, John D

    2013-01-01

    Slowed information processing speed is commonly reported in persons with multiple sclerosis (MS), and is typically investigated using clinical neuropsychological tests, which provide sensitive indices of mean-level information processing speed. However, recent studies have demonstrated that within-person variability or intra-individual variability (IIV) in information processing speed may be a more sensitive indicator of neurologic status than mean-level performance on clinical tests. We evaluated the neural basis of increased IIV in mildly affected relapsing-remitting MS patients by characterizing the relation between IIV (controlling for mean-level performance) and white matter integrity using diffusion tensor imaging (DTI). Twenty women with relapsing-remitting MS and 20 matched control participants completed the Computerized Test of Information Processing (CTIP), from which both mean response time and IIV were calculated. Other clinical measures of information processing speed were also collected. Relations between IIV on the CTIP and DTI metrics of white matter microstructure were evaluated using tract-based spatial statistics. We observed slower and more variable responses on the CTIP in MS patients relative to controls. Significant relations between white matter microstructure and IIV were observed for MS patients. Increased IIV was associated with reduced integrity in more white matter tracts than was slowed information processing speed as measured by either mean CTIP response time or other neuropsychological test scores. Thus, despite the common use of mean-level performance as an index of cognitive dysfunction in MS, IIV may be more sensitive to the overall burden of white matter disease at the microstructural level. Furthermore, our study highlights the potential value of considering within-person fluctuations, in addition to mean-level performance, for uncovering brain-behavior relationships in neurologic disorders with widespread white matter pathology.
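
    A common way to quantify IIV while controlling for mean-level speed, as the study does, is the intraindividual coefficient of variation (SD of response times divided by mean). The response times below are hypothetical CTIP-style values, not the study's data:

```python
import statistics

# Hypothetical response times in ms for one patient and one control.
patient = [520, 610, 480, 750, 505, 690, 530, 820]
control = [455, 470, 450, 490, 460, 480, 465, 475]

def iiv_cv(rts):
    # Coefficient of variation: dispersion scaled by mean-level speed,
    # so a generally slower responder is not penalized for slowness alone.
    return statistics.stdev(rts) / statistics.mean(rts)

cv_patient = iiv_cv(patient)  # larger: responses fluctuate trial to trial
cv_control = iiv_cv(control)  # smaller: responses are consistent
```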

  17. Heart-Rate Variability—More than Heart Beats?

    PubMed Central

    Ernst, Gernot

    2017-01-01

    Heart-rate variability (HRV) is frequently introduced as mirroring imbalances within the autonomic nervous system. Many investigations are based on the paradigm that increased sympathetic tone is associated with decreased parasympathetic tone and vice versa. But HRV is probably more than an indicator of probable disturbances in the autonomic system. Some perturbations trigger not reciprocal, but parallel changes of vagal and sympathetic nerve activity. HRV has also been considered as a surrogate parameter of the complex interaction between brain and cardiovascular system. Systems biology is an interdisciplinary field of study focusing on complex interactions within biological systems like the cardiovascular system, with the help of computational models and time series analysis, among other tools. Time series are considered surrogates of the particular system, reflecting robustness or fragility. Increased variability is usually seen as associated with a good health condition, whereas lowered variability might signify pathological changes. This might explain why lower HRV parameters were related to decreased life expectancy in several studies. Newer integrating theories have been proposed. According to them, HRV reflects as much the state of the heart as the state of the brain. The polyvagal theory suggests that the physiological state dictates the range of behavior and psychological experience. Stressful events perpetuate the rhythms of autonomic states, and subsequently, behaviors. Reduced variability will, according to this theory, not only be a surrogate but represent a fundamental homeostatic mechanism in a pathological state. The neurovisceral integration model proposes that cardiac vagal tone, described in HRV among other indices as the HF index, can mirror the functional balance of the neural networks implicated in emotion–cognition interactions. Both recent models represent a more holistic approach to understanding the significance of HRV. PMID:28955705
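
    Two standard time-domain HRV statistics make the "variability" concrete: SDNN (overall variability of normal RR intervals) and RMSSD (successive-difference variability, conventionally linked to vagal tone). The RR intervals below are hypothetical illustrative values:

```python
import math
import statistics

# Hypothetical RR intervals in ms (time between successive heart beats).
rr = [812, 845, 790, 860, 798, 835, 805, 850]

# SDNN: standard deviation of all RR intervals (overall variability).
sdnn = statistics.stdev(rr)

# RMSSD: root mean square of successive differences (beat-to-beat
# variability, commonly read as a vagally mediated index).
rmssd = math.sqrt(sum((b - a) ** 2 for a, b in zip(rr, rr[1:]))
                  / (len(rr) - 1))
```

    Frequency-domain indices such as the HF index mentioned above require spectral analysis of a longer recording, but rest on the same RR series.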

  18. Reconciling Land-Ocean Moisture Transport Variability in Reanalyses with P-ET in Observationally-Driven Land Surface Models

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Bosilovich, Michael G.; Roberts, Jason B.

    2016-01-01

    Vertically integrated atmospheric moisture transport from ocean to land [vertically integrated atmospheric moisture flux convergence (VMFC)] is a dynamic component of the global climate system but remains problematic in atmospheric reanalyses, with current estimates having significant multidecadal global trends differing even in sign. Continual evolution of the global observing system, particularly stepwise improvements in satellite observations, has introduced discrete changes in the ability of data assimilation to correct systematic model biases, manifesting as nonphysical variability. Land surface models (LSMs) forced with observed precipitation P and near-surface meteorology and radiation provide estimates of evapotranspiration (ET). Since variability of atmospheric moisture storage is small on interannual and longer time scales, VMFC ≈ P − ET is a good approximation and LSMs can provide an alternative estimate. However, heterogeneous density of rain gauge coverage, especially the sparse coverage over tropical continents, remains a serious concern. Rotated principal component analysis (RPCA) with prefiltering of VMFC to isolate the artificial variability is used to investigate artifacts in five reanalysis systems. This procedure, although ad hoc, enables useful VMFC corrections over global land. The P − ET estimates from seven different LSMs are evaluated and subsequently used to confirm the efficacy of the RPCA-based adjustments. Global VMFC trends over the period 1979-2012, ranging from 0.07 to −0.03 mm per day per decade, are reduced by the adjustments to 0.016 mm per day per decade, much closer to the LSM P − ET estimate (0.007 mm per day per decade). Neither is significant at the 90 percent level. ENSO (El Niño-Southern Oscillation)-related modulation of VMFC and P − ET remains the largest global interannual signal, with mean LSM and adjusted reanalysis time series correlating at 0.86.

  19. Development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms.

    PubMed

    Motamarri, Srinivas; Boccelli, Dominic L

    2012-09-15

    Users of recreational waters may be exposed to elevated pathogen levels through various point/non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ)--a direct classification approach--for comparison with MLR and ANN approaches, and integrates input selection for model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables, while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates when classifying violated samples (41-62% vs. 13-43% (ANN) and <16% (LVQ)) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables were utilized, but degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards. Copyright © 2012 Elsevier Ltd. All rights reserved.
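
    The LVQ idea, class prototypes pulled toward matching training points and pushed away from mismatches, then nearest-prototype classification, can be sketched compactly. The two-feature data below (loosely standing in for rainfall and discharge) are synthetic, not the Charles River dataset:

```python
import random

random.seed(1)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_lvq(data, prototypes, labels, lr=0.1, epochs=50):
    # LVQ1 rule: move the winning prototype toward a same-class point,
    # away from a different-class point.
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for x, y in data:
            k = min(range(len(protos)), key=lambda i: dist2(x, protos[i]))
            sign = 1.0 if labels[k] == y else -1.0
            for j in range(len(x)):
                protos[k][j] += sign * lr * (x[j] - protos[k][j])
    return protos

def classify(x, protos, labels):
    return labels[min(range(len(protos)), key=lambda i: dist2(x, protos[i]))]

# Synthetic "violated" (1) vs "non-violated" (0) conditions.
violated = [((random.gauss(8, 1), random.gauss(9, 1)), 1) for _ in range(20)]
compliant = [((random.gauss(2, 1), random.gauss(3, 1)), 0) for _ in range(20)]
data = violated + compliant
random.shuffle(data)

protos = train_lvq(data, prototypes=[(7, 8), (3, 4)], labels=[1, 0])
```

    Because the prototypes are fit per class, a rare violated class still gets its own representative, which is one intuition for the low false negative rates reported.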

  20. Predicting Athletes' Pre-Exercise Fluid Intake: A Theoretical Integration Approach.

    PubMed

    Li, Chunxiao; Sun, Feng-Hua; Zhang, Liancheng; Chan, Derwin King Chung

    2018-05-21

    Pre-exercise fluid intake is an important healthy behavior for maintaining athletes’ sports performance and health. However, athletes’ behavioral adherence to fluid intake and its underlying psychological mechanisms have not been investigated. This prospective study aimed to use a health psychology model that integrates the self-determination theory and the theory of planned behavior for understanding pre-exercise fluid intake among athletes. Participants (n = 179) were athletes from college sport teams who completed surveys at two time points. Baseline (Time 1) assessment comprised psychological variables of the integrated model (i.e., autonomous and controlled motivation, attitude, subjective norm, perceived behavioral control, and intention), and fluid intake (i.e., behavior) was measured prospectively at one month (Time 2). Path analysis showed that the positive association between autonomous motivation and intention was mediated by subjective norm and perceived behavioral control. Controlled motivation positively predicted the subjective norm. Intentions positively predicted pre-exercise fluid intake behavior. Overall, the pattern of results was generally consistent with the integrated model, and it was suggested that athletes’ pre-exercise fluid intake behaviors were associated with the motivational and social cognitive factors of the model. The research findings could be informative for coaches and sport scientists to promote athletes’ pre-exercise fluid intake behaviors.

  1. Digital computer program for generating dynamic turbofan engine models (DIGTEM)

    NASA Technical Reports Server (NTRS)

    Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.

    1983-01-01

    This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run; they are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
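
    Why a backward-difference (implicit) scheme for stiff systems: on the linear test problem dy/dt = -a·y, the implicit update has the closed form y[n+1] = y[n] / (1 + a·h) and stays stable at step sizes where explicit Euler diverges. This is a generic illustration of the integration method named above, not DIGTEM's engine equations:

```python
# Explicit vs implicit (backward-difference) Euler on dy/dt = -a*y.
def explicit_step(y, a, h):
    return y + h * (-a * y)          # unstable when a*h > 2

def implicit_step(y, a, h):
    return y / (1 + a * h)           # stable for any positive a*h

a, h = 1000.0, 0.01                  # stiff rate, step far past the explicit limit
y_exp, y_imp = 1.0, 1.0
for _ in range(50):
    y_exp = explicit_step(y_exp, a, h)   # blows up
    y_imp = implicit_step(y_imp, a, h)   # decays toward 0, as the true solution does
```

    The price of the implicit form is solving for the new state each step (trivial here, an iteration for a nonlinear engine model), which matches DIGTEM's iterate-to-balance structure.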

  2. Performance of an exhaled nitric oxide and carbon dioxide sensor using quantum cascade laser-based integrated cavity output spectroscopy.

    PubMed

    McCurdy, Matthew R; Bakhirkin, Yury; Wysocki, Gerard; Tittel, Frank K

    2007-01-01

    Exhaled nitric oxide (NO) is an important biomarker in asthma and other respiratory disorders. The optical performance of an NO/CO(2) sensor employing integrated cavity output spectroscopy (ICOS) with a quantum cascade laser operating at 5.22 microm, capable of real-time NO and CO(2) measurements in a single breath cycle, is reported. An NO noise-equivalent concentration of 0.4 ppb within a 1-sec integration time is achieved. The off-axis ICOS sensor performance is compared to a chemiluminescent NO analyzer and a nondispersive infrared (NDIR) CO(2) absorption capnograph. Differences between the gas analyzers are assessed by the Bland-Altman method to estimate the expected variability between the gas sensors. The off-axis ICOS sensor measurements are in good agreement with the data acquired with the two commercial gas analyzers. This work demonstrates the performance characteristics and merits of mid-infrared spectroscopy for exhaled breath analysis.
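
    The Bland-Altman comparison used above reduces to two quantities: the bias (mean of the paired differences) and the limits of agreement (bias ± 1.96 SD of the differences). The paired NO readings below are hypothetical, not the paper's measurements:

```python
import statistics

# Hypothetical paired NO readings (ppb) from the two analyzers.
icos = [10.2, 15.1, 20.3, 25.0, 30.4, 35.2, 40.1]
chemilum = [10.0, 15.5, 19.8, 25.6, 30.0, 35.9, 39.7]

diffs = [a - b for a, b in zip(icos, chemilum)]
bias = statistics.mean(diffs)            # systematic offset between devices
sd = statistics.stdev(diffs)
loa_low = bias - 1.96 * sd               # 95% limits of agreement
loa_high = bias + 1.96 * sd
```

    Agreement is judged by whether the limits are narrow enough for clinical purposes, not by a correlation coefficient, which is the method's main point.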

  3. Analyzing the responses of species assemblages to climate change across the Great Basin, USA.

    NASA Astrophysics Data System (ADS)

    Henareh Khalyani, A.; Falkowski, M. J.; Crookston, N.; Yousef, F.

    2016-12-01

    The potential impact of climate change on the future distribution of tree species is not well understood. Climate-driven changes in tree species distribution could cause significant changes in realized species niches, potentially resulting in the loss of ecotonal species as well as the formation of novel assemblages of overlapping tree species. In an effort to gain a better understanding of how the geographic distribution of tree species may respond to climate change, we model the potential future distribution of 50 different tree species across 70 million ha in the Great Basin, USA. This is achieved by leveraging a species realized niche model based on non-parametric analysis of species occurrences across climatic, topographic, and edaphic variables. Spatially explicit, high-spatial-resolution (30 m) climate variables (e.g., precipitation, and minimum, maximum, and mean temperature) and associated climate indices were generated on an annual basis between 1981-2010 by integrating climate station data with digital elevation data (Shuttle Radar Topography Mission (SRTM) data) in a thin plate spline interpolation algorithm (ANUSPLIN). Bioclimate models of species niches in the contemporary period and three subsequent 30-year periods were then generated by integrating the climate variables, soil data, and CMIP5 general circulation model projections. Our results suggest that local-scale contemporary variations in species realized niches across space are influenced by edaphic and topographic variables as well as climatic variables. The local variability in soil properties and topography across space also affects species responses to climate change through time and the potential formation of species assemblages in the future. The results presented herein will aid in the development of adaptive forest management techniques aimed at mitigating negative impacts of climate change on forest composition, structure, and function.
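
    The study interpolates station climate with thin plate splines (ANUSPLIN); as a far simpler stand-in for the same station-to-grid step, the sketch below uses inverse-distance weighting on hypothetical station temperatures. It illustrates the workflow only, not ANUSPLIN's spline fit:

```python
# Inverse-distance weighting: a grid point's value is a weighted mean of
# station values, with weights falling off as 1 / distance**power.
def idw(x, y, stations, power=2.0):
    num = den = 0.0
    for sx, sy, val in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return val               # exactly at a station: use its value
        w = d2 ** (-power / 2)
        num += w * val
        den += w
    return num / den

# Hypothetical stations: (x, y, mean temperature in °C).
stations = [(0.0, 0.0, 10.0), (10.0, 0.0, 14.0), (0.0, 10.0, 12.0)]
t_mid = idw(5.0, 5.0, stations)  # 12.0: equidistant from all three stations
```

    A thin plate spline additionally fits a smooth surface with elevation as a covariate, which is why ANUSPLIN needs the SRTM digital elevation data.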

  4. The integration level of public transportation in Makassar City

    NASA Astrophysics Data System (ADS)

    Kasim, A. M. R.; Wicaksono, A. D.; Kurniawan, E. B.

    2017-06-01

    Multimodal transportation combines different transport modes properly, efficiently, and effectively, so that people can move from one mode to another quickly, cheaply, and comfortably. The integration of transport services and infrastructure networks is not fully realized; for example, transfers between public transport modes may require travel by other means not served by public transport, and intermodal transfers cannot be made easily and quickly, which leads people to prefer private vehicles over public transport. The main objective of this study was to determine the level of integration of land transport modes in the city of Makassar, using analyses of physical alignment, non-physical factors, and travel time, summarized into a composite rating of the overall level of integration. The results showed that two criteria had a very low value, four criteria had low and medium values, two criteria had a high value, and one criterion had a very high value. The integration variables influencing people's preferences, ordered from the highest value, are fleet size, route availability, the number of passengers relative to the load factor, the location of boarding and alighting points, and the location of terminals relative to boarding and alighting points at origins and destinations. The variables with low and very low values should receive greater attention from the municipality, so that integrated transport modes can be achieved and the public is indirectly encouraged to use public transport.

  5. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

    This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary ones. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
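
    The "I" step above is ordinary differencing: applying it d times turns a trending series into a stationary one (one difference removes a linear trend). A minimal sketch on a hypothetical drifting sensor signal:

```python
# The integrated (I) step of ARIMA: difference the series d times.
def difference(x, d=1):
    for _ in range(d):
        x = [b - a for a, b in zip(x, x[1:])]
    return x

signal = [100 + 2.5 * t for t in range(10)]  # hypothetical linear drift
stationary = difference(signal)              # constant 2.5 increments
```

    The AR/MA/ARMA stages then model and forecast the differenced series, and forecasts are cumulatively summed back to the original scale.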

  6. Evaluation of measles and rubella integrated surveillance system in Apulia region, Italy, 3 years after its introduction.

    PubMed

    Turiac, I A; Fortunato, F; Cappelli, M G; Morea, A; Chironna, M; Prato, Rosa; Martinelli, D

    2018-04-01

    This study aimed at evaluating the integrated measles and rubella surveillance system (IMRSS) in Apulia region, Italy, from its introduction in 2013 to 30 June 2016. Measles and rubella case reports were extracted from IMRSS. We estimated system sensitivity at the level of case reporting, using the capture-recapture method for three data sources. Data quality was described as the completeness of variables, and timeliness of notification as the median time interval from symptom onset to initial alert. The proportion of suspected cases with laboratory investigation, the rate of discarded cases and the origin of infection were also computed. A total of 127 measles and four rubella suspected cases were reported to IMRSS and 82 were laboratory confirmed. Focusing our analysis on measles, IMRSS sensitivity was 82% (95% CI: 75-87). Completeness was >98% for mandatory variables and 57% for 'genotyping'. The median time interval from symptom onset to initial alert was 4.5 days, with a timeliness of notification of 33% (41 cases reported ⩽48 h). The proportion of cases with laboratory investigation was 87%. The rate of discarded cases was 0.1 per 100 000 inhabitants per year. The origin of infection was identified for 85% of cases. It is concluded that IMRSS provides good-quality data and has good sensitivity; still, efforts should be made to improve the completeness of laboratory-related variables and timeliness, and to increase the rate of discarded cases.
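Capture-recapture sensitivity estimation of this kind can be illustrated with the classic two-source Chapman estimator; note the study itself used a three-source method, and the counts below are hypothetical:

```python
# Two-source Chapman capture-recapture estimator; all counts are hypothetical.
def chapman_estimate(n1, n2, m):
    """Estimated true case count from two sources.

    n1, n2: cases found by each source; m: cases found by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

def sensitivity(reported, n1, n2, m):
    """Surveillance sensitivity = reported cases / estimated true cases."""
    return reported / chapman_estimate(n1, n2, m)

est = chapman_estimate(80, 60, 50)      # about 95.9 estimated true cases
sens = sensitivity(90, 80, 60, 50)      # about 0.94
```

The more overlap (m) between sources, the closer the estimated total is to the observed counts and the higher the estimated sensitivity.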

  7. Minimum time and fuel flight profiles for an F-15 airplane with a Highly Integrated Digital Electronic Control (HIDEC) system

    NASA Technical Reports Server (NTRS)

    Haering, E. A., Jr.; Burcham, F. W., Jr.

    1984-01-01

    A simulation study was conducted to optimize minimum time and fuel consumption paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum time and fuel trajectory determined from the F-15 flight manual and previous experience. The minimum time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum fuel trajectory used 1 percent less fuel than the pilot's estimate for the minimum fuel trajectory. The F-15 airplane with EMD engines and uptrim was 23 percent faster than the pilot's estimate. The minimum fuel used was 5 percent less than the estimate.

  8. Ionospheric responses during equinox and solstice periods over Turkey

    NASA Astrophysics Data System (ADS)

    Karatay, Secil; Cinar, Ali; Arikan, Feza

    2017-11-01

    Ionospheric electron density is the determining variable for investigation of the spatial and temporal variations in the ionosphere. Total Electron Content (TEC) is the integral of the electron density along a ray path, and it indicates the total variability through the ionosphere. Global Positioning System (GPS) recordings can be utilized to estimate TEC; thus GPS proves to be a useful tool for monitoring the total variability of the electron distribution within the ionosphere. This study focuses on the analysis of the variations of the ionosphere over Turkey, which can be grouped into anomalies during equinox and solstice periods, using TEC estimates obtained by a regional GPS network. It is observed that noontime depletions in TEC distributions predominantly occur in winter at minimum Sun Spot Number (SSN) in the central regions of Turkey, which also exhibit high variability due to the midlatitude winter anomaly. TEC values and ionospheric variations at solstice periods demonstrate significant enhancements compared to those at equinox periods.
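The definition of TEC as a path integral of electron density can be sketched numerically. The Chapman-layer profile and its parameters below are illustrative round numbers, not values from the study:

```python
import numpy as np

# Illustrative Chapman-layer electron density profile (el/m^3); peak density,
# peak height and scale height are assumed round numbers, not fitted values.
def chapman(h_km, nmax=1e12, hmax=350.0, H=60.0):
    z = (h_km - hmax) / H
    return nmax * np.exp(0.5 * (1.0 - z - np.exp(-z)))

heights = np.linspace(100.0, 1000.0, 1801)       # km along a vertical ray path
ne = chapman(heights)

# Trapezoidal integration of Ne along the path (in metres) gives TEC.
path_m = heights * 1e3
tec = np.sum(0.5 * (ne[1:] + ne[:-1]) * np.diff(path_m))
tec_tecu = tec / 1e16                            # 1 TECU = 1e16 el/m^2
```

With these round numbers the vertical TEC comes out in the low tens of TECU, a plausible midlatitude daytime magnitude.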

  9. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models

    NASA Astrophysics Data System (ADS)

    Eichinger, R.; Jöckel, P.

    2014-07-01

    The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility to errors. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time.
The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) shows that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contributions. The TENDENCY submodel is part of version 2.42 or later of MESSy.
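The bookkeeping idea behind TENDENCY — record a tendency per (process, variable) pair and verify closure against the total — can be sketched as follows; the class, process names, and values are illustrative, not the MESSy API:

```python
# Per-process tendency bookkeeping with a closure test; names and
# magnitudes are illustrative, not the MESSy API.
class TendencyLedger:
    def __init__(self):
        self.records = {}                       # (process, variable) -> tendency

    def add(self, process, variable, tendency):
        key = (process, variable)
        self.records[key] = self.records.get(key, 0.0) + tendency

    def total(self, variable):
        return sum(t for (p, v), t in self.records.items() if v == variable)

    def closure_ok(self, variable, model_total, tol=1e-12):
        """Closure test: individual process tendencies must sum to the
        total tendency seen by the time integration."""
        return abs(self.total(variable) - model_total) < tol

ledger = TendencyLedger()
ledger.add("convection", "q", 2.0e-6)           # specific humidity tendencies
ledger.add("large_scale_clouds", "q", -0.5e-6)
ledger.add("vertical_diffusion", "q", 0.3e-6)
```

The closure test is the safety net: if a submodel modifies a state variable without reporting it to the ledger, the recorded sum no longer matches the model's total tendency.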

  10. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth System Models

    NASA Astrophysics Data System (ADS)

    Eichinger, R.; Jöckel, P.

    2014-04-01

    The tendencies of prognostic variables in Earth System Models are usually only accessible, e.g., for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows access to the individual process tendencies by other submodels, e.g., for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility to errors. TENDENCY is independent of the time integration scheme and therefore applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time.
The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) shows that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contributions. The TENDENCY submodel is part of version 2.42 or later of MESSy.

  11. Empirical Analysis of the Variability of Wind Generation in India: Implications for Grid Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phadke, Amol; Abhyankar, Nikit; Rao, Poorvi

    We analyze variability in load and wind generation in India to assess its implications for grid integration of large-scale wind projects, using actual wind generation and load data from two states in India, Karnataka and Tamil Nadu. We compare the largest variations in load and net load (load − wind, i.e., load after integrating wind) that the generation fleet has to meet. In Tamil Nadu, where wind capacity is about 53% of the peak demand, we find that the additional variation added due to wind over the current variation in load is modest; if wind penetration reaches 15% and 30% by energy, the additional hourly variation is less than 0.5% and 4.5% of the peak demand, respectively, for 99% of the time. For wind penetration of 15% by energy, the Tamil Nadu system is found to be capable of meeting the additional ramping requirement 98.8% of the time. Potentially higher uncertainty in net load compared to load is found to have limited impact on the ramping capability requirements of the system if coal plants can be ramped down to 50% of their capacity. Load and wind aggregation across Tamil Nadu and Karnataka is found to lower the variation by at least 20%, indicating the benefits of geographic diversification. These findings suggest modest additional flexible capacity requirements and costs for absorbing variation in wind power, and indicate that potential capacity support (if wind does not generate enough during peak periods) may be the issue that has more bearing on the economics of integrating wind.
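The paper's central quantity, the hourly variation of net load (load minus wind), is straightforward to compute. A sketch on synthetic series; all magnitudes are invented, not Indian grid data:

```python
import numpy as np

# Synthetic hourly load and wind series for one month (all magnitudes invented).
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
load = 10_000 + 2_000 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 100, hours.size)
wind = np.clip(1_500 + rng.normal(0, 400, hours.size), 0, None)
net_load = load - wind              # what the conventional fleet must actually meet

ramp = np.diff(load)                # hourly variation in load
net_ramp = np.diff(net_load)        # hourly variation in net load

# Additional 99th-percentile hourly variation due to wind, as a share of peak demand.
p99_extra = (np.percentile(np.abs(net_ramp), 99)
             - np.percentile(np.abs(ramp), 99)) / load.max()
```

Comparing high percentiles of |net_ramp| and |ramp| relative to peak demand is exactly the style of comparison the abstract reports for Tamil Nadu.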

  12. Worldlines and worldsheets for non-abelian lattice field theories: Abelian color fluxes and Abelian color cycles

    NASA Astrophysics Data System (ADS)

    Gattringer, Christof; Göschl, Daniel; Marchis, Carlotta

    2018-03-01

    We discuss recent developments for exact reformulations of lattice field theories in terms of worldlines and worldsheets. In particular, we focus on a strategy which is applicable also to non-abelian theories: traces and matrix/vector products are written as explicit sums over color indices and a dual variable is introduced for each individual term. These dual variables correspond to fluxes in both space-time and color for matter fields (Abelian color fluxes), or to fluxes in color space around space-time plaquettes for gauge fields (Abelian color cycles). Subsequently all original degrees of freedom, i.e., matter fields and gauge links, can be integrated out. Integrating over complex phases of matter fields gives rise to constraints that enforce conservation of matter flux on all sites. Integrating out phases of gauge fields enforces vanishing combined flux of matter and gauge degrees of freedom. The constraints give rise to a system of worldlines and worldsheets. Integrating over the factors that are not phases (e.g., radial degrees of freedom or contributions from the Haar measure) generates additional weight factors that together with the constraints implement the full symmetry of the conventional formulation, now in the language of worldlines and worldsheets. We discuss the Abelian color flux and Abelian color cycle strategies for three examples: the SU(2) principal chiral model with chemical potential coupled to two of the Noether charges, SU(2) lattice gauge theory coupled to staggered fermions, as well as full lattice QCD with staggered fermions. For the principal chiral model we present some simulation results that illustrate properties of the worldline dynamics at finite chemical potentials.

  13. Neural substrates of behavioral variability in attention deficit hyperactivity disorder: based on ex-Gaussian reaction time distribution and diffusion spectrum imaging tractography.

    PubMed

    Lin, H-Y; Gau, S S-F; Huang-Gu, S L; Shang, C-Y; Wu, Y-H; Tseng, W-Y I

    2014-06-01

    Increased intra-individual variability (IIV) in reaction time (RT) across various tasks is one ubiquitous neuropsychological finding in attention deficit hyperactivity disorder (ADHD). However, the neurobiological underpinnings of IIV in individuals with ADHD have not yet been fully delineated. The ex-Gaussian distribution has been shown to capture IIV in RT. The authors explored the three parameters [μ (mu), σ (sigma), τ (tau)] of an ex-Gaussian RT distribution derived from the Conners' continuous performance test (CCPT) and their correlations with the microstructural integrity of the frontostriatal-caudate tracts and the cingulum bundles. We assessed 28 youths with ADHD (8-17 years; 25 males) and 28 age-, sex-, IQ- and handedness-matched typically developing (TD) youths using the CCPT, Wechsler Intelligence Scale for Children, 3rd edition and magnetic resonance imaging (MRI). Microstructural integrity, indexed by generalized fractional anisotropy (GFA), was measured by diffusion spectrum imaging tractography on a 3-T MRI system. Youths with ADHD had larger σ (s.d. of the Gaussian distribution) and τ (mean of the exponential distribution) and reduced GFA in four bilateral frontostriatal tracts. With increased inter-stimulus intervals of the CCPT, the magnitude by which τ was greater in ADHD than in TD increased. In ADHD youths, the cingulum bundles were associated with all three ex-Gaussian parameters and frontostriatal integrity with μ (mean of the Gaussian distribution) and τ, while only frontostriatal GFA was associated with μ and τ in TD youths. Our findings suggest a crucial role of the integrity of the cingulum bundles in accounting for IIV in ADHD. Involvement of different brain systems in mediating IIV may relate to distinctive pathophysiological processing and/or an adaptive compensatory mechanism.
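An ex-Gaussian RT distribution is simply a Gaussian convolved with an exponential, so its parameters map directly onto sample moments. A sketch with illustrative parameter values, not fitted CCPT data:

```python
import numpy as np

# Sample an ex-Gaussian: a Gaussian (mu, sigma) plus an independent
# exponential (mean tau). Parameter values are illustrative, in ms.
rng = np.random.default_rng(42)
mu, sigma, tau = 400.0, 50.0, 150.0
n = 100_000
rt = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Moments follow directly: mean = mu + tau, variance = sigma^2 + tau^2,
# and the exponential component (tau) carries all of the positive skew --
# which is why tau is read as an index of attentional lapses.
```

This moment structure is why a larger τ shows up as a heavier right tail of slow responses rather than a shift of the whole distribution.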

  14. A Numerical Scheme for Ordinary Differential Equations Having Time Varying and Nonlinear Coefficients Based on the State Transition Matrix

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2002-01-01

    A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
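The exactness property described above — correct solutions at arbitrary step size when the state transition matrix is available — can be illustrated on a linear system whose transition matrix has a closed form. The harmonic-oscillator example below is ours, not from the paper:

```python
import numpy as np

# Sketch: state-transition-matrix stepping for x' = A x with
# A = [[0, 1], [-1, 0]] (harmonic oscillator). Here Phi(dt) = exp(A*dt)
# has a closed form, so each step is exact regardless of step size.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def phi(dt):
    """Exact state transition matrix for this A."""
    return np.array([[np.cos(dt), np.sin(dt)],
                     [-np.sin(dt), np.cos(dt)]])

x = np.array([1.0, 0.0])
dt = np.pi / 6
for _ in range(12):                 # 12 exact steps traverse one full period
    x = phi(dt) @ x                 # state returns to the initial condition
```

Unlike an explicit Runge-Kutta step, the error here does not grow with dt: a single step of size 2π lands on the initial state just as the 12 small steps do.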

  15. Circulation and rainfall climatology of a 10-year (1979 - 1988) integration with the Goddard Laboratory for atmospheres general circulation model

    NASA Technical Reports Server (NTRS)

    Kim, J.-H.; Sud, Y. C.

    1993-01-01

    A 10-year (1979-1988) integration of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) under the Atmospheric Model Intercomparison Project (AMIP) is analyzed and compared with observations. The first-moment fields of circulation variables and also hydrological variables, including precipitation, evaporation, and soil moisture, are presented. Our goals are (1) to produce a benchmark documentation of the GLA GCM for future model improvements; (2) to examine systematic errors between the simulated and the observed circulation, precipitation, and hydrologic cycle; (3) to examine the interannual variability of the simulated atmosphere and compare it with observation; and (4) to examine the ability of the model to capture the major climate anomalies in response to events such as El Nino and La Nina. The 10-year mean seasonal and annual simulated circulation is quite reasonable compared to the analyzed circulation, except in the polar regions and areas of high orography. Precipitation over the tropics is quite well simulated, and the signal of El Nino/La Nina episodes can be easily identified. The time series of evaporation and soil moisture in the 12 biomes of the biosphere also show reasonable patterns compared to the estimated evaporation and soil moisture.

  16. An Algorithm for Integrated Subsystem Embodiment and System Synthesis

    NASA Technical Reports Server (NTRS)

    Lewis, Kemper

    1997-01-01

    Consider the statement, 'A system has two coupled subsystems, one of which dominates the design process. Each subsystem consists of discrete and continuous variables, and is solved using sequential analysis and solution.' To address this type of statement in the design of complex systems, three steps are required, namely, the embodiment of the statement in terms of entities on a computer, the mathematical formulation of subsystem models, and the resulting solution and system synthesis. In complex system decomposition, the subsystems are not isolated, self-supporting entities. Information such as constraints, goals, and design variables may be shared between entities. But many times in engineering problems, full communication and cooperation do not exist, information is incomplete, or one subsystem may dominate the design. Additionally, these engineering problems give rise to mathematical models involving nonlinear functions of both discrete and continuous design variables. In this dissertation an algorithm is developed to handle these types of scenarios for the domain-independent integration of subsystem embodiment, coordination, and system synthesis using constructs from Decision-Based Design, Game Theory, and Multidisciplinary Design Optimization. Implementation of the concept in this dissertation involves testing of the hypotheses using example problems and a motivating case study involving the design of a subsonic passenger aircraft.

  17. Integrated care for frail elderly compared to usual care: a study protocol of a quasi-experiment on the effects on the frail elderly, their caregivers, health professionals and health care costs.

    PubMed

    Fabbricotti, Isabelle Natalina; Janse, Benjamin; Looman, Wilhelmina Mijntje; de Kuijper, Ruben; van Wijngaarden, Jeroen David Hendrikus; Reiffers, Auktje

    2013-04-12

    Frail elderly persons living at home are at risk for mental, psychological, and physical deterioration. These problems often remain undetected. If care is given, it lacks the quality and continuity required for their multiple and changing problems. The aim of this project is to improve the quality and efficacy of care given to frail elderly living independently by implementing and evaluating a preventive integrated care model for the frail elderly. The design is quasi-experimental. Effects will be measured by conducting a before and after study with a control group. The experimental group will consist of 220 elderly of 8 GPs (General Practitioners) who will provide care according to the integrated model (The Walcheren Integrated Care Model). The control group will consist of 220 elderly of 6 GPs who will give care as usual. The study will include an evaluation of process and outcome measures for the frail elderly, their caregivers and health professionals as well as a cost-effectiveness analysis. A concurrent mixed methods design will be used. The study population will consist of elderly 75 years or older who live independently and score a 4 or higher on the Groningen Frailty Indicator, their caregivers and health professionals. Data will be collected prospectively at three points in time: T0, T1 (3 months after inclusion), and T2 (12 months after inclusion). Similarities between the two groups and changes over time will be assessed with t-tests and chi-square tests. For each measure, regression analyses will be performed with the T2-score as the dependent variable and the T0-score, the research group and demographic variables as independent variables. A potential obstacle for this study will be the willingness of the elderly and their caregivers to participate. To increase willingness, the request to participate will be sent via the elders' own GP. Interviewers will be from their local region and gifts will be given.
A successful implementation of the integrated model is also necessary. The involved parties are members of a steering group and have contractually committed themselves to the project. Current Controlled Trials ISRCTN05748494.

  18. Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples

    NASA Astrophysics Data System (ADS)

    Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.

    2014-12-01

    Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered to be reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine if traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis shows no statistically significant peaks in energy density, suggesting the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies that scale with channel width/mean velocity and hydraulic phenomena related to channel dynamics, suggesting that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
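The "stable mean" criterion described above can be sketched as a running-mean check; the stability band and the synthetic series are our assumptions, not the authors' exact procedure:

```python
import numpy as np

# Running-mean stability check: find the first index after which the
# cumulative mean stays within a fractional band of the full-record mean.
def time_to_stable_mean(series, band=0.02):
    series = np.asarray(series, dtype=float)
    running = np.cumsum(series) / np.arange(1, series.size + 1)
    target = series.mean()
    inside = np.abs(running - target) <= band * abs(target)
    for i in range(series.size):
        if inside[i:].all():
            return i
    return None                               # never stabilises within the record

rng = np.random.default_rng(3)
conc = 50.0 + rng.normal(0.0, 5.0, 330)       # 5.5 min of synthetic 1 Hz data
idx = time_to_stable_mean(conc)               # samples until the mean is stable
```

Applied to 5.5-minute windows, a check of this kind distinguishes the ~20% of records that stabilise within a 30-second bottle duration from those that never settle.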

  19. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/ hyperactivity disorder.

    PubMed

    Wolfers, Thomas; Onnink, A Marten H; Zwiers, Marcel P; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C; Kan, Cornelis C; Slaats-Willemse, Dorine; Buitelaar, Jan K; Franke, Barbara

    2015-09-01

    Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network is associated with ADHD and RTV. We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging-derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest with ADHD could be observed. Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD.

  20. Lower white matter microstructure in the superior longitudinal fasciculus is associated with increased response time variability in adults with attention-deficit/hyperactivity disorder

    PubMed Central

    Wolfers, Thomas; Onnink, A. Marten H.; Zwiers, Marcel P.; Arias-Vasquez, Alejandro; Hoogman, Martine; Mostert, Jeanette C.; Kan, Cornelis C.; Slaats-Willemse, Dorine; Buitelaar, Jan K.; Franke, Barbara

    2015-01-01

    Background Response time variability (RTV) is consistently increased in patients with attention-deficit/hyperactivity disorder (ADHD). A right-hemispheric frontoparietal attention network model has been implicated in these patients. The 3 main connecting fibre tracts in this network, the superior longitudinal fasciculus (SLF), inferior longitudinal fasciculus (ILF) and the cingulum bundle (CB), show microstructural abnormalities in patients with ADHD. We hypothesized that the microstructural integrity of the 3 white matter tracts of this network is associated with ADHD and RTV. Methods We examined RTV in adults with ADHD by modelling the reaction time distribution as an exponentially modified Gaussian (ex-Gaussian) function with the parameters μ, σ and τ, the latter of which has been attributed to lapses of attention. We assessed adults with ADHD and healthy controls using a sustained attention task. Diffusion tensor imaging–derived fractional anisotropy (FA) values were determined to quantify bilateral microstructural integrity of the tracts of interest. Results We included 100 adults with ADHD and 96 controls in our study. Increased τ was associated with ADHD diagnosis and was linked to symptoms of inattention. An inverse correlation of τ with mean FA was seen in the right SLF of patients with ADHD, but no direct association between the mean FA of the 6 regions of interest with ADHD could be observed. Limitations Regions of interest were defined a priori based on the attentional network model for ADHD and thus we might have missed effects in other networks. Conclusion This study suggests that reduced microstructural integrity of the right SLF is associated with elevated τ in patients with ADHD. PMID:26079698

  1. Day-by-Day Variability of Home Blood Pressure and Incident Cardiovascular Disease in Clinical Practice: The J-HOP Study (Japan Morning Surge-Home Blood Pressure).

    PubMed

    Hoshide, Satoshi; Yano, Yuichiro; Mizuno, Hiroyuki; Kanegae, Hiroshi; Kario, Kazuomi

    2018-01-01

    We assessed the relationship between day-by-day home blood pressure (BP) variability and incident cardiovascular disease (CVD) in clinical practice. J-HOP study (Japan Morning Surge-Home Blood Pressure) participants underwent home BP monitoring in the morning and evening for a 14-day period, and their BP levels and BP variability independent of the mean (VIM) were assessed. Incident CVD events included coronary heart disease and stroke. Cox models were fitted to assess the home BP variability-CVD risk association. Among 4231 participants (mean±SD age, 64.9±10.9 years; 53.3% women; 79.1% taking antihypertensive medication), mean (SD) home systolic BP (SBP) levels over time and VIM SBP were 134.2 (14.3) and 6.8 (2.5) mm Hg, respectively. During a 4-year follow-up period (16 750.3 person-years), 148 CVD events occurred. VIM SBP was associated with CVD risk (hazard ratio per 1-SD increase, 1.32; 95% confidence interval [CI], 1.15-1.52), independently of mean home SBP levels over time and circulating B-type natriuretic peptide levels or urine albumin-to-creatinine ratio. Adding VIM SBP to the CVD prediction model improved the discrimination (C statistic, 0.785 versus 0.770; C statistic difference, 0.015; 95% CI, 0.003-0.028). Changes in continuous net reclassification improvement (0.259; 95% CI, 0.052-0.537), absolute integrated discrimination improvement (0.010; 95% CI, 0.003-0.016), and relative integrated discrimination improvement (0.104; 95% CI, 0.037-0.166) were also observed with the addition of VIM SBP to the CVD prediction models. In addition to the assessments of mean home SBP levels and cardiovascular end-organ damage, home BP variability measurements may provide a clinically useful distinction between high- and low-risk groups among Japanese outpatients. © 2017 American Heart Association, Inc.
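Variability independent of the mean (VIM) is conventionally computed by fitting SD ∝ mean^x across participants and rescaling each SD so it is uncorrelated with that individual's mean. A sketch on synthetic data; the J-HOP paper's exact fitting procedure may differ:

```python
import numpy as np

# Synthetic per-participant home SBP summaries (means in mmHg).
rng = np.random.default_rng(1)
mean_sbp = rng.normal(134, 14, 500)
sd_sbp = 0.05 * mean_sbp * rng.lognormal(0, 0.2, 500)   # SD roughly prop. to mean

# Fit SD ~ k * mean^x in log-log space, then rescale: by construction the
# resulting VIM carries variability information with the mean-dependence removed.
x, _ = np.polyfit(np.log(mean_sbp), np.log(sd_sbp), 1)
k = mean_sbp.mean() ** x
vim = k * sd_sbp / mean_sbp ** x
```

Because VIM is decoupled from BP level, it can be entered into a Cox model alongside mean home SBP without the two predictors standing in for each other.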

  2. Applying probabilistic temporal and multisite data quality control methods to a public health mortality registry in Spain: a systematic approach to quality control of repositories.

    PubMed

    Sáez, Carlos; Zurriaga, Oscar; Pérez-Panadés, Jordi; Melchor, Inma; Robles, Montserrat; García-Gómez, Juan M

    2016-11-01

    To assess the variability in data distributions among data sources and over time through a case study of a large multisite repository as a systematic approach to data quality (DQ). Novel probabilistic DQ control methods based on information theory and geometry are applied to the Public Health Mortality Registry of the Region of Valencia, Spain, with 512 143 entries from 2000 to 2012, disaggregated into 24 health departments. The methods provide DQ metrics and exploratory visualizations for (1) assessing the variability among multiple sources and (2) monitoring and exploring changes with time. The methods are suited to big data and multitype, multivariate, and multimodal data. The repository was partitioned into 2 probabilistically separated temporal subgroups following a change in the Spanish National Death Certificate in 2009. Punctual temporal anomalies were noticed due to a punctual increment in the missing data, along with outlying and clustered health departments due to differences in populations or in practices. Changes in protocols, differences in populations, biased practices, or other systematic DQ problems affected data variability. Even if semantic and integration aspects are addressed in data sharing infrastructures, probabilistic variability may still be present. Solutions include fixing or excluding data and analyzing different sites or time periods separately. A systematic approach to assessing temporal and multisite variability is proposed. Multisite and temporal variability in data distributions affects DQ, hindering data reuse, and an assessment of such variability should be a part of systematic DQ procedures. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
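A minimal example of the information-theoretic flavor of such data-quality metrics is the Jensen-Shannon distance between the category distributions of two sites; the paper's actual metrics are more elaborate, and the counts below are hypothetical:

```python
import numpy as np

# Jensen-Shannon distance between two categorical distributions (base-2
# logs, so the value lies in [0, 1]). Counts per category are hypothetical.
def js_distance(p, q):
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

dept_a = [120, 80, 40, 10]          # cause-of-death counts, department A
dept_b = [115, 85, 42, 8]           # similar department -> small distance
dept_c = [40, 30, 100, 80]          # divergent department -> large distance
```

Computed pairwise across departments, or between consecutive time windows, distances like this flag outlying sites and temporal change points such as the 2009 certificate revision.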

  3. Atlantic salmon (Salmo salar) smolt production: the relative importance of survival and body growth

    USGS Publications Warehouse

    Horton, G.E.; Letcher, B.H.; Bailey, M.M.; Kinnison, M.T.

    2009-01-01

The complex life history of Atlantic salmon (Salmo salar) coupled with interacting abiotic and biotic factors leads to extreme demographic variability across the species' range. Our goal was to evaluate the relative importance of survival and body growth in determining smolt production across space and time. We used passive integrated transponder tags and capture-mark-recapture analyses to estimate survival, emigration, and growth for six cohorts of presmolt Atlantic salmon in two streams (three cohorts per stream) in New England, USA. We observed remarkable among-cohort consistency in mean monthly survival during a 17-month period from age-0+ autumn to age-2+ spring yet high variability in monthly survival over shorter time intervals (seasons). Despite this latter variability, survival did not translate into among-cohort differences in proportions of age-2+ versus age-3+ smolts. Alternatively, the high variability across seasons and cohorts in mean individual growth rate did lead to differences in within-cohort proportions of age-2+ versus age-3+ smolts (regardless of stream). We conclude that in our two small study streams, variability in growth and size impacted smolt age and, ultimately, smolt production. Density-dependent effects on growth at the scale of the entire study site represent a possible mechanism underlying our observations.

  4. Natural wind variability triggered drop in German redispatch volume and costs from 2015 to 2016

    PubMed Central

    Reyers, Mark; Märker, Carolin; Witthaut, Dirk

    2018-01-01

Avoiding dangerous climate change necessitates the decarbonization of electricity systems within the next few decades. In Germany, this decarbonization is based on an increased exploitation of variable renewable electricity sources such as wind and solar power. While system security has remained constantly high, the integration of renewables causes additional costs. In 2015, the costs of grid management saw an all-time high of about €1 billion. Despite the addition of renewable capacity, these costs dropped substantially in 2016. We thus investigate the effect of natural climate variability on grid management costs in this study. Focusing on redispatch as a main cost driver, we show that the decline was triggered by natural wind variability. In particular, we find that 2016 was a weak year in terms of wind generation averages and the occurrence of westerly circulation weather types. Moreover, we show that a simple model based on the wind generation time series is skillful in detecting redispatch events on timescales of weeks and beyond. As a consequence, alterations in annual redispatch costs on the order of hundreds of millions of euros need to be understood and communicated as a normal feature of the current system due to natural wind variability. PMID:29329349

  5. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quock, D. E. R.; Cianciarulo, M. B.; APS Engineering Support Division

    2007-01-01

The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  6. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
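The key idea of the extended-variable formalism is that a Prony (exponential-sum) memory kernel lets the history convolution be replaced by auxiliary stochastic variables with Markovian dynamics. The sketch below is a minimal single-mode illustration using a plain Euler-Maruyama step, not the LAMMPS integrator derived in the paper; all parameter values are arbitrary.

```python
import math
import random

def simulate_gle(steps=100_000, dt=0.01, kT=1.0, m=1.0,
                 c=1.0, tau=0.5, omega=1.0, seed=42):
    """Euler-Maruyama sketch of the extended-variable GLE for a harmonic
    oscillator with a one-term Prony memory kernel K(t) = (c/tau)*exp(-t/tau).
    The auxiliary variable s carries the friction + noise, so no explicit
    velocity history (convolution) is ever stored."""
    rng = random.Random(seed)
    x, v, s = 1.0, 0.0, 0.0
    sigma = math.sqrt(2.0 * c * kT) / tau  # fluctuation-dissipation scaling
    sum_v2 = 0.0
    for _ in range(steps):
        x += v * dt
        v += ((-m * omega**2 * x + s) / m) * dt
        s += (-(s / tau) - (c / tau) * v) * dt \
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        sum_v2 += v * v
    return sum_v2 / steps

avg_v2 = simulate_gle()
print(avg_v2)  # fluctuates around the equipartition value kT/m = 1
```

The noise amplitude on s is chosen so that the stationary autocorrelation of the random force matches kT*K(t), which is what preserves the second moment of the equilibrium velocity distribution here.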

  7. Modeling Crustal Deformation Due to the Landers, Hector Mine Earthquakes Using the SCEC Community Fault Model

    NASA Astrophysics Data System (ADS)

    Gable, C. W.; Fialko, Y.; Hager, B. H.; Plesch, A.; Williams, C. A.

    2006-12-01

    More realistic models of crustal deformation are possible due to advances in measurements and modeling capabilities. This study integrates various data to constrain a finite element model of stress and strain in the vicinity of the 1992 Landers earthquake and the 1999 Hector Mine earthquake. The geometry of the model is designed to incorporate the Southern California Earthquake Center (SCEC), Community Fault Model (CFM) to define fault geometry. The Hector Mine fault is represented by a single surface that follows the trace of the Hector Mine fault, is vertical and has variable depth. The fault associated with the Landers earthquake is a set of seven surfaces that capture the geometry of the splays and echelon offsets of the fault. A three dimensional finite element mesh of tetrahedral elements is built that closely maintains the geometry of these fault surfaces. The spatially variable coseismic slip on faults is prescribed based on an inversion of geodetic (Synthetic Aperture Radar and Global Positioning System) data. Time integration of stress and strain is modeled with the finite element code Pylith. As a first step the methodology of incorporating all these data is described. Results of the time history of the stress and strain transfer between 1992 and 1999 are analyzed as well as the time history of deformation from 1999 to the present.

  8. Current integrated cardiothoracic surgery residents: a Thoracic Surgery Residents Association survey.

    PubMed

    Tchantchaleishvili, Vakhtang; LaPar, Damien J; Stephens, Elizabeth H; Berfield, Kathleen S; Odell, David D; DeNino, Walter F

    2015-03-01

    After approval by the Thoracic Surgery Residency Review Committee in 2007, 6-year integrated cardiothoracic surgery (I-6) residency programs have gained in popularity. We sought to assess and objectively quantify the level of satisfaction I-6 residents have with their training and to identify areas of improvement for future curriculum development. A completely anonymous, electronic survey was created by the Thoracic Surgery Residents Association that asked the responders to provide demographic information, specialty interest, and lifestyle priorities, and to rate their experience and satisfaction with I-6 residency. The survey was distributed nationwide to all residents in I-6 programs approved by the Accreditation Council for Graduate Medical Education. Of a total of 88 eligible I-6 residents, 49 completed the survey (55.7%). Career choice satisfaction was high (75.5%), as was overall satisfaction with integrated training (83.7%). The majority (77.6%) were interested in cardiac surgery. Overall, the responders reported sufficient time for life outside of the hospital (57.1%), but experienced conflicts between work obligations and personal life at least sometimes (75.5%). Early exposure to cardiothoracic surgery was reported as the dominant advantage of the I-6 model, whereas variable curriculum structure and unclear expectations along with poor integration with general surgery training ranked highest among perceived disadvantages. Current I-6 residents are largely satisfied with the integrated training model and report a reasonable work/life balance. The focused nature of training is the primary perceived advantage of the integrated pathway. Curriculum variability and poor integration with general surgery training are identified by residents as primary areas of concern. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Physical and property victimization behind bars: a multilevel examination.

    PubMed

    Lahm, Karen F

    2009-06-01

    The majority of the extant literature on inmate victimization considers only one level of analysis, thus ignoring the interaction effects between inmate- and prison-level variables. To extend this literature, multilevel modeling techniques were used to analyze self-report data from more than 1,000 inmates and 30 prisons in Kentucky, Tennessee, and Ohio. Results revealed that demographic variables were strong predictors of physical victimization (i.e., race and assaultive behavior). Also, security level had a contextual direct effect on physical victimization. Property victimization was best explained with an integrated model including inmate (i.e., race, assaultive behavior, prior education, prior employment, and time served), contextual (i.e., security level and proportion non-White), and micro-macro interaction variables (i.e., Race x Security Level). Policy implications and suggestions for future research are discussed.

  10. Seasonal variability of light availability and utilization in the Sargasso Sea

    NASA Technical Reports Server (NTRS)

    Siegel, David A.; Michaels, Anthony F.; Sorensen, Jens C.; O'Brein, Margaret C.; Hammer, Melodie A.

    1995-01-01

A 2 year time series of optical, biogeochemical, and physical parameters, taken near the island of Bermuda, is used to evaluate the sources of temporal variability in light availability and utilization in the Sargasso Sea. Integrated assessments of light availability are made by examining the depth of constant percent incident photosynthetically available radiation (% PAR) isolumes. To first order, changes in the depth of %PAR isolumes were caused by physical processes: deep convective mixing in the winter, which led to the spring bloom and concurrent shallowing of %PAR depths, and the occurrence of anomalous thermohaline water masses during the summer and fall seasons. Spectral light availability variations are assessed using determinations of diffuse attenuation coefficient spectra, which illustrate a significant seasonal cycle in colored detrital particulate and/or dissolved materials that is unrelated to changes in chlorophyll pigment concentrations. Temporal variations in the photosynthetic light utilization index Psi are used to assess vertically integrated light utilization variations. Values of Psi are highly variable and show no apparent seasonal pattern, which indicates that Psi is not simply a 'biogeochemical constant.' Determinations of in situ primary production rates and daily mean PAR fluxes are used to diagnose the relative role of light limitation in determining vertically integrated rates of primary production, integral PP. The mean depth of the light-saturated zone (the vertical region where the daily mean PAR flux was greater than or equal to the saturation irradiance) is only approximately 40 m, although more than one half of integral PP occurred within this zone. Production model results illustrate that accurate predictions of integral PP are dependent upon rates of light-saturated production rather than upon indices of light limitation.
It seems unlikely that significant improvements in simple primary production models will come from the partitioning of the Earth's seas into biogeochemical provinces.
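For reference, the depth of a %PAR isolume follows directly from exponential attenuation of downwelling irradiance. A small sketch under the simplifying assumption of a depth-independent attenuation coefficient; the illustrative kd value is hypothetical, not a measurement from this study:

```python
import math

def isolume_depth(percent_par, kd):
    """Depth (m) at which downwelling PAR falls to percent_par % of its
    surface value, assuming one bulk diffuse attenuation coefficient kd (1/m):
    E(z) = E0 * exp(-kd * z)  =>  z = ln(100 / percent_par) / kd"""
    return math.log(100.0 / percent_par) / kd

# kd ~ 0.04 1/m is a hypothetical clear oligotrophic-water value
for p in (10.0, 1.0, 0.1):
    print(f"{p:5.1f}% PAR isolume at {isolume_depth(p, 0.04):6.1f} m")
```

Seasonal shoaling or deepening of the isolumes then maps one-to-one onto changes in kd (or onto mixing that changes the constituents controlling kd).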

  11. Technology integration performance assessment using lean principles in health care.

    PubMed

    Rico, Florentino; Yalcin, Ali; Eikman, Edward A

    2015-01-01

    This study assesses the impact of an automated infusion system (AIS) integration at a positron emission tomography (PET) center based on "lean thinking" principles. The authors propose a systematic measurement system that evaluates improvement in terms of the "8 wastes." This adaptation to the health care context consisted of performance measurement before and after integration of AIS in terms of time, utilization of resources, amount of materials wasted/saved, system variability, distances traveled, and worker strain. The authors' observations indicate that AIS stands to be very effective in a busy PET department, such as the one in Moffitt Cancer Center, owing to its accuracy, pace, and reliability, especially after the necessary adjustments are made to reduce or eliminate the source of errors. This integration must be accompanied by a process reengineering exercise to realize the full potential of AIS in reducing waste and improving patient care and worker satisfaction. © The Author(s) 2014.

  12. The System of Secondary Periodicities and Resonances Based on β Lyrae Magnetic Field

    NASA Astrophysics Data System (ADS)

    Skulsky, M. Yu.

An original, integral, internally consistent and interconnected magnetohydrodynamical system of periodicities and resonances, covering their long-time variabilities, is developed. The study is based upon three different observed secondary periods in the β Lyrae system, taking into account geometrical features of the nonstandard magnetic field of the mass-losing star, as well as the asynchronism of the orbital and rotational periods.

  13. INM Integrated Noise Model Version 2. Programmer’s Guide

    DTIC Science & Technology

    1979-09-01

cost, turnaround time, and system-dependent limitations. 3.2 CONVERSION PROBLEMS (Item No., Description, Category): 1. BLOCK DATA Initialization (IBM Restricted); 2. Boolean Operations (Differences); 3. Call Statement Parameters (Extensions); 4. Data Initialization (IBM Restricted); 5. ENTRY (Differences); 6. EQUIVALENCE (Machine Dependent); 7. Format: A (CDC Extension); 8. Hollerith Strings (IBM Restricted); 9. Hollerith Variables (IBM Restricted); 10. Identifier Names (CDC Extension)

  14. Distribution Planning: An Integration of Constraint Satisfaction & Heuristic Search Techniques

    DTIC Science & Technology

    1990-01-01

Proceedings of the Symposium on Artificial Intelligence in Military Logistics, Arlington, VA: American Defense Preparedness Assoc. pp. 177-182...dynamic changes, too many variables, and lack of planning time. The Human Engineering Laboratory (HEL) is developing artificial intelligence (AI...first attempt. The field of artificial intelligence includes a variety of knowledge-based approaches. Most widely known are Expert Systems, that are

  15. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques.
The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
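The event-triggering idea can be sketched independently of the paper's specific estimators: the expensive SE solve runs only when the innovation signals considerable novelty in the system state. The update rule, threshold, and data below are illustrative stand-ins, not the authors' formulation.

```python
def event_triggered_estimate(measurements, threshold):
    """Sketch of event-triggered state estimation: the estimator is invoked
    only when the innovation |z - x_hat| signals considerable novelty;
    otherwise the previous estimate is simply held. The averaging update is
    a stand-in for a full (expensive) SE solve."""
    x_hat = measurements[0]
    estimates, solves = [x_hat], 0
    for z in measurements[1:]:
        if abs(z - x_hat) > threshold:   # event: enough novelty to re-estimate
            x_hat = 0.5 * (x_hat + z)
            solves += 1
        estimates.append(x_hat)
    return estimates, solves

# Hypothetical bus measurement with a wind-power "spike" partway through
z = [1.0, 1.02, 0.98, 1.01, 1.0, 3.0, 3.1, 3.0, 2.95, 3.05]
est, n_solves = event_triggered_estimate(z, threshold=0.5)
print(n_solves, "estimator calls instead of", len(z) - 1)
```

During the flat stretches no solve is triggered at all, which is the source of the computational savings the abstract reports; the threshold trades estimation lag against solver load.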

  16. Distributed robust finite-time nonlinear consensus protocols for multi-agent systems

    NASA Astrophysics Data System (ADS)

    Zuo, Zongyu; Tie, Lin

    2016-04-01

This paper investigates the robust finite-time consensus problem of multi-agent systems in networks with undirected topology. Global nonlinear consensus protocols augmented with a variable structure are constructed with the aid of Lyapunov functions for each single-integrator agent dynamics in the presence of external disturbances. In particular, it is shown that the finite settling time of the proposed general framework for robust consensus design is upper bounded for any initial condition. This makes it possible to design protocols and estimate the convergence time offline for a multi-agent team with a given undirected information flow. Finally, simulation results are presented to demonstrate the performance and effectiveness of our finite-time protocols.
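A standard ingredient of such finite-time protocols is the signed-power nonlinearity sig(y)^alpha = |y|^alpha * sign(y) with 0 < alpha < 1. The sketch below simulates a disturbance-free single-integrator network under this kind of protocol on a small undirected graph; it illustrates the mechanism only and is not the paper's exact protocol.

```python
def sig(y, alpha):
    """Signed power |y|**alpha * sign(y), the nonlinearity that yields
    finite (rather than merely asymptotic) settling time for 0 < alpha < 1."""
    return abs(y) ** alpha * (1.0 if y > 0 else -1.0 if y < 0 else 0.0)

def finite_time_consensus(x0, edges, alpha=0.5, dt=0.01, steps=3000):
    """Euler simulation of single-integrator agents x_i' = u_i with
    u_i = sum over neighbors j of sig(x_j - x_i, alpha), on an undirected
    graph given as a list of edges (no external disturbances here)."""
    x = list(x0)
    for _ in range(steps):
        u = [0.0] * len(x)
        for i, j in edges:              # undirected: act on both endpoints
            d = sig(x[j] - x[i], alpha)
            u[i] += d
            u[j] -= d                   # antisymmetric, so the mean is conserved
        x = [xi + dt * ui for xi, ui in zip(x, u)]
    return x

# Path graph 0-1-2-3 with spread-out initial states; average is 2.5
x_final = finite_time_consensus([0.0, 1.0, 3.0, 6.0], [(0, 1), (1, 2), (2, 3)])
print(x_final)  # all four states settle near the initial average 2.5
```

Because the protocol is antisymmetric across each edge, the network average is invariant, and the fractional power accelerates convergence once disagreements become small.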

  17. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
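The optimal-integration benchmark implied here is the standard minimum-variance (maximum-likelihood) cue-combination rule, in which each cue is weighted by its inverse variance and the combined variance falls below that of either cue alone. A small sketch with made-up estimates and variances:

```python
def optimal_integration(est1, var1, est2, var2):
    """Minimum-variance combination of two independent, unbiased cues:
    each cue is weighted by its inverse variance (reliability), and the
    combined variance is smaller than either single-cue variance."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    combined = w1 * est1 + (1.0 - w1) * est2
    combined_var = (var1 * var2) / (var1 + var2)
    return combined, combined_var

# Hypothetical numbers: a vague visual cue (variance 4) and a sharp
# auditory cue (variance 1) about the same target position
est, var = optimal_integration(10.0, 4.0, 12.0, 1.0)
print(round(est, 3), round(var, 3))  # 11.6 0.8
```

The study's finding is that the audiovisual condition did not show this predicted variance reduction relative to the auditory cue alone.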

  18. Brain noise is task dependent and region specific.

    PubMed

    Misić, Bratislav; Mills, Travis; Taylor, Margot J; McIntosh, Anthony R

    2010-11-01

The emerging organization of anatomical and functional connections during human brain development is thought to facilitate global integration of information. Recent empirical and computational studies have shown that this enhanced capacity for information processing enables a diversified dynamic repertoire that manifests in neural activity as irregularity and noise. However, transient functional networks unfold over multiple time scales, and the embedding of a particular region depends not only on development, but also on the manner in which sensory and cognitive systems are engaged. Here we show that noise is a facet of neural activity that is also sensitive to the task context and is highly region specific. Children (6-16 yr) and adults (20-41 yr) performed a one-back face recognition task with inverted and upright faces. Neuromagnetic activity was estimated at several hundred sources in the brain by applying a beamforming technique to the magnetoencephalogram (MEG). During development, neural activity became more variable across the whole brain, with most robust increases in medial parietal regions, such as the precuneus and posterior cingulate cortex. For young children and adults, activity evoked by upright faces was more variable and noisy compared with inverted faces, and this effect was reliable only in the right fusiform gyrus. These results are consistent with the notion that upright faces engender a variety of integrative neural computations, such as the relations among facial features and their holistic constitution. This study shows that transient changes in functional integration modulated by task demand are evident in the variability of regional neural activity.

  19. Studies of Hot Photoluminescence in Plasmonically Coupled Silicon via Variable Energy Excitation and Temperature-Dependent Spectroscopy

    PubMed Central

    2015-01-01

    By integrating silicon nanowires (∼150 nm diameter, 20 μm length) with an Ω-shaped plasmonic nanocavity, we are able to generate broadband visible luminescence, which is induced by high order hybrid nanocavity-surface plasmon modes. The nature of this super bandgap emission is explored via photoluminescence spectroscopy studies performed with variable laser excitation energies (1.959 to 2.708 eV) and finite difference time domain simulations. Furthermore, temperature-dependent photoluminescence spectroscopy shows that the observed emission corresponds to radiative recombination of unthermalized (hot) carriers as opposed to a resonant Raman process. PMID:25120156

  20. Integrating High Levels of Variable Renewable Energy into Electric Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin D.

    As more variable renewable energy is integrated into electric power systems, there are a range of challenges and solutions to accommodating very high penetration levels. This presentation highlights some of the recent research in this area.

  1. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable, and the results can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional, we established a set of sufficient conditions in the form of linear matrix inequalities that guarantee the finite-time stability of the system concerned. By employing Jensen's inequality, the free-weighting matrix method, and Wirtinger's double integral inequality, the proposed conditions are derived, and two numerical examples are presented to show the effectiveness of the developed techniques.

  2. Orbit and uncertainty propagation: a comparison of Gauss-Legendre-, Dormand-Prince-, and Chebyshev-Picard-based approaches

    NASA Astrophysics Data System (ADS)

    Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.

    2014-01-01

    We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
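As background to the IRK machinery, a single step of the 2-stage Gauss-Legendre method can be sketched by solving the implicit stage equations with fixed-point iteration. Production propagators like VGL-IRK add adaptive step-size control and more sophisticated nonlinear solvers; this is only a minimal illustration on a toy oscillator.

```python
import math

def gl2_step(f, t, y, h, iters=20):
    """One step of the 2-stage Gauss-Legendre implicit Runge-Kutta scheme
    (classical order 4), with the implicit stage equations solved by plain
    fixed-point iteration (adequate when h times the Lipschitz constant
    of f is small)."""
    s3 = math.sqrt(3.0)
    a = [[0.25, 0.25 - s3 / 6.0],
         [0.25 + s3 / 6.0, 0.25]]
    b = [0.5, 0.5]
    c = [0.5 - s3 / 6.0, 0.5 + s3 / 6.0]
    k = [f(t, y), f(t, y)]  # crude initial guess for the stage derivatives
    for _ in range(iters):
        k = [f(t + c[i] * h,
               [yj + h * (a[i][0] * k[0][j] + a[i][1] * k[1][j])
                for j, yj in enumerate(y)])
             for i in range(2)]
    return [yj + h * (b[0] * k[0][j] + b[1] * k[1][j])
            for j, yj in enumerate(y)]

# Harmonic oscillator y'' = -y as a first-order system; exact period 2*pi
f = lambda t, y: [y[1], -y[0]]
y, n = [1.0, 0.0], 100
h = 2.0 * math.pi / n
for i in range(n):
    y = gl2_step(f, i * h, y, h)
print(y)  # after one full period, very close to the initial state [1, 0]
```

Gauss-Legendre schemes are also symplectic, which is part of why they behave well for long orbit propagation compared with explicit methods of the same order.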

  3. Connecting the surface to near-shore bottom waters in the California Current ecosystem: a study of Northern California interannual to decadal oceanographic variability

    NASA Astrophysics Data System (ADS)

    Fish, C.; Hill, T. M.; Davis, C. V.; Lipski, D.; Jahncke, J.

    2017-12-01

Elucidating both surface and bottom water ecosystem impacts of temperature change, acidification, and food web disruption are needed to understand anthropogenic processes in the ocean. The Applied California Current Ecosystem Studies (ACCESS) partnership surveys the California Current within the Greater Farallones and Cordell Bank National Marine Sanctuaries three times annually, sampling water column hydrography and discrete water samples from 0 m and 200 m depth at five stations along three primary transects. The transects span the continental shelf with stations as close as 13 km from the coastline to 65 km. This time series extends from 2004 to 2017, integrating information on climate, productivity, zooplankton abundance, oxygenation, and carbonate chemistry. We focus on the interpretation of the 2012-2017 carbonate chemistry data and present both long term trends over the duration of the time series as well as shorter term variability (e.g., ENSO, 'warm blob' conditions) to investigate the region's changing oceanographic conditions. For example, we document oscillations in carbonate chemistry, oxygenation, and foraminiferal abundance in concert with interannual oceanographic variability and seasonal (upwelling) cycles. We concentrate on results from near Cordell Bank that potentially impact deep sea coral ecosystems.

  4. Connections between residence time distributions and watershed characteristics across the continental US

    NASA Astrophysics Data System (ADS)

    Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.

    2016-12-01

Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin averaged hydraulic conductivity and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macro climate as additional controls on groundwater configuration and residence times.
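A simple diagnostic for power-law residence-time distributions is the slope of log density versus log age over logarithmically spaced bins. The sketch below applies it to synthetic Pareto-distributed ages generated by inverse-transform sampling; the exponent and sample are illustrative and are not the study's particle-tracking data.

```python
import math
import random

def powerlaw_slope(ages, nbins=12, t_min=1.0, t_max=100.0):
    """Least-squares slope of log(density) vs log(age) over log-spaced bins,
    a simple diagnostic for power-law residence-time distributions."""
    edges = [t_min * (t_max / t_min) ** (i / nbins) for i in range(nbins + 1)]
    xs, ys = [], []
    for lo, hi in zip(edges, edges[1:]):
        n = sum(1 for t in ages if lo <= t < hi)
        if n > 0:
            xs.append(math.log(math.sqrt(lo * hi)))  # log of bin midpoint
            ys.append(math.log(n / (hi - lo)))       # log of density estimate
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic Pareto-distributed ages with density ~ t^(-2.5),
# drawn by inverse-transform sampling
rng = random.Random(0)
ages = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(50_000)]
print(round(powerlaw_slope(ages), 2))  # should land near -2.5
```

With log-spaced bins the geometric-midpoint bias is a constant factor for a pure power law, so it shifts the intercept but not the fitted slope.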

  5. How to Integrate Variable Power Source into a Power Grid

    NASA Astrophysics Data System (ADS)

    Asano, Hiroshi

This paper discusses how to integrate variable power sources such as wind power and photovoltaic generation into a power grid. Intermittent renewable generation is expected to penetrate widely to make the power supply system less carbon intensive, but it causes voltage control problems in the distribution system and supply-demand imbalance problems in the whole power system. Cooperative control of customers' energy storage equipment such as water heaters with storage tanks for reducing inverse power flow from roof-top PV systems, operation techniques using battery systems and solar radiation forecasts for stabilizing the output of variable generation, smart charging of plug-in hybrid electric vehicles for load frequency control (LFC), and other methods to integrate variable power sources while improving social benefits are surveyed.

  6. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    PubMed Central

    Bi, Zedong; Zhou, Changsong

    2016-01-01

Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods onto spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from asynchronous state to weak synchronous state and then to synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to evidently change a specific pattern statistics; and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis, when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important to determine the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor to cause efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy). PMID:27555816

  7. Role of Möbius constants and scattering functions in Cachazo-He-Yuan scalar amplitudes

    NASA Astrophysics Data System (ADS)

    Lam, C. S.; Yao, York-Peng

    2016-05-01

    The integrations over the Möbius variables leading to the Cachazo-He-Yuan double-color n-point massless scalar amplitude are carried out one integral at a time. Möbius invariance dictates that the final amplitude be independent of the three Möbius constants σr, σs, σt, but their choice affects the integrations and the intermediate results. The effect of the Möbius constants, which are held finite but otherwise arbitrary, the two sets of colors, and the scattering functions on each integration is investigated. A general systematic way to carry out the n-3 integrations is explained, each exposing one of the n-3 propagators of a single Feynman diagram. Two detailed examples are shown to illustrate the procedure: one a five-point amplitude, the other a nine-point amplitude. Our procedure does not generate intermediate spurious poles, in contrast to the common choice of Möbius constants at 0, 1, and ∞.

  8. Λ scattering equations

    NASA Astrophysics Data System (ADS)

    Gomez, Humberto

    2016-06-01

    The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.

  9. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as digital video. Differential vector quantization is used to preserve edge features, and a new adaptive algorithm, known as frequency-sensitive competitive learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior robustness to channel bit errors compared with methods that use variable-length codes.
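
    A minimal sketch of frequency-sensitive competitive learning for codebook design (a hedged reconstruction of the general technique; the learning rate, penalty form, and initialization are illustrative assumptions, not the paper's ASIC implementation):

```python
import numpy as np

def fscl_train(vectors, k=4, lr=0.1, seed=0):
    """Build a VQ codebook with frequency-sensitive competitive learning:
    each codeword's distortion is scaled by its win count, so frequent
    winners are penalized and no codeword goes unused."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    wins = np.ones(k)
    for x in vectors:
        d = wins * np.sum((codebook - x) ** 2, axis=1)  # penalized distortion
        j = int(np.argmin(d))
        codebook[j] += lr * (x - codebook[j])  # move the winner toward the input
        wins[j] += 1
    return codebook

def quantize(x, codebook):
    """Plain nearest-codeword quantization (no win-count penalty at encode time)."""
    return int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
```

    The win-count penalty is what distinguishes this scheme from plain competitive learning: it prevents "dead" codebook entries that never win and thus waste codebook capacity.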

  10. Spinor description of D = 5 massless low-spin gauge fields

    NASA Astrophysics Data System (ADS)

    Uvarov, D. V.

    2016-07-01

    A spinor description for the curvatures of D = 5 Yang-Mills, Rarita-Schwinger and gravitational fields is elaborated. Restrictions imposed on the curvature spinors by the dynamical equations and Bianchi identities are analyzed. In the absence of sources, symmetric curvature spinors with 2s indices obey first-order equations that in the linearized limit reduce to Dirac-type equations for massless free fields. These equations allow for a higher-spin generalization, similarly to the 4d case. Their solution in the form of an integral over Lorentz-harmonic variables parametrizing the coset manifold {SO}(1,4)/({SO}(1,1)× {ISO}(3)), isomorphic to the three-sphere, is considered. A superparticle model that contains such Lorentz harmonics as dynamical variables, as well as harmonics parametrizing the two-sphere {SU}(2)/U(1), is proposed. The states in its spectrum are given by functions on S³ that, upon integrating over the Lorentz harmonics, reproduce on-shell symmetric curvature spinors for various supermultiplets of D = 5 space-time supersymmetry.

  11. Delay compensation in integrated communication and control systems. I - Conceptual development and analysis

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
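
    Since the compensator is built around a deterministic state estimator with state-variable feedback, a standard discrete-time Luenberger observer update may help fix ideas (a generic textbook sketch, not the paper's delay-compensation algorithm; the system matrices below are arbitrary):

```python
import numpy as np

def observer_step(x_hat, u, y, A, B, C, L):
    """One Luenberger observer update: propagate the model and correct by
    the output innovation, x_hat' = A x_hat + B u + L (y - C x_hat).
    The estimation error e = x - x_hat then evolves as e' = (A - L C) e,
    so choosing L to make (A - L C) stable drives x_hat to the true state."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)
```

    The separation principle mentioned in the abstract is the statement that this observer gain L and the state-feedback gain can be designed independently.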

  12. A General Reversible Hereditary Constitutive Model. Part 1; Theoretical Developments

    NASA Technical Reports Server (NTRS)

    Saleeb, A. F.; Arnold, S. M.

    1997-01-01

    Using an internal-variable formalism as a starting point, we describe the viscoelastic extension of a previously-developed viscoplasticity formulation of the complete potential structure type. It is mainly motivated by experimental evidence for the presence of rate/time effects in the so-called quasilinear, reversible, material response range. Several possible generalizations are described, in the general format of hereditary-integral representations for non-equilibrium, stress-type, state variables, both for isotropic as well as anisotropic materials. In particular, thorough discussions are given on the important issues of thermodynamic admissibility requirements for such general descriptions, resulting in a set of explicit mathematical constraints on the associated kernel (relaxation and creep compliance) functions. In addition, a number of explicit, integrated forms are derived, under stress and strain control to facilitate the parametric and qualitative response characteristic studies reported here, as well as to help identify critical factors in the actual experimental characterizations from test data that will be reported in Part II.

  13. Sampling design for an integrated socioeconomic and ecological survey by using satellite remote sensing and ordination

    PubMed Central

    Binford, Michael W.; Lee, Tae Jeong; Townsend, Robert M.

    2004-01-01

    Environmental variability is an important risk factor in rural agricultural communities. Testing models requires empirical sampling that generates data that are representative in both economic and ecological domains. Detrended correspondence analysis of satellite remote sensing data was used to design an effective low-cost sampling protocol for a field study to create an integrated socioeconomic and ecological database when no prior information on the ecology of the survey area existed. We stratified the sample for the selection of tambons from various preselected provinces in Thailand based on factor analysis of spectral land-cover classes derived from satellite data. We conducted the survey for the sampled villages in the chosen tambons. The resulting data capture interesting variations in soil productivity and in the timing of good and bad years, which a purely random sample would likely have missed. Thus, this database will allow tests of hypotheses concerning the effect of credit on productivity, the sharing of idiosyncratic risks, and the economic influence of environmental variability. PMID:15254298
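
    The stratification step can be pictured with a small sketch (variable names are hypothetical; in the actual study the strata came from factor analysis of spectral land-cover classes):

```python
import numpy as np

def stratified_sample(strata, n_per_stratum=2, seed=0):
    """Draw a fixed number of sampling units from each stratum, so every
    land-cover class is represented regardless of its overall frequency."""
    rng = np.random.default_rng(seed)
    chosen = []
    for s in np.unique(strata):
        members = np.flatnonzero(strata == s)
        take = min(n_per_stratum, len(members))
        chosen.extend(rng.choice(members, take, replace=False).tolist())
    return sorted(chosen)
```

    Compared with simple random sampling, this guarantees coverage of rare strata, which is why such a design can capture variation that a purely random sample would likely miss.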

  14. Mathematical Methods for Physics and Engineering Third Edition Paperback Set

    NASA Astrophysics Data System (ADS)

    Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.

    2006-06-01

    Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.

  15. The Riemann-Lanczos equations in general relativity and their integrability

    NASA Astrophysics Data System (ADS)

    Dolan, P.; Gerber, A.

    2008-06-01

    The aim of this paper is to examine the Riemann-Lanczos equations and how they can be made integrable. They consist of a system of linear first-order partial differential equations that arise in general relativity, whereby the Riemann curvature tensor is generated by an unknown third-order tensor potential field called the Lanczos tensor. Our approach is based on the theory of jet bundles, where all field variables and all their partial derivatives of all relevant orders are treated as independent variables alongside the local manifold coordinates (x^a) on the given space-time manifold M. This approach is adopted in (a) Cartan's method of exterior differential systems, (b) Vessiot's dual method using vector field systems, and (c) the Janet-Riquier theory of systems of partial differential equations. All three methods allow for the most general situations under which integrability conditions can be found. They give equivalent results, namely, that involutivity is always achieved at all generic points of the jet manifold over M after a finite number of prolongations. Two alternative methods that appear in the general relativity literature to find integrability conditions for the Riemann-Lanczos equations generate new partial differential equations for the Lanczos potential that introduce a source term, which is nonlinear in the components of the Riemann tensor. We show that such sources do not occur when any of methods (a), (b), or (c) is used.

  16. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  17. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
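
    The multiple time step idea underlying these integrators can be illustrated with a bare-bones r-RESPA step (a generic sketch of the force splitting only; the isokinetic Nosé-Hoover machinery that lifts the resonance barrier is not reproduced here):

```python
def respa_step(x, v, f_fast, f_slow, dt, n_inner, m=1.0):
    """One multiple-time-step (r-RESPA) integration step: the slow force
    kicks at the outer step dt, while the fast force is integrated with
    velocity Verlet at the smaller inner step dt / n_inner."""
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner loop on the fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    return x, v
```

    In plain r-RESPA the outer step cannot exceed roughly half the fastest period before resonance artifacts appear; the isokinetic-constraint schemes discussed in the abstract are designed precisely to remove that limit.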

  18. Transition of NOAA's GPS-Met Data Acquisition and Processing System to the Commercial Sector

    NASA Astrophysics Data System (ADS)

    Jackson, M. E.; Holub, K.; Callahan, W.; Blatt, S.

    2014-12-01

    In April of 2014, NOAA/OAR/ESRL Global Systems Division (GSD) and Trimble, in collaboration with Earth Networks, Inc. (ENI), signed a Cooperative Research and Development Agreement (CRADA) to transfer the existing NOAA GPS-Met Data Acquisition and Processing System (GPS-Met DAPS) technology to a commercial Trimble/ENI partnership. NOAA's GPS-Met DAPS is currently operated in a pseudo-operational mode but has proven highly reliable, running at over 95% uptime. The DAPS uses the GAMIT software to ingest dual-frequency carrier-phase GPS/GNSS observations and ancillary information such as real-time satellite orbits to estimate the zenith-scaled tropospheric signal delays (ZTD) and, where surface MET data are available, retrieve integrated precipitable water vapor (PWV). The NOAA data and products are made available to end users in near real time. The Trimble/ENI partnership will use the Trimble Pivot™ software with the Atmosphere App to calculate zenith tropospheric delay (ZTD), tropospheric slant delay, and integrated precipitable water vapor (PWV). Evaluation of the Trimble software is underway, starting with a comparison of ZTD and PWV values determined from GPS stations located near NOAA Radiosonde Observation (Upper-Air Observation) launch sites. A success metric was established that requires Trimble's PWV estimates to match ESRL/GSD's to within 1.5 mm 95% of the time, which corresponds to a ZTD uncertainty of less than 10 mm 95% of the time. Initial results indicate that the Trimble/ENI data meet and exceed the ZTD metric, but for some stations PWV estimates are out of specification. These discrepancies are primarily due to how offsets between MET and GPS stations are handled and are easily resolved. Additional test networks are proposed that include low-terrain/high-moisture-variability stations, high-terrain/low-moisture-variability stations, and high-terrain/high-moisture-variability stations. We will present results from further testing along with a timeline for the transition of the GPS-Met DAPS to an operational commercial service.
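
    The stated success metric is simple to express operationally; a hedged sketch (an illustrative helper of our own, not part of the CRADA tooling):

```python
import numpy as np

def meets_metric(pwv_a, pwv_b, tol_mm=1.5, required_fraction=0.95):
    """True if two PWV time series agree to within tol_mm at least the
    required fraction of the time (the stated metric: 1.5 mm, 95%)."""
    agree = np.abs(np.asarray(pwv_a) - np.asarray(pwv_b)) <= tol_mm
    return bool(agree.mean() >= required_fraction)
```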

  19. An analysis of the daily precipitation variability in the Himalayan orogen using a statistical parameterisation and its potential in driving landscape evolution models with stochastic climatic forcing

    NASA Astrophysics Data System (ADS)

    Deal, Eric; Braun, Jean

    2015-04-01

    A current challenge in landscape evolution modelling is to integrate realistic precipitation patterns and behaviour into long-term fluvial erosion models. The effect of precipitation on fluvial erosion can be subtle as well as nonlinear, implying that changes in climate (e.g. precipitation magnitude or storminess) may have unexpected outcomes in terms of erosion rates. For example, Tucker and Bras (2000) show theoretically that changes in the variability of precipitation (storminess) alone can influence erosion rate across a landscape. To complicate the situation further, topography, ultimately driven by tectonic uplift but shaped by erosion, has a major influence on the distribution and style of precipitation. Therefore, in order to untangle the coupling between climate, erosion and tectonics in an actively uplifting orogen where fluvial erosion is dominant, it is important to understand how the 'rain dial' used in a landscape evolution model (LEM) corresponds to real precipitation patterns. One issue with the parameterisation of rainfall for use in an LEM is the difference between the timescales for precipitation (≤ 1 year) and landscape evolution (> 10³ years). As a result, precipitation patterns must be upscaled before being integrated into a model. The relevant question then becomes: what is the most appropriate measure of precipitation on a millennial timescale? Previous work (Tucker and Bras, 2000; Lague, 2005) has shown that precipitation can be properly upscaled by taking into account its variable nature, along with its average magnitude. This captures the relative size and frequency of extreme events, ensuring a more accurate characterisation of the integrated effects of precipitation on erosion over long periods of time. 
In light of this work, we present a statistical parameterisation that accurately models the mean and daily variability of ground-based (APHRODITE) and remotely sensed (TRMM) precipitation data in the Himalayan orogen with only a few parameters. We also demonstrate the spatial and temporal scales over which this parameterisation applies and is stable. Applying the parameterisation over the Himalayan orogen reveals large-scale strike-perpendicular gradients in precipitation variability in addition to the long-observed strike-perpendicular gradient in precipitation magnitude. This observation, combined with the theoretical work mentioned above, suggests that variability is an integral part of the interaction between climate and erosion. References: Tucker, G. E., & Bras, R. L. (2000). A stochastic approach to modeling the role of rainfall variability in drainage basin evolution. Water Resources Research, 36(7), 1953-1964. doi:10.1029/2000WR900065. Lague, D. (2005). Discharge, discharge variability, and the bedrock channel profile. Journal of Geophysical Research, 110(F4), F04006. doi:10.1029/2004JF000259.
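
    One common way to realize such a few-parameter statistical description (a hedged sketch; the paper's actual parameterisation may differ in form) is to moment-match a gamma distribution to wet-day precipitation, so that mean magnitude and daily variability enter as separate parameters:

```python
import numpy as np

def fit_gamma_params(daily_precip):
    """Moment-match a gamma distribution to wet-day totals: the shape k
    encodes variability (small k = stormier), the scale theta (mm/day)
    the magnitude; mean = k * theta, variance = k * theta**2."""
    wet = daily_precip[daily_precip > 0]
    mean, var = wet.mean(), wet.var()
    return mean**2 / var, var / mean  # (shape k, scale theta)

def sample_daily(k, theta, wet_fraction, n_days, seed=0):
    """Generate a synthetic daily series for stochastic climatic forcing."""
    rng = np.random.default_rng(seed)
    wet = rng.random(n_days) < wet_fraction
    return np.where(wet, rng.gamma(k, theta, n_days), 0.0)
```

    A generator of this kind lets an LEM be driven by stochastic daily forcing whose long-term statistics match the fitted mean and variability rather than by a single uniform rainfall rate.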

  20. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finn, John M., E-mail: finn@lanl.gov

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
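
    The implicit midpoint scheme discussed in this record is compact enough to sketch. For a linear rotation field it conserves the quadratic invariant x² + y² exactly (up to the solver tolerance), which is the kind of structure preservation at issue; the fixed-point solver below is an illustrative choice:

```python
import numpy as np

def implicit_midpoint(field, x, h, tol=1e-12, max_iter=50):
    """One implicit midpoint step x' = x + h * B((x + x') / 2),
    solved by fixed-point iteration from an explicit Euler predictor."""
    x_new = x + h * field(x)
    for _ in range(max_iter):
        x_next = x + h * field(0.5 * (x + x_new))
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

# A solenoidal test field: rigid rotation about the z-axis.
rotation = lambda p: np.array([-p[1], p[0], 0.0])
```

    Fixed-point iteration converges here because the map is a contraction for small enough h; stiffer fields would call for a Newton solve instead.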

  1. Student Solution Manual for Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.

  2. Essential Mathematical Methods for the Physical Sciences

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2011-02-01

    1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.

  3. Impact of temporal upscaling and chemical transport model horizontal resolution on reducing ozone exposure misclassification

    NASA Astrophysics Data System (ADS)

    Xu, Yadong; Serre, Marc L.; Reyes, Jeanette M.; Vizuete, William

    2017-10-01

    We have developed a Bayesian Maximum Entropy (BME) framework that integrates observations from a surface monitoring network and predictions from a Chemical Transport Model (CTM) to create improved exposure estimates that can be resolved into any spatial and temporal resolution. The flexibility of the framework allows for input of data in any choice of time scales and CTM predictions of any spatial resolution, with varying associated degrees of estimation error and cost in terms of implementation and computation. This study quantifies the impact of these choices on exposure estimation error by first comparing estimation errors when BME relied on ozone concentration data either as an hourly average, the daily maximum 8-h average (DM8A), or the daily 24-h average (D24A). Our analysis found that the use of DM8A and D24A data, although less computationally intensive, reduced estimation error more when compared to the use of hourly data. This was primarily due to the poorer CTM performance in the hourly averaged ozone predictions. Our second analysis compared spatial variability and estimation errors when BME relied on CTM predictions with a grid cell resolution of 12 × 12 km² versus a coarser resolution of 36 × 36 km². Our analysis found that integrating the finer-resolution CTM predictions not only reduced estimation error but also increased the spatial variability in daily ozone estimates fivefold. This improvement was due to the improved spatial gradients and model performance found in the finer-resolved CTM simulation. The integration of observational and model predictions that is permitted in a BME framework continues to be a powerful approach for improving exposure estimates of ambient air pollution. The results of this analysis demonstrate the importance of also understanding model performance variability and its implications for exposure error.
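
    The daily metrics compared in this study are straightforward to compute from hourly data. For instance, one common convention for the daily maximum 8-h average (exact window conventions vary by regulatory context):

```python
def daily_max_8h_avg(hourly):
    """DM8A from 24 hourly ozone values: the largest of the running
    8-hour means whose windows fit within the day (start hours 0-16)."""
    assert len(hourly) == 24
    return max(sum(hourly[i:i + 8]) / 8.0 for i in range(17))
```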

  4. Tensile Properties and Integrity of Clean Room and Low-Modulus Disposable Nitrile Gloves: A Comparison of Two Dissimilar Glove Types

    PubMed Central

    Phalen, Robert N.; Wong, Weng kee

    2012-01-01

    Background: The selection of disposable nitrile exam gloves is complicated by (i) the availability of several types or formulations, (ii) product variability, and (iii) an inability of common quality control tests to detect small holes in the fingers. Differences in polymer formulation (e.g. filler and plasticizer/oil content) and tensile properties are expected to account for much of the observed variability in performance. Objectives: This study evaluated the tensile properties and integrity (leak failure rates) of two glove choices assumed to contain different amounts of plasticizers/oils. The primary aims were to determine if the tensile properties and integrity differed and if associations existed among these factors. Additional physical and chemical properties were evaluated. Methods: Six clean room and five low-modulus products were evaluated using the American Society for Testing and Materials Method D412 and a modified water-leak test to detect holes capable of passing a virus or chemical agent. Results: Significant differences in the leak failure rates and tensile properties existed between the two glove types (P ≤ 0.05). The clean room gloves were about three times more likely to have leak failures (chi-square; P = 0.001). No correlation was observed between leak failures and tensile properties. Solvent extract, an indication of added plasticizer/oil, was not associated with leak failures. However, gloves with a maximum modulus <4 MPa or area density (AD) <11 g cm−2 were about four times less likely to leak. Conclusions: On average, the low-modulus gloves were a better choice for protection against aqueous chemical or biological penetration. The observed variability between glove products indicated that glove selection cannot rely solely on glove type or manufacturer labeling. Measures of modulus and AD may aid in the selection process, in contrast with common measures of tensile strength and elongation at break. PMID:22201179

  5. Tensile properties and integrity of clean room and low-modulus disposable nitrile gloves: a comparison of two dissimilar glove types.

    PubMed

    Phalen, Robert N; Wong, Weng Kee

    2012-05-01

    The selection of disposable nitrile exam gloves is complicated by (i) the availability of several types or formulations, (ii) product variability, and (iii) an inability of common quality control tests to detect small holes in the fingers. Differences in polymer formulation (e.g. filler and plasticizer/oil content) and tensile properties are expected to account for much of the observed variability in performance. This study evaluated the tensile properties and integrity (leak failure rates) of two glove choices assumed to contain different amounts of plasticizers/oils. The primary aims were to determine if the tensile properties and integrity differed and if associations existed among these factors. Additional physical and chemical properties were evaluated. Six clean room and five low-modulus products were evaluated using the American Society for Testing and Materials Method D412 and a modified water-leak test to detect holes capable of passing a virus or chemical agent. Significant differences in the leak failure rates and tensile properties existed between the two glove types (P ≤ 0.05). The clean room gloves were about three times more likely to have leak failures (chi-square; P = 0.001). No correlation was observed between leak failures and tensile properties. Solvent extract, an indication of added plasticizer/oil, was not associated with leak failures. However, gloves with a maximum modulus <4 MPa or area density (AD) <11 g cm−2 were about four times less likely to leak. On average, the low-modulus gloves were a better choice for protection against aqueous chemical or biological penetration. The observed variability between glove products indicated that glove selection cannot rely solely on glove type or manufacturer labeling. Measures of modulus and AD may aid in the selection process, in contrast with common measures of tensile strength and elongation at break.

  6. Bridging the gap between PAT concepts and implementation: An integrated software platform for fermentation.

    PubMed

    Chopda, Viki R; Gomes, James; Rathore, Anurag S

    2016-01-01

    Bioreactor control significantly impacts both the amount and quality of the product being manufactured. The complexity of the control strategy that is implemented increases with reactor size, which may vary from thousands to tens of thousands of litres in commercial manufacturing. The Process Analytical Technology (PAT) initiative has highlighted the need for robust monitoring tools and effective control schemes that are capable of taking real-time information about the critical quality attributes (CQA) and the critical process parameters (CPP) and executing an immediate response as soon as a deviation occurs. However, the limited flexibility that present commercial software packages offer creates a hurdle. Visual programming environments have gradually emerged as potential alternatives to the available text-based languages. This paper showcases the development of an integrated programme using a visual programming environment for a Sartorius BIOSTAT® B Plus 5L bioreactor through which various peripheral devices are interfaced. The proposed programme facilitates real-time access to data and allows for execution of control actions to follow the desired trajectory. Major benefits of such an integrated software system include: (i) improved real-time monitoring and control; (ii) reduced variability; (iii) improved performance; (iv) reduced operator-training time; (v) enhanced knowledge management; and (vi) easier PAT implementation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Uncertainty evaluation of a regional real-time system for rain-induced landslides

    NASA Astrophysics Data System (ADS)

    Kirschbaum, Dalia; Stanley, Thomas; Yatheendradas, Soni

    2015-04-01

    A new prototype regional model and evaluation framework have been developed over Central America and the Caribbean region using satellite-based information including precipitation estimates, modeled soil moisture, topography, and soils, as well as regionally available datasets such as road networks and distance to fault zones. The algorithm framework incorporates three static variables: a susceptibility map; a 24-hr rainfall triggering threshold; and an antecedent soil moisture variable threshold, which have been calibrated using historic landslide events. The thresholds are regionally heterogeneous and are based on the percentile distribution of the rainfall or antecedent moisture time series. A simple decision tree algorithm framework integrates all three variables with the rainfall and soil moisture time series and generates a landslide nowcast in real time based on the previous 24 hours over this region. This system has been evaluated using several available landslide inventories over the Central America and Caribbean region. Spatiotemporal uncertainty and evaluation metrics of the model are presented here based on available landslide reports. This work also presents a probabilistic representation of potential landslide activity over the region, which can be used to further refine and improve the real-time landslide hazard assessment system as well as better identify and characterize the uncertainties inherent in this type of regional approach. The landslide algorithm provides a flexible framework to improve hazard estimation and reduce uncertainty at any spatial and temporal scale.
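
    The decision-tree logic can be sketched as follows. The threshold values and function signature are illustrative assumptions, not the calibrated, regionally heterogeneous percentile thresholds used by the algorithm.

```python
def landslide_nowcast(susceptible, rain_24h_mm, antecedent_moisture,
                      rain_threshold_mm=80.0, moisture_threshold=0.35):
    """Issue a nowcast only when the static susceptibility flag, the 24-hr
    rainfall trigger, and the antecedent soil-moisture state all agree."""
    if not susceptible:                   # static susceptibility map
        return False
    if rain_24h_mm < rain_threshold_mm:   # 24-hr rainfall triggering threshold
        return False
    return antecedent_moisture >= moisture_threshold  # soil-moisture threshold

wet_and_primed = landslide_nowcast(True, 95.0, 0.40)     # nowcast issued
wet_but_dry_soil = landslide_nowcast(True, 95.0, 0.10)   # suppressed
not_susceptible = landslide_nowcast(False, 200.0, 0.90)  # suppressed
```

    Requiring all three conditions is what keeps heavy rain over low-susceptibility or dry terrain from producing a false alarm.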

  8. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind.

    PubMed

    Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and a mutation operator, the genetic algorithm is used to solve the model. The results show that the model provides an effective UAV task allocation and path planning solution under steady wind.
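
    The vector relationship between wind, airspeed, and ground speed can be sketched with the standard wind-triangle solution. This is an illustrative reconstruction, not the authors' code: the UAV crabs into the wind so that its ground velocity stays on the desired track.

```python
from math import hypot, sqrt

def ground_speed(track_dx, track_dy, wind_x, wind_y, airspeed):
    """Ground speed along a fixed track direction, from |Vg*d - w| = Va,
    where d is the unit track vector, w the wind vector, Va the airspeed."""
    norm = hypot(track_dx, track_dy)
    dx, dy = track_dx / norm, track_dy / norm
    along = wind_x * dx + wind_y * dy   # head/tail-wind component
    cross = wind_y * dx - wind_x * dy   # cross-wind component
    if airspeed <= abs(cross):
        raise ValueError("airspeed too low to hold this track")
    return along + sqrt(airspeed ** 2 - cross ** 2)

def flight_time(distance, track_dx, track_dy, wind_x, wind_y, airspeed):
    return distance / ground_speed(track_dx, track_dy, wind_x, wind_y, airspeed)

vg_tailwind = ground_speed(1.0, 0.0, 5.0, 0.0, 20.0)   # 20 + 5 = 25
vg_crosswind = ground_speed(1.0, 0.0, 0.0, 5.0, 13.0)  # sqrt(169 - 25) = 12
leg_time = flight_time(120.0, 1.0, 0.0, 0.0, 5.0, 13.0)
```

    Along a Dubins path the track direction changes continuously, so in practice the path would be discretized into short segments and the segment times accumulated this way.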

  9. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind

    PubMed Central

    Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of a UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and a mutation operator, the genetic algorithm is used to solve the model. The results show that the model provides an effective UAV task allocation and path planning solution under steady wind. PMID:29561888

  10. Development of a Scalable Testbed for Mobile Olfaction Verification.

    PubMed

    Zakaria, Syed Muhammad Mamduh Syed; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Yeon, Ahmad Shakaff Ali; Md Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah

    2015-12-09

    The lack of ground truth gas dispersion and experiment verification information has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a camera-based localization system, and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of the gas plume in a 2 h experiment, thus providing time-dependent ground truth concentration maps. The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify, and thus provide insight into gas distribution mapping experiments.

  11. Development of a Scalable Testbed for Mobile Olfaction Verification

    PubMed Central

    Syed Zakaria, Syed Muhammad Mamduh; Visvanathan, Retnam; Kamarudin, Kamarulzaman; Ali Yeon, Ahmad Shakaff; Md. Shakaff, Ali Yeon; Zakaria, Ammar; Kamarudin, Latifah Munirah

    2015-01-01

    The lack of ground truth gas dispersion and experiment verification information has impeded the development of mobile olfaction systems, especially for real-world conditions. In this paper, an integrated testbed for mobile gas sensing experiments is presented. The integrated 3 m × 6 m testbed was built to provide real-time ground truth information for mobile olfaction system development. The testbed consists of a 72-gas-sensor array, namely the Large Gas Sensor Array (LGSA), a camera-based localization system, and a wireless communication backbone for robot communication and integration into the testbed system. Furthermore, the data collected from the testbed may be streamed into a simulation environment to expedite development. Calibration results using ethanol have shown that using a large number of gas sensors in the LGSA is feasible and can produce coherent signals when exposed to the same concentrations. The results have shown that the testbed was able to capture the time-varying characteristics and the variability of the gas plume in a 2 h experiment, thus providing time-dependent ground truth concentration maps. The authors have demonstrated the ability of the mobile olfaction testbed to monitor, verify, and thus provide insight into gas distribution mapping experiments. PMID:26690175

  12. Sensitivity of stream water age to climatic variability and land use change: implications for water quality

    NASA Astrophysics Data System (ADS)

    Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe

    2016-04-01

    Advances in the use of hydrological tracers and their integration into rainfall runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g. P, pathogens, etc.) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that the age distribution of stream water is non-stationary over both the short term (i.e. event dynamics) and the longer term (i.e. in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we will use longer-term (>5 year), high-resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We will also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments will be discussed.

  13. Spring onset variations and long-term trends from new hemispheric-scale products and remote sensing

    NASA Astrophysics Data System (ADS)

    Dye, D. G.; Li, X.; Ault, T.; Zurita-Milla, R.; Schwartz, M. D.

    2015-12-01

    Spring onset is commonly characterized by plant phenophase changes among a variety of biophysical transitions and has important implications for natural and man-managed ecosystems. Here, we present a new integrated analysis of variability in gridded Northern Hemisphere spring onset metrics. We developed a set of hemispheric temperature-based spring indices spanning 1920-2013. As these were derived solely from meteorological data, they are used as a benchmark for isolating the climate system's role in modulating spring "green up" estimated from the annual cycle of normalized difference vegetation index (NDVI). Spatial patterns of interannual variations, teleconnections, and long-term trends were also analyzed in all metrics. At mid-to-high latitudes, all indices exhibit larger variability at interannual to decadal time scales than at spatial scales of a few kilometers. Trends of spring onset vary across space and time. However, compared to the long-term trend, interannual to decadal variability generally accounts for a larger portion of the total variance in spring onset timing. Therefore, spring onset trends identified from short existing records may be aliased by decadal climate variations due to their limited temporal depth, even when these records span the entire satellite era. Based on our findings, we also demonstrate that our indices have skill in representing ecosystem-level spring phenology and may have important implications for understanding relationships between phenology, atmospheric dynamics, and climate variability.
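
    The temperature-based spring indices themselves are more elaborate, but the idea of deriving spring onset solely from meteorological data can be sketched with a toy growing-degree-day threshold model (the base temperature and threshold below are assumed values, not the indices' parameters):

```python
def spring_onset_day(daily_mean_temp_c, base_c=0.0, gdd_threshold=150.0):
    """First day of year on which accumulated degree-days above `base_c`
    reach `gdd_threshold`; returns None if the threshold is never reached."""
    gdd = 0.0
    for day, temp in enumerate(daily_mean_temp_c, start=1):
        gdd += max(0.0, temp - base_c)
        if gdd >= gdd_threshold:
            return day
    return None

# A uniformly warmer year reaches the threshold earlier.
onset_cool = spring_onset_day([5.0] * 365)    # 150 / 5  -> day 30
onset_warm = spring_onset_day([10.0] * 365)   # 150 / 10 -> day 15
```

    Applying such an index to a gridded temperature record yields a map of onset dates each year, which is the kind of product being compared against NDVI-derived green-up.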

  14. Model for the techno-economic analysis of common work of wind power and CCGT power plant to offer constant level of power in the electricity market

    NASA Astrophysics Data System (ADS)

    Tomsic, Z.; Rajsl, I.; Filipovic, M.

    2017-11-01

    Wind power varies over time, mainly under the influence of meteorological fluctuations. The variations occur on all time scales. Understanding these variations and their predictability is of key importance for the integration and optimal utilization of wind in the power system. There are two major attributes of variable generation that notably impact participation on power exchanges: variability (the output of variable generation changes, resulting in fluctuations in plant output on all time scales) and uncertainty (the magnitude and timing of variable generation output is less predictable; wind power output has low levels of predictability). Because of this variability and uncertainty, wind plants cannot participate in electricity markets, especially power exchanges. For this purpose, the paper presents a techno-economic analysis of the joint operation of wind plants and a combined cycle gas turbine (CCGT) plant as support for offering continuous power to the electricity market. A model of wind farms and a CCGT plant was developed in the program PLEXOS based on real hourly input data and all characteristics of the CCGT, with special analysis of the techno-economic characteristics of different types of starts and stops of the plant. The model analyzes the following: costs of different start-stop characteristics (hot, warm, and cold start-ups and shutdowns) and part-load performance of the CCGT. Besides the costs, technical restrictions were considered, such as start-up time depending on outage duration, minimum operation time, and minimum load or peaking capability.
For calculation purposes, the following parameters must be known in order to economically evaluate changes in the start-up process: ramp up and down rate, time of start time reduction, fuel mass flow during start, electricity production during start, variable cost of the start-up process, cost and charges for lifetime consumption for each start and start type, remuneration during start-up time regarding expected or unexpected starts, the cost and revenues for balancing energy (important when participating in the electricity market), and the cost or revenues for CO2 certificates. The main motivation for this analysis is to investigate the possibility of participating in power exchanges by offering continuous guaranteed power from wind plants, backed up by a CCGT power plant.
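
    The dependence of start-up economics on outage duration can be sketched as below. The hour boundaries and per-start costs are hypothetical placeholders, not values from the PLEXOS model.

```python
def start_type(outage_hours):
    """Classify a start as hot/warm/cold by how long the unit was offline."""
    if outage_hours < 8.0:
        return "hot"
    if outage_hours < 48.0:
        return "warm"
    return "cold"

# Assumed per-start cost (fuel plus lifetime-consumption charge), EUR.
START_COST_EUR = {"hot": 15_000.0, "warm": 35_000.0, "cold": 60_000.0}

def startup_cost(outage_hours):
    return START_COST_EUR[start_type(outage_hours)]

overnight = start_type(6.0)      # short outage: hot start
weekend = start_type(60.0)       # long outage: cold start
weekend_cost = startup_cost(60.0)
```

    A dispatch optimizer trades such start costs against the revenue of covering a wind shortfall, which is why the number and type of starts matters for the economics of the combined offer.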

  15. Hydrologic index development and application to selected Coastwide Reference Monitoring System sites and Coastal Wetlands Planning, Protection and Restoration Act projects

    USGS Publications Warehouse

    Snedden, Gregg A.; Swenson, Erick M.

    2012-01-01

    Hourly time-series salinity and water-level data are collected at all stations within the Coastwide Reference Monitoring System (CRMS) network across coastal Louisiana. These data, in addition to vegetation and soils data collected as part of CRMS, are used to develop a suite of metrics and indices to assess wetland condition in coastal Louisiana. This document addresses the primary objectives of the CRMS hydrologic analytical team, which were to (1) adopt standard time-series analytical techniques that could effectively assess spatial and temporal variability in hydrologic characteristics across the Louisiana coastal zone on site, project, basin, and coastwide scales and (2) develop and apply an index based on wetland hydrology that can describe the suitability of local hydrology in the context of maximizing the productivity of wetland plant communities. Approaches to quantifying tidal variability (least squares harmonic analysis) and partitioning variability of time-series data to various time scales (spectral analysis) are presented. The relation between marsh elevation and the tidal frame of a given hydrograph is described. A hydrologic index that integrates water-level and salinity data, which are collected hourly, with vegetation data that are collected annually is developed. To demonstrate its utility, the hydrologic index is applied to 173 CRMS sites across the coast, and variability in index scores across marsh vegetation types (fresh, intermediate, brackish, and saline) is assessed. The index is also applied to 11 sites located in three Coastal Wetlands Planning, Protection and Restoration Act projects, and the ability of the index to convey temporal hydrologic variability in response to climatic stressors and restoration measures, as well as the effect that this community may have on wetland plant productivity, is illustrated.
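
    The least squares harmonic analysis mentioned above can be illustrated for a single tidal constituent. The record below is synthetic (hypothetical hourly water levels); for an evenly sampled record spanning a whole number of cycles the normal equations decouple, so the fit reduces to simple projections.

```python
from math import atan2, cos, pi, sqrt

def harmonic_fit(h, omega, dt):
    """Least-squares fit of h(t) ~ mean + a*cos(omega*t) + b*sin(omega*t)
    for an evenly sampled record spanning an integer number of cycles."""
    from math import sin
    n = len(h)
    mean = sum(h) / n
    a = 2.0 / n * sum((x - mean) * cos(omega * i * dt) for i, x in enumerate(h))
    b = 2.0 / n * sum((x - mean) * sin(omega * i * dt) for i, x in enumerate(h))
    return mean, sqrt(a * a + b * b), atan2(b, a)

# Hypothetical hourly water levels: 0.5 m mean plus a 0.3 m constituent with
# a 12-h period, observed over 10 days (an exact number of cycles).
omega = 2.0 * pi / 12.0
levels = [0.5 + 0.3 * cos(omega * t - 1.0) for t in range(240)]
mean, amplitude, phase = harmonic_fit(levels, omega, 1.0)
```

    Real analyses fit many constituents simultaneously and handle records that do not span whole cycles, but the amplitude/phase decomposition is the same idea.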

  16. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    NASA Astrophysics Data System (ADS)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs at concentrations as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. 
The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.

  17. Geo-Semantic Framework for Integrating Long-Tail Data and Model Resources for Advancing Earth System Science

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.

    2014-12-01

    Scientists and small research groups often collect data that target specific issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. The complexity of integrating these data includes heterogeneity in dimensions, coordinate systems, scales, variables, providers, users, and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. The complexity of integrating across these models includes differing variable names and units for the same concept, model runs at different time steps and spatial resolutions, use of differing naming and reference conventions, etc. The ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in an application-specific context of space, time, and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains? We will present a semantic-based framework to support integration of "long-tail" models and data. 
This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data) which supports curation and preservation of long-tail data during its life-cycle; (ii) BrownDog, which enhances the machine interpretability of large unstructured and uncurated data; and (iii) CSDMS (Community Surface Dynamics Modeling System), which "componentizes" models by providing plug-and-play environment for models integration.

  18. Variational Data Assimilation for the Global Ocean

    DTIC Science & Technology

    2013-01-01

    ocean includes the Geoid (a fixed gravity equipotential surface) as well as the MDT, which is not known accurately enough relative to the centimeter... scales, including processes that control the surface mixed layer, the formation of ocean eddies, and meandering ocean... variables. Examples of this in the ocean are integral quantities, such as acoustic travel time and altimeter measures of sea surface height, and direct

  19. Transient Finite Element Computations on a Variable Transputer System

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick J.; Lapczyk, Ireneusz

    1993-01-01

    A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm, which eliminates the need for equation solving, making it more suitable for parallel computation. An interprocessor communication scheme was developed for arbitrary two-dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
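
    A minimal sketch of why explicit time integration needs no equation solving, shown here for a single-degree-of-freedom oscillator m·u'' + k·u = 0 (the transputer program itself operates on full finite element meshes):

```python
from math import cos

def central_difference(m, k, u0, v0, dt, steps):
    """Explicit central-difference time stepping: each new displacement is
    computed algebraically from the two previous ones, so no linear system
    is solved. This locality is what makes the scheme easy to parallelize."""
    accel0 = -k / m * u0
    u_prev = u0 - dt * v0 + 0.5 * dt * dt * accel0   # fictitious step u(-dt)
    u = u0
    history = [u0]
    for _ in range(steps):
        u_next = 2.0 * u - u_prev + dt * dt * (-k / m * u)
        u_prev, u = u, u_next
        history.append(u)
    return history

# m = k = 1, u(0) = 1, v(0) = 0: exact solution is cos(t).
u_num = central_difference(1.0, 1.0, 1.0, 0.0, 0.01, 100)[-1]
err = abs(u_num - cos(1.0))   # second-order accurate, so err is tiny
```

    The usual caveat applies: the scheme is only conditionally stable, with the time step bounded by roughly 2 over the highest natural frequency of the mesh.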

  20. A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation

    USGS Publications Warehouse

    Smith, Peter E.

    2006-01-01

    A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
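
    The double-sweep solve for a single tridiagonal system can be sketched with the classic Thomas algorithm; this is an illustration of the building block, not the USGS code.

```python
def thomas_solve(lower, diag, upper, rhs):
    """Double-sweep (Thomas) algorithm: O(n) forward elimination followed
    by back substitution. lower[0] and upper[-1] are unused."""
    n = len(diag)
    c = [0.0] * n   # modified upper-diagonal coefficients
    d = [0.0] * n   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# 3x3 example: [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution [1,1,1].
x = thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
                 [1.0, 0.0, 1.0])
```

    In the scheme above one such small system arises per water column (for vertical diffusion), which is why they can be solved cheaply before the single large five-diagonal system for the surface elevation.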

  1. Variability of Bed Load Transport During Six Summers of Continuous Measurements in Two Austrian Mountain Streams (Fischbach and Ruetz)

    NASA Astrophysics Data System (ADS)

    Rickenmann, Dieter

    2018-01-01

    Previous measurements of bed load transport in gravel bed streams revealed a large temporal and spatial variability of bed load transport rates. Using an impact plate geophone system, continuous bed load transport measurements were made during 6 years in two mountain streams in Austria. The two streams have a snow-melt and glacier-melt dominated hydrologic regime resulting in frequent transport activity during the summer half year. Periods of days to weeks were identified which are associated with approximately constant Shields values that indicate quasi-stable bed conditions. Between these stable periods, the position of the bed load transport function varied while its steepness remained approximately constant. For integration time scales of several hours to 1 day, the fluctuations in bed load transport decreased and the correlation between bed load transport and water discharge increased. For integration times of about 70-100 days, bed load transport is determined by discharge or shear stress to within a factor of about 2, relative to the 6 year mean level. Bed load texture increased with increasing mean flow strength and mean transport intensity. Weak and predominantly clockwise daily hysteresis of bed load transport was found for the first half of the summer period.
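
    The effect of the integration time scale on the discharge-transport correlation can be reproduced with synthetic data. This is a constructed example, not the Fischbach or Ruetz measurements: short-period fluctuations in transport average out over daily windows, so the correlation rises.

```python
from math import pi, sin, sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

def block_average(series, window):
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series) - window + 1, window)]

HOURS = 24
DAYS = 100
# Discharge varies smoothly from day to day (held constant within a day here).
q_day = [4.0 + 2.0 * sin(2.0 * pi * d / DAYS) for d in range(DAYS)]
q_hourly = [q_day[i // HOURS] for i in range(DAYS * HOURS)]
# Transport: power-law rating curve times short-period (6 h) multiplicative
# fluctuations whose mean over any day is zero -- a stand-in for the noise.
transport = [q_hourly[i] ** 1.5 * (1.0 + 0.9 * sin(2.0 * pi * i / 6.0))
             for i in range(DAYS * HOURS)]

r_hourly = pearson(q_hourly, transport)              # degraded by fluctuations
r_daily = pearson(block_average(q_hourly, HOURS),
                  block_average(transport, HOURS))   # much tighter
```

    Because the constructed fluctuations cancel over each daily window while the discharge signal does not, the daily correlation is strictly higher, mirroring the reported tightening of the transport-discharge relation at longer integration times.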

  2. A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series

    NASA Astrophysics Data System (ADS)

    Rovira, F.; Palau, J. L.; Millán, M.

    2009-09-01

    Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g., global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. This methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by using the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. 
Once we have assured the level of significance of the different waves composing the parametric model, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).

  3. Analytical properties of time-of-flight PET data.

    PubMed

    Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M

    2008-06-07

    We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.
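
    The data model (line integrals weighted by a spatially invariant TOF kernel) can be evaluated numerically for a toy image. The Gaussian kernel and midpoint quadrature below are illustrative choices, not the paper's implementation.

```python
from math import exp, pi, sqrt

def tof_projection(image, x0, y0, dx, dy, t_center, sigma,
                   half_length=2.0, n=2000):
    """Line integral of `image` along (x0, y0) + s*(dx, dy), weighted by a
    unit-area Gaussian TOF kernel centred at s = t_center (midpoint rule).
    (dx, dy) is assumed to be a unit direction vector."""
    ds = 2.0 * half_length / n
    total = 0.0
    for i in range(n):
        s = -half_length + (i + 0.5) * ds
        w = exp(-0.5 * ((s - t_center) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))
        total += image(x0 + s * dx, y0 + s * dy) * w * ds
    return total

# Uniform unit-activity image: the TOF-weighted integral just integrates the
# kernel, so it is ~1 for any centre well inside the integration range.
val = tof_projection(lambda x, y: 1.0, 0.0, 0.0, 1.0, 0.0, 0.3, 0.2)
```

    Because the kernel has unit area, integrating such projections over t_center recovers the ordinary (non-TOF) line integral; the extra time-of-flight variable is the redundancy that rebinning methods like TOF-FOREX exploit.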

  4. Analytical properties of time-of-flight PET data

    NASA Astrophysics Data System (ADS)

    Cho, Sanghee; Ahn, Sangtae; Li, Quanzheng; Leahy, Richard M.

    2008-06-01

    We investigate the analytical properties of time-of-flight (TOF) positron emission tomography (PET) sinograms, where the data are modeled as line integrals weighted by a spatially invariant TOF kernel. First, we investigate the Fourier transform properties of 2D TOF data and extend the 'bow-tie' property of the 2D Radon transform to the time-of-flight case. Second, we describe a new exact Fourier rebinning method, TOF-FOREX, based on the Fourier transform in the time-of-flight variable. We then combine TOF-FOREX rebinning with a direct extension of the projection slice theorem to TOF data, to perform fast 3D TOF PET image reconstruction. Finally, we illustrate these properties using simulated data.

  5. Hydro power flexibility for power systems with variable renewable energy sources: an IEA Task 25 collaboration: Hydro power flexibility for power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele

    2016-06-20

    Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, there exist operational constraints to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to strict simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, and all these are in time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, rivers' flows, and reservoirs as well as grid topology and congestion.

  6. Aspect-Oriented Model-Driven Software Product Line Engineering

    NASA Astrophysics Data System (ADS)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  7. Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition

    NASA Astrophysics Data System (ADS)

    Riley, K. F.; Hobson, M. P.

    2006-03-01

    Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.

  8. Shallow-water sloshing in a moving vessel with variable cross-section and wetting-drying using an extension of George's well-balanced finite volume solver

    NASA Astrophysics Data System (ADS)

    Alemi Ardakani, Hamid; Bridges, Thomas J.; Turner, Matthew R.

    2016-06-01

    A class of augmented approximate Riemann solvers due to George (2008) [12] is extended to solve the shallow-water equations in a moving vessel with variable bottom topography and variable cross-section with wetting and drying. A class of Roe-type upwind solvers for the system of balance laws is derived which respects the steady-state solutions. The numerical solutions of the new adapted augmented f-wave solvers are validated against the Roe-type solvers. The theory is extended to solve the shallow-water flows in moving vessels with arbitrary cross-section with influx-efflux boundary conditions motivated by the shallow-water sloshing in the ocean wave energy converter (WEC) proposed by Offshore Wave Energy Ltd. (OWEL) [1]. A fractional step approach is used to handle the time-dependent forcing functions. The numerical solutions are compared to an extended new Roe-type solver for the system of balance laws with a time-dependent source function. The shallow-water sloshing finite volume solver can be coupled to a Runge-Kutta integrator for the vessel motion.

  9. The role of C3 and C4 grasses to interannual variability in remotely sensed ecosystem performance over the US Great Plains

    USGS Publications Warehouse

    Ricotta, C.; Reed, B.C.; Tieszen, L.T.

    2003-01-01

    Time-integrated normalized difference vegetation index (∫NDVI) derived from National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) multi-temporal imagery over a 10-year period (1989-1998) was used as a surrogate for primary production to investigate the impact of interannual climate variability on grassland performance for the central and northern US Great Plains. First, the contribution of C3 and C4 species abundance to the major grassland ecosystems of the US Great Plains is described. Next, the relation between mean ∫NDVI and the ∫NDVI coefficient of variation (CV ∫NDVI), used as a proxy for interannual climate variability, is analysed. Results suggest that the differences in the long-term climate control over ecosystem performance approximately coincide with changes between C3- and C4-dominant grassland classes. Variation in remotely sensed net primary production over time is higher for the southern and western plains grasslands (primarily C4 grasslands), whereas the C3-dominated classes in the northern and eastern portion of the US Great Plains generally show lower CV ∫NDVI values.
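
    The per-pixel mean and coefficient of variation used in this record are straightforward to compute; a minimal sketch in Python, with synthetic data standing in for the AVHRR-derived values (array sizes and value ranges are illustrative, not from the study):

```python
import numpy as np

# Synthetic stand-in for yearly time-integrated NDVI: shape (years, rows, cols).
rng = np.random.default_rng(0)
tindvi = rng.uniform(0.2, 0.8, size=(10, 4, 4))

mean_tindvi = tindvi.mean(axis=0)                      # long-term mean per pixel
cv_tindvi = tindvi.std(axis=0, ddof=1) / mean_tindvi   # interannual CV per pixel
```

Pixels with a high CV relative to the mean correspond to the high interannual variability the study reports for the southern and western (C4-dominated) grasslands.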

  10. Integrating spatial and temporal variability into the analysis of fish food web linkages in Tijuana Estuary.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    West, Janelle M.; Williams, Greg D.; Madon, Sharook P.

    2003-05-14

    Our understanding of fish feeding interactions at Tijuana Estuary was improved by incorporating estimates of spatial and temporal variability into diet analyses. We examined the stomach contents of 7 dominant species (n=579 total fish) collected between 1994 and 1999. General feeding patterns pooled over time produced a basic food web consisting of 3 major trophic levels: (1) primary consumers (Atherinops affinis, Mugil cephalus) that ingested substantial amounts of plant material and detritus; (2) benthic carnivores (Clevelandia ios, Hypsopsetta guttulata, Gillichthys mirabilis, and Fundulus parvipinnis) that ingested high numbers of calanoid copepods and exotic amphipods (Grandidierella japonica); and (3) piscivores (Paralichthys californicus and Leptocottus armatus) that often preyed on smaller gobiids. Similarity-based groupings of individual species' diets were identified using nonmetric multidimensional scaling to characterize their variability within and between species, and in space and time. This allowed us to identify major shifts and recognize events (i.e., modified prey abundance during 1997-98 El Nino floods) that likely caused these shifts.

  11. Combined effects of expectations and visual uncertainty upon detection and identification of a target in the fog.

    PubMed

    Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial

    2015-09-01

    Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.

  12. Psychotherapy integration under scrutiny: investigating the impact of integrating emotion-focused components into a CBT-based approach: a study protocol of a randomized controlled trial.

    PubMed

    Babl, Anna; Grosse Holtforth, Martin; Heer, Sara; Lin, Mu; Stähli, Annabarbara; Holstein, Dominique; Belz, Martina; Egenolf, Yvonne; Frischknecht, Eveline; Ramseyer, Fabian; Regli, Daniel; Schmied, Emma; Flückiger, Christoph; Brodbeck, Jeannette; Berger, Thomas; Caspar, Franz

    2016-11-24

    This currently recruiting randomized controlled trial investigates the effects of integrating components of Emotion-Focused Therapy (EFT) into Psychological Therapy (PT), an integrative form of cognitive-behavioral therapy, in a manner that directly mirrors common integrative practice in the sense of assimilative integration. Aims of the study are to understand how both an existing therapy approach and the elements to be integrated are affected by the integration, and to clarify the role of emotional processing as a mediator of therapy outcome. A total of 130 adults with a diagnosed unipolar depressive, anxiety, or adjustment disorder (seeking treatment at a psychotherapy outpatient clinic) are randomized to either treatment as usual (PT) with integrated emotion-focused components (TAU + EFT) or PT alone (TAU). Primary outcome variables are psychopathology and symptom severity at the end of therapy and at follow-up; secondary outcome variables are interpersonal problems, psychological wellbeing, quality of life, attainment of individual therapy goals, and emotional competency. Furthermore, process variables such as the quality of the therapeutic relationship are studied, as well as aptitude-treatment interactions. Variables are assessed at baseline, after 8 and 16 sessions, at the end of therapy (after 25 ± 3 sessions), and at 6-, 12-, and 36-month follow-up. Underlying mechanisms of change are investigated. Statistical analyses will be conducted using appropriate multilevel approaches, mainly two-level regression and growth analysis. The results of this study will indicate whether the integration of emotion-focused elements into treatment as usual increases the effectiveness of Psychological Therapy. If advantages are found, which may be limited to particular variables or subgroups of patients, recommendations for a systematic integration, and caveats if disadvantages are also detected, can be formulated.
    On a more abstract level, a cognitive-behavioral (represented by PT) and a humanistic/experiential (represented by EFT) approach will be integrated. It must be emphasized that, mimicking common practice in the development and continued education of psychotherapists, EFT is not integrated as a whole; only elements of EFT that are considered particularly important, and that can be trained in an 8-day training plus supervision of therapies, are integrated. ClinicalTrials.gov, NCT02822443, 22 June 2016, retrospectively registered.

  13. Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers.

    PubMed

    Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho

    2007-06-20

    A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. For implementing the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system in the integral imaging. Since the transparencies of PDLC films are electrically controllable, we can make each film diffuse the projected light successively with a different depth from the lens array. As a result, the proposed method enables control of the location of image planes electrically and enhances the depth. The principle of the proposed method is described, and experimental results are also presented.

  14. Knowledge modeling tool for evidence-based design.

    PubMed

    Durmisevic, Sanja; Ciftcioglu, Ozer

    2010-01-01

    The aim of this study is to take evidence-based design (EBD) to the next level by activating available knowledge, integrating new knowledge, and combining them for more efficient use by the planning and design community. This article outlines a framework for a performance-based measurement tool that can provide the necessary decision support during the design or evaluation of a healthcare environment by estimating the overall design performance across multiple variables. New knowledge in EBD adds continuously to complexity (the "information explosion"), and it becomes impossible to consider all aspects (design features) at the same time, much less their impact on final building performance. How can existing knowledge and the information explosion in healthcare, specifically in the domain of EBD, be rendered manageable? Is it feasible to create a computational model that considers many design features and deals with them in an integrated way, rather than one at a time? The evidence found is structured and readied for computation through a "fuzzification" process. The weights are calculated using an analytic hierarchy process. Actual knowledge modeling is accomplished through a fuzzy neural tree structure. The impact of all inputs on the outcome, in this case patient recovery, is calculated using sensitivity analysis. Finally, the added value of the model is discussed using a hypothetical case study of a patient room. The proposed model can deal with the complexities of various aspects and the relationships among variables in a coordinated way, allowing existing and new pieces of evidence to be integrated in a knowledge tree structure that facilitates understanding of the effects of various design interventions on overall design performance.

  15. Publications | Energy Systems Integration Facility | NREL

    Science.gov Websites

    Publication titles excerpted from the site include: "100% Renewable Grid: Operating Electric Power Systems with Extremely High Levels of Variable Renewable Energy"; "Feeder Voltage Regulation with High-Penetration PV Using Advanced Inverters and a Distribution ..."; and "Integrating High Levels of Variable Renewable Energy into Electric Power Systems," Journal of Modern Power ...

  16. Higher-order time integration of Coulomb collisions in a plasma using Langevin equations

    DOE PAGES

    Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...

    2013-02-08

    The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt^1/2)] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the "area-integral" terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
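
    The strong-convergence gap between Euler-Maruyama and Milstein can be reproduced on a scalar SDE with a known solution. The sketch below uses geometric Brownian motion, not the Coulomb-collision equations of the record above; for a scalar SDE no area integrals arise, so the single Milstein correction term suffices. All parameter values are illustrative.

```python
import numpy as np

def strong_error(n_steps, n_paths=2000, a=1.5, b=1.0, T=1.0, x0=1.0, seed=1):
    """Mean strong error at time T for Euler-Maruyama and Milstein
    on dX = a*X dt + b*X dW, whose exact solution is known."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x_em = np.full(n_paths, x0)
    x_mil = np.full(n_paths, x0)
    w = np.zeros(n_paths)                      # accumulated Brownian path
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x_em = x_em + a * x_em * dt + b * x_em * dw
        # Milstein adds the 0.5 * b * (db/dx) * (dW^2 - dt) correction,
        # which for b(x) = b*x is 0.5 * b^2 * x * (dW^2 - dt).
        x_mil = (x_mil + a * x_mil * dt + b * x_mil * dw
                 + 0.5 * b * b * x_mil * (dw ** 2 - dt))
        w += dw
    exact = x0 * np.exp((a - 0.5 * b * b) * T + b * w)  # same Brownian path
    return np.mean(np.abs(x_em - exact)), np.mean(np.abs(x_mil - exact))
```

Halving Δt roughly halves the Milstein error but shrinks the Euler-Maruyama error only by a factor of √2, matching the O(Δt) vs. O(Δt^1/2) rates quoted above.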

  17. Elimination of secular terms from the differential equations for the elements of perturbed two-body motion

    NASA Technical Reports Server (NTRS)

    Bond, Victor R.; Fraietta, Michael F.

    1991-01-01

    In 1961, Sperling linearized and regularized the differential equations of motion of the two-body problem by changing the independent variable from time to fictitious time via Sundman's transformation (r = dt/ds) and by embedding the two-body energy integral and the Laplace vector. In 1968, Burdet developed a perturbation theory which was uniformly valid for all types of orbits, using a variation-of-parameters approach on the elements which appeared in Sperling's equations for the two-body solution. In 1973, Bond and Hanssen improved Burdet's set of differential equations by embedding the total energy (which is not constant when the potential function is explicitly dependent upon time). The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In the process, another element which is proportional to a component of the angular momentum was introduced. Recently, trajectories computed during numerical studies of atmospheric entry from circular orbits and of low thrust beginning in near-circular orbits exhibited numerical instability when solved by the method of Bond and Gottlieb (1989) for long time intervals. It was found that this instability was due to secular terms which appear on the right-hand sides of the differential equations of some of the elements. In this paper, this instability is removed by the introduction of another vector integral called the delta integral (which replaces the Laplace vector) and another scalar integral which removes the secular terms. The introduction of these integrals requires a new derivation of the differential equations for most of the elements. For this rederivation, the Lagrange method of variation of parameters is used, making the development more concise. Numerical examples of this improvement are presented.
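
    A minimal sketch of the Sundman change of variable (r = dt/ds) for the unperturbed two-body problem, with μ = 1 and a generic RK4 step. This illustrates only the time transformation, not the embedded integrals or element formulations discussed above; the state layout and step size are arbitrary choices for the sketch.

```python
import numpy as np

def kepler_rhs(state):
    """d/ds of (x, y, vx, vy, t) for planar two-body motion (mu = 1),
    with the Sundman transformation dt/ds = r applied throughout."""
    x, y, vx, vy, t = state
    r = np.hypot(x, y)
    ax, ay = -x / r**3, -y / r**3          # Newtonian acceleration
    return np.array([r * vx, r * vy, r * ax, r * ay, r])

def rk4_step(f, state, h):
    """One classical Runge-Kutta step of size h in fictitious time s."""
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

Equal steps in the fictitious time s correspond to small physical time steps near perihelion (where r is small), which is what regularizes the integration of eccentric orbits.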

  18. The susceptibility of large river basins to orogenic and climatic drivers

    NASA Astrophysics Data System (ADS)

    Haedke, Hanna; Wittmann, Hella; von Blanckenburg, Friedhelm

    2017-04-01

    Large rivers are known to buffer pulses in sediment production driven by changes in climate, as sediment is transported through lowlands. Our new dataset of in situ cosmogenic nuclide concentrations and chemical compositions of 62 sandy bedload samples from the world's largest rivers integrates over 25% of Earth's terrestrial surface, distributed over a variety of climatic zones across all continents, and represents the millennial-scale denudation rate of the sediment's source area. We can show that these denudation rates do not respond to climatic forcing, but faithfully record orogenic forcing, when analyzed against respective variables representing orogeny (strain rate, relief, Bouguer anomaly, free-air anomaly), climate (runoff, temperature, precipitation), and basin properties (floodplain response time, drainage area). In contrast to this orogenic forcing of denudation rates, elemental bedload chemistry from the fine-grained portion of the same samples correlates with climate-related variables (precipitation, runoff) and floodplain response times. It is also well known from previous compilations of river-gauged sediment loads that the short-term basin-integrated sediment export is climatically controlled. The chemical composition of detrital sediment shows a climate control that can originate in the rivers' source areas, but this signal is likely overprinted during transfer through the lowlands, because we also find correlation with floodplain response times. At the same time, cosmogenic nuclides robustly preserve the orogenic forcing of the source-area denudation signal through the floodplain buffer. Conversely, previous global compilations of cosmogenic nuclides in small river basins show the preservation of climate drivers in their analysis, but these are buffered in large lowland rivers.
    Hence, we can confirm the assumption that cosmogenic nuclides in large rivers are poorly susceptible to climate changes, but are at the same time highly suited to detect changes in orogenic forcing in their paleo sedimentary records.

  19. A multiscale, hierarchical model of pulse dynamics in arid-land ecosystems

    USGS Publications Warehouse

    Collins, Scott L.; Belnap, Jayne; Grimm, N. B.; Rudgers, J. A.; Dahm, Clifford N.; D'Odorico, P.; Litvak, M.; Natvig, D. O.; Peters, Douglas C.; Pockman, W. T.; Sinsabaugh, R. L.; Wolf, B. O.

    2014-01-01

    Ecological processes in arid lands are often described by the pulse-reserve paradigm, in which rain events drive biological activity until moisture is depleted, leaving a reserve. This paradigm is frequently applied to processes stimulated by one or a few precipitation events within a growing season. Here we expand the original framework in time and space and include other pulses that interact with rainfall. This new hierarchical pulse-dynamics framework integrates space and time through pulse-driven exchanges, interactions, transitions, and transfers that occur across individual to multiple pulses extending from micro to watershed scales. Climate change will likely alter the size, frequency, and intensity of precipitation pulses in the future, and arid-land ecosystems are known to be highly sensitive to climate variability. Thus, a more comprehensive understanding of arid-land pulse dynamics is needed to determine how these ecosystems will respond to, and be shaped by, increased climate variability.

  20. Off-axis impact of unidirectional composites with cracks: Dynamic stress intensification

    NASA Technical Reports Server (NTRS)

    Sih, G. C.; Chen, E. P.

    1979-01-01

    The dynamic response of unidirectional composites under off axis (angle loading) impact is analyzed by assuming that the composite contains an initial flaw in the matrix material. The analytical method utilizes Fourier transform for the space variable and Laplace transform for the time variable. The off axis impact is separated into two parts, one being symmetric and the other skew-symmetric with reference to the crack plane. Transient boundary conditions of normal and shear tractions are applied to a crack embedded in the matrix of the unidirectional composite. The two boundary conditions are solved independently and the results superimposed. Mathematically, these conditions reduce the problem to a system of dual integral equations which are solved in the Laplace transform plane for the transformation of the dynamic stress intensity factor. The time inversion is carried out numerically for various combinations of the material properties of the composite and the results are displayed graphically.

  1. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise. PMID:22920853
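
    HDA itself is not reproduced here, but the underlying sparse-approximation problem it solves can be illustrated with plain ISTA (iterative soft thresholding) on a synthetic overcomplete dictionary; all sizes, the penalty λ, and the iteration count are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# Overcomplete dictionary (n features, m atoms) and a sparse ground truth.
n, m, k = 20, 50, 3
D = rng.normal(size=(n, m))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
a_true = np.zeros(m)
a_true[rng.choice(m, k, replace=False)] = 1.0
x = D @ a_true                          # signal to be sparsely represented

lam = 0.05
L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
a = np.zeros(m)
for _ in range(1000):                   # ISTA: gradient step + soft threshold
    g = D.T @ (D @ a - x)
    a = a - g / L
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)

err = np.linalg.norm(x - D @ a)         # representation error
```

Here `a` is a sparse code whose reconstruction `D @ a` approaches `x`; in HDA, per the abstract, the analogous descent is carried out by integrate-and-fire dynamics distributed over a network of nodes.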

  2. An inexpensive frequency-modulated (FM) audio monitor of time-dependent analog parameters.

    PubMed

    Langdon, R B; Jacobs, R S

    1980-02-01

    The standard method for quantification and presentation of an experimental variable in real time is the use of a visual display on the ordinate of an oscilloscope screen or chart recorder. This paper describes a relatively simple electronic circuit, using commercially available and inexpensive integrated circuits (ICs), which generates an audible tone whose pitch varies in proportion to a running variable of interest. This device, which we call an "Audioscope," can accept as input the monitor output from any instrument that expresses an experimental parameter as a dc voltage. The Audioscope is particularly useful in implanting microelectrodes intracellularly. It may also function to mediate the first step in data recording on magnetic tape, and/or data analysis and reduction by electronic circuitry. We estimate that this device can be built, with two-channel capability, for less than $50, and in less than 10 hr by an experienced electronics technician.

  3. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.

  4. Children's motivation in elementary physical education: a longitudinal study.

    PubMed

    Xiang, Ping; McBride, Ron; Guan, Jianmin

    2004-03-01

    The present study examined relationships among variables drawn from achievement goal theory and the expectancy-value model of achievement choice as well as mean level changes of these variables over time in elementary physical education. Participants (N = 207) completed questionnaires over a 2-year period: once while in the second and fourth grades and again when they were in the third and fifth grades. Results indicated that achievement goals, expectancy-related beliefs, and subjective task values were related to one another and were predictive of children's intention for future participation in physical education. Children's subjective task values of physical education decreased over time. Children in Cohort 1 (across second to third grades) generally had stronger motivation for learning in physical education than children in Cohort 2 (across fourth to fifth grades). Findings suggest the importance of integrating achievement goal theory and the expectancy-value model of achievement choice in understanding student motivation.

  5. Equilibrium dynamical correlations in the Toda chain and other integrable models

    NASA Astrophysics Data System (ADS)

    Kundu, Aritra; Dhar, Abhishek

    2016-12-01

    We investigate the form of equilibrium spatiotemporal correlation functions of conserved quantities in the Toda lattice and in other integrable models. From numerical simulations we find that the correlations satisfy ballistic scaling with a remarkable collapse of data from different times. We examine special limiting choices of parameter values, for which the Toda lattice tends to either the harmonic chain or the equal mass hard-particle gas. In both these limiting cases, one can obtain the correlations exactly and we find excellent agreement with the direct Toda simulation results. We also discuss a transformation to "normal mode" variables, as commonly done in hydrodynamic theory of nonintegrable systems, and find that this is useful, to some extent, even for the integrable system. The striking differences between the Toda chain and a truncated version, expected to be nonintegrable, are pointed out.

  6. Equilibrium dynamical correlations in the Toda chain and other integrable models.

    PubMed

    Kundu, Aritra; Dhar, Abhishek

    2016-12-01

    We investigate the form of equilibrium spatiotemporal correlation functions of conserved quantities in the Toda lattice and in other integrable models. From numerical simulations we find that the correlations satisfy ballistic scaling with a remarkable collapse of data from different times. We examine special limiting choices of parameter values, for which the Toda lattice tends to either the harmonic chain or the equal mass hard-particle gas. In both these limiting cases, one can obtain the correlations exactly and we find excellent agreement with the direct Toda simulation results. We also discuss a transformation to "normal mode" variables, as commonly done in hydrodynamic theory of nonintegrable systems, and find that this is useful, to some extent, even for the integrable system. The striking differences between the Toda chain and a truncated version, expected to be nonintegrable, are pointed out.

  7. On climate prediction: how much can we expect from climate memory?

    NASA Astrophysics Data System (ADS)

    Yuan, Naiming; Huang, Yan; Duan, Jianping; Zhu, Congwen; Xoplaki, Elena; Luterbacher, Jürg

    2018-03-01

    Slow variability in the climate system is an important source of climate predictability. However, it is still challenging for current dynamical models to fully capture this variability as well as its impacts on future climate. In this study, instead of simulating the internal multi-scale oscillations in dynamical models, we discuss the effects of internal variability in terms of climate memory. By decomposing the climate state x(t) at a certain time point t into a memory part M(t) and a non-memory part ɛ(t), climate memory effects from the past 30 years on climate prediction are quantified. For variables with strong climate memory, a high proportion of the variance in x(t) (over 20%) is explained by the memory part M(t), and the effects of climate memory are non-negligible for most climate variables, with the exception of precipitation. For multi-step climate prediction, a power-law decay of the explained variance was found, indicating long-lasting climate memory effects. The variance explained by climate memory can remain higher than 10% for more than 10 time steps. Accordingly, past climate conditions can affect both short-term (monthly) and long-term (interannual, decadal, or even multidecadal) climate predictions. With the memory part M(t) precisely calculated from the Fractional Integral Statistical Model, one only needs to focus on the non-memory part ɛ(t), which is an important quantity that determines climate predictive skill.
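
    The decomposition x(t) = M(t) + ɛ(t) can be mimicked with a toy fractionally integrated process: white noise passed through the (1 - B)^(-d) filter, so that the memory part is exactly the weighted sum of past innovations. The filter weights are the standard binomial-expansion coefficients; the parameter values below are illustrative, not from the paper.

```python
import numpy as np

def frac_weights(d, n):
    """Coefficients of the fractional-integration filter (1 - B)^(-d):
    w_0 = 1, w_j = w_{j-1} * (j - 1 + d) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 + d) / j
    return w

rng = np.random.default_rng(2)
n, d = 5000, 0.3                    # 0 < d < 0.5 gives stationary long memory
eps = rng.normal(size=n)            # non-memory part: weather-scale excitation
w = frac_weights(d, n)
x = np.convolve(eps, w)[:n]         # x(t) = sum_j w[j] * eps[t - j]
memory = x - eps                    # M(t): cumulative contribution of the past
```

For d = 0.3 the variance share of the memory part comes out on the order of 20%, the same order as the explained-variance figures quoted in the abstract for variables with strong climate memory.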

  8. Balancing Area Coordination: Efficiently Integrating Renewable Energy Into the Grid, Greening the Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Jessica; Denholm, Paul; Cochran, Jaquelin

    2015-06-01

    Greening the Grid provides technical assistance to energy system planners, regulators, and grid operators to overcome challenges associated with integrating variable renewable energy into the grid. Coordinating balancing area operation can promote more cost and resource efficient integration of variable renewable energy, such as wind and solar, into power systems. This efficiency is achieved by sharing or coordinating balancing resources and operating reserves across larger geographic boundaries.

  9. Extracting climate memory using Fractional Integrated Statistical Model: A new perspective on climate prediction

    PubMed Central

    Yuan, Naiming; Fu, Zuntao; Liu, Shida

    2014-01-01

    Long-term memory (LTM) in climate variability is studied by means of fractional integral techniques. Using a recently developed model, the Fractional Integral Statistical Model (FISM), we propose in this report a new method with which one can quantitatively estimate the long-lasting influences of historical climate states on the present, and further extract this influence as climate memory signals. To show the usability of this method, two examples, the Northern Hemisphere monthly Temperature Anomalies (NHTA) and the Pacific Decadal Oscillation index (PDO), are analyzed in this study. We find that climate memory signals can indeed be extracted and that the whole variation can be further decomposed into two parts: the cumulative climate memory (CCM) and the weather-scale excitation (WSE). The stronger the LTM, the larger the proportion of the whole variation accounted for by the climate memory signal. With the climate memory signals extracted, one can at least determine on what basis the considered time series will continue to change. Therefore, this report provides a new perspective on climate prediction. PMID:25300777

  10. Development of a new fertility prediction model for stallion semen, including flow cytometry.

    PubMed

    Barrier Battut, I; Kempfer, A; Becker, J; Lebailly, L; Camugli, S; Chevrier, L

    2016-09-01

    Several laboratories routinely use flow cytometry to evaluate stallion semen quality. However, objective and practical tools for the on-field interpretation of data concerning fertilizing potential are scarce. A panel of nine tests, evaluating a large number of compartments or functions of the spermatozoa: motility, morphology, viability, mitochondrial activity, oxidation level, acrosome integrity, DNA integrity, "organization" of the plasma membrane, and hypoosmotic resistance, was applied to a population of 43 stallions, 33 of which showed widely differing fertilities (19%-84% pregnancy rate per cycle [PRC]). Analyses were performed either within 2 hours after semen collection or after 24-hour storage at 4 °C in INRA96 extender, on three to six ejaculates for each stallion. The aim was to provide data on the distribution of values among said population, showing within-stallion and between-stallion variability, and to determine whether appropriate combinations of tests could evaluate the fertilizing potential of each stallion. Within-stallion repeatability, defined as the intrastallion correlation (r = between-stallion variance/total variance), ranged between 0.29 and 0.84 for "conventional" variables (viability, morphology, and motility), and between 0.15 and 0.81 for "cytometric" variables. Those data suggested that analyzing six ejaculates would be adequate to characterize a stallion. For most variables, except those related to DNA integrity and some motility variables, results differed significantly between analyses performed immediately and analyses performed after 24 hours at 4 °C. Two "best-fit" combinations of variables were determined. 
Factorial discriminant analysis using a first combination of seven variables, including the polarization of mitochondria, acrosome integrity, DNA integrity, and hypoosmotic resistance, permitted exact determination of the fertility group for each stallion: fertile, that is, PRC higher than 55%; intermediate, that is, 45% < PRC < 55%; or subfertile, that is, PRC less than 45%. Linear regression using another combination of 20 variables, including motility, viability, oxidation level, acrosome integrity, DNA integrity, and hypoosmotic resistance, accounted for 94.2% of the variability in fertility and was used to calculate a predicted PRC with a mean standard deviation of 3.1. The difference between the observed fertility and the calculated value ranged from -4.2 to 5.0. In conclusion, this study established a new protocol for the evaluation of stallion semen, combining microscopical observation, computer-assisted motility analysis, and flow cytometry, and providing a high level of fertility prediction. Copyright © 2016 Elsevier Inc. All rights reserved.
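    As a rough illustration of the regression step, the sketch below fits an ordinary-least-squares model of PRC on a few synthetic semen-quality variables and assigns the paper's three fertility groups from the predicted PRC; the data and coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for four standardized semen-quality variables,
# e.g. motility, viability, DNA integrity, acrosome integrity
n = 43
X = rng.normal(size=(n, 4))
true_coef = np.array([8.0, 5.0, 6.0, 4.0])                 # invented effects
prc = 50 + X @ true_coef + rng.normal(scale=3.0, size=n)   # pregnancy rate/cycle

# Ordinary least squares: PRC ~ intercept + quality variables
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, prc, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((prc - pred) ** 2) / np.sum((prc - prc.mean()) ** 2)
print(f"R^2 = {r2:.3f}")

# Fertility groups from predicted PRC, mirroring the paper's cut-offs:
# 0 = subfertile (<45%), 1 = intermediate (45-55%), 2 = fertile (>55%)
groups = np.digitize(pred, [45.0, 55.0])
```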

  11. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration, and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.

  12. Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.

    PubMed

    Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat

    2014-01-01

    The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its worldwide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrence of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporating meteorological variables as inputs did not improve the performance of any multivariable model, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. Cloud cover was also a significant covariate in two SARIMA models, and air temperature along with RH might be a predictor when a moving average (MA) order at lag 1 month is considered.
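    The modeling idea can be sketched numerically. The snippet below uses ordinary least squares on lagged terms as a simplified stand-in for a SARIMA(1,0,0)(1,0,0)_12 fit with relative humidity as an exogenous covariate; the monthly series is synthetic, not the Bangladesh surveillance data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly outbreak index with annual seasonality partly driven
# by relative humidity (RH); invented dynamics for illustration
n, s = 240, 12
t = np.arange(n)
rh = 70 + 15 * np.sin(2 * np.pi * t / s) + rng.normal(0, 5, n)
y = np.zeros(n)
for i in range(s, n):
    y[i] = 0.4 * y[i - 1] + 0.3 * y[i - s] + 0.1 * rh[i] + rng.normal(0, 1)

# OLS stand-in for SARIMA(1,0,0)x(1,0,0)_12 with RH as covariate:
# y_t = c + phi*y_{t-1} + Phi*y_{t-12} + b*RH_t + noise
idx = np.arange(s, n)
A = np.column_stack([np.ones(len(idx)), y[idx - 1], y[idx - s], rh[idx]])
beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
c, phi, Phi, b = beta
print(f"phi={phi:.2f}  Phi={Phi:.2f}  b_RH={b:.3f}")
```

    A positive, well-determined b_RH here plays the role of RH being a significant covariate in the fitted models.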

  13. Individual differences and time-varying features of modular brain architecture.

    PubMed

    Liao, Xuhong; Cao, Miao; Xia, Mingrui; He, Yong

    2017-05-15

    Recent studies have suggested that human brain functional networks are topologically organized into functionally specialized but inter-connected modules to facilitate efficient information processing and highly flexible cognitive function. However, these studies have mainly focused on group-level network modularity analyses using "static" functional connectivity approaches. How these modular brain structures vary across individuals and spontaneously reconfigure over time remains largely unknown. Here, we employed multiband resting-state functional MRI data (N=105) from the Human Connectome Project and a graph-based modularity analysis to systematically investigate individual variability and dynamic properties in modular brain networks. We showed that the modular structures of brain networks vary dramatically across individuals, with higher modular variability primarily in the association cortex (e.g., fronto-parietal and attention systems) and lower variability in the primary systems. Moreover, brain regions spontaneously changed their module affiliations on a temporal scale of seconds, which cannot simply be attributed to head motion or sampling error. Interestingly, the spatial pattern of intra-subject dynamic modular variability largely overlapped with that of inter-subject modular variability, both of which were highly reproducible across repeated scanning sessions. Finally, the regions with remarkable individual/temporal modular variability were closely associated with network connectors and the number of cognitive components, suggesting a potential contribution to information integration and flexible cognitive function. Collectively, our findings highlight individual modular variability and the notable dynamic characteristics of large-scale brain networks, enhancing our understanding of the neural substrates underlying individual differences in a variety of cognitive functions and behaviors. Copyright © 2017 Elsevier Inc. All rights reserved.
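    Graph-based modularity analyses of this kind rest on Newman's modularity Q; a minimal sketch on a toy two-module network (not connectome data) is:

```python
import numpy as np

def modularity(A, labels):
    # Newman's Q = (1/2m) * sum_ij [A_ij - k_i*k_j/2m] * delta(c_i, c_j)
    k = A.sum(axis=1)
    two_m = A.sum()
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

rng = np.random.default_rng(3)

# Toy network: two planted 10-node modules, dense within, sparse between
labels = np.repeat([0, 1], 10)
P = np.where(labels[:, None] == labels[None, :], 0.8, 0.05)
A = (rng.random((20, 20)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                               # undirected, no self-loops

q_true = modularity(A, labels)
q_shuf = modularity(A, rng.permutation(labels))
print(f"Q(planted)={q_true:.2f}  Q(shuffled)={q_shuf:.2f}")
```

    Tracking how each node's best-Q module label changes across sliding windows is the kind of computation behind the module-affiliation variability described above.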

  14. Industry structures in private dental markets in Finland.

    PubMed

    Widström, E; Mikkola, H

    2012-12-01

    To use industrial organisation and organisational ecology research methods to survey industry structures and performance in the markets for private dental services and the effect of competition. Data on practice characteristics, performance, and perceived competition were collected from full-time private dentists (n = 1,121) using a questionnaire. The response rate was 59.6%. Cluster analysis was used to identify practice types based on service differentiation and process integration variables formulated from the questionnaire. Four strategic groups were identified in the Finnish market: solo practices formed one distinct group, and group practices were classified into three clusters: Integrated practices, Small practices, and Loosely integrated practices. Statistically significant differences were found in performance and perceived competitiveness between the groups. Integrated practices, with the highest level of process integration and service differentiation, performed better than solo and small practices. Moreover, loosely integrated and small practices outperformed solo practices. Competitive intensity was highest among small practices, which had a low level of service differentiation, and was above average among solo practices. Private dental care providers that had differentiated their services from public services and that had a high number of integrated service production processes enjoyed higher performance and lower competitive pressures than those who had not.

  15. Social Cognitive Predictors of Pre-Service Teachers' Technology Integration Performance

    ERIC Educational Resources Information Center

    Perkmen, Serkan; Pamuk, Sonmez

    2011-01-01

    The main objective of the study was to examine interrelationships among social cognitive variables (self-efficacy, outcome expectations, and performance goals) and their role in predicting pre-service teachers' technology integration performance. Although researchers have examined the role of these variables in the teacher-education context, the…

  16. Integration of aerial imaging and variable-rate technology for site-specific aerial herbicide application

    USDA-ARS?s Scientific Manuscript database

    As remote sensing and variable rate technology are becoming more available for aerial applicators, practical methodologies on effective integration of these technologies are needed for site-specific aerial applications of crop production and protection materials. The objectives of this study were to...

  17. CHARACTERIZATION OF DATA VARIABILITY AND UNCERTAINTY: HEALTH EFFECTS ASSESSMENTS IN THE INTEGRATED RISK INFORMATION SYSTEM (IRIS)

    EPA Science Inventory

    In response to a Congressional directive contained in HR 106-379 regarding EPA's appropriations for FY2000, EPA has undertaken an evaluation of the characterization of data variability and uncertainty in its Integrated Risk Information System (IRIS) health effects information dat...

  18. Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume

    2013-01-01

    Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in both methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
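    The FAST idea of sampling an ensemble from a moving window along a single trajectory can be sketched as follows (toy one-dimensional wave field with invented sizes, not an ocean model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "model trajectory": a slowly propagating wave plus noise
nx, nt = 50, 400
xgrid = np.arange(nx)
traj = np.array([np.sin(2 * np.pi * (xgrid - 0.3 * t) / nx) for t in range(nt)])
traj += 0.1 * rng.standard_normal(traj.shape)

# FAST-like step: treat the most recent `window` states along the single
# trajectory as ensemble members instead of running many model integrations
window = 40
ens = traj[-window:]
anom = ens - ens.mean(axis=0)             # ensemble anomalies
B = anom.T @ anom / (window - 1)          # background-error covariance estimate
print("covariance estimate shape:", B.shape)
```

    For a phase-error-dominated flow, time-lagged states differ from one another much as independent forecast realizations would, which is what justifies the substitution.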

  19. Nonlinear integral sliding mode control design of photovoltaic pumping system: Real time implementation.

    PubMed

    Chihi, Asma; Ben Azza, Hechmi; Jemli, Mohamed; Sellami, Anis

    2017-09-01

    The aim of this paper is to provide high-performance control of a pumping system. The proposed method is an indirect field-oriented control based on the Sliding Mode (SM) technique. The first contribution of this work is the design of modified switching surfaces, obtained by adding an integral action to the controlled variables. Then, in order to prevent the chattering phenomenon, a modified nonlinear component is developed. The SM concept and a Lyapunov function are combined to compute the Sliding Mode Control (SMC) gains. Besides, the motor performance is validated by numerical simulations and real-time implementation using a dSpace system with a DS1104 controller board. Also, to show the effectiveness of the proposed approach, the obtained results are compared with other techniques such as conventional PI, Proportional Sliding Mode (PSM), and backstepping controls. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
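    A minimal sketch of an integral sliding-mode loop on a toy first-order plant, with tanh standing in for the sign function as one common chattering remedy; the plant, gains, and disturbance are invented for illustration and are not the paper's pump model.

```python
import numpy as np

# Toy plant dx/dt = a*x + b*u + d(t), tracking x_ref.
# Integral sliding surface s = e + lam * int(e), e = x - x_ref;
# tanh(s/phi) softens the discontinuous sign(s) (boundary-layer idea).
a, b = -1.0, 2.0
lam, k, phi = 2.0, 5.0, 0.05
dt = 1e-3
x, x_ref, ie = 0.0, 1.0, 0.0

for step in range(5000):                  # 5 s of simulated time
    t = step * dt
    d = 0.3 * np.sin(5 * t)               # bounded disturbance
    e = x - x_ref
    ie += e * dt
    s = e + lam * ie
    u = -k * np.tanh(s / phi) / b         # smoothed switching control
    x += (a * x + b * u + d) * dt

err = abs(x - x_ref)
print(f"tracking error after 5 s: {err:.4f}")
```

    The integral term in the surface removes the steady-state offset that a plain proportional surface would leave under the constant part of the load.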

  20. An Integrated Gate Driver in 4H-SiC for Power Converter Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Milton Nance; Frank, Steven Shane; Britton, Charles

    2014-01-01

    A gate driver fabricated in a 2-um 4H silicon carbide (SiC) process is presented. This process was optimized for vertical power MOSFET fabrication but accommodated integration of a few low-voltage device types, including N-channel MOSFETs, resistors, and capacitors. The gate driver topology incorporates an input level translator, variable power connections, and separate power supply connectivity allowing selection of the output signal drive amplitude. The output stage utilizes a source-follower pull-up device that is both overdriven and body-source connected to improve rise-time behavior. Full characterization of this design driving a SiC power MOSFET is presented, including rise and fall times, propagation delays, and power consumption. All parameters were measured at elevated temperatures exceeding 300 °C. Details of the custom test system hardware and software utilized for gate driver testing are also provided.

  1. The influence of culture of honor and emotional intelligence in the acculturation of Moroccan immigrant women.

    PubMed

    Lopez-Zafra, Esther; El Ghoudani, Karima

    2014-01-01

    Migration is a normal process of people seeking new opportunities, work, or leisure in other societies. The way people adapt to a new country (acculturation) is a complex process in which immigrants' evaluations of their culture of origin and their perceptions of the host country interact. The combination of these two factors produces four types of acculturation: separation, assimilation, integration, and marginalization. Several variables, such as personality, attitudes, and emotional intelligence, have been studied to help explain this process. However, the impact of a culture of honor and its interaction with other variables remains an open question that may help to explain how migrants can better adjust to their host culture. In this study, we examine the influence of culture of honor (social) and emotional intelligence (individual) on acculturation. In a sample of 129 Moroccan women (mean age = 29, SD = 9.40) who had immigrated to Spain (mean time in Spain = 6 years, SD = 3.60), we investigated the relations among the variables of interest. Our results show that no significant differences emerged in culture of honor (CH) scores across the acculturation strategies of the Moroccan immigrant women, F(3, 99) = .233, p = .87. However, women who preferred the integration strategy scored highest on emotional intelligence (EI), whereas the assimilated immigrants showed the lowest EI scores, F(3, 92) = 4.63, p = .005. Additionally, only in the case of integration does EI mediate between CH and the value given to the immigrant's own and host cultures (p < .001).

  2. Semantic Data Integration and Ontology Use within the Global Earth Observation System of Systems (GEOSS) Global Water Cycle Data Integration System

    NASA Astrophysics Data System (ADS)

    Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.

    2008-12-01

    The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on the sustainable development of water resources and the improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS), already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H), either as one of the main components or linked with the Asian system, to constitute the modeling system of GEOSS for water cycle monitoring. We further propose an expanded, augmented capability to run multiple grids, to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated within the modeling framework. Semantic data integration will provide the automation to enable this system to operate in near-real-time. Multiple data collections for ground water, precipitation, soil moisture satellite data (such as SMAP), and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together so that they can cover an even wider net of data sources. The goal is to develop the means to link together the upper-level and lower-level ontologies and to have these registered within the GEOSS Registry. Actual operational ontologies that would link to models or to data collections containing input variables required by models would have to be nested underneath this top-level ontology, analogous to the mapping that has been carried out among ontologies within GEON.

  3. Spike timing precision of neuronal circuits.

    PubMed

    Kilinc, Deniz; Demir, Alper

    2018-06-01

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
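    The synaptic-integration result, that the output spike phase has the variance of the sample mean of the input phases, can be checked in a few lines (idealized averaging with hypothetical numbers, not the paper's circuit models):

```python
import numpy as np

rng = np.random.default_rng(5)

# Idealized synaptic integration: the output spike phase behaves like the
# sample mean of the input phases, so jitter variance falls as 1/N
n_inputs, n_trials, sigma = 16, 20000, 1.0
phases = rng.normal(0.0, sigma, size=(n_trials, n_inputs))
out_phase = phases.mean(axis=1)

var_in = phases.var()
var_out = out_phase.var()
print(f"input var={var_in:.3f}  output var={var_out:.4f}  (~ input/{n_inputs})")
```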

  4. Agile supply chain capabilities: emerging patterns as a determinant of competitive objectives

    NASA Astrophysics Data System (ADS)

    Yusuf, Yahaya Y.; Adeleye, E. O.; Sivayoganathan, K.

    2001-10-01

    Turbulent change caused by factors such as changing customer and technological requirements threatens manufacturers with shorter product life cycles, lower profits, and bleak survival prospects. Therefore, several companies are stressing flexibility and agility in order to respond, in real time, to the unique needs of customers and markets. However, the required resource competencies are often difficult for single companies to mobilise and retain. It is therefore imperative for companies to co-operate and leverage complementary competencies. To this end, legally separate and spatially distributed companies are becoming integrated through Internet-based technologies. The paper reviews emerging patterns in supply chain integration and explores the relationship between these patterns and the attainment of competitive objectives. The results reported in the paper are based on data from a questionnaire survey involving 600 companies in the UK, as part of a larger study of agile manufacturing. The study was driven by a conceptual model that relates supply chain practices to competitive objectives. The analysis uses factor analysis to reduce the research variables to a few principal components; subsequently, multiple regression was conducted to study the relationships among the reduced variables. The results validate the proposed conceptual model and lend credence to current thinking that supply chain integration is a vital tool for competitive advantage.
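    The two-stage analysis (component reduction followed by regression) can be sketched with principal-components regression on synthetic survey data; the variables, loadings, and scores below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic survey: 10 correlated practice indicators driven by 2 latent
# "supply chain practice" factors; y is a competitive-objective score
n, p = 200, 10
latent = rng.normal(size=(n, 2))
load = rng.normal(size=(2, p))
X = latent @ load + 0.3 * rng.normal(size=(n, p))
y = latent @ np.array([1.5, -0.8]) + 0.2 * rng.normal(size=n)

# Step 1: reduce the indicators to two principal components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Step 2: multiple regression of the objective on the component scores
A = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 with two components = {r2:.2f}")
```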

  5. Annual Atmospheric Corrosion of Carbon Steel Worldwide. An Integration of ISOCORRAG, ICP/UNECE and MICAT Databases

    PubMed Central

    Chico, Belén; de la Fuente, Daniel; Díaz, Iván; Simancas, Joaquín; Morcillo, Manuel

    2017-01-01

    In the 1980s, three ambitious international programmes on atmospheric corrosion (ISOCORRAG, ICP/UNECE and MICAT), involving the participation of a total of 38 countries on four continents (Europe, America, Asia and Oceania), were launched. Though each programme has its own particular characteristics, the similarity of the basic methodologies used makes it possible to integrate the databases obtained in each case. This paper addresses such an integration with the aim of establishing simple universal damage functions (DF) between first-year carbon steel corrosion in the different atmospheres and available environmental variables, both meteorological (temperature (T), relative humidity (RH), precipitation (P), and time of wetness (TOW)) and pollution-related (SO2 and NaCl). In the statistical processing of the data, a distinction was made between marine atmospheres and those in which the chloride deposition rate is insignificant (<3 mg/m2.d). In the DF established for non-marine atmospheres, a great influence of the SO2 content of the atmosphere was seen, as well as lesser effects of the meteorological parameters RH and T. NaCl and SO2 pollutants, in that order, are the most influential variables in marine atmospheres, along with a smaller impact of TOW. PMID:28772966
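    A damage function of this kind is commonly fitted by log-linear least squares; the sketch below uses synthetic site data and invented coefficients, purely to illustrate the form C = A·SO2^a·exp(b·RH + c·T) for non-marine sites.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic non-marine sites (invented coefficients): first-year corrosion
# C = A * SO2^a * exp(b*RH + c*T), i.e. log C is linear in log(SO2), RH, T
n = 120
so2 = rng.uniform(1, 80, n)     # SO2 deposition, mg/(m2*d)
rh = rng.uniform(40, 90, n)     # relative humidity, %
T = rng.uniform(-2, 28, n)      # temperature, deg C
logC = 1.0 + 0.5 * np.log(so2) + 0.02 * rh + 0.01 * T + rng.normal(0, 0.1, n)

M = np.column_stack([np.ones(n), np.log(so2), rh, T])
beta, *_ = np.linalg.lstsq(M, logC, rcond=None)
print("fitted [log A, a, b, c]:", np.round(beta, 3))
```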

  6. Advances in deep-UV processing using cluster tools

    NASA Astrophysics Data System (ADS)

    Escher, Gary C.; Tepolt, Gary; Mohondro, Robert D.

    1993-09-01

    Deep-UV laser lithography has shown the capability of supporting the manufacture of multiple generations of integrated circuits (ICs) due to its wide process latitude and depth of focus (DOF) for 0.2 micrometer to 0.5 micrometer feature sizes. This capability has been attained through improvements in deep-UV wide-field lens technology, excimer lasers, steppers, and chemically amplified positive deep-UV resists. Chemically amplified deep-UV resists are required for 248 nm lithography due to the poor absorption and sensitivity of conventional novolac resists. The acid-catalyzed processes of the new resists require control of the thermal history and environmental conditions of the lithographic process. Work is currently underway at several resist vendors to reduce the need for these controls, but practical manufacturing solutions exist today. One of these solutions is the integration of steppers and resist tracks into a 'cluster tool' or 'Lithocell' to ensure a consistent thermal profile for the resist process and reduce the time the resist is exposed to atmospheric contamination. The work reported here presents processing and system integration results with a Machine Technology, Inc. (MTI) post-exposure bake (PEB) track interfaced with an advanced GCA XLS 7800 deep-UV stepper [31 mm diameter, variable NA (0.35 - 0.53) and variable sigma (0.3 - 0.74)].

  7. Effects of tetrabromobisphenol A on DNA integrity, oxidative stress, and sterlet (Acipenser ruthenus) spermatozoa quality variables.

    PubMed

    Linhartova, Pavla; Gazo, Ievgeniia; Shaliutina-Kolesova, Anna; Hulak, Martin; Kaspar, Vojtech

    2015-07-01

    The sperm of sterlet (Acipenser ruthenus) was used to investigate the effect of the xenobiotic tetrabromobisphenol A (TBBPA) on sperm quality variables (ATP content, spermatozoa motility, and velocity), DNA integrity, and oxidative stress indices. Sperm was diluted to obtain a spermatozoa density of 5 × 10(8) cells/mL and exposed for 2 h to final TBBPA concentrations of 0.5, 1.75, 2.5, 5, and 10 μg/L. The oxidative stress indices, including lipid peroxidation, carbonyl derivatives of proteins, and antioxidant activity, were significantly higher at increased concentrations of TBBPA. There was significantly less intracellular ATP in sperm samples at TBBPA concentrations of 2.5 μg/L and above. Spermatozoa velocity and percent motile sperm were significantly lower at each sampling time post-activation compared to controls. DNA damage, expressed as percent DNA in Tail and Olive Tail moment, was significantly higher at exposures ≥2.5 μg/L TBBPA. The results demonstrated that TBBPA and other xenobiotics can induce reactive oxygen species stress in fish spermatozoa, which can impair sperm quality, DNA integrity, ATP content, and the antioxidant defense system. This study confirmed that fish spermatozoa can be used in in vitro assays for monitoring residual pollution in aquatic environments. © 2014 Wiley Periodicals, Inc.

  8. Research on the Integration of Bionic Geometry Modeling and Simulation of Robot Foot Based on Characteristic Curve

    NASA Astrophysics Data System (ADS)

    He, G.; Zhu, H.; Xu, J.; Gao, K.; Zhu, D.

    2017-09-01

    The bionic study of shape is an important aspect of research on bionic robots, and its implementation cannot be separated from shape modeling and numerical simulation of the bionic object, which are tedious and time-consuming. In order to improve the efficiency of bionic shape design, the feet of animals living in soft soil and swamp environments are taken as bionic objects, and characteristic skeleton curves, section curves, joint rotation variables, position and other parameters are used to describe the shape and position information of the bionic object's sole, toes and flipper. The geometric model of the bionic object is established through parameterization of the characteristic curves and variables. On this basis, an integration framework for parametric modeling, finite element modeling, dynamic analysis and post-processing of the sinking process in soil is proposed in this paper. Examples of a bionic ostrich foot and a bionic duck foot are also given. The parametric modeling and integration technique enables rapid improved design based on the bionic object, greatly improves the efficiency and quality of robot foot bionic design, and has important practical significance for improving the bionic design of the robot foot's shape and structure.

  9. Measuring Zonal Transport Variability of the Antarctic Circumpolar Current Using GRACE Ocean Bottom Pressure

    NASA Astrophysics Data System (ADS)

    Makowski, J.; Chambers, D. P.; Bonin, J. A.

    2012-12-01

    Previous studies have suggested that ocean bottom pressure (OBP) can be used to measure the transport variability of the Antarctic Circumpolar Current (ACC). Using OBP data from the JPL ECCO model and the Gravity Recovery and Climate Experiment (GRACE), we examine the zonal transport variability of the ACC integrated between the major fronts over 2003-2010. The JPL ECCO data are used to determine average front positions for the time period studied, as well as where transport is mainly zonal. Statistical analysis will be conducted to determine the uncertainty of the GRACE observations using a simulated data set. We will also begin looking at low-frequency changes and how coherent transport variability is from region to region of the ACC. Correlations between bottom pressure south of the ACC and the average basin transports will also be calculated to determine the feasibility of using bottom pressure south of the ACC as a means of describing ACC dynamics and transport.

  10. Using Equation-Free Computation to Accelerate Network-Free Stochastic Simulation of Chemical Kinetics.

    PubMed

    Lin, Yen Ting; Chylek, Lily A; Lemons, Nathan W; Hlavacek, William S

    2018-06-21

    The chemical kinetics of many complex systems can be concisely represented by reaction rules, which can be used to generate reaction events via a kinetic Monte Carlo method that has been termed network-free simulation. Here, we demonstrate accelerated network-free simulation through a novel approach to equation-free computation. In this process, variables are introduced that approximately capture system state. Derivatives of these variables are estimated using short bursts of exact stochastic simulation and finite differencing. The variables are then projected forward in time via a numerical integration scheme, after which a new exact stochastic simulation is initialized and the whole process repeats. The projection step increases efficiency by bypassing the firing of numerous individual reaction events. As we show, the projected variables may be defined as populations of building blocks of chemical species. The maximal number of connected molecules included in these building blocks determines the degree of approximation. Equation-free acceleration of network-free simulation is found to be both accurate and efficient.
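
    The burst-estimate-project cycle described above can be sketched on a toy birth-death process (an illustrative assumption: the function names and the single population variable stand in for the paper's rule-based models and building-block populations, which are not reproduced here):

```python
import random

def ssa_burst(x, rate_in, rate_out, t_burst):
    """Exact Gillespie (SSA) simulation of a toy birth-death process
    for a short burst of length t_burst; returns the final population."""
    t = 0.0
    while True:
        a1, a2 = rate_in, rate_out * x          # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)             # time to next event
        if t > t_burst:
            return x
        x += 1 if random.random() < a1 / a0 else -1

def equation_free_step(x, rate_in, rate_out, t_burst, dt_project):
    """One equation-free cycle: a burst of exact simulation, a
    finite-difference slope estimate, then a forward-Euler projection
    of the coarse variable, bypassing many individual reaction events."""
    x_new = ssa_burst(x, rate_in, rate_out, t_burst)
    slope = (x_new - x) / t_burst               # finite differencing
    return max(0, round(x_new + slope * dt_project))  # projection step

random.seed(1)
x = 100
for _ in range(20):
    x = equation_free_step(x, rate_in=50.0, rate_out=0.4,
                           t_burst=0.05, dt_project=0.2)
print(x)  # fluctuates around the steady state rate_in/rate_out = 125
```

    Each cycle simulates exactly for a short burst and then leaps forward by dt_project, which is where the acceleration comes from; accuracy degrades as dt_project grows relative to the system's relaxation time.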

  11. Predicting alcohol consumption and binge drinking in company employees: an application of planned behaviour and self-determination theories.

    PubMed

    Hagger, Martin S; Lonsdale, Adam J; Hein, Vello; Koka, Andre; Lintunen, Taru; Pasi, Heidi; Lindwall, Magnus; Rudolfsson, Lisa; Chatzisarantis, Nikos L D

    2012-05-01

    This study tested an integrated model of the psychosocial determinants of alcohol-related behaviour among company employees from four nations. A motivational sequence was proposed in which motivational orientations from self-determination theory influenced intentions to consume alcohol within guideline limits and alcohol-related behaviour via the mediation of the theory of planned behaviour variables of attitude, subjective norms, and perceived behavioural control (PBC). The study used a three-wave prospective design with self-reported psychological and behavioural measures. Company employees (N = 486, males = 225, females = 261; M age = 30.41, SD = 8.31) from four nations (Estonia, Finland, Sweden, and UK) completed measures of autonomous and controlled motivation from self-determination theory, attitudes, subjective norms, PBC, intentions from the theory of planned behaviour, and self-reported measures of past alcohol consumption and binge-drinking occasions at the first time point (time 1). Follow-up psychological and behavioural measures were taken one month later (time 2) and follow-up behavioural measures taken a further 2 months later (time 3). Path analyses supported the motivational sequence with identified regulation (time 1), predicting intentions (time 1), and alcohol units consumed (time 2). The effects were indirect via the mediation of attitudes and PBC (time 1). A similar pattern of effects was found for the effect of time 2 psychological variables on time 3 units of alcohol consumed. There was little support for the effects of the psychological variables on binge-drinking behaviour. Findings provide new information on the psychosocial determinants of alcohol behaviour in company employees and the processes involved. Results may provide impetus for the development of interventions to reduce alcohol consumption. ©2011 The British Psychological Society.

  12. Further distinctive investigations of the Sumudu transform

    NASA Astrophysics Data System (ADS)

    Belgacem, Fethi Bin Muhammad; Silambarasan, Rathinavel

    2017-01-01

    The Sumudu transform of a time function f(t) is computed by making the Sumudu transform variable u a factor of the argument of f(t) and then integrating f(ut) against exp(-t). Because u enters as a factor of the argument, f(ut) preserves the units and dimension of the original function f(t); this preservation property distinguishes the Sumudu transform from other integral transforms. With this definition, the complete set of related properties is derived for the Sumudu transform. A fragment of a symbolic C++ program is given for computing the Sumudu transform as a series, and a Maple procedure is given for computing it in closed form. The method proposed herein depends neither on homotopy methods such as HPM and HAM nor on decomposition methods such as ADM.
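
    The definition S[f](u) = ∫₀^∞ f(ut) e^(-t) dt can be checked numerically against known closed forms; the quadrature sketch below is an illustration, not the paper's symbolic C++ or Maple code:

```python
import math

def sumudu(f, u, t_max=60.0, n=50000):
    """Numerically approximate the Sumudu transform
    S[f](u) = integral_0^inf f(u*t) * exp(-t) dt
    via the composite trapezoidal rule, truncated at t_max
    (the exp(-t) weight makes the tail negligible)."""
    h = t_max / n
    total = 0.5 * (f(0.0) + f(u * t_max) * math.exp(-t_max))
    for k in range(1, n):
        t = k * h
        total += f(u * t) * math.exp(-t)
    return total * h

# Known closed forms: S[1](u) = 1, S[t](u) = u, S[sin t](u) = u/(1+u^2)
print(round(sumudu(lambda t: 1.0, 2.0), 4))   # ≈ 1.0
print(round(sumudu(lambda t: t, 2.0), 4))     # ≈ 2.0
print(round(sumudu(math.sin, 0.5), 4))        # ≈ 0.4
```

    Note how S[t](u) = u, in the same units as t: the transform variable u directly carries the units of the original argument, which is the preservation property the abstract highlights.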

  13. High-quality weather data for grid integration studies

    NASA Astrophysics Data System (ADS)

    Draxl, C.

    2016-12-01

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. In this talk we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets will be presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model, run on a 2-km grid over the continental United States at a 5-min resolution, are now publicly available for more than 126,000 land-based and offshore wind power production sites. The Solar Integration National Dataset (SIND) is time-synchronized with the WIND Toolkit and will allow for combined wind-solar grid integration studies. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. Grid integration studies are also carried out in various countries that aim to increase their wind and solar penetration through combined wind and solar integration data sets. We will present a multi-year effort to directly support India's 24x7 energy access goal through a suite of activities aimed at enabling large-scale deployment of clean energy and energy efficiency.
Another current effort is the North American Renewable Integration Study, which aims to provide a seamless data set across borders for a whole continent, in order to simulate and analyze the impacts of potential future large wind and solar power penetrations on bulk power system operations.

  14. Concept for facilitating analyst-mediated interpretation of qualitative chromatographic-mass spectral data: an alternative to manual examination of extracted ion chromatograms.

    PubMed

    Borges, Chad R

    2007-07-01

    A chemometrics-based data analysis concept has been developed as a substitute for manual inspection of extracted ion chromatograms (XICs), which facilitates rapid, analyst-mediated interpretation of GC- and LC/MS(n) data sets from samples undergoing qualitative batchwise screening for prespecified sets of analytes. Automatic preparation of data into two-dimensional row space-derived scatter plots (row space plots) eliminates the need to manually interpret hundreds to thousands of XICs per batch of samples while keeping all interpretation of raw data directly in the hands of the analyst, saving great quantities of human time without loss of integrity in the data analysis process. For a given analyte, two analyte-specific variables are automatically collected by a computer algorithm and placed into a data matrix (i.e., placed into row space): the first variable is the ion abundance corresponding to scan number x and analyte-specific m/z value y, and the second variable is the ion abundance corresponding to scan number x and analyte-specific m/z value z (a second ion). These two variables serve as the two axes of the aforementioned row space plots. In order to collect appropriate scan number (retention time) information, it is necessary to analyze, as part of every batch, a sample containing a mixture of all analytes to be tested. When pure standard materials of tested analytes are unavailable, but representative ion m/z values are known and retention time can be approximated, data are evaluated based on two-dimensional scores plots from principal component analysis of small time range(s) of mass spectral data. The time-saving efficiency of this concept is directly proportional to the percentage of negative samples and to the total number of samples processed simultaneously.
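
    The two-variable construction can be sketched as follows (a hypothetical mini-example: the sample names, m/z values, and the simple above-blank screening rule are illustrative assumptions, not the paper's algorithm):

```python
def row_space_point(raw, scan_x, mz_y, mz_z):
    """For one sample, pull the two analyte-specific variables: the ion
    abundance at (scan x, m/z y) and at (scan x, m/z z, a second ion).
    raw: dict mapping (scan, mz) -> ion abundance."""
    return (raw.get((scan_x, mz_y), 0.0), raw.get((scan_x, mz_z), 0.0))

# toy batch: one blank, one negative, one clearly positive sample
samples = {
    "blank":    {(120, 152.1): 80.0,   (120, 110.0): 60.0},
    "negative": {(120, 152.1): 95.0,   (120, 110.0): 70.0},
    "positive": {(120, 152.1): 9500.0, (120, 110.0): 4200.0},
}

# the two variables become the axes of the row space plot
points = {name: row_space_point(raw, 120, 152.1, 110.0)
          for name, raw in samples.items()}

# crude screen: flag samples with both ions well above the blank,
# leaving final interpretation of the scatter plot to the analyst
flagged = [name for name, (a, b) in points.items()
           if a > 10 * points["blank"][0] and b > 10 * points["blank"][1]]
print(flagged)  # ['positive']
```

    In practice the analyst inspects the scatter of all samples at once, so one plot per analyte replaces one XIC per analyte per sample.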

  15. Sparse representation based biomarker selection for schizophrenia with integrated analysis of fMRI and SNPs.

    PubMed

    Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping

    2014-11-15

    Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed using individual data analysis. Because diverse data modalities have different natures, data integration is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) using weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM model to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim to identify biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by a ten-fold cross validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis. In addition, we compared with traditional statistical methods for univariate data analysis (Chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers that show stronger capability in distinguishing schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
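
    The sparsity mechanism behind such variable selection can be illustrated with a plain coordinate-descent lasso. This is a generic stand-in, not the paper's SRVS or GSM algorithm: an L1 penalty drives the coefficients of uninformative variables exactly to zero, and the surviving variables are the "selected" ones.

```python
import random

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent lasso with soft-thresholding.
    X: list of n rows of p features; y: list of n responses;
    lam: L1 penalty strength. Returns the coefficient vector w."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n)) or 1.0
            # soft-thresholding: small correlations are zeroed out
            w[j] = (1 if rho > 0 else -1) * max(abs(rho) - lam, 0.0) / z
    return w

random.seed(7)
n, p = 60, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
# only features 0 and 3 carry signal; the rest are noise
y = [2.0 * row[0] - 1.5 * row[3] + random.gauss(0, 0.1) for row in X]
w = lasso_cd(X, y, lam=6.0)
selected = sorted(j for j in range(p) if abs(w[j]) > 0.1)
print(selected)
```

    With many more variables than samples (759,075 SNPs plus 153,594 voxels for 208 subjects), some form of sparsity constraint of this kind is what makes the selection problem tractable at all.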

  16. EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Junjie; Wu, Xin; Huang, Guoqing

    2017-01-01

    We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida’s triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next integration step. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in preserving the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well, but the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for the orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful in discussing the long-term evolution of inseparable Hamiltonian problems.
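
    In the notation commonly used for such extended-phase-space splittings (a sketch under assumed notation, following Pihajoki-type constructions; the symbols are not necessarily the paper's own), the setup reads:

```latex
% Double the phase space, (q, p) \to (q, p, \tilde q, \tilde p), with the
% extended Hamiltonian split into two cross-coupled, separable parts:
\Gamma(q, p, \tilde q, \tilde p) \;=\; H(q, \tilde p) + H(\tilde q, p),
% each part integrable by explicit maps even when H itself is inseparable.
% After every composed integration step, the midpoint permutation resets
% both copies of each variable to their common mean:
q,\ \tilde q \;\leftarrow\; \tfrac{1}{2}\,(q + \tilde q),
\qquad
p,\ \tilde p \;\leftarrow\; \tfrac{1}{2}\,(p + \tilde p).
```

    Keeping the two copies equal is what makes the extended-phase-space trajectory track a single physical trajectory, which is the role the abstract attributes to the midpoint permutations.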

  17. Optimal information networks: Application for data-driven integrated health in populations

    PubMed Central

    Servadio, Joseph L.; Convertino, Matteo

    2018-01-01

    Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440

  18. Automated multivariate analysis of multi-sensor data submitted online: Real-time environmental monitoring.

    PubMed

    Eide, Ingvar; Westad, Frank

    2018-01-01

    A pilot study demonstrating real-time environmental monitoring with automated multivariate analysis of multi-sensor data submitted online has been performed at the cabled LoVe Ocean Observatory located at 258 m depth 20 km off the coast of Lofoten-Vesterålen, Norway. The major purpose was efficient monitoring of many variables simultaneously and early detection of changes and time-trends in the overall response pattern before changes were evident in individual variables. The pilot study was performed with 12 sensors from May 16 to August 31, 2015. The sensors provided data for chlorophyll, turbidity, conductivity, temperature (three sensors), salinity (calculated from temperature and conductivity), biomass at three different depth intervals (5-50, 50-120, 120-250 m), and current speed measured in two directions (east and north) using two sensors covering different depths with overlap. A total of 88 variables were monitored, 78 from the two current speed sensors. The time-resolution varied, thus the data had to be aligned to a common time resolution. After alignment, the data were interpreted using principal component analysis (PCA). Initially, a calibration model was established using data from May 16 to July 31. The data on current speed from two sensors were subject to two separate PCA models and the score vectors from these two models were combined with the other 10 variables in a multi-block PCA model. The observations from August were projected on the calibration model consecutively one at a time and the result was visualized in a score plot. Automated PCA of multi-sensor data submitted online is illustrated with an attached time-lapse video covering the relatively short time period used in the pilot study. Methods for statistical validation, and warning and alarm limits are described. Redundant sensors enable sensor diagnostics and quality assurance. In a future perspective, the concept may be used in integrated environmental monitoring.
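
    The calibrate-then-project pattern described above (fit PCA on a training window, then project later observations one at a time into the score plot) can be sketched as follows, with synthetic data standing in for the sensor streams and numpy assumed available; this is a minimal illustration, not the study's multi-block model:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "calibration period": 100 observations of 4 correlated sensor variables
base = rng.normal(size=(100, 1))
X_cal = np.hstack([base + 0.1 * rng.normal(size=(100, 1)) for _ in range(4)])

# center and scale using calibration-period statistics only
mu, sd = X_cal.mean(axis=0), X_cal.std(axis=0)
Z = (X_cal - mu) / sd

# PCA via SVD; keep the first two components as the calibration model
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:2].T                       # loadings, shape (4, 2)

def project(x_new):
    """Project one new observation onto the calibration model,
    yielding a single point in the two-dimensional score plot."""
    return ((x_new - mu) / sd) @ P

t = project(X_cal[0])
print(t.shape)  # (2,)
```

    New observations that drift away from the calibration-period score cloud signal a change in the overall response pattern, which is where warning and alarm limits would be drawn.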

  19. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System.

    PubMed

    Zhang, Zhen; Ma, Cheng; Zhu, Rong

    2017-08-23

    Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and achieved amazing success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementation of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have drawn a number of applications from scientists. In this paper, we propose an FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO) real-time temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs, which is flexible, highly energy-efficient, and can be applied in many areas.

  20. A FPGA-Based, Granularity-Variable Neuromorphic Processor and Its Application in a MIMO Real-Time Control System

    PubMed Central

    Zhang, Zhen; Zhu, Rong

    2017-01-01

    Artificial Neural Networks (ANNs), including Deep Neural Networks (DNNs), have become the state-of-the-art methods in machine learning and achieved amazing success in speech recognition, visual object recognition, and many other domains. There are several hardware platforms for developing accelerated implementation of ANN models. Since Field Programmable Gate Array (FPGA) architectures are flexible and can provide high performance per watt of power consumption, they have drawn a number of applications from scientists. In this paper, we propose an FPGA-based, granularity-variable neuromorphic processor (FBGVNP). The traits of FBGVNP can be summarized as granularity variability, scalability, integrated computing, and addressing ability: first, the number of neurons is variable rather than constant in one core; second, the multi-core network scale can be extended in various forms; third, the neuron addressing and computing processes are executed simultaneously. These make the processor more flexible and better suited for different applications. Moreover, a neural network-based controller is mapped to FBGVNP and applied in a multi-input, multi-output (MIMO) real-time temperature-sensing and control system. Experiments validate the effectiveness of the neuromorphic processor. The FBGVNP provides a new scheme for building ANNs, which is flexible, highly energy-efficient, and can be applied in many areas. PMID:28832522

  1. Inner and outer segment junction (IS/OS line) integrity in ocular Behçet's disease.

    PubMed

    Yüksel, Harun; Türkcü, Fatih M; Sahin, Muhammed; Cinar, Yasin; Cingü, Abdullah K; Ozkurt, Zeynep; Sahin, Alparslan; Ari, Seyhmus; Caça, Ihsan

    2014-08-01

    In this study, we examined the spectral domain optical coherence tomography (OCT) findings of ocular Behçet's disease (OB) in patients with inactive uveitis. Specifically, we analyzed the inner and outer segment junction (IS/OS line) integrity and the effect of disturbed IS/OS line integrity on visual acuity. Patient files and OCT images of OB patients who had been followed-up between January and June of the year 2013 at the Dicle University Eye Clinic were evaluated retrospectively. Sixty-six eyes of 39 patients were included in the study. OCT examination of the patients with inactive OB revealed that approximately 25% of the patients had disturbed IS/OS and external limiting membrane (ELM) line integrity, lower visual acuity (VA), and lower macular thickness than others. Linear regression analysis revealed that macular thickness was not an independent variable for VA. In contrast, the IS/OS line integrity was an independent variable for VA in inactive OB patients. In this study, we showed that the IS/OS line integrity was an independent variable for VA in inactive OB patients. Further prospective studies are needed to evaluate the integrity of the IS/OS line in OB patients.

  2. A GeoNode-Based Multiscale Platform For Management, Visualization And Integration Of DInSAR Data With Different Geospatial Information Sources

    NASA Astrophysics Data System (ADS)

    Buonanno, Sabatino; Fusco, Adele; Zeni, Giovanni; Manunta, Michele; Lanari, Riccardo

    2017-04-01

    This work describes the implementation of an efficient system for managing, viewing, analyzing and updating remotely sensed data, with special reference to Differential Interferometric Synthetic Aperture Radar (DInSAR) data. DInSAR products measure Earth surface deformation in both space and time, producing deformation maps and time series [1,2]. The use of these data in research or operational contexts requires tools that can handle temporal and spatial variability with high efficiency. To this end, we present an implementation based on a Spatial Data Infrastructure (SDI) for data integration, management and interchange using standard protocols [3]. SDI tools traditionally provide access to static datasets with only spatial variability. In this paper we use the open source project GeoNode as a framework to extend SDI functionality so that DInSAR deformation maps and deformation time series can be ingested very efficiently. GeoNode makes it possible to realize a comprehensive and distributed infrastructure, following the standards of the Open Geospatial Consortium (OGC), for remote sensing data management, analysis and integration [4,5]. We explain the methodology used to manage data complexity and data integration with GeoNode. The solution presented in this work for the ingestion of DInSAR products is a promising starting point for future development of an OGC-compliant, semi-automatic remote sensing data processing chain. [1] Berardino, P., Fornaro, G., Lanari, R., & Sansosti, E. (2002). A new Algorithm for Surface Deformation Monitoring based on Small Baseline Differential SAR Interferograms. IEEE Transactions on Geoscience and Remote Sensing, 40, 11, pp. 2375-2383. [2] Lanari, R., Casu, F., Manzo, M., Zeni, G., Berardino, P., Manunta, M., & Pepe, A. (2007). An overview of the Small Baseline Subset Algorithm: a DInSAR Technique for Surface Deformation Analysis. Pure Appl. Geophys., 164, doi: 10.1007/s00024-007-0192-9. [3] Nebert, D.D. (ed.) (2000). Developing Spatial Data Infrastructures: The SDI Cookbook. [4] GeoNode (www.geonode.org). [5] Kolodziej, K. (ed.) (2004). OGC OpenGIS Web Map Server Cookbook. Open Geospatial Consortium, 1.0.2 edition.

  3. Terrestrial Waters and Sea Level Variations on Interannual Time Scale

    NASA Technical Reports Server (NTRS)

    Llovel, W.; Becker, M.; Cazenave, A.; Jevrejeva, S.; Alkama, R.; Decharme, B.; Douville, H.; Ablain, M.; Beckley, B.

    2011-01-01

    On decadal to multi-decadal time scales, thermal expansion of sea waters and land ice loss are the main contributors to sea level variations. However, modification of the terrestrial water cycle due to climate variability and direct anthropogenic forcing may also affect sea level. For the past decades, variations in land water storage and corresponding effects on sea level cannot be directly estimated from observations because these are almost non-existent at global continental scale. However, global hydrological models developed for atmospheric and climatic studies can be used for estimating total water storage. For the recent years (since mid-2002), terrestrial water storage change can be directly estimated from observations of the GRACE space gravimetry mission. In this study, we analyse the interannual variability of total land water storage, and investigate its contribution to mean sea level variability at interannual time scale. We consider three different periods that each depend on data availability: (1) GRACE era (2003-2009), (2) 1993-2003 and (3) 1955-1995. For the GRACE era (period 1), change in land water storage is estimated using different GRACE products over the 33 largest river basins worldwide. For periods 2 and 3, we use outputs from the ISBA-TRIP (Interactions between Soil, Biosphere, and Atmosphere-Total Runoff Integrating Pathways) global hydrological model. For each time span, we compare change in land water storage (expressed in sea level equivalent) to observed mean sea level, either from satellite altimetry (periods 1 and 2) or tide gauge records (period 3). For each data set and each time span, a trend has been removed as we focus on the interannual variability. We show that whatever the period considered, interannual variability of the mean sea level is essentially explained by interannual fluctuations in land water storage, with the largest contributions arising from tropical river basins.

  4. Influence of Boar and Semen Parameters on Motility and Acrosome Integrity in Liquid Boar Semen Stored for Five Days

    PubMed Central

    2002-01-01

    Ninety ejaculates from a total of 76 AI boars were extended in Beltsville Thawing Solution (BTS). Boar identity, breed, weight of the ejaculate and sperm concentration were registered. Motility and acrosome integrity were assessed after storage at 16–18°C for 6, 30, 54, 78, and 102 h. Storage time had a significant influence on both motility (p < 0.01) and acrosome integrity (p < 0.001). The Least Square Means for percentage of motility showed a small decline from 79.8% after 6 h of storage to 78.4% at 102 h. Motility at 78 and 102 h was significantly different from motility at 6 h (p < 0.05). The percentage of sperm cells with normal acrosomes declined throughout the experiment. The Least Square Means for 6, 30, 54, 78, and 102 h of storage were 93.9%, 90.6%, 88.0%, 84.8%, and 78.2%, respectively. The decrease in acrosome integrity from one storage time to the next was highly significant throughout the trial (p < 0.001). There was a significant influence of boar (p < 0.001) and sperm concentration (p < 0.01) on motility, while acrosome integrity was affected only by boar (p < 0.001). Breed of the boars and weight of the ejaculate did not influence the dependent variables. PMID:12071116

  5. Integrated versus nOn-integrated Peripheral inTravenous catheter. Which Is the most effective systeM for peripheral intravenoUs catheter Management? (The OPTIMUM study): a randomised controlled trial protocol

    PubMed Central

    Castillo, Maria Isabel; Larsen, Emily; Cooke, Marie; Marsh, Nicole M; Wallis, Marianne C; Finucane, Julie; Brown, Peter; Mihala, Gabor; Byrnes, Joshua; Walker, Rachel; Cable, Prudence; Zhang, Li; Sear, Candi; Jackson, Gavin; Rowsome, Anna; Ryan, Alison; Humphries, Julie C; Sivyer, Susan; Flanigan, Kathy; Rickard, Claire M

    2018-01-01

    Introduction Peripheral intravenous catheters (PIVCs) are frequently used in hospitals. However, PIVC complications are common, with failures leading to treatment delays, additional procedures, patient pain and discomfort, increased clinician workload and substantially increased healthcare costs. Recent evidence suggests integrated PIVC systems may be more effective than traditional non-integrated PIVC systems in reducing phlebitis, infiltration and costs and increasing functional dwell time. The study aim is to determine the efficacy, cost–utility and acceptability to patients and professionals of an integrated PIVC system compared with a non-integrated PIVC system. Methods and analysis Two-arm, multicentre, randomised controlled superiority trial of integrated versus non-integrated PIVC systems to compare effectiveness on clinical and economic outcomes. Recruitment of 1560 patients over 2 years, with randomisation by a centralised service ensuring allocation concealment. Primary outcomes: catheter failure (composite endpoint) for reasons of: occlusion, infiltration/extravasation, phlebitis/thrombophlebitis, dislodgement, localised or catheter-associated bloodstream infections. Secondary outcomes: first time insertion success, types of PIVC failure, device colonisation, insertion pain, functional dwell time, adverse events, mortality, cost–utility and consumer acceptability. One PIVC per patient will be included, with intention-to-treat analysis. Baseline group comparisons will be made for potentially clinically important confounders. The proportional hazards assumption will be checked, and Cox regression will test the effect of group, patient, device and clinical variables on failure. An as-treated analysis will assess the effect of protocol violations. Kaplan-Meier survival curves with log-rank tests will compare failure by group over time. Secondary endpoints will be compared between groups using parametric/non-parametric techniques. 
Ethics and dissemination Ethical approval from the Royal Brisbane and Women’s Hospital Human Research Ethics Committee (HREC/16/QRBW/527), Griffith University Human Research Ethics Committee (Ref No. 2017/002) and the South Metropolitan Health Services Human Research Ethics Committee (Ref No. 2016–239). Results will be published in peer-reviewed journals. Trial registration number ACTRN12617000089336. PMID:29764876

  6. Integrated versus nOn-integrated Peripheral inTravenous catheter. Which Is the most effective systeM for peripheral intravenoUs catheter Management? (The OPTIMUM study): a randomised controlled trial protocol.

    PubMed

    Castillo, Maria Isabel; Larsen, Emily; Cooke, Marie; Marsh, Nicole M; Wallis, Marianne C; Finucane, Julie; Brown, Peter; Mihala, Gabor; Carr, Peter J; Byrnes, Joshua; Walker, Rachel; Cable, Prudence; Zhang, Li; Sear, Candi; Jackson, Gavin; Rowsome, Anna; Ryan, Alison; Humphries, Julie C; Sivyer, Susan; Flanigan, Kathy; Rickard, Claire M

    2018-05-14

    Peripheral intravenous catheters (PIVCs) are frequently used in hospitals. However, PIVC complications are common, with failures leading to treatment delays, additional procedures, patient pain and discomfort, increased clinician workload and substantially increased healthcare costs. Recent evidence suggests integrated PIVC systems may be more effective than traditional non-integrated PIVC systems in reducing phlebitis, infiltration and costs and increasing functional dwell time. The study aim is to determine the efficacy, cost-utility and acceptability to patients and professionals of an integrated PIVC system compared with a non-integrated PIVC system. Two-arm, multicentre, randomised controlled superiority trial of integrated versus non-integrated PIVC systems to compare effectiveness on clinical and economic outcomes. Recruitment of 1560 patients over 2 years, with randomisation by a centralised service ensuring allocation concealment. Primary outcomes: catheter failure (composite endpoint) for reasons of: occlusion, infiltration/extravasation, phlebitis/thrombophlebitis, dislodgement, localised or catheter-associated bloodstream infections. Secondary outcomes: first time insertion success, types of PIVC failure, device colonisation, insertion pain, functional dwell time, adverse events, mortality, cost-utility and consumer acceptability. One PIVC per patient will be included, with intention-to-treat analysis. Baseline group comparisons will be made for potentially clinically important confounders. The proportional hazards assumption will be checked, and Cox regression will test the effect of group, patient, device and clinical variables on failure. An as-treated analysis will assess the effect of protocol violations. Kaplan-Meier survival curves with log-rank tests will compare failure by group over time. Secondary endpoints will be compared between groups using parametric/non-parametric techniques. 
Ethical approval from the Royal Brisbane and Women's Hospital Human Research Ethics Committee (HREC/16/QRBW/527), Griffith University Human Research Ethics Committee (Ref No. 2017/002) and the South Metropolitan Health Services Human Research Ethics Committee (Ref No. 2016-239). Results will be published in peer-reviewed journals. ACTRN12617000089336.
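
    The survival analysis the protocol names (Kaplan-Meier curves with log-rank tests, Cox regression) can be sketched in miniature. Below is a minimal pure-Python Kaplan-Meier estimator on hypothetical catheter dwell-time data; the function name and the numbers are illustrative assumptions, and a real trial analysis would use a validated statistics package.

```python
# Minimal Kaplan-Meier survival estimator, as used to compare PIVC failure
# by group over time. Pure-Python sketch on hypothetical data.

def kaplan_meier(times, events):
    """times: observed dwell times; events: 1 = failure, 0 = censored.
    Returns a list of (time, survival probability) steps."""
    at_risk = len(times)
    order = sorted(zip(times, events))   # subjects ordered by observed time
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = order[i][0]
        n = at_risk                      # number at risk just before time t
        deaths = 0
        # count failures and censorings tied at time t
        while i < len(order) and order[i][0] == t:
            deaths += order[i][1]
            at_risk -= 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n
            curve.append((t, surv))
    return curve

# Hypothetical dwell times (hours) and failure indicators
times = [24, 36, 36, 48, 60, 72, 72, 96]
events = [1, 1, 0, 1, 0, 1, 0, 0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

    Censored subjects (events = 0) leave the risk set without stepping the curve down, which is exactly why a plain proportion-failed-by-time would misstate the failure risk here.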

  7. Integrating high levels of variable renewable energy into electric power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin

    As more variable renewable energy (VRE) such as wind and solar is integrated into electric power systems, technical challenges arise from the need to maintain the balance between load and generation at all timescales. This paper examines the challenges of integrating ultra-high levels of VRE into electric power systems, reviews a range of solutions to these challenges, and describes several examples of ultra-high VRE systems in operation today.

  8. Integrating high levels of variable renewable energy into electric power systems

    DOE PAGES

    Kroposki, Benjamin

    2017-11-17

    As more variable renewable energy (VRE) such as wind and solar is integrated into electric power systems, technical challenges arise from the need to maintain the balance between load and generation at all timescales. This paper examines the challenges of integrating ultra-high levels of VRE into electric power systems, reviews a range of solutions to these challenges, and describes several examples of ultra-high VRE systems in operation today.

  9. Hypnosis, suggestion, and suggestibility: an integrative model.

    PubMed

    Lynn, Steven Jay; Laurence, Jean-Roch; Kirsch, Irving

    2015-01-01

    This article presents an integrative model of hypnosis that brings together the social, cultural, cognitive, and neurophysiological variables at play both in and out of hypnosis and considers their dynamic interaction as determinants of the multifaceted experience of hypnosis. The roles of these variables are examined in the induction and suggestion stages of hypnosis, including how they relate to the experience of involuntariness, one of the hallmarks of hypnosis. It is suggested that studies of the modification of hypnotic suggestibility; cognitive flexibility; response sets and expectancies; the default-mode network; and the search for the neurophysiological correlates of hypnosis, more broadly, in conjunction with research on social psychological variables, hold much promise for furthering the understanding of hypnosis.

  10. Variable input observer for structural health monitoring of high-rate systems

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob

    2017-02-01

    The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintaining structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, allowing more accurate estimation of the current state and faster convergence. The optimal time delay is estimated from mutual information, and the embedding dimension from false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman Filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman Filter in terms of convergence, but with greater accuracy.
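
    The delay-selection step the abstract mentions (optimal time delay from mutual information) can be illustrated with a small sketch. This is not the authors' code: the histogram-based average mutual information below, the 8-bin choice, and the toy sine signal are all illustrative assumptions, and the argmin over the first half-period stands in for the usual "first local minimum of AMI" rule.

```python
# Illustrative embedding-delay selection via average mutual information (AMI).
import math

def mutual_information(x, lag, bins=8):
    """Histogram estimate of I(x_t ; x_{t+lag}) in nats."""
    a, b = x[:-lag], x[lag:]
    lo, hi = min(x), max(x)
    idx = lambda v: min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(a)
    joint, pa, pb = {}, [0] * bins, [0] * bins
    for u, v in zip(a, b):
        i, j = idx(u), idx(v)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        pa[i] += 1
        pb[j] += 1
    return sum(c / n * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
               for (i, j), c in joint.items())

# Toy signal: sine with period 50 samples. The AMI minimum should fall near
# a quarter period (~12), the classic choice of embedding delay.
x = [math.sin(2 * math.pi * t / 50) for t in range(500)]
ami = [mutual_information(x, lag) for lag in range(1, 26)]
tau = 1 + min(range(len(ami)), key=lambda l: ami[l])  # argmin as a stand-in
print(tau)                                            # for the first-minimum rule
```

    At a quarter period the delayed coordinate is maximally non-redundant with the original, which is why the AMI dips there and why that lag makes a good embedding delay.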

  11. Inflow forecasting model construction with stochastic time series for coordinated dam operation

    NASA Astrophysics Data System (ADS)

    Kim, T.; Jung, Y.; Kim, H.; Heo, J. H.

    2014-12-01

    Dam inflow forecasting is one of the most important tasks in dam operation for effective water resources management and control. In general, stochastic time series models are applicable only when the data are stationary, since most stochastic processes are built on the assumption of stationarity. However, recent hydrological records no longer satisfy stationarity because of climate change. Therefore, a stochastic time series model that can account for seasonality and trend in the data series, the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous variables) model, was constructed in this study. The SARIMAX model improves on standard stochastic time series models by accounting for nonstationary components and for external variables such as precipitation. The models were constructed for four coordinated dams on the Han River in South Korea using monthly time series data. The models for each dam showed similar performance, suggesting that the approach is suitable for coordinated dam operation. Acknowledgement: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-NH-12-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
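
    The core SARIMAX idea, seasonal differencing plus regression on lagged values and an exogenous series, can be sketched as follows. Everything here is a toy: the synthetic inflow/precipitation data and the two-parameter least-squares fit are illustrative stand-ins, and a real study would use a full SARIMAX implementation (e.g. from statsmodels).

```python
# Sketch of the SARIMAX idea: seasonal differencing removes the annual cycle,
# then the differenced inflow is regressed on its own lag and on an exogenous
# precipitation series. Toy data, not the study's dam records.
import math
import random

def seasonal_diff(y, s=12):
    """y'_t = y_t - y_{t-s}: removes a period-s seasonal component."""
    return [y[t] - y[t - s] for t in range(s, len(y))]

def fit_ar1_exog(y, x):
    """Least-squares fit of y_t = a*y_{t-1} + b*x_t (no intercept)."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        u, v = y[t - 1], x[t]
        s11 += u * u; s12 += u * v; s22 += v * v
        r1 += u * y[t]; r2 += v * y[t]
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (r2 * s11 - r1 * s12) / det

# Synthetic monthly inflow: seasonal cycle plus a response to precipitation.
random.seed(1)
precip = [random.random() for _ in range(120)]
inflow = [10 + 5 * math.sin(2 * math.pi * t / 12) + 2 * precip[t]
          for t in range(120)]

d = seasonal_diff(inflow)    # the sinusoidal seasonality cancels exactly here
dp = seasonal_diff(precip)   # difference the exogenous series the same way
a, b = fit_ar1_exog(d, dp)
print(round(b, 2))           # recovers the precipitation coefficient (2.0)
```

    Because the seasonal cycle cancels under period-12 differencing, the regression is left with only the exogenous signal to explain, which is the nonstationarity-handling the abstract attributes to SARIMAX.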

  12. Effect of residence times on River Mondego estuary eutrophication vulnerability.

    PubMed

    Duarte, A S; Pinho, J L; Pardal, M A; Neto, J M; Vieira, J P; Santos, F S

    2001-01-01

    The south arm of the Mondego estuary, located on the central western Atlantic coast of Portugal, is almost silted up in the upstream area, so water circulation is mostly driven by tides and the discharges of the tributary river Pranto. Eutrophication has been taking place in this ecosystem during the last twelve years, with macroalgae reaching luxuriant development and covering a significant area of the intertidal muddy flat. A sampling program was carried out from June 1993 to June 1994. Available data on salinity profiles and on nutrient loading into the south arm were used to gain a better understanding of the ongoing changes. River Pranto flow discharges, controlled by a sluice, were also monitored. Integral formulations are typically based on assumptions of steady state and well-mixed systems and thus cannot account for the space and time variability of estuarine residence times arising from river discharge flow, tidal coefficients, discharge location(s) and time of release during the tidal cycle. This work presents hydrodynamic modelling (2D-H) of this system to estimate the variability of residence times and to assess their effect on estuarine eutrophication vulnerability, contributing to the selection of better environmental management strategies.

  13. Effect of metrology time delay on overlay APC

    NASA Astrophysics Data System (ADS)

    Carlson, Alan; DiBiase, Debra

    2002-07-01

    The run-to-run control strategy of lithography APC is primarily composed of a feedback loop as shown in the diagram below. It is known that the insertion of a time delay in a feedback loop can degrade control performance and could even cause a stable system to become unstable, if the time delay becomes sufficiently large. Many proponents of integrated metrology methods have cited the damage caused by metrology time delays as the primary justification for moving from stand-alone to integrated metrology. While there is little dispute over the qualitative form of this argument, very little has been published about the quantitative effects under real fab conditions - precisely how much control is lost due to these time delays. Another issue regarding time delays is that their length is not typically fixed - they vary from lot to lot, and in some cases this variance can be large - from one hour on the short side to over 32 hours on the long side. Concern has been expressed that the variability in metrology time delays can cause undesirable dynamics in feedback loops that make it difficult to optimize feedback filters and gains, and at worst could drive a system unstable. Using data from numerous fabs, spanning many sizes and styles of operation, we have conducted a quantitative study of the time delay effect on overlay run-to-run control. Our analysis resulted in the following conclusions: (1) There is a significant and material relationship between metrology time delay and overlay control under a variety of real world production conditions. (2) The run-to-run controller can be configured to minimize sensitivity to time delay variations. (3) The value of moving to integrated metrology can be quantified.
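
    The feedback-loop argument can be made concrete with a toy run-to-run simulation: an integrating controller corrects overlay using a measurement that is `delay` lots old. All parameters (drift, noise, gain) are illustrative choices, not fab data.

```python
# Toy run-to-run loop showing how metrology delay degrades overlay control:
# the process drifts, and the controller acts on a measurement `delay` lots old.
import random

def simulate(delay, lots=500, drift=0.02, noise=0.05, gain=0.3, seed=0):
    """Mean |overlay error| under an integrating controller with stale feedback."""
    rng = random.Random(seed)
    offset = 0.0          # correction currently applied by the controller
    measured = []         # measurement history, for delayed feedback
    errors = []
    for k in range(lots):
        bias = drift * k                              # slow process drift
        y = bias + offset + rng.gauss(0.0, noise)     # overlay as measured
        errors.append(abs(y))
        measured.append(y)
        if k >= delay:
            offset -= gain * measured[k - delay]      # act on stale data
    return sum(errors) / len(errors)

print(simulate(delay=1) < simulate(delay=10))   # prints True: longer delay,
                                                # worse control
```

    With these toy numbers the 10-lot delay actually pushes the loop past its stability limit (roughly gain < 2·sin(π/(2(2d+1))) for delay d in this integrator loop), echoing the abstract's warning that a sufficiently large delay can destabilise an otherwise stable system.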

  14. An Artificial Neural Network Embedded Position and Orientation Determination Algorithm for Low Cost MEMS INS/GPS Integrated Sensors

    PubMed Central

    Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen

    2009-01-01

    Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using Global Positioning System (GPS) and Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered as the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and KF has been proposed to overcome the limitations of KF and to improve the performance of the INS/GPS integrated system in previous studies. However, the accuracy requirements of general mobile mapping applications can’t be achieved easily, even by the use of the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds ANN with conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034
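
    The distinction the abstract builds on, between the real-time Kalman Filter and the post-mission RTS smoother, can be shown in one dimension. The random-walk model and numbers below are illustrative, not the paper's MEMS INS/GPS mechanisation.

```python
# 1-D Kalman filter plus Rauch-Tung-Striebel (RTS) smoother for the model
# x_k = x_{k-1} + w (process noise q), z_k = x_k + v (measurement noise r).
# Illustrative sketch: the KF runs forward in real time; the RTS pass then
# refines past estimates using later data, which is only possible post-mission.

def kf_rts(zs, q=0.01, r=0.25, x0=0.0, p0=1.0):
    xs, ps, xp, pp = [], [], [], []   # filtered/predicted means and variances
    x, p = x0, p0
    for z in zs:
        x_pred, p_pred = x, p + q     # predict (identity dynamics)
        xp.append(x_pred)
        pp.append(p_pred)
        k = p_pred / (p_pred + r)     # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        xs.append(x)
        ps.append(p)
    xs_s = xs[:]                      # backward RTS smoothing pass
    for t in range(len(zs) - 2, -1, -1):
        c = ps[t] / pp[t + 1]         # smoother gain (state transition = 1)
        xs_s[t] = xs[t] + c * (xs_s[t + 1] - xp[t + 1])
    return xs, xs_s

filt, smooth = kf_rts([1.1, 0.9, 1.2, 1.0, 0.8, 1.1])
print(smooth[-1] == filt[-1])   # prints True: smoothing refines only the
                                # earlier estimates, never the final one
```

    The backward pass is what makes the smoother a post-mission tool, matching the abstract's framing of the ANN-RTS scheme as a post-mission alternative to the real-time ANN-KF scheme.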

  15. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

    There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to inform future flood risk planning. Integrated surface water-groundwater modeling procedures can capture all the hydrologic processes at work during a flood event and so provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed down into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE=0.87) is similar to that of the 2D integrated model (NSE=0.88), but the computational time is halved. The results suggest that significant computational efficiency can be obtained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
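
    The abstract reports model skill as NSE (the Nash-Sutcliffe efficiency). For reference, the metric itself is simple; the observed/simulated flows below are made up for illustration.

```python
# Nash-Sutcliffe efficiency: 1 minus (model error variance / observed variance).
# NSE = 1 means a perfect fit; NSE = 0 means no better than the observed mean.

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [10.0, 12.0, 15.0, 20.0, 18.0, 14.0]   # hypothetical observed flows
sim = [11.0, 12.5, 14.0, 19.0, 18.5, 13.0]   # hypothetical simulated flows
print(round(nse(obs, sim), 3))               # prints 0.935
```

    On this scale the reported hybrid (0.87) and fully integrated (0.88) scores are nearly indistinguishable, which is the basis for the abstract's efficiency claim.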

  16. Rationale and Design of the Registry for Stones of the Kidney and Ureter (ReSKU): A Prospective Observational Registry to Study the Natural History of Urolithiasis Patients.

    PubMed

    Chang, Helena C; Tzou, David T; Usawachintachit, Manint; Duty, Brian D; Hsi, Ryan S; Harper, Jonathan D; Sorensen, Mathew D; Stoller, Marshall L; Sur, Roger L; Chi, Thomas

    2016-12-01

    Registry-based clinical research in nephrolithiasis is critical to advancing quality in urinary stone disease management and ultimately reducing stone recurrence. A need exists to develop Health Insurance Portability and Accountability Act (HIPAA)-compliant registries that comprise integrated electronic health record (EHR) data using prospectively defined variables. An EHR-based standardized patient database-the Registry for Stones of the Kidney and Ureter (ReSKU™)-was developed, and herein we describe our implementation outcomes. Interviews with academic and community endourologists in the United States, Canada, China, and Japan identified demographic, intraoperative, and perioperative variables to populate our registry. Variables were incorporated into a HIPAA-compliant Research Electronic Data Capture database linked to text prompts and registration data within the Epic EHR platform. Specific data collection instruments supporting New patient, Surgery, Postoperative, and Follow-up clinical encounters were created within Epic to facilitate automated data extraction into ReSKU. The number of variables within each instrument includes the following: New patient-60, Surgery-80, Postoperative-64, and Follow-up-64. With manual data entry, the mean times to complete each of the clinic-based instruments were (minutes) as follows: New patient-12.06 ± 2.30, Postoperative-7.18 ± 1.02, and Follow-up-8.10 ± 0.58. These times were significantly reduced with the use of ReSKU structured clinic note templates to the following: New patient-4.09 ± 1.73, Postoperative-1.41 ± 0.41, and Follow-up-0.79 ± 0.38. With automated data extraction from Epic, manual entry is obviated. ReSKU is a longitudinal prospective nephrolithiasis registry that integrates EHR data, lowering the barriers to performing high quality clinical research and quality outcome assessments in urinary stone disease.

  17. Rationale and Design of the Registry for Stones of the Kidney and Ureter (ReSKU): A Prospective Observational Registry to Study the Natural History of Urolithiasis Patients

    PubMed Central

    Chang, Helena C.; Tzou, David T.; Usawachintachit, Manint; Duty, Brian D.; Hsi, Ryan S.; Harper, Jonathan D.; Sorensen, Mathew D.; Stoller, Marshall L.; Sur, Roger L.

    2016-01-01

    Abstract Objectives: Registry-based clinical research in nephrolithiasis is critical to advancing quality in urinary stone disease management and ultimately reducing stone recurrence. A need exists to develop Health Insurance Portability and Accountability Act (HIPAA)-compliant registries that comprise integrated electronic health record (EHR) data using prospectively defined variables. An EHR-based standardized patient database—the Registry for Stones of the Kidney and Ureter (ReSKU™)—was developed, and herein we describe our implementation outcomes. Materials and Methods: Interviews with academic and community endourologists in the United States, Canada, China, and Japan identified demographic, intraoperative, and perioperative variables to populate our registry. Variables were incorporated into a HIPAA-compliant Research Electronic Data Capture database linked to text prompts and registration data within the Epic EHR platform. Specific data collection instruments supporting New patient, Surgery, Postoperative, and Follow-up clinical encounters were created within Epic to facilitate automated data extraction into ReSKU. Results: The number of variables within each instrument includes the following: New patient—60, Surgery—80, Postoperative—64, and Follow-up—64. With manual data entry, the mean times to complete each of the clinic-based instruments were (minutes) as follows: New patient—12.06 ± 2.30, Postoperative—7.18 ± 1.02, and Follow-up—8.10 ± 0.58. These times were significantly reduced with the use of ReSKU structured clinic note templates to the following: New patient—4.09 ± 1.73, Postoperative—1.41 ± 0.41, and Follow-up—0.79 ± 0.38. With automated data extraction from Epic, manual entry is obviated. Conclusions: ReSKU is a longitudinal prospective nephrolithiasis registry that integrates EHR data, lowering the barriers to performing high quality clinical research and quality outcome assessments in urinary stone disease. PMID:27758162

  18. A framework to assess the impacts of climate change on stream health indicators in Michigan watersheds

    NASA Astrophysics Data System (ADS)

    Woznicki, S. A.; Nejadhashemi, A. P.; Tang, Y.; Wang, L.

    2016-12-01

    Climate change is projected to alter watershed hydrology and potentially amplify nonpoint source pollution transport. These changes have implications for fish and macroinvertebrates, which are often used as measures of aquatic ecosystem health. By quantifying the risk of adverse impacts to aquatic ecosystem health at the reach scale, watershed climate change adaptation strategies can be developed and prioritized. The objective of this research was to quantify the impacts of climate change on stream health in seven Michigan watersheds. A process-based watershed model, the Soil and Water Assessment Tool (SWAT), was linked to adaptive neuro-fuzzy inference system (ANFIS) stream health models. SWAT models were used to simulate reach-scale flow regime (magnitude, frequency, timing, duration, and rate of change) and water quality variables. The ANFIS models were developed based on relationships between the in-stream variables and sampling points of four stream health indicators: the fish index of biotic integrity (IBI), macroinvertebrate family index of biotic integrity (FIBI), Hilsenhoff biotic index (HBI), and number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa. The combined SWAT-ANFIS models extended stream health predictions to all watershed reaches. A climate model ensemble from the Coupled Model Intercomparison Project Phase 5 (CMIP5) was used to develop projections of changes to flow regime (using SWAT) and stream health indicators (using ANFIS) from a baseline of 1980-2000 to 2020-2040. Flow regime variables representing variability, duration of extreme events, and timing of low and high flow events were sensitive to changes in climate. The stream health indicators were relatively insensitive to changing climate at the watershed scale. However, there were many instances of individual reaches that were projected to experience declines in stream health. 
Using the probability of stream health decline coupled with the magnitude of the decline, maps of vulnerable stream ecosystems were developed, which can be used in the watershed management decision-making process.

  19. Separation of atmospheric, oceanic and hydrological polar motion excitation mechanisms based on a combination of geometric and gravimetric space observations

    NASA Astrophysics Data System (ADS)

    Göttl, F.; Schmidt, M.; Seitz, F.; Bloßfeld, M.

    2015-04-01

    The goal of our study is to determine accurate time series of geophysical Earth rotation excitations to learn more about global dynamic processes in the Earth system. For this purpose, we developed an adjustment model which allows us to combine precise observations from space geodetic observation systems, such as Satellite Laser Ranging (SLR), Global Navigation Satellite Systems, Very Long Baseline Interferometry, Doppler Orbit determination and Radiopositioning Integrated on Satellite, satellite altimetry and satellite gravimetry, in order to separate geophysical excitation mechanisms of Earth rotation. Three polar motion time series are applied to derive the polar motion excitation functions (integral effect). Furthermore, we use five time-variable gravity field solutions from the Gravity Recovery and Climate Experiment to determine not only the integral mass effect but also the oceanic and hydrological mass effects, by applying suitable filter techniques and a land-ocean mask. For comparison, the integral mass effect is also derived from degree-2 potential coefficients estimated from SLR observations. The oceanic mass effect is also determined from sea level anomalies observed by satellite altimetry, by removing the steric sea level anomalies derived from temperature and salinity fields of the oceans. By combining all geodetically estimated excitations, the weaknesses of the individual processing strategies can be reduced and the technique-specific strengths can be accounted for. The formal errors of the adjusted geodetic solutions are smaller than the RMS differences of the geophysical model solutions. The improved excitation time series can be used to improve the geophysical modeling.
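
    The combination step, downweighting each technique according to its uncertainty, can be illustrated with a simple inverse-variance weighted adjustment. The series and variances below are invented; the study's actual adjustment model is considerably richer, but this shows why the combined formal error beats every single input.

```python
# Illustrative inverse-variance weighted combination of several excitation
# time series into one adjusted series. Toy numbers, not the study's data.

def combine(series, variances):
    """Inverse-variance weighted mean of equally long time series.
    Returns the combined series and its formal variance."""
    w = [1.0 / v for v in variances]
    wsum = sum(w)
    combined = [sum(wi * s[t] for wi, s in zip(w, series)) / wsum
                for t in range(len(series[0]))]
    return combined, 1.0 / wsum    # formal variance of the weighted mean

slr   = [1.00, 1.20, 0.90]   # invented excitation values per epoch
grace = [1.10, 1.00, 1.00]
altim = [0.80, 1.30, 0.95]
comb, var = combine([slr, grace, altim], [0.04, 0.02, 0.09])
print(var < min(0.04, 0.02, 0.09))   # prints True: the combined formal error
                                     # is smaller than any single input's
```

    Because 1/var_combined is the sum of the individual 1/var terms, the combined formal variance is always below the best input's, mirroring the abstract's observation that the adjusted solutions have smaller formal errors than the individual ones.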

  20. ICT Integration of Turkish Teachers: An Analysis within TPACK-Practical Model

    ERIC Educational Resources Information Center

    Ay, Yusuf; Karadag, Engin; Acat, M. Bahaddin

    2016-01-01

    The aim of the study is to analyze Information and Communication Technologies (ICT) integration of Turkish teachers using various variables within the context of Technological Pedagogical Content Knowledge (TPACK). These variables were indicated as the gender of teachers, the implementation status of FATIH project at their schools, school types…
