Sample records for moving average processes

  1. Quantified moving average strategy of crude oil futures market based on fuzzy logic rules and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing

    2017-09-01

The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot indicate the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy obtains a more stable rate of return than the plain moving average strategies. Secondly, the holding-amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are the most frequently selected. These results are helpful in investment decisions.
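    As context for this record, the sketch below shows the plain (non-fuzzy) moving-average crossover signal that such strategies build on. It is an illustrative baseline only, not the paper's fuzzy-rule/genetic-algorithm system; the window lengths (5 and 20) and the synthetic price path are assumptions.

    ```python
    import numpy as np

    def sma(prices, window):
        """Trailing simple moving average; entries before the first full window are NaN."""
        out = np.full(len(prices), np.nan)
        kernel = np.ones(window) / window
        out[window - 1:] = np.convolve(prices, kernel, mode="valid")
        return out

    def crossover_signals(prices, short=5, long=20):
        """+1 (buy) where the short SMA crosses above the long SMA, -1 (sell) where it
        crosses below, 0 otherwise; the first `long` samples carry no signal."""
        fast, slow = sma(prices, short), sma(prices, long)
        signals = np.zeros(len(prices), dtype=int)
        for t in range(long, len(prices)):
            if fast[t] > slow[t] and fast[t - 1] <= slow[t - 1]:
                signals[t] = 1
            elif fast[t] < slow[t] and fast[t - 1] >= slow[t - 1]:
                signals[t] = -1
        return signals

    prices = 100.0 + np.cumsum(np.random.default_rng(0).standard_normal(250))  # synthetic price path
    print(crossover_signals(prices).nonzero()[0])   # indices of buy/sell signals
    ```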

  2. Forecasting Instability Indicators in the Horn of Africa

    DTIC Science & Technology

    2008-03-01

further than 2 (Makridakis et al., 1983, 359). ... Autoregressive Integrated Moving Average (ARIMA) Model. Similar to the ARMA model except for...stationary process. ARIMA models are described as ARIMA(p,d,q), where p is the order of the autoregressive process, d is the degree of the...differencing process, and q is the order of the moving average process. The ARMA(1,1) model shown above is equivalent to an ARIMA(1,0,1) model. An ARIMA
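    As a concrete illustration of the ARIMA(p,d,q) notation used in this record, the sketch below fits an ARIMA(1,1,1) model to a synthetic once-integrated ARMA(1,1) series. It assumes the statsmodels package is available; the coefficients and series length are arbitrary.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    # Synthetic ARIMA(1,1,1): the first difference of y follows an ARMA(1,1) process.
    e = rng.standard_normal(400)
    x = np.zeros(400)
    for t in range(1, 400):
        x[t] = 0.6 * x[t - 1] + e[t] + 0.3 * e[t - 1]
    y = np.cumsum(x)                       # integrate once -> d = 1

    fit = ARIMA(y, order=(1, 1, 1)).fit()  # p=1 (AR), d=1 (differencing), q=1 (MA)
    print(fit.summary())
    print(fit.forecast(steps=5))           # five-step-ahead forecast
    ```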

  3. The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Pituch, Keenan A.

    2009-01-01

    The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…

  4. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    PubMed

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoresis measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
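    The abstract does not give the paper's exact window-sizing rule, so the sketch below only illustrates the general idea of a migration-time-adaptive moving average: later-eluting (lower-mobility) analytes give broader, lower-frequency peaks, so the averaging window is allowed to grow with the point index. The linear window-growth law and all constants are assumptions.

    ```python
    import numpy as np

    def adaptive_moving_average(signal, base_window=3, growth=0.01):
        """Smooth each point with a centred moving average whose window grows
        linearly with the point index (a stand-in for migration time)."""
        n = len(signal)
        smoothed = np.empty(n)
        for i in range(n):
            half = int(base_window + growth * i) // 2
            lo, hi = max(0, i - half), min(n, i + half + 1)
            smoothed[i] = signal[lo:hi].mean()
        return smoothed

    t = np.linspace(0, 60, 3000)
    clean = np.exp(-(t - 10) ** 2 / 0.05) + np.exp(-(t - 45) ** 2 / 2.0)  # narrow early peak, broad late peak
    noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(t.size)
    print(adaptive_moving_average(noisy).max())
    ```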

  5. Relationship research between meteorological disasters and stock markets based on a multifractal detrending moving average algorithm

    NASA Astrophysics Data System (ADS)

    Li, Qingchen; Cao, Guangxi; Xu, Wei

    2018-01-01

Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the autoregressive fractionally integrated moving average (ARFIMA) process to demonstrate the effectiveness of MFDMA in detecting auto-correlation at different sample lengths and to simulate artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also propose several reasonable recommendations.

  6. MARD—A moving average rose diagram application for the geosciences

    NASA Astrophysics Data System (ADS)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
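    The sketch below is not MARD itself; it only illustrates the kind of unweighted moving-average smoothing of binned circular data the record describes, with wrap-around handled across the angular period. The bin width, aperture, and von Mises test data are assumptions.

    ```python
    import numpy as np

    def rose_moving_average(angles_deg, bin_width=10, aperture=30, bidirectional=True):
        """Bin directional data and smooth the bin counts with an unweighted moving
        average over the given angular aperture (wrap-around handled)."""
        period = 180 if bidirectional else 360
        angles = np.asarray(angles_deg) % period
        edges = np.arange(0, period + bin_width, bin_width)
        counts, _ = np.histogram(angles, bins=edges)
        half = int(round(aperture / bin_width)) // 2
        n = len(counts)
        smoothed = np.array([
            np.mean(counts[np.arange(i - half, i + half + 1) % n]) for i in range(n)
        ])
        return edges[:-1] + bin_width / 2, smoothed

    # Synthetic bi-directional orientation data (degrees), clustered around 0/180.
    sample = np.random.default_rng(2).vonmises(0.0, 2.0, 500) * 90 / np.pi
    centres, smoothed = rose_moving_average(sample)
    print(list(zip(centres, np.round(smoothed, 1))))
    ```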

  7. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    NASA Astrophysics Data System (ADS)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process control charts were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that these charts are not suitable if the same control limits designed for independent variables are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
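    For reference, the sketch below implements the standard EWMA control chart for independent observations (the recursion z_t = λx_t + (1-λ)z_{t-1} with time-varying control limits), which is the starting point this record modifies for autocorrelated data. The smoothing constant λ = 0.2 and limit multiplier L = 3 are conventional assumed values, not taken from the paper.

    ```python
    import numpy as np

    def ewma_chart(x, lam=0.2, L=3.0, mu0=None, sigma=None):
        """Standard EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, with limits
        mu0 +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2t)))."""
        x = np.asarray(x, dtype=float)
        mu0 = x.mean() if mu0 is None else mu0
        sigma = x.std(ddof=1) if sigma is None else sigma
        z = np.empty_like(x)
        prev = mu0
        for t, xt in enumerate(x):
            prev = lam * xt + (1 - lam) * prev
            z[t] = prev
        t = np.arange(1, len(x) + 1)
        width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        return z, mu0 - width, mu0 + width

    data = np.random.default_rng(1).normal(10, 1, 50)   # in-control synthetic observations
    z, lcl, ucl = ewma_chart(data)
    print(np.any((z < lcl) | (z > ucl)))                # True if any point signals out-of-control
    ```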

  8. SM91: Observations of interchange between acceleration and thermalization processes in auroral electrons

    NASA Technical Reports Server (NTRS)

    Pongratz, M.

    1972-01-01

Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic-analyzer electron spectrometer, which made 29 measurements in the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped-electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring-electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. The average was also computed over the entire downward hemisphere (the precipitated-electron average). The observations were obtained in an altitude range of 10 km at 230 km altitude.

  9. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.

  10. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full frame resolution. Hence, to solve the problem of moving-target detection in UAV video, we propose a heterogeneous CPU-GPU moving-target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.

  11. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...

  12. PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.

  13. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  14. Maximum likelihood estimation for periodic autoregressive moving average models

    USGS Publications Warehouse

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  15. Structural Equation Modeling of Multivariate Time Series

    ERIC Educational Resources Information Center

    du Toit, Stephen H. C.; Browne, Michael W.

    2007-01-01

    The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…

  16. Operational Control Procedures for the Activated Sludge Process: Appendix.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This document is the appendix for a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Categories discussed include: control test data, trend charts, moving averages, semi-logarithmic plots, probability…

  17. KARMA4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Salloum, Maher; Lee, Jina

    2017-07-10

    KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.

  18. A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center

    DTIC Science & Technology

    2009-06-01

use just the previous month, moving average 2-month period (MA2) uses the average from the previous two months, moving average 3-month period (MA3...ED prior to discharge by provider) MA2/MA3/MA4 - moving averages of 2-4 months in length; MAD - mean absolute deviation (measure of accuracy for
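    The sketch below illustrates the k-month moving-average forecasts (MA2, MA3, MA4) and the mean absolute deviation (MAD) accuracy measure named in this excerpt. The monthly visit counts are synthetic stand-ins, not data from the case study.

    ```python
    import numpy as np

    def moving_average_forecast(series, k):
        """Forecast each month as the mean of the previous k months (MA2: k=2, MA3: k=3)."""
        series = np.asarray(series, dtype=float)
        return np.array([series[t - k:t].mean() for t in range(k, len(series))])

    def mad(actual, predicted):
        """Mean absolute deviation, a simple accuracy measure for the forecasts."""
        return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

    visits = np.array([410, 395, 430, 442, 418, 455, 470, 462, 480, 475], dtype=float)
    for k in (2, 3, 4):                        # MA2, MA3, MA4
        preds = moving_average_forecast(visits, k)
        print(f"MA{k}: MAD = {mad(visits[k:], preds):.1f}")
    ```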

  19. Monthly streamflow forecasting with auto-regressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.

  20. TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach.

    PubMed

    Elgendi, Mohamed

    2016-11-02

Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages ("TERMA") involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8 × W1) ≥ W2 ≥ (2 × W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions.
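    The sketch below captures only the core TERMA idea as stated in the record: a short "event" moving average (window W1) is compared against a longer "cycle" moving average (window W2, with 2·W1 ≤ W2 ≤ 8·W1) plus an offset, and samples where the short average dominates are flagged as candidate event blocks. The window sizes, offset rule, and synthetic ECG-like signal are assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    def moving_average(x, w):
        """Centred moving average with edge padding so the output length matches x."""
        pad = w // 2
        xp = np.pad(x, (pad, w - 1 - pad), mode="edge")
        return np.convolve(xp, np.ones(w) / w, mode="valid")

    def terma_like_blocks(signal, w1=11, w2=55, beta=0.08):
        """Flag samples where the short (event) moving average exceeds the long
        (cycle) moving average plus an offset; 2*w1 <= w2 <= 8*w1 per TERMA."""
        assert 2 * w1 <= w2 <= 8 * w1
        ma_event = moving_average(signal, w1)
        ma_cycle = moving_average(signal, w2)
        threshold = ma_cycle + beta * np.mean(signal)
        return ma_event > threshold           # boolean mask of candidate event blocks

    ecg_like = np.abs(np.sin(np.linspace(0, 20 * np.pi, 2000))) ** 8 \
        + 0.05 * np.random.default_rng(3).standard_normal(2000)
    print(terma_like_blocks(ecg_like).sum(), "samples inside candidate blocks")
    ```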

  1. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
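    The record compares its MINLP solvers against brute-force enumeration; the sketch below shows only that enumeration baseline, scoring candidate ARMA(p,q) orders by AIC with statsmodels (whose likelihood is evaluated via Kalman filter recursions, as in the record). It is not the mesh adaptive direct search or genetic algorithm of the paper, and the order grid and synthetic data are assumptions.

    ```python
    import itertools
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    e = rng.standard_normal(500)
    y = np.zeros(500)
    for t in range(2, 500):                               # true model: ARMA(2,1)
        y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

    best = None
    for p, q in itertools.product(range(4), range(4)):    # brute-force enumeration of orders
        try:
            res = ARIMA(y, order=(p, 0, q)).fit()          # Gaussian likelihood via Kalman filter
            if best is None or res.aic < best[0]:          # could equally rank by res.bic
                best = (res.aic, p, q)
        except Exception:
            continue                                       # skip orders that fail to converge

    print("best (AIC, p, q):", best)
    ```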

  2. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. Consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements independent of the ADCP used to collect the data.

  3. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  4. Multifractal detrending moving-average cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2011-07-01

There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of the multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.

  5. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.

  6. Low-Rank Matrix Recovery Approach for Clutter Rejection in Real-Time IR-UWB Radar-Based Moving Target Detection

    PubMed Central

    Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom

    2016-01-01

    The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background and by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the trend of UWB Radar-based moving target detection. Robust PCA models are criticized for being batched-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which will be continually updated without changing its size as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges on the optimal solution with its batch counterpart (i.e., processing batched data with RPCA), and both methods prove the robustness and efficiency of the RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159

  7. Robust Semi-Active Ride Control under Stochastic Excitation

    DTIC Science & Technology

    2014-01-01

broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving...Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average...Down Up 4) Down Down These four cases can be written in compact form as (20), where ... is the Heaviside

  8. TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach

    PubMed Central

    Elgendi, Mohamed

    2016-01-01

    Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method referred to for the first time as two event-related moving averages (“TERMA”) involves event-related moving averages and detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high accuracy detection of biomedical events. Results recommend that the window sizes for the two moving averages (W1 and W2) have to follow the inequality (8×W1)≥W2≥(2×W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions. PMID:27827852

  9. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.

  10. Evaluating and improving count-based population inference: A case study from 31 years of monitoring Sandhill Cranes

    USGS Publications Warehouse

    Gerber, Brian D.; Kendall, William L.

    2017-01-01

    Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
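    The sketch below shows only the conventional smoothed index this record evaluates: a trailing 3-year moving average of annual counts used to dampen sampling variation. It is not the hierarchical Bayesian time series (HBTS) model; the window length follows the record, and the count values are synthetic stand-ins.

    ```python
    import numpy as np

    def moving_average_index(counts, window=3):
        """Trailing moving average of annual counts, the common smoothed index
        used to dampen sampling variation in count-based monitoring."""
        counts = np.asarray(counts, dtype=float)
        return np.array([counts[max(0, i - window + 1): i + 1].mean()
                         for i in range(len(counts))])

    annual_counts = np.array([18200, 21400, 17600, 19800, 22500, 16900, 20300], dtype=float)
    print(np.round(moving_average_index(annual_counts)))
    ```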

  11. Dimensional processing of composite materials by picosecond pulsed ytterbium fiber laser

    NASA Astrophysics Data System (ADS)

    Kotov, S. A.

    2017-12-01

In this paper, an experimental study of laser dimensional processing of thermoset carbon-fiber-reinforced plastics with thicknesses of 2 and 3 mm was performed. In the course of this work, a test rig based on a picosecond pulsed fiber laser with a 1.06 μm wavelength and 30 W average power was developed. Experimental tests were carried out at the maximum average power, with the laser beam moved by a galvanometric mirror system. Cutting tests were executed with different scanning velocities, laser modes, numbers of repetitions, hatching distances and focal plane positions, without process gas. As a result of the research, recommendations for selecting processing-mode parameters that provide a minimal heat-affected zone, good kerf geometry and high cutting speed were produced.

  12. Defense Applications of Signal Processing

    DTIC Science & Technology

    1999-08-27

class of multiscale autoregressive moving average (MARMA) processes. These are generalisations of ARMA models in time series analysis, and they contain...including the two theoretical sinusoidal components. Analysis of the amplitude and frequency time series provided some novel insight into the real...communication channels, underwater acoustic signals, radar systems, economic time series and biomedical signals [7]. The alpha stable (aS) distribution has

  13. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes have no analytical LDP. We study LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).

  14. Integrating WEPP into the WEPS infrastructure

    USDA-ARS?s Scientific Manuscript database

    The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...

  15. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 2 2011-04-01 2011-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  16. 25 CFR 700.173 - Average net earnings of business or farm.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...

  17. A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Kahya, Ercan

    2017-06-01

Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and eventually the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution, which is of noteworthy importance for application in practice. In addition, the approach allows the user to bring human insight into the problem to examine evolved models and pick the best-performing programs out for further analysis.

  18. Identification of moving vehicle forces on bridge structures via moving average Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin

    2017-08-01

Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Lots of regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter into and exit a bridge deck due to low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining with the moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed as a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced for improving the penalty function (‖x‖₂²) defined in the classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed for assessing the accuracy and the feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with a strong robustness. Some related issues, such as selection of moving window length, effect of different penalty functions, and effect of different car speeds, are discussed as well.
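    For context, the sketch below shows only the classical Tikhonov (ridge) solution x = (AᵀA + λI)⁻¹Aᵀb that this record improves upon with its moving-average-based penalty; the improved penalty itself is not reproduced here. The random system matrix and smooth "force history" are synthetic stand-ins for a bridge response model.

    ```python
    import numpy as np

    def tikhonov_solve(A, b, lam):
        """Classical Tikhonov solution x = (A^T A + lam*I)^{-1} A^T b."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    rng = np.random.default_rng(3)
    A = rng.standard_normal((200, 50))                    # stand-in response-to-force system matrix
    x_true = np.sin(np.linspace(0, 3 * np.pi, 50))        # slowly varying "moving force" history
    b = A @ x_true + 0.05 * rng.standard_normal(200)      # noisy measured responses

    for lam in (1e-4, 1e-1, 10.0):
        err = np.linalg.norm(tikhonov_solve(A, b, lam) - x_true)
        print(f"lambda={lam:g}: reconstruction error {err:.3f}")
    ```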

  19. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.

    PubMed

    Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.

  20. Moving Average Models with Bivariate Exponential and Geometric Distributions.

    DTIC Science & Technology

    1985-03-01

    ordinary time series and of point processes. Developments in Statistics, Vol. 1, P.R. Krishnaiah , ed. Academic Press, New York. [9] Esary, J.D. and...valued and discrete - valued time series with ARMA correlation structure. Multivariate Analysis V, P.R. Krishnaiah , ed. North-Holland. 151-166. [28

  1. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging

    NASA Astrophysics Data System (ADS)

    Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.

    2015-01-01

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
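    The sketch below illustrates the voxel-wise combination of separately stored repetitions described in this record, comparing the mean against the median and a symmetric trimmed mean; a single motion-corrupted repetition barely affects the latter two. The array sizes, trim fraction, and synthetic "image" values are assumptions.

    ```python
    import numpy as np

    def combine_repetitions(stack, trim_fraction=0.2):
        """Combine repeated acquisitions (axis 0) per voxel with the mean,
        median, and a symmetric trimmed mean."""
        mean_img = stack.mean(axis=0)
        median_img = np.median(stack, axis=0)
        k = int(round(trim_fraction * stack.shape[0]))
        sorted_stack = np.sort(stack, axis=0)
        trimmed = sorted_stack[k: stack.shape[0] - k] if k > 0 else sorted_stack
        trimmed_mean_img = trimmed.mean(axis=0)
        return mean_img, median_img, trimmed_mean_img

    # 8 repetitions of a 4x4 "image"; one repetition is corrupted by motion.
    rng = np.random.default_rng(4)
    stack = np.ones((8, 4, 4)) + 0.01 * rng.standard_normal((8, 4, 4))
    stack[3] += 2.0                                    # motion-corrupted repetition
    mean_img, median_img, trimmed_img = combine_repetitions(stack)
    print(mean_img[0, 0], median_img[0, 0], trimmed_img[0, 0])
    ```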

  2. Use of the temporal median and trimmed mean mitigates effects of respiratory motion in multiple-acquisition abdominal diffusion imaging.

    PubMed

    Jerome, N P; Orton, M R; d'Arcy, J A; Feiweier, T; Tunariu, N; Koh, D-M; Leach, M O; Collins, D J

    2015-01-21

    Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.

  3. Enhancement of the Comb Filtering Selectivity Using Iterative Moving Average for Periodic Waveform and Harmonic Elimination

    PubMed Central

    Wu, Yan; Aarts, Ronald M.

    2018-01-01

    A recurring problem regarding the use of conventional comb filter approaches for elimination of periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as the gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filtering that provides effective attenuation in the stopbands and gain close to unity in the pass-bands. In this paper, we present a novel comb filtering implementation whereby the iterative filtering application of FIR moving average-based approaches is exploited in order to enhance the comb filtering selectivity. Our results indicate that the proposed approach can be used to effectively approximate the FIR moving average filter characteristics to those of an ideal filter. A cascaded implementation using the proposed approach shows to further increase the attenuation in the filter stopbands. Moreover, broadening of the bandwidth of the comb filtering stopbands around −3 dB according to the fundamental frequency of the stopband can be achieved by the novel method, which constitutes an important characteristic to account for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity as well. PMID:29599955

  4. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, then the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented by a pulsed light source, which is composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, which has a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
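    The sketch below implements the moving average scheme as parameterized in this record: from M raw traces, blocks of N consecutive traces are averaged, sliding by n traces per step. The values of M, N, and n, and the synthetic traces with a vibrating section, are assumptions for illustration only.

    ```python
    import numpy as np

    def moving_trace_average(traces, N, n):
        """Average N consecutive traces, sliding by n traces per step.
        traces: array of shape (M, samples); returns (num_windows, samples)."""
        M = traces.shape[0]
        starts = range(0, M - N + 1, n)
        return np.stack([traces[s:s + N].mean(axis=0) for s in starts])

    M, samples = 200, 1000
    rng = np.random.default_rng(5)
    traces = rng.standard_normal((M, samples))                    # stand-in raw phase-OTDR traces
    traces[:, 400:420] += np.sin(2 * np.pi * 50 * np.arange(M) / 1000)[:, None]  # vibrating fiber section
    averaged = moving_trace_average(traces, N=20, n=5)
    print(averaged.shape)                                          # (37, 1000)
    ```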

  5. Study of Liquid Breakup Process in Solid Rocket Motors

    DTIC Science & Technology

    2014-01-01

    waves. The breakup level increases with the surrounding gas velocity; more liquid breakup in the nozzle throat reduces the liquid alumina droplet size...process of a liquid film that flows along the wall of a straight channel while a high-speed gas moves over it. We have used an unsteady-flow Reynolds...Averaged Navier-Stokes code (URANS) to investigate the interaction of the liquid film flow with the gas flow, and analyzed the breakup process for

  6. Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA

    NASA Astrophysics Data System (ADS)

    Montillet, Jean-Philippe; Yu, Kegen

    2015-04-01

Over recent years the scientific community has been using the autoregressive moving average (ARMA) model to model the noise in global positioning system (GPS) time series (daily solutions). This work starts with an investigation of the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., the seasonal signal) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).

  7. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging tool for monitoring the interaction between fast-moving viruses and hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to elucidate this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.

  8. Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies

    PubMed Central

    2016-01-01

    The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA′) in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA′. The Efficacy Ratio adjusts the AMA′ to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA′ is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA′ are superior to the passive buy-and-hold strategy. Specifically, AMA′ outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets. PMID:27574972

  9. Timescale Halo: Average-Speed Targets Elicit More Positive and Less Negative Attributions than Slow or Fast Targets

    PubMed Central

    Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin

    2014-01-01

    Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow or fast moving people. PMID:24421882

  10. Online tracking of instantaneous frequency and amplitude of dynamical system response

    NASA Astrophysics Data System (ADS)

    Frank Pai, P.

    2010-05-01

    This paper presents a sliding-window tracking (SWT) method for accurate tracking of the instantaneous frequency and amplitude of an arbitrary dynamic response by processing only the three (or more) most recent data points. The Teager-Kaiser algorithm (TKA) is a well-known four-point method for online tracking of frequency and amplitude. Because finite differences are used in TKA, its accuracy is easily destroyed by measurement and/or signal-processing noise. Moreover, because TKA assumes the processed signal to be a pure harmonic, any moving average in the signal can destroy the accuracy of TKA. On the other hand, because SWT uses a constant and a pair of windowed regular harmonics to fit the data and estimate the instantaneous frequency and amplitude, the influence of any moving average is eliminated. Moreover, noise filtering is an implicit capability of SWT when more than three data points are used, and this capability increases with the number of processed data points. To compare the accuracy of SWT and TKA, the Hilbert-Huang transform is used to extract accurate time-varying frequencies and amplitudes by processing the whole data set without assuming the signal to be harmonic. Frequency and amplitude tracking of different amplitude- and frequency-modulated signals, vibrato in music, and nonlinear stationary and non-stationary dynamic signals are studied. Results show that SWT is more accurate, robust, and versatile than TKA for online tracking of frequency and amplitude.

  11. Examination of the Armagh Observatory Annual Mean Temperature Record, 1844-2004

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2006-01-01

    The long-term annual mean temperature record (1844-2004) of the Armagh Observatory (Armagh, Northern Ireland, United Kingdom) is examined for evidence of systematic variation, in particular as related to solar/geomagnetic forcing and secular variation. Indeed, both are apparent in the temperature record. Ten-year moving averages of temperature are found to correlate strongly with both 10-year moving averages of the aa geomagnetic index and sunspot number, having correlation coefficients of approx. 0.7, implying that nearly half the variance in the 10-year moving average of temperature can be explained by solar/geomagnetic forcing. The residuals appear episodic in nature, with cooling seen in the 1880s and again near 1980. Seven of the last 10 years of the temperature record have exceeded 10 C, unprecedented in the overall record. Variation of sunspot-cycle averages and 2-cycle moving averages of temperature strongly associates with similar averages for the solar/geomagnetic cycle, with the residuals displaying an apparent 9-cycle variation and a steep rise in temperature associated with cycle 23. Hale-cycle averages of temperature for even-odd pairs of sunspot cycles correlate with similar averages for the solar/geomagnetic cycle and, especially, with the length of the Hale cycle. Indications are that annual mean temperature will likely exceed 10 C over the next decade.
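    The core computation here is a centred multi-year moving average of two annual series followed by a correlation of the smoothed curves. A minimal sketch of that step is below; the data are synthetic placeholders, since neither the Armagh record nor the solar/geomagnetic indices are reproduced here.

```python
# Minimal sketch: centred 10-year moving averages of two annual series and their
# correlation, analogous to comparing smoothed temperature with a smoothed solar or
# geomagnetic index. The data are synthetic, not the Armagh record.
import numpy as np

def moving_average(x, window):
    """Centred moving average; edges shrink to the available data."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    return np.array([x[max(0, i - half): i + half + 1].mean() for i in range(len(x))])

rng = np.random.default_rng(3)
years = np.arange(1844, 2005)
driver = np.sin(2 * np.pi * (years - 1844) / 11.0)            # stand-in "solar" cycle
temperature = 9.0 + 0.3 * driver + 0.005 * (years - 1844) + rng.normal(0, 0.4, years.size)

temp_10 = moving_average(temperature, 10)
driver_10 = moving_average(driver, 10)
r = np.corrcoef(temp_10, driver_10)[0, 1]
print(f"correlation of 10-yr averages: r = {r:.2f}, variance explained ~ {r**2:.0%}")
```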

  12. Using a traffic simulation model (VISSIM) with an emissions model (MOVES) to predict emissions from vehicles on a limited-access highway.

    PubMed

    Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David

    2013-07-01

    The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, the need to accurately quantify transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.

  13. Time Series in Education: The Analysis of Daily Attendance in Two High Schools

    ERIC Educational Resources Information Center

    Koopmans, Matthijs

    2011-01-01

    This presentation discusses the use of a time series approach to the analysis of daily attendance in two urban high schools over the course of one school year (2009-10). After establishing that the series for both schools were stationary, they were examined for moving average processes, autoregression, seasonal dependencies (weekly cycles),…

  14. Forecasting coconut production in the Philippines with ARIMA model

    NASA Astrophysics Data System (ADS)

    Lim, Cristina Teresa

    2015-02-01

    The study aimed to project the situation of the coconut industry in the Philippines over future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The fitted autoregressive moving average (ARMA) model was then used to forecast coconut production for the eight succeeding years.
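    The workflow described (inspect ACF/PACF, fit an ARIMA order, validate, then forecast several years ahead) maps directly onto standard time-series libraries. A minimal sketch using statsmodels is given below; the production figures and the (p, d, q) order are placeholders, not the study's actual data or fitted model.

```python
# Minimal sketch of a Box-Jenkins fit-and-forecast step with statsmodels.
# The annual figures and the ARIMA order are placeholders, not the study's values.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Placeholder annual production figures for 1990-2012 (synthetic, arbitrary units)
rng = np.random.default_rng(4)
production = 14 + 0.05 * np.arange(23) + rng.normal(0, 0.4, 23)

model = ARIMA(production, order=(1, 1, 1))   # assumed order; in practice chosen via ACF/PACF and AIC
result = model.fit()
print(result.summary())

forecast = result.forecast(steps=8)          # eight years ahead, as in the study
print(np.round(forecast, 2))
```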

  15. THE VELOCITY DISTRIBUTION OF NEARBY STARS FROM HIPPARCOS DATA. II. THE NATURE OF THE LOW-VELOCITY MOVING GROUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bovy, Jo; Hogg, David W., E-mail: jo.bovy@nyu.ed

    2010-07-10

    The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.

  16. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW to sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  17. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2017-06-29

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW to sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
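    The key idea is to group similar second-by-second speed profiles with a dynamic-time-warping (DTW) distance and hierarchical clustering, so that one representative profile per cluster can stand in as a link driving schedule. The sketch below shows that pipeline on synthetic profiles; the DTW implementation is a plain textbook version and the profiles are not actual MOVES inputs.

```python
# Minimal sketch of the HC-DTW idea: cluster speed profiles with a DTW distance,
# then take one representative profile per cluster. Synthetic profiles only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(5)
# Synthetic speed profiles (km/h): free-flow, stop-and-go, and congested shapes
profiles = [np.clip(base + rng.normal(0, 3, 60), 0, None)
            for base in (np.full(60, 50.0),
                         25 + 25 * np.sign(np.sin(np.arange(60) / 5)),
                         np.linspace(40, 5, 60))
            for _ in range(4)]

n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(profiles[i], profiles[j])

labels = fcluster(linkage(squareform(dist), method="average"), t=3, criterion="maxclust")
print("cluster label per profile:", labels)
# A medoid (or average) profile of each cluster would then serve as one LDS input.
```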

  18. On the Relationship between Solar Wind Speed, Earthward-Directed Coronal Mass Ejections, Geomagnetic Activity, and the Sunspot Cycle Using 12-Month Moving Averages

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.

  19. Application of image processing to calculate the number of fish seeds using raspberry-pi

    NASA Astrophysics Data System (ADS)

    Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.

    2018-03-01

    Many fish cultivators in Indonesia have suffered losses because the number of fish seeds bought and sold did not match the agreed amount. These losses arise because fish seeds are still counted manually. To overcome this problem, this study designed an automatic, real-time fish counting system based on image processing on a Raspberry Pi. Image processing was used because it can count moving objects and eliminate noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, and the approach used is the “double difference image”. The “double difference” approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using these methods and approaches, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector width of 96 pixels and a test-plane slope of 1 degree.

  20. A landslide-quake detection algorithm with STA/LTA and diagnostic functions of moving average and scintillation index: A preliminary case study of the 2009 Typhoon Morakot in Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yu-Jie; Lin, Guan-Wei

    2017-04-01

    Since 1999, Taiwan has experienced a rapid rise in the number of landslides, and the number even reached a peak after the 2009 Typhoon Morakot. Although it has been shown that the ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish in continuous seismic records due to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions of moving average and scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, and detection thresholds are defined to distinguish landslide-quakes from earthquakes and background noise. To further improve the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are then located. The results show that the landslide-detection results are consistent with those of visual inspection and hence the algorithm can be used to automatically monitor landslide-quakes.
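    Of the three detectors combined above, the STA/LTA ratio is the simplest to illustrate: a short-window energy average divided by a long-window energy average, which rises when an emergent signal appears. The sketch below triggers on a synthetic landslide-quake-like burst; the window lengths and threshold are assumed values, and the moving-average and scintillation-index functions of the paper are not reproduced.

```python
# Minimal sketch of an STA/LTA trigger on a seismic-like trace. Window lengths and the
# trigger threshold are assumptions, not the paper's calibrated values.
import numpy as np

def sta_lta(trace, fs, sta_sec=1.0, lta_sec=30.0):
    """Ratio of short-term to long-term average of the squared signal (energy)."""
    energy = trace.astype(float) ** 2
    sta_n, lta_n = int(sta_sec * fs), int(lta_sec * fs)
    kernel = lambda n: np.ones(n) / n
    sta = np.convolve(energy, kernel(sta_n), mode="same")
    lta = np.convolve(energy, kernel(lta_n), mode="same")
    lta[lta < 1e-12] = 1e-12
    return sta / lta

fs = 100.0                                   # samples per second
rng = np.random.default_rng(6)
trace = rng.normal(0, 1.0, int(600 * fs))    # 10 minutes of background noise
# Emergent, long-duration "landslide-quake" burst without sharp P/S onsets
burst = np.hanning(int(60 * fs)) * 6.0
start = int(300 * fs)
trace[start:start + burst.size] += burst * rng.normal(0, 1.0, burst.size)

ratio = sta_lta(trace, fs)
threshold = 3.0                              # assumed trigger level
triggered = np.where(ratio > threshold)[0]
if triggered.size:
    print(f"first trigger at t = {triggered[0] / fs:.1f} s")
```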

  1. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of evaporation process parameters suffer from larger prediction errors because the parameters have continuous and cumulative characteristics. On this basis, an adaptive particle swarm neural network forecasting method is proposed for the process parameters, and an autoregressive moving average (ARMA) error-compensation procedure is established to correct the neural network predictions and improve prediction accuracy. Production data from an alumina plant evaporation process were analyzed for validation. Compared with the traditional model, the prediction accuracy of the new model is greatly improved, and the model can be used to predict the dynamic components of the sodium aluminate solution in the evaporation process.

  2. Viscous Torques on a Levitating Body

    NASA Technical Reports Server (NTRS)

    Busse, F.; Wang, T.

    1982-01-01

    New analytical expressions for viscous torque generated by orthogonal sound waves agree well with experiment. It is possible to calculate torque on an object levitated in a fluid. Levitation has applications in containerless materials processing, coating, and fabrication of small precision parts. Sound waves cause fluid particles to move in elliptical paths and induce azimuthal circulation in boundary layer, giving rise to time-averaged torque.

  3. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network

    PubMed Central

    Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process. PMID:26977450

  4. Intelligent Soft Computing on Forex: Exchange Rates Forecasting with Hybrid Radial Basis Neural Network.

    PubMed

    Falat, Lukas; Marcek, Dusan; Durisova, Maria

    2016-01-01

    This paper deals with the application of quantitative soft computing prediction models in the financial area, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help eliminate the risk of making a bad decision in the decision-making process.

  5. The application of moving bed biofilm reactor to denitrification process after trickling filters.

    PubMed

    Kopec, Lukasz; Drewnowski, Jakub; Kopec, Adam

    2016-12-01

    The paper presents research of a prototype moving bed biofilm reactor (MBBR). The device was used for the post-denitrification process and was installed at the end of a technological system consisting of a septic tank and two trickling filters. The concentrations of suspended biomass and biomass attached on the EvU Perl moving bed surface were determined. The impact of the external organic carbon concentration on the denitrification rate and efficiency of total nitrogen removal was also examined. The study showed that the greater part of the biomass was in the suspended form and only 6% of the total biomass was attached to the surface of the moving bed. Abrasion forces between carriers of the moving bed caused the fast stripping of attached microorganisms and formation of flocs. Thanks to immobilization of a small amount of biomass, the MBBR was less prone to leaching of the biomass and the occurrence of scum and swelling sludge. It was revealed that the maximum rate of denitrification was an average of 0.73 g N-NO3/g DM·d (DM: dry matter), and was achieved when the reactor was maintained at an external organic carbon concentration exceeding 300 mg O2/dm3 chemical oxygen demand. The reactor proved to be an effective device enabling the increase of total nitrogen removal from 53.5% to 86.0%.

  6. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.

    PubMed

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-12-11

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm that is on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which are found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrians Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by the unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.

  7. Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow

    NASA Astrophysics Data System (ADS)

    Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen

    2015-11-01

    The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2, computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.

  8. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in the applications of active power filters. The accuracy and real-time performance of harmonic detection are the precondition to ensure the compensation performance of the Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm. The algorithm uses an improved ip-iq algorithm combined with a moving average value filter. The proposed ip-iq algorithm can remove the αβ and dq coordinate transformations, decreasing the cost of calculation, simplifying the extraction process of fundamental components of load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving average filter, detecting the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in the simulations and from 8.50% to 4.37% in the experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.

  9. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...

  10. 78 FR 26879 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-08

    ...: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Proposed rule. SUMMARY: This proposed rule..., especially the teaching status adjustment factor. Therefore, we implemented a 3-year moving average approach... moving average to calculate the facility-level adjustment factors. For FY 2011, we issued a notice to...

  11. Space trajectory calculation based on G-sensor

    NASA Astrophysics Data System (ADS)

    Xu, Biya; Zhan, Yinwei; Shao, Yang

    2017-08-01

    At present, most research in the field of human body posture recognition uses cameras or portable acceleration sensors to collect data, without making full use of the mobile phones around us. In this paper, the G-sensor built into a mobile phone is used to collect data. After processing the data with a moving average filter and integrating the acceleration, the three-dimensional spatial coordinates of joint points can be obtained accurately.
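    A minimal sketch of that processing chain (smooth raw accelerometer samples with a moving-average filter, then integrate twice to obtain a trajectory) is shown below for a single axis; the sample rate, window length and synthetic motion are assumptions.

```python
# Minimal sketch: moving-average smoothing of accelerometer samples followed by
# double integration to a displacement estimate. One axis, synthetic data only.
import numpy as np

def moving_average_filter(x, window=15):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

fs = 50.0                                  # assumed G-sensor sample rate (Hz)
dt = 1.0 / fs
t = np.arange(0, 4, dt)
true_accel = 0.5 * np.sin(2 * np.pi * 0.5 * t)      # gentle back-and-forth motion
rng = np.random.default_rng(7)
raw = true_accel + rng.normal(0, 0.2, t.size)       # noisy sensor reading (m/s^2)

accel = moving_average_filter(raw)
velocity = np.cumsum(accel) * dt                    # first integration
position = np.cumsum(velocity) * dt                 # second integration
print("final displacement estimate: %.3f m" % position[-1])
# In practice drift grows quickly with double integration, so zero-velocity updates
# or high-pass filtering are usually added on top of the smoothing step.
```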

  12. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.

  13. Geophysical Factor Resolving of Rainfall Mechanism for Super Typhoons by Using Multiple Spatiotemporal Components Analysis

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Lin; Hsu, Nien-Sheng

    2016-04-01

    This study develops a novel methodology to resolve the geophysical causes of typhoon-induced rainfall, considering diverse dynamic co-evolution across multiple spatiotemporal components. The multi-order hidden patterns of the complex, chaotic hydrological process are detected to understand the fundamental laws of the rainfall mechanism. The discovered spatiotemporal features are utilized to develop a state-of-the-art descriptive statistical model for mechanism validation, modeling and further prediction during typhoons. The time series of hourly typhoon precipitation from different types of moving track, atmospheric field and landform are each passed through the signal analysis process to qualify each type of rainfall cause and to quantify the corresponding degree of influence based on the measured geophysical atmospheric-hydrological variables. This study applies the developed methodology to Taiwan Island, which comprises complex and diverse landforms. The identified driving causes include: (1) cloud height above the ground surface; (2) the co-movement effect induced by the typhoon wind field with the monsoon; (3) stem capacity; (4) the interaction between the typhoon rain band and terrain; (5) the structural intensity variance of the typhoon; and (6) the integrated cloud density of the rain band. Results show that: (1) for a central maximum wind speed exceeding 51 m/sec, Causes (1) and (3) are the primary ones generating rainfall; (2) for a typhoon moving toward a direction of 155° to 175°, Cause (2) is the primary one; (3) for a direction of 90° to 155°, Cause (4) is the primary one; (4) for a typhoon passing over a mountain chain above 3500 m, Cause (5) is the primary one; and (5) for a moving speed lower than 18 km/hr, Cause (6) is the primary one. In addition, the multiple geophysical component-based precipitation modeling achieves an average accuracy of 81% and an average correlation coefficient (CC) of 0.732 over an average duration of 46 hours, improving predictability.

  14. Correcting for day of the week and public holiday effects: improving a national daily syndromic surveillance service for detecting public health threats.

    PubMed

    Buckingham-Jeffery, Elizabeth; Morbey, Roger; House, Thomas; Elliot, Alex J; Harcourt, Sally; Smith, Gillian E

    2017-05-19

    As service provision and patient behaviour varies by day, healthcare data used for public health surveillance can exhibit large day of the week effects. These regular effects are further complicated by the impact of public holidays. Real-time syndromic surveillance requires the daily analysis of a range of healthcare data sources, including family doctor consultations (called general practitioners, or GPs, in the UK). Failure to adjust for such reporting biases during analysis of syndromic GP surveillance data could lead to misinterpretations including false alarms or delays in the detection of outbreaks. The simplest smoothing method to remove a day of the week effect from daily time series data is a 7-day moving average. Public Health England developed the working day moving average in an attempt also to remove public holiday effects from daily GP data. However, neither of these methods adequately account for the combination of day of the week and public holiday effects. The extended working day moving average was developed. This is a further data-driven method for adding a smooth trend curve to a time series graph of daily healthcare data, that aims to take both public holiday and day of the week effects into account. It is based on the assumption that the number of people seeking healthcare services is a combination of illness levels/severity and the ability or desire of patients to seek healthcare each day. The extended working day moving average was compared to the seven-day and working day moving averages through application to data from two syndromic indicators from the GP in-hours syndromic surveillance system managed by Public Health England. The extended working day moving average successfully smoothed the syndromic healthcare data by taking into account the combined day of the week and public holiday effects. In comparison, the seven-day and working day moving averages were unable to account for all these effects, which led to misleading smoothing curves. The results from this study make it possible to identify trends and unusual activity in syndromic surveillance data from GP services in real-time independently of the effects caused by day of the week and public holidays, thereby improving the public health action resulting from the analysis of these data.
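    The baseline smoother mentioned above, a centred 7-day moving average, together with a simple day-of-week adjustment, can be sketched in a few lines. The example below is illustrative only: it is not Public Health England's extended working day moving average, and the consultation counts and adjustment scheme are assumptions.

```python
# Minimal sketch of a centred 7-day moving average of daily GP consultation counts,
# plus a crude day-of-week adjustment. Not PHE's extended working day moving average;
# the counts and the adjustment are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
days = pd.date_range("2016-01-01", periods=180, freq="D")
weekday_effect = np.where(days.dayofweek < 5, 1.0, 0.35)       # fewer weekend contacts
trend = 200 + 30 * np.sin(np.arange(180) / 20.0)
counts = pd.Series(rng.poisson(trend * weekday_effect), index=days)

# 1) plain centred 7-day moving average
ma7 = counts.rolling(window=7, center=True).mean()

# 2) crude day-of-week adjustment: divide by each weekday's mean share, then smooth
weekday_factor = counts.groupby(counts.index.dayofweek).mean()
weekday_factor /= weekday_factor.mean()
adjusted = counts / counts.index.dayofweek.map(weekday_factor).to_numpy()
ma7_adjusted = adjusted.rolling(window=7, center=True).mean()

print(pd.DataFrame({"raw": counts, "ma7": ma7, "ma7_adjusted": ma7_adjusted}).head(10).round(1))
```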

  15. Moving in the Right Direction: Helping Children Cope with a Relocation

    ERIC Educational Resources Information Center

    Kruse, Tricia

    2012-01-01

    According to national figures, 37.1 million people moved in 2009 (U.S. Census Bureau, 2010). In fact, the average American will move 11.7 times in their lifetime. Why are Americans moving so much? There are a variety of reasons. Regardless of the reason, moving is a common experience for children. If one looks at the developmental characteristics…

  16. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced to 86.95%, 86.53% and 34.99%, 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  17. Interplay between river dynamics and international borders: the Hirmand River between Iran and Afghanistan

    NASA Astrophysics Data System (ADS)

    Yousefi, Saleh; Keesstra, Saskia; Pourghasemi, Hamid Reza; Surian, Nicola; Mirzaee, Somayeh

    2017-04-01

    Fluvial dynamics in riverine borders can play an important role in political relationships between countries. Rivers move and evolve under the influence of natural processes and external drivers (e.g. land use change in river catchments). The Hirmand River is an important riverine border between Iran and Afghanistan. The present study shows the evolution and lateral shifting of the Hirmand River along the common international border (25.6 km) over a period of 6 decades (1955-2015). Seven data series of aerial photos, topographic maps and Landsat images were used to identify the land cover and morphological changes in the study reach. The land cover has changed dramatically on both sides of the border during the last 6 decades, especially in the Afghan part. Overall, 49% of all land surface changed its cover type, especially the area of agriculture and residential land contributed to that, with an increase in surface area of about 4931 ha and 561 ha, respectively. On the other hand, the natural cover and water bodies decreased to 38% and 63%, respectively. The impact of these land use changes on the morphological evolution of Hirmand River was investigated in 5 sub-reaches. We found an average decrease of the active channel width of 53% during 60 years and the average River Network Change Index for the whole study reach during 60 years was -1.25 m/yr. Deposition and narrowing turned out to be the main processes occurring within the study reach. Furthermore, due to natural riverine processes the Hirmand River has moved towards Afghanistan (37 m on average) and lateral shifting was found to be up to 1900 m in some sections.

  18. A comparison of several techniques for imputing tree level data

    Treesearch

    David Gartner

    2002-01-01

    As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...

  19. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

  20. An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones

    PubMed Central

    Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han

    2015-01-01

    Wi-Fi indoor positioning algorithms experience large positioning error and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm that is on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which are found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrians Dead Reckoning (PDR) positioning, including three parts: trusted point determination, trust state and positioning fusion algorithm. An experiment is carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by the unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447

  1. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, the effects of which are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
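    The second noise component named above, a first-order Gauss-Markov process, is straightforward to simulate, and an M-point moving average is the simplest of the filters mentioned. The sketch below generates GM-plus-white altimeter-like noise and smooths it; the correlation time, standard deviations and M are assumed values, not the identified ones.

```python
# Minimal sketch: first-order Gauss-Markov noise plus white noise, smoothed with an
# M-point moving average. Correlation time, variances and M are assumptions.
import numpy as np

def first_order_gauss_markov(n, dt, tau, sigma, rng):
    """x_{k+1} = exp(-dt/tau) * x_k + w_k, with stationary standard deviation sigma."""
    phi = np.exp(-dt / tau)
    w_std = sigma * np.sqrt(1.0 - phi**2)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0, w_std)
    return x

rng = np.random.default_rng(9)
fs, n = 10.0, 3000                       # 5 minutes of altimeter samples at 10 Hz
gm = first_order_gauss_markov(n, 1.0 / fs, tau=30.0, sigma=0.15, rng=rng)   # metres
white = rng.normal(0, 0.25, n)
height_noise = gm + white

M = 21                                    # assumed moving-average length
smoothed = np.convolve(height_noise, np.ones(M) / M, mode="same")
print("raw noise std     : %.3f m" % height_noise.std())
print("smoothed noise std: %.3f m" % smoothed.std())
```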

  2. Using Baidu Search Index to Predict Dengue Outbreak in China

    NASA Astrophysics Data System (ADS)

    Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai

    2016-12-01

    This study identified possible thresholds for predicting dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that low-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.

  3. Comparison between wavelet transform and moving average as filter method of MODIS imagery to recognize paddy cropping pattern in West Java

    NASA Astrophysics Data System (ADS)

    Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi

    2017-01-01

    Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy field cropping patterns is important for maintaining sustainable paddy field areas. This can be done by direct observation or by remote sensing methods. This paper discusses remote sensing for paddy field monitoring based on MODIS time-series data. Time-series MODIS data are difficult to classify directly because of temporal noise; therefore, the wavelet transform and the moving average are needed as filter methods. The objective of this study is to recognize paddy cropping patterns in West Java with the wavelet transform and the moving average using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed that both methods produce almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. Differences in planting schedule occur because of the availability of irrigation water.
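    A minimal comparison of the two filters on a synthetic NDVI-like series is sketched below, using PyWavelets for the wavelet side; the wavelet family, decomposition level, threshold and moving-average window are assumptions, not the settings used in the study.

```python
# Minimal sketch: moving-average smoothing versus wavelet denoising (PyWavelets) of a
# synthetic vegetation-index series. All filter settings are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(10)
t = np.arange(0, 15 * 23)                                    # 15 years of 16-day composites
ndvi_true = 0.45 + 0.25 * np.sin(2 * np.pi * t / 23.0)       # seasonal crop cycle
ndvi = ndvi_true + rng.normal(0, 0.08, t.size)               # cloud/temporal noise

# moving average filter
window = 5
ma = np.convolve(ndvi, np.ones(window) / window, mode="same")

# wavelet denoising: decompose, soft-threshold detail coefficients, reconstruct
coeffs = pywt.wavedec(ndvi, "db4", level=4)
threshold = 0.1                                              # assumed
coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
wav = pywt.waverec(coeffs, "db4")[: ndvi.size]

for name, est in (("moving average", ma), ("wavelet", wav)):
    rmse = np.sqrt(np.mean((est - ndvi_true) ** 2))
    print(f"{name:>15s} RMSE vs truth: {rmse:.3f}")
```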

  4. Parameter interdependence and uncertainty induced by lumping in a hydrologic model

    NASA Astrophysics Data System (ADS)

    Gallagher, Mark R.; Doherty, John

    2007-05-01

    Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.

  5. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
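    A sketch of the modified rule in its 'long only' spirit is given below: enter on a price/moving-average cross-over 'buy' signal and exit when price falls below a dynamic trailing threshold rather than waiting for the reverse cross. The moving-average window and the trailing fraction are assumptions, not the paper's calibrated values.

```python
# Illustrative sketch of a cross-over entry combined with a dynamic trailing stop.
# Window length and trailing fraction are assumptions; data are synthetic.
import numpy as np

def crossover_with_trailing_stop(price, window=50, trail=0.97):
    ma = np.convolve(price, np.ones(window) / window, mode="full")[:len(price)]
    ma[:window] = price[:window]                    # warm-up: no signal yet
    position, stop = 0, -np.inf
    pos = np.zeros(len(price), dtype=int)
    for t in range(1, len(price)):
        if position == 0 and price[t] > ma[t] and price[t - 1] <= ma[t - 1]:
            position, stop = 1, trail * price[t]    # 'buy' signal on the cross-over
        elif position == 1:
            stop = max(stop, trail * price[t])      # ratchet the trailing stop upward
            if price[t] < stop:
                position = 0                        # exit on the dynamic threshold
        pos[t] = position
    return pos

rng = np.random.default_rng(11)
price = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000)))
pos = crossover_with_trailing_stop(price)
daily_ret = np.diff(np.log(price))
strat_ret = pos[:-1] * daily_ret                   # position held over the next day
print("buy-and-hold log return:", daily_ret.sum().round(3))
print("strategy log return    :", strat_ret.sum().round(3))
```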

  6. Incorporating pushing in exclusion-process models of cell migration.

    PubMed

    Yates, Christian A; Parker, Andrew; Baker, Ruth E

    2015-05-01

    The macroscale movement behavior of a wide range of isolated migrating cells has been well characterized experimentally. Recently, attention has turned to understanding the behavior of cells in crowded environments. In such scenarios it is possible for cells to interact, inducing neighboring cells to move in order to make room for their own movements or progeny. Although the behavior of interacting cells has been modeled extensively through volume-exclusion processes, few models, thus far, have explicitly accounted for the ability of cells to actively displace each other in order to create space for themselves. In this work we consider both on- and off-lattice volume-exclusion position-jump processes in which cells are explicitly allowed to induce movements in their near neighbors in order to create space for themselves to move or proliferate into. We refer to this behavior as pushing. From these simple individual-level representations we derive continuum partial differential equations for the average occupancy of the domain. We find that, for limited amounts of pushing, comparison between the averaged individual-level simulations and the population-level model is nearly as good as in the scenario without pushing. Interestingly, we find that, in the on-lattice case, the diffusion coefficient of the population-level model is increased by pushing, whereas, for the particular off-lattice model that we investigate, the diffusion coefficient is reduced. We conclude, therefore, that it is important to consider carefully the appropriate individual-level model to use when representing complex cell-cell interactions such as pushing.
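    A toy version of the on-lattice idea can be written in a few lines: an agent that attempts to hop onto an occupied site may, with some probability, displace that neighbour one site further along if space is available there. The sketch below is an illustrative one-dimensional simulation only, not the paper's model; the lattice size, density, pushing probability and time horizon are assumptions.

```python
# Minimal sketch of a 1-D on-lattice exclusion process with "pushing". Illustrative
# toy only; lattice size, density, pushing probability and steps are assumptions.
import numpy as np

rng = np.random.default_rng(13)
L, density, p_push, steps = 200, 0.3, 0.5, 20000
lattice = np.zeros(L, dtype=int)
lattice[rng.choice(L, int(density * L), replace=False)] = 1

for _ in range(steps):
    site = rng.integers(L)
    if lattice[site] == 0:
        continue
    step = rng.choice((-1, 1))
    target = (site + step) % L
    if lattice[target] == 0:                      # ordinary exclusion move
        lattice[site], lattice[target] = 0, 1
    elif rng.random() < p_push:                   # try to push the neighbour onward
        beyond = (target + step) % L
        if lattice[beyond] == 0:
            lattice[beyond] = 1                   # neighbour is pushed one site on
            lattice[target] = 1                   # pushing agent takes its place
            lattice[site] = 0

print("final occupancy (should be conserved):", lattice.sum())
# With periodic boundaries the density stays roughly uniform; the effective diffusion
# rate is what changes when pushing is switched on or off.
print("mean occupancy per 20-site block:", lattice.reshape(10, 20).mean(axis=1).round(2))
```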

  7. Perceptions and Efficacy of Flight Operational Quality Assurance (FOQA) Programs Among Small-scale Operators

    DTIC Science & Technology

    2012-01-01

    ...regressive Integrated Moving Average (ARIMA) model for the data, eliminating the need to identify an appropriate model through trial and error alone... [table of test statistics omitted; based on the asymptotic chi-square approximation] ...In general, ARIMA models address three... performance standards and measurement processes and a prevailing climate of organizational trust were important factors. Unfortunately, uneven

  8. Energy Forecasting Models Within the Department of the Navy.

    DTIC Science & Technology

    1982-06-01

    ...standing the climatic conditions responsible for the results. Both models have particular advantages in particular applications and will be examined... and moving average processes. A similar notation for a model with seasonality considerations will be ARIMA (p,d,q)(P,D,Q)s with s = 12, where the upper...

  9. Ambient temperature and biomarkers of heart failure: a repeated measures analysis.

    PubMed

    Wilker, Elissa H; Yeh, Gloria; Wellenius, Gregory A; Davis, Roger B; Phillips, Russell S; Mittleman, Murray A

    2012-08-01

    Extreme temperatures have been associated with hospitalization and death among individuals with heart failure, but few studies have explored the underlying mechanisms. We hypothesized that outdoor temperature in the Boston, Massachusetts, area (1- to 4-day moving averages) would be associated with higher levels of biomarkers of inflammation and myocyte injury in a repeated-measures study of individuals with stable heart failure. We analyzed data from a completed clinical trial that randomized 100 patients to 12 weeks of tai chi classes or to time-matched education control. B-type natriuretic peptide (BNP), C-reactive protein (CRP), and tumor necrosis factor (TNF) were measured at baseline, 6 weeks, and 12 weeks. Endothelin-1 was measured at baseline and 12 weeks. We used fixed effects models to evaluate associations with measures of temperature that were adjusted for time-varying covariates. Higher apparent temperature was associated with higher levels of BNP beginning with 2-day moving averages and reached statistical significance for 3- and 4-day moving averages. CRP results followed a similar pattern but were delayed by 1 day. A 5°C change in 3- and 4-day moving averages of apparent temperature was associated with 11.3% [95% confidence interval (CI): 1.1, 22.5; p = 0.03) and 11.4% (95% CI: 1.2, 22.5; p = 0.03) higher BNP. A 5°C change in the 4-day moving average of apparent temperature was associated with 21.6% (95% CI: 2.5, 44.2; p = 0.03) higher CRP. No clear associations with TNF or endothelin-1 were observed. Among patients undergoing treatment for heart failure, we observed positive associations between temperature and both BNP and CRP-predictors of heart failure prognosis and severity.

  10. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process.

    PubMed

    Moran, John L; Solomon, Patricia J

    2013-05-24

    Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.

  11. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.

  12. Permeation of limonene through disposable nitrile gloves using a dextrous robot hand

    PubMed Central

    Banaee, Sean; S Que Hee, Shane

    2017-01-01

    Objectives: The purpose of this study was to investigate the permeation of the low-volatile solvent limonene through different disposable, unlined, unsupported, nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). Methods: This study utilized a moving and static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). Results: The average post-permeation thicknesses (before reconditioning) for all gloves for both the moving and static hand were more than 10% of the pre-permeation ones (P≤0.05), although this was not so on reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm2/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Conclusions: Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation and the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving. PMID:28111415

  13. Neural net forecasting for geomagnetic activity

    NASA Technical Reports Server (NTRS)

    Hernandez, J. V.; Tajima, T.; Horton, W.

    1993-01-01

    We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).

  14. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of the damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3–25 cm⁻¹ range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to the minimum. All four of these parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but takes place between energy barriers.
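
    One generic way to see how a fitted autoregressive description encodes a frequency and a damping factor is through the complex characteristic roots of an AR(2) model; the sketch below illustrates that standard relationship and is not the authors' exact estimation procedure.

```python
import numpy as np

def ar2_frequency_damping(phi1, phi2, dt=1.0):
    """Frequency (cycles per time unit) and damping factor implied by an AR(2)
    model x_t = phi1*x_{t-1} + phi2*x_{t-2} + e_t when its roots are complex.
    This is a generic textbook reconstruction, not the paper's estimator."""
    roots = np.roots([1.0, -phi1, -phi2])        # roots of z^2 - phi1*z - phi2
    if np.iscomplexobj(roots) and abs(roots[0].imag) > 0:
        modulus = abs(roots[0])                  # controls decay per time step
        theta = abs(np.angle(roots[0]))          # angular frequency per step
        return theta / (2.0 * np.pi * dt), -np.log(modulus) / dt
    raise ValueError("real roots: no damped oscillation")

# Example coefficients giving a lightly damped oscillation.
freq, damping = ar2_frequency_damping(1.6, -0.8)
print(round(freq, 3), round(damping, 3))
```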

  15. Models for short term malaria prediction in Sri Lanka

    PubMed Central

    Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H

    2008-01-01

    Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
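
    A minimal sketch of the kind of model used for the comparison, a seasonal ARIMA with rainfall as an exogenous covariate fitted with statsmodels, is shown below; the synthetic data, the (1,0,1)x(1,0,1,12) order and the way future rainfall is supplied are placeholders rather than the district-specific choices made in the study.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 120                                        # ten years of synthetic monthly data
rainfall = rng.gamma(shape=2.0, scale=50.0, size=n)
season = 10 * np.sin(2 * np.pi * np.arange(n) / 12)
cases = np.maximum(0, 50 + season + 0.05 * rainfall + rng.normal(0, 5, n))

# Seasonal ARIMA with rainfall as exogenous regressor; orders are illustrative.
model = SARIMAX(cases[:-4], exog=rainfall[:-4, None],
                order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)

# One- to four-month-ahead forecasts, supplying the future rainfall values.
print(fit.forecast(steps=4, exog=rainfall[-4:, None]).round(1))
```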

  16. Queues with Choice via Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth

    Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model, however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model based on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight on how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
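
    A rough sketch of the delayed-information fluid model is given below: two queues whose arrivals split according to a Multinomial Logit rule applied to queue lengths delayed by a constant Δ, integrated with a simple Euler scheme. The rates, the delay and the discretization are assumptions for illustration only and do not reproduce the authors' exact equations or their moving-average variant.

```python
import numpy as np

lam, mu, delta, dt, T = 10.0, 1.0, 2.0, 0.01, 60.0
steps, lag = int(T / dt), int(delta / dt)

q = np.zeros((steps, 2))
q[0] = [4.0, 5.0]                      # slightly asymmetric initial queue lengths

for t in range(steps - 1):
    q_seen = q[max(t - lag, 0)]        # customers react to delayed queue lengths
    p = np.exp(-q_seen) / np.exp(-q_seen).sum()    # Multinomial Logit split
    q[t + 1] = q[t] + dt * (lam * p - mu * q[t])   # fluid balance per queue

# For sufficiently large delays the two queues oscillate out of phase instead
# of settling to the symmetric equilibrium lam / (2 * mu).
print(q[-5:].round(3))
```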

  17. Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.

    PubMed

    Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat

    2014-01-01

    The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its world-wide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrences of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporation of any meteorological variable(s) as inputs did not improve the performance of any multivariable models, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. The variable cloud cover was also a significant covariate in two SARIMA models, but air temperature along with RH might be a predictor when moving average (MA) order at lag 1 month is considered.

  18. Model-based design of an intermittent simulated moving bed process for recovering lactic acid from ternary mixture.

    PubMed

    Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan

    2018-08-10

    An intermittent simulated moving bed (3F-ISMB) operation scheme, the extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for separation of glucose, lactic acid and acetic acid ternary-mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performances were investigated systematically by means of the validated 3F-ISMB model. The intermittent-retained component lactic acid was finally obtained with the purity of 98.5%, recovery of 95.5% and the average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate the mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Automated digital magnetofluidics

    NASA Astrophysics Data System (ADS)

    Schneider, J.; Garcia, A. A.; Marquez, M.

    2008-08-01

    Drops can be moved in complex patterns on superhydrophobic surfaces using a reconfigured computer-controlled x-y metrology stage with a high degree of accuracy, flexibility, and reconfigurability. The stage employs a DMC-4030 controller which has a RISC-based, clock multiplying processor with DSP functions, accepting encoder inputs up to 22 MHz, provides servo update rates as high as 32 kHz, and processes commands at rates as fast as 40 milliseconds. A 6.35 mm diameter cylindrical NdFeB magnet is translated by the stage causing water drops to move by the action of induced magnetization of coated iron microspheres that remain in the drop and are attracted to the rare earth magnet through digital magnetofluidics. Water drops are easily moved in complex patterns in automated digital magnetofluidics at an average speed of 2.8 cm/s over a superhydrophobic polyethylene surface created by solvent casting. With additional components, some potential uses for this automated microfluidic system include characterization of superhydrophobic surfaces, water quality analysis, and medical diagnostics.

  20. Monitoring Poisson observations using combined applications of Shewhart and EWMA charts

    NASA Astrophysics Data System (ADS)

    Abujiya, Mu'azu Ramat

    2017-11-01

    The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, all the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
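
    For orientation, the sketch below implements a plain EWMA chart for Poisson counts with asymptotic three-sigma limits; the smoothing constant and in-control mean are placeholders, and the ranked set sampling modifications proposed in the paper are not included.

```python
import numpy as np

def poisson_ewma_chart(counts, mu0, lam=0.2, L=3.0):
    """Generic EWMA chart for Poisson counts with asymptotic 3-sigma limits.
    Returns the EWMA statistics and a boolean out-of-control flag per point."""
    z = np.empty(len(counts))
    prev = mu0
    for i, c in enumerate(counts):
        z[i] = lam * c + (1 - lam) * prev
        prev = z[i]
    sigma = np.sqrt(mu0 * lam / (2 - lam))     # asymptotic EWMA standard deviation
    ucl, lcl = mu0 + L * sigma, max(mu0 - L * sigma, 0.0)
    return z, (z > ucl) | (z < lcl)

rng = np.random.default_rng(1)
data = np.concatenate([rng.poisson(4, 30), rng.poisson(7, 10)])   # shift at t = 30
z, alarm = poisson_ewma_chart(data, mu0=4.0)
print("first alarm at index:", int(np.argmax(alarm)) if alarm.any() else None)
```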

  1. Flow speed of the ablation vapors generated during laser drilling of CFRP with a continuous-wave laser beam

    NASA Astrophysics Data System (ADS)

    Faas, S.; Freitag, C.; Boley, S.; Berger, P.; Weber, R.; Graf, T.

    2017-03-01

    The hot plume of ablation products generated during the laser drilling process of carbon fiber reinforced plastics (CFRP) with a continuous-wave laser beam was analyzed by means of high-speed imaging. The formation of compression shocks was observed within the flow of the evaporated material, which is an indication of flow speeds well above the local speed of sound. The flow speed of the hot ablation products can be estimated by analyzing the position of these compression shocks. We investigated the temporal evolution of the flow speed during the drilling process and the influence of the average laser power on the flow speed. The flow speed increases with increasing average laser powers. The moment of drilling through the material changes the conditions for the drilling process and was confirmed to influence the flow speed of the ablated material. Compression shocks can also be observed during laser cutting of CFRP with a moving laser beam.

  2. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
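
    A small illustration of one such residual aggregate, moving sums of regression residuals taken over a covariate ordering, is sketched below on synthetic data; the comparison against simulated zero-mean Gaussian realizations described in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0, 10, n))
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)     # data generated from a linear model

# Fit a simple linear regression and compute raw residuals.
beta = np.polyfit(x, y, 1)
resid = y - np.polyval(beta, x)

# Moving sums of residuals over a window of covariate-ordered observations;
# a systematic trend in this curve would suggest model misspecification.
window = 20
moving_sum = np.convolve(resid, np.ones(window), mode="valid")
print(moving_sum[:5].round(2))
```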

  3. EFFECTS OF LASER RADIATION ON MATTER. LASER PLASMA: Emission of charged particles from the surface of a moving target acted on by cw CO2 laser radiation

    NASA Astrophysics Data System (ADS)

    Kuznetsov, S. I.; Petrov, A. L.; Shadrin, A. N.

    1990-06-01

    An experimental investigation was made of the emission of charged particles due to the irradiation of moving steel and graphite targets with cw CO2 laser radiation. The characteristics of the emission current signals were determined for different laser irradiation regimes. The maximum emission current density from the surface of a melt pool (~1.1 × 10⁻² A/cm²) and the average temperature of the liquid metal (~2040 K) were measured for an incident radiation power of 550 W and for horizontal and vertical target velocities of ~1.5 mm/s and ~0.17 mm/s, respectively. The authors propose to utilize this phenomenon for monitoring the laser processing of materials.

  4. SU-C-204-06: Surface Imaging for the Set-Up of Proton Post-Mastectomy Chestwall Irradiation: Gated Images Vs Non Gated Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Batin, E; Depauw, N; MacDonald, S

    Purpose: Historically, the set-up for proton post-mastectomy chestwall irradiation at our institution started with positioning the patient using tattoos and lasers. One or more rounds of orthogonal X-rays at gantry 0° and a beamline X-ray at the treatment gantry angle were then taken to finalize the set-up position. As chestwall targets are shallow and superficial, surface imaging is a promising tool for set-up and needs to be investigated. Methods: The orthogonal imaging was entirely replaced by AlignRT™ (ART) images. The beamline X-ray image is kept as a confirmation, based primarily on three opaque markers placed on the skin surface instead of bony anatomy. In the first phase of the process, ART gated images were used to set up the patient and the same specific point of the breathing curve was used every day. The moves (translations and rotations) computed for each point of the breathing curve during the first five fractions were analyzed for ten patients. During a second phase of the study, ART gated images were replaced by ART non-gated images combined with real-time monitoring. In both cases, ART images were acquired just before treatment to assess the patient position compared with the non-gated CT. Results: The average difference between the maximum move and the minimum move, depending on the chosen breathing curve point, was less than 1.7 mm for all translations and less than 0.7° for all rotations. The average position discrepancy over the course of treatment obtained from ART non-gated images combined with real-time monitoring, taken before treatment and compared with the planning CT, was smaller than the average position discrepancy obtained using ART gated images. The X-ray validation images show similar results with both ART imaging processes. Conclusion: The use of ART non-gated images combined with real-time imaging allows positioning post-mastectomy chestwall patients to within 3 mm / 1°.

  5. Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…

  6. Suitability of a structured Fundamental Movement Skills program for long day care centres: a process evaluation.

    PubMed

    Petrunoff, Nick; Lloyd, Beverley; Watson, Natalie; Morrisey, David

    2009-04-01

    Early childhood presents an opportunity to encourage development of Fundamental Movement Skills (FMS). Implementation of a structured program in the Long Day Care (LDC) setting presents challenges. Implementation of a structured FMS program FunMoves was assessed in LDC in metropolitan New South Wales. LDC staff attended a training session conducted by trained Health Promotion Officers (HPOs) and completed an evaluation. During implementation HPOs completed lesson observations. De-identified attendance data was collected and director and staff feedback on the program including barriers to implementation was obtained via questionnaire. Qualitative information relevant to process evaluation was obtained via open questions on questionnaires, and a de-brief diary recording feedback from directors and staff. Knowledge of FMS and FunMoves and staff confidence to deliver the program were high after training. On average, staff stated they ran lessons more than the suggested twice weekly and the majority of children attended 1-3 lessons per week. However, lesson delivery was not as designed, and staff found FunMoves disruptive and time consuming. Six directors and the majority of staff thought that FunMoves could be improved. Structured program delivery was hampered by contextual issues including significant staff turnover and program length and structure being at odds with the setting. Implementation could be enhanced by guidelines for more flexible delivery options including less structured approaches, shorter and simpler lessons, ongoing conversations with the early childhood sector, in-centre engagement of staff and post-training support.

  7. Intake flow modeling in a four stroke diesel using KIVA3

    NASA Technical Reports Server (NTRS)

    Hessel, R. P.; Rutland, C. J.

    1993-01-01

    Intake flow for a dual intake valved diesel engine is modeled using moving valves and realistic geometries. The objectives are to obtain accurate initial conditions for combustion calculations and to provide a tool for studying intake processes. Global simulation parameters are compared with experimental results and show good agreement. The intake process shows a 30 percent difference in mass flows and average swirl in opposite directions across the two intake valves. The effect of the intake process on the flow field at the end of compression is examined. Modeling the intake flow results in swirl and turbulence characteristics that are quite different from those obtained by conventional methods in which compression stroke initial conditions are assumed.

  8. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869

  9. Increasing the capacity for treatment of chemical plant wastewater by replacing existing suspended carrier media with Kaldnes Moving Bed media at a plant in Singapore.

    PubMed

    Wessman, F G; Yan Yuegen, E; Zheng, Q; He, G; Welander, T; Rusten, B

    2004-01-01

    The Kaldnes biomedia K1, which is used in the patented Kaldnes Moving Bed biofilm process, has been tested along with other types of biofilm carriers for biological pretreatment of a complex chemical industry wastewater. The main objective of the test was to find a biofilm carrier that could replace the existing suspended carrier media and at the same time increase the capacity of the existing roughing filter-activated sludge plant by 20% or more. At volumetric organic loads of 7.1 kg COD/m3/d the Kaldnes Moving Bed process achieved much higher removal rates and much lower effluent concentrations than roughing filters using other carriers. The Kaldnes roughing stage achieved more than 85% removal of organic carbon and more than 90% removal of BOD5 at the tested organic load, which was equivalent to a specific biofilm surface area load of 24 g COD/m2/d. Even for the combined roughing filter-activated sludge process, the Kaldnes carriers outperformed the other carriers, with 98% removal of organic carbon and 99.6% removal of BOD5. The Kaldnes train final effluent concentrations were only 22 mg FOC/L and 7 mg BOD5/L. Based on the successful pilot testing, the full-scale plant was upgraded with Kaldnes Moving Bed roughing filters. During normal operation the upgraded plant has easily met the discharge limits of 100 mg COD/L and 50 mg SS/L. For the month of September 2002, with organic loads between 100 and 115% of the design load for the second half of the month, average effluent concentrations were as low as 9 mg FOC/L, 51 mg COD/L and 12 mg SS/L.

  10. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
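
    A simplified sketch of the idea, not the calibrated technique of the thesis: simulate a Gaussian AR(2) stress process, keep only its local extrema, and form a range moment from successive extrema (a real fatigue analysis would use rainflow counting to pair maxima and minima).

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi1, phi2 = 20000, 1.5, -0.9       # AR(2) giving a narrow-band-like process

x = np.zeros(n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

# Keep local extrema only (peaks and troughs of the simulated stress history).
interior = x[1:-1]
is_ext = ((interior > x[:-2]) & (interior > x[2:])) | \
         ((interior < x[:-2]) & (interior < x[2:]))
extrema = interior[is_ext]

# Crude range moment E[R^m] from successive extrema.
ranges = np.abs(np.diff(extrema))
m = 3
print("E[R^m] estimate:", round(float((ranges ** m).mean()), 2))
```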

  11. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa')⁻¹ = Q⁻¹ − Q⁻¹aa'Q⁻¹/(1 + a'Q⁻¹a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and… [table of contents fragment: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative…] …the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  12. Decadal Trends of Atlantic Basin Tropical Cyclones (1950-1999)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Ten-year moving averages of the seasonal rates for 'named storms,' tropical storms, hurricanes, and major (or intense) hurricanes in the Atlantic basin suggest that the present epoch is one of enhanced activity, marked by seasonal rates typically equal to or above respective long-term median rates. As an example, the 10-year moving average of the seasonal rates for named storms is now higher than for any previous year over the past 50 years, measuring 10.65 in 1994, or 2.65 units higher than its median rate of 8. Also, the 10-year moving average for tropical storms has more than doubled, from 2.15 in 1955 to 4.60 in 1992, with 16 of the past 20 years having a seasonal rate of three or more (the median rate). For hurricanes and major hurricanes, their respective 10-year moving averages turned upward, rising above long-term median rates (5.5 and 2, respectively) in 1992, a response to the abrupt increase in seasonal rates that occurred in 1995. Taken together, the outlook for future hurricane seasons is for all categories of Atlantic basin tropical cyclones to have seasonal rates at levels equal to or above long-term median rates, especially during non-El Nino-related seasons. Only during El Nino-related seasons does it appear likely that seasonal rates might be slightly diminished.
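
    The statistic used throughout this discussion is a trailing 10-year moving average of annual seasonal rates; the sketch below shows the computation on synthetic counts, not the actual Atlantic basin record.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
years = np.arange(1950, 2000)
named_storms = rng.poisson(9, size=len(years))     # synthetic seasonal counts

series = pd.Series(named_storms, index=years)
ma10 = series.rolling(window=10).mean()            # 10-year moving average

print(pd.DataFrame({"count": series, "ma10": ma10}).tail())
```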

  13. Proceedings of the Conference on the Design of Experiments in Army Research Development and Testing (32nd)

    DTIC Science & Technology

    1987-06-01

    number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a… The integrated autoregressive moving average model, denoted by ARIMA(p,d,q), is the result of combining a d-th differencing process… Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample

  14. Circadian Rhythms in Plants, Insects and Mammals Exposed to ELF Magnetic and/or Electric Fields and Currents

    DTIC Science & Technology

    1975-08-28

    favorable to the model. Parameter estimates from this fitting process, carried out in the nature of a "moving-average" throughout the entire series of…

  15. Study of Liquid Breakup Process in Solid Rocket Motor Nozzle

    DTIC Science & Technology

    2016-02-16

    liquid film flow with the gas flow. The rate of the wave breakup was characterized by introducing breakup length, Ohnesorge number (Oh) and Weber number… liquid film that flows along the wall of a straight test channel while a relatively higher-speed gas moves over it. We have used an unsteady-flow…Reynolds-Averaged Navier-Stokes code (URANS) to investigate the interaction of the liquid film flow with the gas flow. The rate of the wave breakup was

  16. Integration of social information by human groups

    PubMed Central

    Granovskiy, Boris; Gold, Jason M.; Sumpter, David; Goldstone, Robert L.

    2015-01-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0 and 100% (e.g., ‘What percentage of Americans are left-handed?’). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move towards the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. PMID:26189568

  17. Integration of Social Information by Human Groups.

    PubMed

    Granovskiy, Boris; Gold, Jason M; Sumpter, David J T; Goldstone, Robert L

    2015-07-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy, but with information about the decisions made by peers in their group. The "wisdom of crowds" hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0% and 100% (e.g., "What percentage of Americans are left-handed?"). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move toward the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. Copyright © 2015 Cognitive Science Society, Inc.

  18. Motile and non-motile sperm diagnostic manipulation using optoelectronic tweezers.

    PubMed

    Ohta, Aaron T; Garcia, Maurice; Valley, Justin K; Banie, Lia; Hsu, Hsan-Yin; Jamshidi, Arash; Neale, Steven L; Lue, Tom; Wu, Ming C

    2010-12-07

    Optoelectronic tweezers (OET) was used to manipulate human spermatozoa to determine whether their response to OET predicts sperm viability among non-motile sperm. We review the electro-physical basis for how live and dead human spermatozoa respond to OET. The maximal velocity at which non-motile spermatozoa could be induced to move by attraction or repulsion to a moving OET field was measured. Viable sperm are attracted to OET fields and can be induced to move at an average maximal velocity of 8.8 ± 4.2 µm/s, while non-viable sperm are repelled by OET fields and are induced to move at an average maximal velocity of -0.8 ± 1.0 µm/s. Manipulation of the sperm using OET does not appear to result in increased DNA fragmentation, making this a potential method by which to identify viable non-motile sperm for assisted reproductive technologies.

  19. Transport of the moving barrier driven by chiral active particles

    NASA Astrophysics Data System (ADS)

    Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan

    2018-03-01

    Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.

  20. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that, the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using vector maximum but not vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033

  1. [A new kinematics method for determining the elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new positioning method for the elbow external fixation rotation axis and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. Kinematic data from five elbow flexions were collected for each subject by an optical positioning system. The rotation axes of the elbow joints were fitted by the least squares method. The kinematic data and fitting results were visually displayed. According to the fitting results, the average moving planes and rotation axes were calculated; thus, the rotation axes given by the new kinematic method were obtained. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed as representatives of the rotation axes determined by the traditional positioning method. Then, the entrance point deviation, the exit point deviation and the angle deviation of the two kinds of located rotation axes were compared. For the volunteers, the indicators representing circularity and coplanarity of each volunteer's elbow flexion movement trajectory were both about 1 mm. All the distance deviations of the moving axes from the average moving rotation axes of the volunteers were less than 3 mm, and all the angle deviations of the moving axes from the average moving rotation axes were less than 5°. For the six Sawbone models, the average entrance point deviation, average exit point deviation and average angle deviation between the two rotation axes determined by the two positioning methods were 1.6972 mm, 1.8383 mm and 1.3217°, respectively. All the deviations were small and within an acceptable range for clinical practice. The values representing circularity and coplanarity of the volunteers' single-curvature elbow movement trajectories are very small, which shows that the single-curvature elbow movement can be regarded as approximately a fixed-axis movement. The new method matches the traditional method in accuracy and can make up for the deficiencies of the traditional fixed-axis method.

  2. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.

  3. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms.

    PubMed

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
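
    A condensed sketch of the pipeline described, window-level features, a random forest classifier and a moving-average filter over the predicted class probabilities, is given below; the two synthetic features and the five-window smoothing length are stand-ins for the 49 features and the filter settings used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

# Synthetic 1-minute windows in temporal order: two stand-in features (e.g. mean
# GPS speed, mean acceleration magnitude) for modes 0=walk, 1=bicycle, 2=vehicle.
y = np.repeat(rng.integers(0, 3, size=30), 20)          # modes occur in runs
n = len(y)
X = np.column_stack([
    np.where(y == 0, 1.4, np.where(y == 1, 5.0, 12.0)) + rng.normal(0, 2.0, n),
    np.where(y == 0, 0.6, np.where(y == 1, 0.4, 0.2)) + rng.normal(0, 0.2, n),
])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
proba = clf.predict_proba(X[400:])

# Moving-average output filter: smooth class probabilities over 5 consecutive
# windows before taking the most likely class.
w = 5
kernel = np.ones(w) / w
smoothed = np.column_stack([np.convolve(proba[:, k], kernel, mode="same")
                            for k in range(proba.shape[1])])
pred = clf.classes_[smoothed.argmax(axis=1)]
print("accuracy after smoothing:", round(float((pred == y[400:]).mean()), 3))
```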

  4. Focus on Teacher Salaries: An Update on Average Salaries and Recent Legislative Actions in the SREB States.

    ERIC Educational Resources Information Center

    Gaines, Gale F.

    Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…

  5. Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis

    NASA Astrophysics Data System (ADS)

    Awais, M.; Saleem, S.; Hayat, T.; Irum, S.

    2016-12-01

    This communication presents the magnetohydrodynamic (MHD) flow of a couple-stress nanofluid over a convective moving wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon combined with thermophoresis and Brownian motion effects is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled non-linear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling process parameter and the energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted showing the difference between the Newtonian fluid model and the couple-stress fluid model.

  6. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process

    PubMed Central

    2013-01-01

    Background Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957

  7. Mechanistic approach to generalized technical analysis of share prices and stock market indices

    NASA Astrophysics Data System (ADS)

    Ausloos, M.; Ivanova, K.

    2002-05-01

    Classical technical analysis methods of stock evolution are recalled, i.e. the notion of moving averages and momentum indicators. The moving averages lead to the definitions of death and gold crosses, and of resistance and support lines. Momentum indicators lead the price trend, thus giving signals before the price trend turns over. The classical technical analysis investment strategy is thereby sketched. Next, we present a generalization of these tricks drawing on physical principles, i.e. taking into account not only the price of a stock but also the volume of transactions. The latter becomes a time-dependent generalized mass. The notions of pressure, acceleration and force are deduced. A generalized (kinetic) energy is easily defined. It is understood that the momentum indicators take into account the sign of the fluctuations, while the energy is geared toward the absolute value of the fluctuations. They have different patterns, which are checked by searching for the crossing points of their respective moving averages. The case of the IBM evolution over 1990-2000 is used for illustration.
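
    The classical ingredients recalled here, short and long simple moving averages and their gold/death crossings, can be computed as in the sketch below on a synthetic price path; the volume-based generalization to pressure, force and energy introduced in the paper is not reproduced.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))   # synthetic price path

short_ma = price.rolling(50).mean()
long_ma = price.rolling(200).mean()

# Gold cross: short MA crosses above the long MA (buy signal);
# death cross: short MA crosses below the long MA (sell signal).
above = (short_ma > long_ma).astype(int)       # NaN region counts as "below"
cross = above.diff()
print("gold crosses at:", list(price.index[cross == 1]))
print("death crosses at:", list(price.index[cross == -1]))
```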

  8. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show how the bullwhip effect ratio changes with different forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, or increasing the signalling factor produces a larger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
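
    A rough sketch of how such a scenario can be simulated outside a spreadsheet is given below: a bullwhip ratio (variance of orders over variance of demand) for an order-up-to retailer that forecasts with a moving average. The demand process, lead time and window are assumptions for illustration; this is not the Bullwhip Explorer implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
T, lead_time, window = 2000, 2, 5
demand = rng.normal(100, 10, T)

orders = np.zeros(T)
prev_target = demand[:window].mean() * (lead_time + 1)
for t in range(window, T):
    forecast = demand[t - window:t].mean()         # moving-average forecast
    target = forecast * (lead_time + 1)            # order-up-to level
    orders[t] = max(demand[t] + target - prev_target, 0.0)
    prev_target = target

# Bullwhip ratio: order variance over demand variance; values above 1 indicate
# amplification of demand variability upstream. A shorter window inflates it.
print("bullwhip ratio:", round(orders[window:].var() / demand[window:].var(), 2))
```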

  9. Modeling Seasonal Influenza Transmission and Its Association with Climate Factors in Thailand Using Time-Series and ARIMAX Analyses.

    PubMed

    Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin

    2015-01-01

    Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using the cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both the central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
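
    A hedged sketch of an ARIMAX-style fit with a lagged climate covariate using statsmodels; the simulated series, the (1,0,1)x(1,0,0,12) order and the 4-month temperature lag are illustrative assumptions rather than the fitted models reported above.

```python
# Hedged sketch of an ARIMAX-style fit with a lagged climate covariate, using statsmodels.
# The simulated series, the model order and the 4-month lag are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n = 144  # 12 years of monthly data
months = pd.date_range("2003-01-01", periods=n, freq="MS")
temperature = 28 + 2 * np.sin(2 * np.pi * np.arange(n) / 12) + 0.3 * rng.standard_normal(n)
# influenza cases respond to temperature with a 4-month lag (assumption)
cases = 500 - 15 * np.roll(temperature, 4) + 20 * rng.standard_normal(n)

df = pd.DataFrame({"cases": cases,
                   "temp_lag4": pd.Series(temperature, index=months).shift(4).values},
                  index=months).dropna()

model = SARIMAX(df["cases"], exog=df[["temp_lag4"]],
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])  # coefficient on the lagged temperature covariate
```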

  10. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
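
    A small sketch of the AR-to-MA conversion the abstract describes: fit an AR model to a simulated pulse-driven series and inspect the equivalent moving-average (pulse-shape) weights. The simulated AR(2) process and the use of statsmodels' arma2ma helper are assumptions for illustration, not the paper's FORTRAN algorithm.

```python
# Sketch of the AR-to-MA conversion idea: fit an AR model to a pulse-like series and
# inspect the equivalent moving-average (pulse-shape) coefficients. All data are simulated.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import arma2ma

rng = np.random.default_rng(4)
# simulate an AR(2) "light curve": x_t = 0.7 x_{t-1} - 0.2 x_{t-2} + pulse noise
n, x = 2000, np.zeros(2000)
eps = rng.exponential(1.0, n) - 1.0   # non-Gaussian pulse amplitudes
for t in range(2, n):
    x[t] = 0.7 * x[t - 1] - 0.2 * x[t - 2] + eps[t]

ar_fit = AutoReg(x, lags=2).fit()
phi = ar_fit.params[1:]               # estimated AR coefficients (skip the constant)

# equivalent MA representation: psi-weights of the fitted AR model
psi = arma2ma(np.r_[1, -phi], np.array([1.0]), lags=10)
print("estimated pulse shape (first 10 MA weights):", np.round(psi, 3))
```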

  11. Traffic dynamics of carnival processions

    NASA Astrophysics Data System (ADS)

    Polichronidis, Petros; Wegerle, Dominik; Dieper, Alexander; Schreckenberg, Michael

    2018-03-01

    The traffic dynamics of processions are described in this study. GPS data from participating groups in the Cologne Rose Monday processions 2014–2017 are used to analyze the kinematic characteristics. Preparing the measured data requires adjustment by a specially adapted map-matching algorithm. A higher average velocity is observed for the last participant, the Carnival Prince, than for the leading participant of the parade. Based on the results of the data analysis, a model for parade groups moving in file can be established for the first time, as a modified Nagel-Schreckenberg model. This model can reproduce the observed characteristics in simulations. These characteristics can be explained partly by the constantly moving vehicle driving ahead of the parade leaving the pathway and partly by a spatial contraction of the parade during the procession.
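
    For readers unfamiliar with the Nagel-Schreckenberg model mentioned above, here is the standard, unmodified cellular automaton on a ring road (the paper's parade-specific modification is not reproduced); road length, density, maximum speed and the dawdling probability are illustrative assumptions.

```python
# Minimal standard Nagel-Schreckenberg cellular automaton on a ring road.
# This is the unmodified textbook model, not the parade-specific variant from the paper;
# road length, density, v_max and the dawdling probability are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
L, density, v_max, p_dawdle, steps = 200, 0.15, 5, 0.3, 500

cells = np.full(L, -1)                        # -1 = empty, otherwise vehicle speed
cells[rng.choice(L, int(density * L), replace=False)] = 0

speeds = []
for _ in range(steps):
    pos = np.flatnonzero(cells >= 0)
    v = cells[pos].copy()
    gaps = (np.roll(pos, -1) - pos - 1) % L   # empty cells to the vehicle ahead
    v = np.minimum(v + 1, v_max)              # 1. accelerate
    v = np.minimum(v, gaps)                   # 2. brake to avoid collisions
    dawdle = rng.random(len(v)) < p_dawdle
    v = np.maximum(v - dawdle, 0)             # 3. random dawdling
    new_pos = (pos + v) % L                   # 4. move
    cells[:] = -1
    cells[new_pos] = v
    speeds.append(v.mean())

print("average velocity (cells/step):", np.mean(speeds[100:]))  # discard transient
```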

  12. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1995-10-17

    A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
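
    A hedged sketch of the monitoring pipeline described in this patent family: form a difference function between two signals of one process variable, strip its dominant Fourier modes, and run a sequential probability ratio test (SPRT) on the remaining residual. For brevity the second signal below is a simulated redundant sensor rather than an ARMA-generated artificial signal, and the noise levels, variance ratio and error rates are illustrative assumptions.

```python
# Hedged sketch: difference function, Fourier-mode removal, and SPRT on the residual.
# The second signal is a simulated redundant sensor (not an ARMA-generated artificial signal);
# all noise levels, the variance ratio under test and the SPRT error rates are assumptions.
import numpy as np

rng = np.random.default_rng(6)
n = 2048
truth = 10 + np.sin(2 * np.pi * np.arange(n) / 256)          # periodic process variable
sensor_a = truth + 0.05 * rng.standard_normal(n)
sensor_b = 0.95 * truth + 0.05 * rng.standard_normal(n)       # slight gain mismatch
sensor_b[1500:] += 0.15 * rng.standard_normal(n - 1500)       # sensor B degrades (extra noise)

diff = sensor_a - sensor_b                                    # difference function

spectrum = np.fft.rfft(diff)
keep = np.argsort(np.abs(spectrum))[-3:]                      # dominant Fourier modes (assumption)
composite = np.zeros_like(spectrum)
composite[keep] = spectrum[keep]
residual = diff - np.fft.irfft(composite, n)                  # residual, ideally white noise

# SPRT for a variance increase: H0 variance s0 (healthy) vs H1 variance s1 = 2*s0
s0 = residual[:1000].var()
s1 = 2.0 * s0
A, B = np.log(99), np.log(1 / 99)                             # ~1% false/missed alarm rates
llr, alarm_at = 0.0, None
for t, r in enumerate(residual):
    llr += 0.5 * np.log(s0 / s1) + 0.5 * r * r * (1 / s0 - 1 / s1)
    if llr <= B:                                              # accept "healthy", restart the test
        llr = 0.0
    elif llr >= A:                                            # degradation alarm
        alarm_at = t
        break
print("first SPRT alarm at sample:", alarm_at)
```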

  13. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, K.C.; Hoyer, K.K.; Humenik, K.E.

    1997-05-13

    A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.

  14. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1995-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  15. System for monitoring an industrial process and determining sensor status

    DOEpatents

    Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.

    1997-01-01

    A method and system for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.

  16. Solute transport and storage mechanisms in wetlands of the Everglades, south Florida

    USGS Publications Warehouse

    Harvey, Judson W.; Saiers, James E.; Newlin, Jessica T.

    2005-01-01

    Solute transport and storage processes in wetlands play an important role in biogeochemical cycling and in wetland water quality functions. In the wetlands of the Everglades, there are few data or guidelines to characterize transport through the heterogeneous flow environment. Our goal was to conduct a tracer study to help quantify solute exchange between the relatively fast flowing water in the open part of the water column and much more slowly moving water in thick floating vegetation and in the pore water of the underlying peat. We performed a tracer experiment that consisted of a constant‐rate injection of a sodium bromide (NaBr) solution for 22 hours into a 3 m wide, open‐ended flume channel in Everglades National Park. Arrival of the bromide tracer was monitored at an array of surface water and subsurface samplers for 48 hours at a distance of 6.8 m downstream of the injection. A one‐dimensional transport model was used in combination with an optimization code to identify the values of transport parameters that best explained the tracer observations. Parameters included dimensions and mass transfer coefficients describing exchange with both short (hours) and longer (tens of hours) storage zones as well as the average rates of advection and longitudinal dispersion in the open part of the water column (referred to as the “main flow zone”). Comparison with a more detailed set of tracer measurements tested how well the model's storage zones approximated the average characteristics of tracer movement into and out of the layer of thick floating vegetation and the pore water in the underlying peat. The rate at which the relatively fast moving water in the open water column was exchanged with slowly moving water in the layer of floating vegetation and in sediment pore water amounted to 50 and 3% h−1, respectively. Storage processes decreased the depth‐averaged velocity of surface water by 50% relative to the water velocity in the open part of the water column. As a result, flow measurements made with other methods that only work in the open part of the water column (e.g., acoustic Doppler) would have overestimated the true depth‐averaged velocity by a factor of 2. We hypothesize that solute exchange and storage in zones of floating vegetation and peat pore water increase contact time of solutes with biogeochemically active surfaces in this heterogeneous wetland environment.

  17. The impact of using weight estimated from mammographic images vs. self-reported weight on breast cancer risk calculation

    NASA Astrophysics Data System (ADS)

    Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.

    2017-03-01

    Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.

  18. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations of the Community of Madrid (Spain), based on the successive application of two models. The first one is a stochastic model, the autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (a differenced moving-average model) and different non-seasonal parts: an autoregressive model (Model 1), a differenced moving-average model (Model 2), and an autoregressive and moving-average model (Model 3). At the same time, the results point out that the minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
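
    A hedged sketch of the two-step idea described above: a seasonal ARIMA model forecasts next month's mean minimum temperature, and the expected number of frost days is read off a normal distribution of daily minima. The simulated series, the (1,0,0)x(0,1,1,12) order and the 3 degC daily standard deviation are illustrative assumptions, not the paper's fitted values.

```python
# Hedged two-step frost-day forecast: SARIMA forecast of monthly mean minimum temperature,
# then expected frost days from an assumed normal distribution of daily minima.
import numpy as np
from scipy.stats import norm
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
n = 240                                                   # 20 years of monthly data
month = np.arange(n)
tminav = 6 + 8 * np.sin(2 * np.pi * (month - 3) / 12) + rng.standard_normal(n)

model = SARIMAX(tminav, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
forecast_mean = model.fit(disp=False).forecast(steps=1)[0]   # next month's mean tmin

sigma_daily = 3.0                                         # sd of daily minima within a month (assumption)
p_frost = norm.cdf(0.0, loc=forecast_mean, scale=sigma_daily)
expected_frost_days = 30 * p_frost
print(f"forecast mean tmin {forecast_mean:.1f} degC -> "
      f"expected frost days next month: {expected_frost_days:.1f}")
```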

  19. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  20. Heterodyne laser Doppler distance sensor with phase coding measuring stationary as well as laterally and axially moving objects

    NASA Astrophysics Data System (ADS)

    Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.

    2010-02-01

    Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors would be predestined for this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution since its position uncertainty is in principle independent of the lateral object velocity in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS enabling precise and dynamic position and shape measurements also of fast moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.

  1. Dynamics of actin-based movement by Rickettsia rickettsii in vero cells.

    PubMed

    Heinzen, R A; Grieshaber, S S; Van Kirk, L S; Devin, C J

    1999-08-01

    Actin-based motility (ABM) is a virulence mechanism exploited by invasive bacterial pathogens in the genera Listeria, Shigella, and Rickettsia. Due to experimental constraints imposed by the lack of genetic tools and their obligate intracellular nature, little is known about rickettsial ABM relative to Listeria and Shigella ABM systems. In this study, we directly compared the dynamics and behavior of ABM of Rickettsia rickettsii and Listeria monocytogenes. A time-lapse video of moving intracellular bacteria was obtained by laser-scanning confocal microscopy of infected Vero cells synthesizing beta-actin coupled to green fluorescent protein (GFP). Analysis of time-lapse images demonstrated that R. rickettsii organisms move through the cell cytoplasm at an average rate of 4.8 +/- 0.6 micrometer/min (mean +/- standard deviation). This speed was 2.5 times slower than that of L. monocytogenes, which moved at an average rate of 12.0 +/- 3.1 micrometers/min. Although rickettsiae moved more slowly, the actin filaments comprising the actin comet tail were significantly more stable, with an average half-life approximately three times that of L. monocytogenes (100.6 +/- 19.2 s versus 33.0 +/- 7.6 s, respectively). The actin tail associated with intracytoplasmic rickettsiae remained stationary in the cytoplasm as the organism moved forward. In contrast, actin tails of rickettsiae trapped within the nucleus displayed dramatic movements. The observed phenotypic differences between the ABM of Listeria and Rickettsia may indicate fundamental differences in the mechanisms of actin recruitment and polymerization.

  2. Books average previous decade of economic misery.

    PubMed

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  3. A study on the use of the BioBall® as a biofilm carrier in a sequencing batch reactor.

    PubMed

    Masłoń, Adam; Tomaszek, Janusz A

    2015-11-01

    Described in this study are experiments conducted to evaluate the removal of organics and nutrients from synthetic wastewater by a moving bed sequencing batch biofilm reactor using BioBall® carriers as biofilm media. The work involving a 15-L laboratory-scale MBSBBR (moving bed sequencing batch biofilm reactor) model showed that the wastewater treatment system was based on biochemical processes taking place with activated sludge and biofilm microorganisms developing on the surface of the BioBall® carriers. Classical nitrification and denitrification and the typical enhanced biological phosphorus removal process were achieved in the reactor analyzed, which operated with a volumetric organic loading of 0.84-0.978 g COD L⁻¹ d⁻¹. The average removal efficiencies for COD, total nitrogen and total phosphorus were found to be 97.7±0.5%, 87.8±2.6% and 94.3±1.3%, respectively. Nitrification efficiency reached levels in the range 96.5-99.7%. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. High-throughput machining using a high-average power ultrashort pulse laser and high-speed polygon scanner

    NASA Astrophysics Data System (ADS)

    Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo

    2016-09-01

    High-throughput ultrashort pulse laser machining is investigated on various industrial grade metals (aluminum, copper, and stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-average power picosecond laser in conjunction with a unique, in-house developed polygon mirror-based biaxial scanning system. For this purpose, different concepts of polygon scanners are engineered and tested to find the best architecture for high-speed and precision laser beam scanning. In order to identify the optimum conditions for efficient processing when using high-average laser powers, the depths of cavities made in the samples by varying the processing parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm3/min for aluminum, 21.4 mm3/min for copper, 15.3 mm3/min for stainless steel, and 129.1 mm3/min for Al2O3, when a laser beam with an average power of 187 W irradiates the sample. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm3/min when the laser beam is moved very fast. This is due to the low pulse overlap achieved with a beam deflection speed of 800 m/s; thus, laser beam shielding can be avoided even when irradiating high-repetition-rate 20-MHz pulses.

  5. Up-down Asymmetries in Speed Perception

    NASA Technical Reports Server (NTRS)

    Thompson, Peter; Stone, Leland S.

    1997-01-01

    We compared speed matches for pairs of stimuli that moved in opposite directions (upward and downward). Stimuli were elliptical patches (2 deg horizontally by 1 deg vertically) of horizontal sinusoidal gratings of spatial frequency 2 cycles/deg. Two sequential 380-msec presentations were compared. One of each pair of gratings (the standard) moved at 4 Hz (2 deg/sec), the other (the test) moved at a rate determined by a simple up-down staircase. The point of subjectively equal speed was calculated from the average of the last eight reversals. The task was to fixate a central point and to determine which one of the pair appeared to move faster. Eight of 10 observers perceived the upward drifting grating as moving faster than a grating moving downward but otherwise identical. On average (N = 10), when the standard moved downward, it was matched by a test moving upward at 94.7+/-1.7(SE)% of the standard speed, and when the standard moved upward it was matched by a test moving downward at 105.1+/-2.3(SE)% of the standard speed. Extending this paradigm over a range of spatial (1.5 to 13.5 c/d) and temporal (1.5 to 13.5 Hz) frequencies, preliminary results (N = 4) suggest that, under the conditions of our experiment, upward motion is seen as faster than downward for speeds greater than approx. 1 deg/sec, but the effect appears to reverse at speeds below approx. 1 deg/sec, with downward motion perceived as faster. Given that an up-down asymmetry has been observed for the optokinetic response, both perceptual and oculomotor contributions to this phenomenon deserve exploration.

  6. Spray-coating process in preparing PTFE-PPS composite super-hydrophobic coating

    NASA Astrophysics Data System (ADS)

    Weng, Rui; Zhang, Haifeng; Liu, Xiaowei

    2014-03-01

    In order to improve the performance of a liquid-floated rotor micro-gyroscope, the resistance of the moving interface between the rotor and the floating liquid must be reduced. Hydrophobic treatment can reduce the frictional resistance between such interfaces, therefore we proposed a method to prepare a poly-tetrafluoroethylene (PTFE)-poly-phenylene sulphide (PPS) composite super-hydrophobic coating, based on a spraying process. This method can quickly prepare a continuous, uniform PTFE-PPS composite super-hydrophobic surface on a 2J85 material. This method can be divided into three steps, namely: pre-treatment; chemical etching; and spraying. The total time for this is around three hours. When the PTFE concentration is 4%, the average contact angle of the hydrophobic coating surface is 158°. If silicon dioxide nanoparticles are added, this can further improve the adhesion and mechanical strength of the super-hydrophobic composite coating. The maximum average contact angle can reach as high as 164° when the mass fraction of PTFE, PPS and silicon dioxide is 1:1:1.

  7. Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2016-01-01

    This Technical Publication (TP) represents an extension of previous work concerning the tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular, that of an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, the conditions of the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft possibly could be missed altogether, thereby leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use which have sometimes led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. In this TP, examined are the yearly (or seasonal) and 10-year moving average values of the (1) first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) frequencies of tropical cyclones (by class); (3) average peak 1-minute sustained wind speed (PWS) and average lowest pressure (LP); (4) average genesis location in terms of north latitudinal and west longitudinal positions; (5) sum and average power dissipation index (PDI); (6) sum and average accumulated cyclone energy (ACE); (7) sum and average number of storm days (NSD); (8) sum of the number of hurricane days (NHD) and number of major hurricane days (NMHD); (9) net tropical cyclone activity index (NTCA); (10) largest individual storm (LIS) PWS, LP, PDI, ACE, NSD, NHD, NMHD; and (11) number of category 4 and 5 hurricanes (N4/5). Also examined are the December-May (D-M) and June-November (J-N) averages and 10-year moving average values of several climatic factors, including the (1) oceanic Nino index; (2) Atlantic multi-decadal oscillation index; (3) Atlantic meridional mode index; (4) global land-ocean temperature index; and (5) quasi-biennial oscillation index. Lastly, the associational aspects (using both linear and nonparametric statistical tests) between selected tropical cyclone parameters and the climatic factors are examined based on their 10-year moving average trend values.

  8. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2016-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an Autoregressive Moving Average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Keywords: shape sensing, fiber optic strain sensor, system equivalent reduction and expansion process.

  9. Finite-Dimensional Representations for Controlled Diffusions with Delay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Federico, Salvatore, E-mail: salvatore.federico@unimi.it; Tankov, Peter, E-mail: tankov@math.univ-paris-diderot.fr

    2015-02-15

    We study stochastic delay differential equations (SDDE) where the coefficients depend on the moving averages of the state process. As a first contribution, we provide sufficient conditions under which the solution of the SDDE and a linear path functional of it admit a finite-dimensional Markovian representation. As a second contribution, we show how approximate finite-dimensional Markovian representations may be constructed when these conditions are not satisfied, and provide an estimate of the error corresponding to these approximations. These results are applied to optimal control and optimal stopping problems for stochastic systems with delay.

  10. Neural network-based run-to-run controller using exposure and resist thickness adjustment

    NASA Astrophysics Data System (ADS)

    Geary, Shane; Barry, Ronan

    2003-06-01

    This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
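
    A minimal sketch of the EWMA run-to-run baseline mentioned above (not the paper's neural-network controller): after each lot the measured critical dimension updates an estimate of the process intercept, and the next lot's exposure is chosen to put the predicted CD back on target. The gain, drift, noise and smoothing weight are illustrative assumptions, not values from the paper.

```python
# Minimal EWMA run-to-run controller sketch; gain, drift, noise and lambda are assumptions.
import numpy as np

rng = np.random.default_rng(8)
target, beta = 250.0, -2.0             # target CD (nm) and exposure-to-CD gain (nm per mJ/cm^2)
lam, n_lots = 0.4, 50                  # EWMA weight and number of lots

a_hat = 260.0                          # initial estimate of the process intercept (nm)
errors = []
for lot in range(n_lots):
    exposure = (target - a_hat) / beta                          # recipe for this lot
    true_intercept = 262.0 + 0.3 * lot                          # slowly drifting process (assumption)
    cd = true_intercept + beta * exposure + rng.standard_normal()   # measured CD
    errors.append(cd - target)
    a_hat = lam * (cd - beta * exposure) + (1 - lam) * a_hat    # EWMA intercept update

print("mean |CD error| over the last 20 lots: %.2f nm" % np.mean(np.abs(errors[-20:])))
```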

  11. Collisionless dissipation in quasi-perpendicular shocks [in terrestrial bow waves]

    NASA Technical Reports Server (NTRS)

    Forslund, D. W.; Quest, K. B.; Brackbill, J. U.; Lee, K.

    1984-01-01

    Microscopic dissipation processes in quasi-perpendicular shocks are studied by two-dimensional plasma simulations in which electrons and ions are treated as particles moving in self-consistent electric and magnetic fields. Cross-field currents induce substantial turbulence at the shock front reducing the reflected ion fraction, increasing the bulk ion temperature behind the shock, doubling the average magnetic ramp thickness, and enhancing the upstream field aligned electron heat flow. The short scale length magnetic fluctuations observed in the bow shock are probably associated with this turbulence.

  12. Experiment and modeling of paired effect on evacuation from a three-dimensional space

    NASA Astrophysics Data System (ADS)

    Jun, Hu; Huijun, Sun; Juan, Wei; Xiaodan, Chen; Lei, You; Musong, Gu

    2014-10-01

    A novel three-dimensional cellular automaton evacuation model incorporating a stair factor was proposed to capture the paired effect and varied velocities in pedestrian evacuation. In the model, the probability of a pedestrian moving to a target position at the next time step was defined on the basis of a distance payoff and a repulsive-force payoff, and the evacuation strategy was elaborated in detail by analyzing the varied velocities and repulsion phenomena in the movement process. Finally, experiments with the simulation platform were conducted to study the relationships among evacuation time, average velocity and pedestrian velocity. The results showed that when the proportion of single pedestrians in the system was higher, the shortest-route strategy was effective for improving evacuation efficiency; conversely, when the proportion of paired pedestrians was higher, a strategy that avoided conflicts improved evacuation efficiency and priority should be given to dispersed evacuation.

  13. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.

  14. Photonic single nonlinear-delay dynamical node for information processing

    NASA Astrophysics Data System (ADS)

    Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel

    2012-06-01

    An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
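
    For reference, the NARMA-10 benchmark used as one of the prediction tasks above can be generated with the commonly used recursion below, with inputs drawn uniformly from [0, 0.5]; treat the constants as the customary benchmark definition rather than the paper's specific setup.

```python
# Standard NARMA-10 benchmark series (customary constants; inputs uniform on [0, 0.5]).
import numpy as np

def narma10(n_steps, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)           # input sequence
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)
# the recursion occasionally diverges for unlucky input draws; check and reseed if needed
assert np.isfinite(y).all() and y.max() < 1.0, "unstable draw; try another seed"
print("NARMA-10 target series: mean %.3f, std %.3f" % (y[200:].mean(), y[200:].std()))
```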

  15. Kinesin-microtubule interactions during gliding assays under magnetic force

    NASA Astrophysics Data System (ADS)

    Fallesen, Todd L.

    Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface over which microtubules are gliding. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is possible to calculate the fraction of motors able to move microtubules as well, which is determined to be ~5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to impart a uniform force on a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that there are an average of 2-7 kinesins acting against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc. motors acting together. When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.

  16. Class III correction using an inter-arch spring-loaded module

    PubMed Central

    2014-01-01

    Background A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm and 92% of the correction was due to skeletal contribution and 8% was due to dental contribution. The average molar correction was 5.2 mm and 69% of the correction was due to skeletal contribution and 31% was due to dental contribution. Conclusions Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction was contributed by forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars, distalization of the mandibular molars together with a rotation of the occlusal plane. PMID:24934153

  17. Books Average Previous Decade of Economic Misery

    PubMed Central

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159

  18. Studies on the dynamic stability of an axially moving nanobeam based on the nonlocal strain gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan

    2018-06-01

    In this paper, we studied the parametric resonance issue of an axially moving viscoelastic nanobeam with varying velocity. Based on the nonlocal strain gradient theory, we established the transversal vibration equation of the axially moving nanobeam and the corresponding boundary condition. By applying the average method, we obtained a set of self-governing ordinary differential equations when the excitation frequency of the moving parameters is twice the intrinsic frequency or near the sum of certain second-order intrinsic frequencies. On the plane of parametric excitation frequency and excitation amplitude, we can obtain the instability region generated by the resonance, and through numerical simulation, we analyze the influence of the scale effect and system parameters on the instability region. The results indicate that the viscoelastic damping decreases the resonance instability region, and the average velocity and stiffness make the instability region move to the left- and right-hand sides. Meanwhile, the scale effect of the system is obvious. The nonlocal parameter exhibits not only the stiffness softening effect but also the damping weakening effect, while the material characteristic length parameter exhibits the stiffness hardening effect and damping reinforcement effect.

  19. Node-based measures of connectivity in genetic networks.

    PubMed

    Koen, Erin L; Bowman, Jeff; Wilson, Paul J

    2016-01-01

    At-site environmental conditions can have strong influences on genetic connectivity, and in particular on the immigration and settlement phases of dispersal. However, at-site processes are rarely explored in landscape genetic analyses. Networks can facilitate the study of at-site processes, where network nodes are used to model site-level effects. We used simulated genetic networks to compare and contrast the performance of 7 node-based (as opposed to edge-based) genetic connectivity metrics. We simulated increasing node connectivity by varying migration in two ways: we increased the number of migrants moving between a focal node and a set number of recipient nodes, and we increased the number of recipient nodes receiving a set number of migrants. We found that two metrics in particular, the average edge weight and the average inverse edge weight, varied linearly with simulated connectivity. Conversely, node degree was not a good measure of connectivity. We demonstrated the use of average inverse edge weight to describe the influence of at-site habitat characteristics on genetic connectivity of 653 American martens (Martes americana) in Ontario, Canada. We found that highly connected nodes had high habitat quality for marten (deep snow and high proportions of coniferous and mature forest) and were farther from the range edge. We recommend the use of node-based genetic connectivity metrics, in particular, average edge weight or average inverse edge weight, to model the influences of at-site habitat conditions on the immigration and settlement phases of dispersal. © 2015 John Wiley & Sons Ltd.

  20. Time series modelling of increased soil temperature anomalies during long period

    NASA Astrophysics Data System (ADS)

    Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar

    2015-10-01

    Soil temperature just beneath the soil surface is highly dynamic and has a direct impact on plant seed germination; it is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods January 1986-December 2011 and January 2012-December 2013 were used to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum Akaike information criterion value, and its estimated coefficients were different from zero at the 5% significance level. The prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data, 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.

  1. Cause Resolving of Typhoon Precipitation Using Principle Component Analysis under Complex Interactive Effect of Terrain, Monsoon and Typhoon Vortex

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.

    2015-12-01

    This study develops a novel methodology to resolve the causes of typhoon-induced precipitation using principal component analysis (PCA) and to develop a long lead-time precipitation prediction model. The discovered spatial and temporal features of rainfall are utilized to develop a state-of-the-art descriptive statistical model which can be used to predict long lead-time precipitation during typhoons. The time series of 12-hour precipitation for the different types of invading typhoon tracks are each passed through the signal analysis process to qualify the causes of rainfall and to quantify the degree to which each cause contributes. The causes include: (1) interaction between the typhoon rain band and terrain; (2) the co-movement effect induced by the typhoon wind field with the monsoon; (3) pressure gradient; (4) wind velocity; (5) temperature environment; (6) characteristic distance between the typhoon center and the surface target station; (7) distance between the grade-7 storm radius and the surface target station; and (8) relative humidity. The results obtained from PCA can detect the hidden pattern of the eight causes in space and time and can reveal future trends and changes of precipitation. This study applies the developed methodology to Taiwan Island, which is constituted by complex, diverse terrain formations and heights. Results show that: (1) for typhoons moving toward the direction of 245° to 330°, Causes (1), (2) and (6) are the primary ones generating rainfall; and (2) for the direction of 330° to 380°, Causes (1), (4) and (6) are the primary ones. In addition, the precipitation prediction model developed using PCA with the distributed moving-track approach (PCA-DMT) is 32% more accurate than that using PCA without the distributed moving-track approach, and the former model can effectively achieve long lead-time precipitation prediction with an average prediction error of 13% at an average forecast lead time of 48 hours.
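
    A hedged sketch of the principal-component step only: decompose a (time x station) matrix of 12-hour rainfall into orthogonal modes and inspect how much variance each explains. The rainfall matrix below is synthetic and the two imposed spatial patterns are assumptions; the paper's eight causal covariates are not reproduced.

```python
# Hedged PCA sketch on a synthetic (time x station) rainfall matrix with two imposed modes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
n_times, n_stations = 96, 30
terrain_mode = np.sin(np.linspace(0, np.pi, n_stations))        # stations along a ridge (assumption)
monsoon_mode = np.linspace(1, 0, n_stations)                    # SW-to-NE gradient (assumption)
rain = (rng.gamma(2.0, 5.0, (n_times, 1)) * terrain_mode
        + rng.gamma(2.0, 3.0, (n_times, 1)) * monsoon_mode
        + rng.standard_normal((n_times, n_stations)))

pca = PCA(n_components=3)
scores = pca.fit_transform(rain)                                # temporal amplitudes of each mode
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
print("loading of mode 1 on the first stations:", np.round(pca.components_[0][:5], 2))
```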

  2. A strategy to decide whether to move the last case of the day in an operating room to another empty operating room to decrease overtime labor costs.

    PubMed

    Dexter, F

    2000-10-01

    We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than only 2 to 4 min per case that can be moved.

  3. Processing data base information having nonwhite noise

    DOEpatents

    Gross, Kenneth C.; Morreale, Patricia

    1995-01-01

    A method and system for processing a set of data from an industrial process and/or a sensor. The method and system can include processing data from either real or calculated data related to an industrial process variable. One of the data sets can be an artificial signal data set generated by an autoregressive moving average technique. After obtaining two data sets associated with one physical variable, a difference function data set is obtained by determining the arithmetic difference between the two pairs of data sets over time. A frequency domain transformation is made of the difference function data set to obtain Fourier modes describing a composite function data set. A residual function data set is obtained by subtracting the composite function data set from the difference function data set and the residual function data set (free of nonwhite noise) is analyzed by a statistical probability ratio test to provide a validated data base.

  4. Peak Running Intensity of International Rugby: Implications for Training Prescription.

    PubMed

    Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M

    2017-09-01

    To quantify the duration and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s2), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
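
    A small sketch of the moving-average approach used above to find peak running intensities: for each window length, take a rolling mean of the instantaneous speed and keep its maximum. The 10 Hz GPS trace here is simulated, and only the 1-10 min window range follows the paper.

```python
# Rolling-mean peak intensity per window length; the GPS speed trace is simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
hz, minutes = 10, 80
speed = np.clip(1.5 + 1.2 * rng.standard_normal(hz * 60 * minutes), 0, 9)  # m/s, simulated
trace = pd.Series(speed)

for window_min in (1, 2, 5, 10):
    window = window_min * 60 * hz
    rolling_mean_speed = trace.rolling(window).mean()            # m/s averaged over the window
    peak_m_per_min = rolling_mean_speed.max() * 60               # convert to metres per minute
    print(f"{window_min:2d}-min window: peak relative distance = {peak_m_per_min:.0f} m/min")
```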

  5. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms

    PubMed Central

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
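
    A hedged sketch of the classification-plus-smoothing idea described above: a random forest predicts a transportation mode for each 1-min window, and a moving-average filter over the predicted class probabilities smooths the label sequence. The features and labels are synthetic stand-ins, and the trip structure, filter length and train/test split are illustrative assumptions.

```python
# Random forest on windowed features plus a moving-average filter over class probabilities.
# All data are synthetic stand-ins; window counts, filter length and split are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(10)
n_windows, n_features, n_modes = 600, 49, 5
labels = np.repeat(rng.integers(0, n_modes, 60), 10)                 # trips: blocks of 10 windows
X = 3.0 * rng.standard_normal((n_windows, n_features)) + labels[:, None]  # mode-dependent features

split = 400
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:split], labels[:split])

proba = clf.predict_proba(X[split:])                                 # per-window class probabilities
k = 3                                                                # moving-average filter length
kernel = np.ones(k) / k
smoothed = np.vstack([np.convolve(proba[:, c], kernel, mode="same")
                      for c in range(n_modes)]).T
raw_acc = (proba.argmax(axis=1) == labels[split:]).mean()
smooth_acc = (smoothed.argmax(axis=1) == labels[split:]).mean()
print(f"accuracy raw {raw_acc:.3f} vs moving-average smoothed {smooth_acc:.3f}")
```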

  6. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    PubMed

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  7. Towards a sensorimotor aesthetics of performing art.

    PubMed

    Calvo-Merino, B; Jola, C; Glaser, D E; Haggard, P

    2008-09-01

    The field of neuroaesthetics attempts to identify the brain processes underlying aesthetic experience, including but not limited to beauty. Previous neuroaesthetic studies have focussed largely on paintings and music, while performing arts such as dance have been less studied. Nevertheless, increasing knowledge of the neural mechanisms that represent the bodies and actions of others, and which contribute to empathy, make a neuroaesthetics of dance timely. Here, we present the first neuroscientific study of aesthetic perception in the context of the performing arts. We investigated brain areas whose activity during passive viewing of dance stimuli was related to later, independent aesthetic evaluation of the same stimuli. Brain activity of six naïve male subjects was measured using fMRI, while they watched 24 dance movements, and performed an irrelevant task. In a later session, participants rated each movement along a set of established aesthetic dimensions. The ratings were used to identify brain regions that were more active when viewing moves that received high average ratings than moves that received low average ratings. This contrast revealed bilateral activity in the occipital cortices and in right premotor cortex. Our results suggest a possible role of visual and sensorimotor brain areas in an automatic aesthetic response to dance. This sensorimotor response may explain why dance is widely appreciated in so many human cultures.

  8. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, using business data obtained at the Radiology Department. We built the model from the number of radiological examinations over the past 9 years, predicted the number of examinations for the final year, and compared the forecast values with the actual values. The prediction method proved simple and cost-effective, since it used free software, and removing the trend components during pre-processing allowed us to keep the model simple. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, the method is highly versatile and adaptable to general time-series data. Therefore, other healthcare organizations can use our method for the analysis and forecasting of their business data.
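
    A minimal sketch of the workflow described above, using statsmodels: fit an ARIMA model to a monthly count series, hold out the final year, forecast it, and compare predicted with actual values. The synthetic series, the (1, 1, 1) order and the MAPE comparison are illustrative assumptions rather than the study's data or settings.

```python
# Hedged sketch: ARIMA fit and one-year-ahead forecast of a monthly count series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2001-01-01", periods=120, freq="MS")
monthly_exams = pd.Series(
    1000 + 5 * np.arange(120) + np.random.default_rng(1).normal(0, 50, 120), index=idx
)

train, test = monthly_exams[:-12], monthly_exams[-12:]      # hold out the final year
fit = ARIMA(train, order=(1, 1, 1)).fit()                   # d=1 removes the trend component
forecast = fit.forecast(steps=12)
mape = (abs(forecast - test) / test).mean() * 100           # compare forecast with actual values
print(f"MAPE over the held-out year: {mape:.1f}%")
```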

  9. Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)

    NASA Astrophysics Data System (ADS)

    Jerolmack, D. J.

    2010-12-01

    The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws - time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) The integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing; and (2) In systems exhibiting fractal behavior, an integral scale does not exist - e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics, and also help to formalize the applicability of geomorphic transport laws. [Figure caption: Flooding on an experimental alluvial fan. Intensity is related to the cumulative amount of time flow has visited an area of the fan over the experiment. Dark areas represent an emergent channel network resulting from stochastic migration of river channels.]

  10. An emission processing system for air quality modelling in the Mexico City metropolitan area: Evaluation and comparison of the MOBILE6.2-Mexico and MOVES-Mexico traffic emissions.

    PubMed

    Guevara, M; Tena, C; Soret, A; Serradell, K; Guzmán, D; Retama, A; Camacho, P; Jaimes-Palomera, M; Mediavilla, A

    2017-04-15

    This article describes the High-Elective Resolution Modelling Emission System for Mexico (HERMES-Mex) model, an emission processing tool developed to transform the official Mexico City Metropolitan Area (MCMA) emission inventory into hourly, gridded (up to 1 km²) and speciated emissions used to drive mesoscale air quality simulations with the Community Multi-scale Air Quality (CMAQ) model. The methods and ancillary information used for the spatial and temporal disaggregation and speciation of the emissions are presented and discussed. The resulting emission system is evaluated, and a case study on CO, NO2, O3, VOC and PM2.5 concentrations is conducted to demonstrate its applicability. Moreover, resulting traffic emissions from the Mobile Source Emission Factor Model for Mexico (MOBILE6.2-Mexico) and the MOtor Vehicle Emission Simulator for Mexico (MOVES-Mexico) models are integrated in the tool to assess and compare their performance. Modelled NOx and VOC total emissions are reduced by 37% and 26% in the MCMA when replacing MOBILE6.2-Mexico with MOVES-Mexico traffic emissions. In terms of air quality, the system composed of the Weather Research and Forecasting model (WRF) coupled with the HERMES-Mex and CMAQ models properly reproduces the pollutant levels and patterns measured in the MCMA. The system's performance clearly improves in urban stations with a strong influence of traffic sources when applying MOVES-Mexico emissions. Despite reducing estimations of modelled precursor emissions, O3 peak averages are increased in the MCMA core urban area (up to 30 ppb) when using MOVES-Mexico mobile emissions due to its VOC-limited regime, while concentrations in the surrounding suburban/rural areas decrease or increase depending on the meteorological conditions of the day. The results obtained suggest that the HERMES-Mex model can be used to provide model-ready emissions for air quality modelling in the MCMA. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.

  12. Beyond long memory in heart rate variability: An approach based on fractionally integrated autoregressive moving average time series models with conditional heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria

    2013-06-01

    Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
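
    A hedged two-stage sketch of the kind of ARFIMA-GARCH fit referred to above: fractionally difference the series with a fixed memory parameter d, fit an ARMA model to the differenced series with statsmodels, then fit a GARCH(1,1) model to the ARMA residuals with the arch package. The value of d, the (1, 1) orders, the weight truncation and the synthetic RR-interval series are all assumptions; the paper estimates the ARFIMA-GARCH parameters jointly from real 24 h recordings.

```python
# Hedged two-stage sketch: fractional differencing + ARMA + GARCH(1,1) on residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

def frac_diff(x, d, n_weights=100):
    """Apply the (1 - B)^d filter using truncated binomial weights."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    w = np.array(w)
    return np.array([w[: t + 1][::-1] @ x[max(0, t - n_weights + 1): t + 1]
                     for t in range(len(x))])

rr_intervals = np.random.default_rng(2).normal(0.8, 0.05, 2000)   # synthetic RR series, seconds
z = frac_diff(rr_intervals - rr_intervals.mean(), d=0.3)          # d is an assumed value
arma_fit = ARIMA(z, order=(1, 0, 1)).fit()
resid = 100 * arma_fit.resid                                      # rescale so the GARCH optimizer behaves
garch_fit = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_fit.params)
```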

  13. Numerical investigation of the relationship between magnetic stiffness and minor loop size in the HTS levitation system

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Li, Chengshan

    2017-10-01

    The effect of minor loop size on magnetic stiffness has received little attention in experimental and theoretical studies of high temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained with minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in the zero field cooling and field cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. Compared with the average magnetic stiffness obtained from any minor loop traverse, these approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the usual minor loop traverses (1 or 2 mm) are presented as ratios of the approximate values to the average stiffness for different moving processes and two typical cooling conditions. The results show that most of the average magnetic stiffness values are strongly influenced by the minor loop size, which indicates that the magnetic stiffness obtained from a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can generally involve a large deviation.

  14. Time-resolved distance determination by tryptophan fluorescence quenching: probing intermediates in membrane protein folding.

    PubMed

    Kleinschmidt, J H; Tamm, L K

    1999-04-20

    The mechanism of insertion and folding of an integral membrane protein has been investigated with the beta-barrel forming outer membrane protein A (OmpA) of Escherichia coli. This work describes a new approach to this problem by combining structural information obtained from tryptophan fluorescence quenching at different depths in the lipid bilayer with the kinetics of the refolding process. Experiments carried out over a temperature range between 2 and 40 degrees C allowed us to detect, trap, and characterize previously unidentified folding intermediates on the pathway of OmpA insertion and folding into lipid bilayers. Three membrane-bound intermediates were found in which the average distances of the Trps were 14-16, 10-11, and 0-5 A, respectively, from the bilayer center. The first folding intermediate is stable at 2 degrees C for at least 1 h. A second intermediate has been isolated at temperatures between 7 and 20 degrees C. The Trps move 4-5 A closer to the center of the bilayer at this stage. Subsequently, in an intermediate that is observable at 26-28 degrees C, the Trps move another 5-10 A closer to the center of the bilayer. The final (native) structure is observed at higher temperatures of refolding. In this structure, the Trps are located on average about 9-10 A from the bilayer center. Monitoring the evolution of Trp fluorescence quenching by a set of brominated lipids during refolding at various temperatures therefore allowed us to identify and characterize intermediate states in the folding process of an integral membrane protein.

  15. Occupational injuries and sick leaves in household moving works.

    PubMed

    Hwan Park, Myoung; Jeong, Byung Yong

    2017-09-01

    This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving processes in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type of accident and agency of accidents. The most critical form in the level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked as a 'High' level of risk management were in the forms of slips, being struck by objects and musculoskeletal disorders in the 'manual materials handling' process. Also, falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck' and driving accidents in the process of 'transport' were ranked 'High'. The findings of this study can be used to develop more effective accident prevention policy reflecting different circumstances and conditions to reduce occupational accidents in household moving works.

  16. Passenger Flow Forecasting Research for Airport Terminal Based on SARIMA Time Series Model

    NASA Astrophysics Data System (ADS)

    Li, Ziyu; Bi, Jun; Li, Zhiyin

    2017-12-01

    Based on operational data from Kunming Changshui International Airport during 2016, this paper proposes a Seasonal Autoregressive Integrated Moving Average (SARIMA) model to predict passenger flow. The model considers not only the non-stationarity and autocorrelation of the sequence but also its daily periodicity. The prediction results accurately describe the trend of airport passenger flow and provide scientific decision support for the optimal allocation of airport resources and the optimization of the departure process. The results show that the model is applicable to short-term prediction of airport terminal departure passenger traffic, with an average error ranging from 1% to 3%. The difference between the predicted and true values of passenger traffic is small, indicating that the model has fairly good prediction ability.
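
    A minimal sketch of a seasonal ARIMA fit of the kind described above, assuming hourly departure counts with a 24-hour seasonal period; the synthetic series and the (1, 0, 1)×(1, 1, 1, 24) orders are illustrative choices, not those identified for the Changshui data.

```python
# Hedged sketch: SARIMA with daily (24-hour) seasonality for hourly departure counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("2016-01-01", periods=24 * 60, freq="H")      # 60 days of hourly counts
hourly_pax = pd.Series(
    500 + 200 * np.sin(2 * np.pi * idx.hour / 24)
    + np.random.default_rng(3).normal(0, 30, len(idx)),
    index=idx,
)

fit = SARIMAX(hourly_pax, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24)).fit(disp=False)
next_day = fit.forecast(steps=24)      # hourly forecast for the following day
```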

  17. Suspended Sediment Moves 10 km Before Entering Storage: Re-Interpreting a 20th Century Industrial Mercury Release as a Tracer Experiment, South River, Virginia

    NASA Astrophysics Data System (ADS)

    Pizzuto, J. E.

    2014-12-01

    Recent analyses suggest that the velocity of downstream transport of suspended sediment (averaged over long timescales that include periods of transport and storage in alluvial deposits) can be represented as the ratio Ls/T, where Ls is a distance particles move before entering storage and T is the waiting time particles spend in storage before being remobilized. Sediment budget analyses suggest that Ls is 1-100 km in the mid-Atlantic region, while T may be ~10³ years, such that particles move 3-5 orders of magnitude slower than the water in the channel. Given the well-known inaccuracy of sediment budgets, independent verification from a tracer study would be desirable. Here, an historic industrial release of mercury is interpreted as a decadal sediment tracer experiment, releasing sediment particles "tagged" with mercury that are deposited on floodplains. As expected, floodplain mercury inventories decrease exponentially downstream, with a characteristic decay length of 10 km (95% confidence interval: 5-25 km) that defines the distance suspended particles typically move downstream before entering storage. Floodplain mercury inventories are not significantly different above and below three colonial age mill dams (present at the time of mercury release but now breached), suggesting that these results reflect ongoing processes. Suspended sediment routing models that neglect long-term storage, and the watershed management plans based on them, may need revision.

  18. Distractor Interference during Smooth Pursuit Eye Movements

    ERIC Educational Resources Information Center

    Spering, Miriam; Gegenfurtner, Karl R.; Kerzel, Dirk

    2006-01-01

    When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show…

  19. In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.

    PubMed

    Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth

    2015-03-01

    The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.

  20. Long-Term PM2.5 Exposure and Respiratory, Cancer, and Cardiovascular Mortality in Older US Adults.

    PubMed

    Pun, Vivian C; Kazemiparkouhi, Fatemeh; Manjourides, Justin; Suh, Helen H

    2017-10-15

    The impact of chronic exposure to fine particulate matter (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5)) on respiratory disease and lung cancer mortality is poorly understood. In a cohort of 18.9 million Medicare beneficiaries (4.2 million deaths) living across the conterminous United States between 2000 and 2008, we examined the association between chronic PM2.5 exposure and cause-specific mortality. We evaluated confounding through adjustment for neighborhood behavioral covariates and decomposition of PM2.5 into 2 spatiotemporal scales. We found significantly positive associations of 12-month moving average PM2.5 exposures (per 10-μg/m3 increase) with respiratory, chronic obstructive pulmonary disease, and pneumonia mortality, with risk ratios ranging from 1.10 to 1.24. We also found significant PM2.5-associated elevated risks for cardiovascular and lung cancer mortality. Risk ratios generally increased with longer moving averages; for example, an elevation in 60-month moving average PM2.5 exposures was linked to 1.33 times the lung cancer mortality risk (95% confidence interval: 1.24, 1.40), as compared with 1.13 (95% confidence interval: 1.11, 1.15) for 12-month moving average exposures. Observed associations were robust in multivariable models, although evidence of unmeasured confounding remained. In this large cohort of US elderly, we provide important new evidence that long-term PM2.5 exposure is significantly related to increased mortality from respiratory disease, lung cancer, and cardiovascular disease. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Effect of air pollution on pediatric respiratory emergency room visits and hospital admissions.

    PubMed

    Farhat, S C L; Paulo, R L P; Shimoda, T M; Conceição, G M S; Lin, C A; Braga, A L F; Warth, M P N; Saldiva, P H N

    2005-02-01

    In order to assess the effect of air pollution on pediatric respiratory morbidity, we carried out a time series study using daily levels of PM10, SO2, NO2, ozone, and CO and daily numbers of pediatric respiratory emergency room visits and hospital admissions at the Children's Institute of the University of Sao Paulo Medical School, from August 1996 to August 1997. In this period there were 43,635 hospital emergency room visits, 4534 of which were due to lower respiratory tract disease. The total number of hospital admissions was 6785, 1021 of which were due to lower respiratory tract infectious and/or obstructive diseases. The three health end-points under investigation were the daily number of emergency room visits due to lower respiratory tract diseases, hospital admissions due to pneumonia, and hospital admissions due to asthma or bronchiolitis. Generalized additive Poisson regression models were fitted, controlling for smooth functions of time, temperature and humidity, and an indicator of weekdays. NO2 was positively associated with all outcomes. Interquartile range increases (65.04 microg/m3) in NO2 moving averages were associated with an 18.4% increase (95% confidence interval, 95% CI = 12.5-24.3) in emergency room visits due to lower respiratory tract diseases (4-day moving average), a 17.6% increase (95% CI = 3.3-32.7) in hospital admissions due to pneumonia or bronchopneumonia (3-day moving average), and a 31.4% increase (95% CI = 7.2-55.7) in hospital admissions due to asthma or bronchiolitis (2-day moving average). The study showed that air pollution considerably affects children's respiratory morbidity, deserving attention from the health authorities.
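
    A small sketch of how the multi-day moving average exposure metrics used above can be constructed with pandas, assuming a hypothetical daily NO2 series; the health regressions themselves are not reproduced here.

```python
# Hedged sketch: 2-, 3- and 4-day moving averages of a daily pollutant series.
import numpy as np
import pandas as pd

idx = pd.date_range("1996-08-01", "1997-08-31", freq="D")
no2 = pd.Series(np.random.default_rng(4).gamma(5, 12, len(idx)), index=idx)  # synthetic NO2, microg/m3

exposure = pd.DataFrame({
    "no2_ma2": no2.rolling(window=2).mean(),   # 2-day moving average
    "no2_ma3": no2.rolling(window=3).mean(),   # 3-day moving average
    "no2_ma4": no2.rolling(window=4).mean(),   # 4-day moving average
})
iqr = exposure["no2_ma4"].quantile(0.75) - exposure["no2_ma4"].quantile(0.25)  # interquartile range increment
```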

  2. Dog days of summer: Influences on decision of wolves to move pups

    USGS Publications Warehouse

    Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.

    2016-01-01

    For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves ( Canis lupus ) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.

  3. Nonparametric Transfer Function Models

    PubMed Central

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  4. An improved portmanteau test for autocorrelated errors in interrupted time-series regression models.

    PubMed

    Huitema, Bradley E; McKean, Joseph W

    2007-08-01

    A new portmanteau test for autocorrelation among the errors of interrupted time-series regression models is proposed. Simulation results demonstrate that the inferential properties of the proposed Q(H-M) test statistic are considerably more satisfactory than those of the well-known Ljung-Box test and moderately better than those of the Box-Pierce test. These conclusions generally hold for a wide variety of autoregressive (AR), moving average (MA), and ARMA error processes that are associated with time-series regression models of the form described in Huitema and McKean (2000a, 2000b).
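
    As a point of reference for the comparison above, the sketch below applies the two established portmanteau tests (Ljung-Box and Box-Pierce, both available in statsmodels) to the residuals of a simple interrupted time-series regression; the proposed Q(H-M) statistic is not part of standard libraries and is not implemented here, and the synthetic design is an assumption.

```python
# Hedged sketch: Ljung-Box and Box-Pierce tests on interrupted time-series residuals.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(5)
n, intervention = 120, 60
t = np.arange(n)
phase = (t >= intervention).astype(float)                       # level-change indicator
X = sm.add_constant(np.column_stack([t, phase, (t - intervention) * phase]))
y = 10 + 0.2 * t + 5 * phase + rng.normal(0, 1, n)              # synthetic outcome

resid = sm.OLS(y, X).fit().resid
tests = acorr_ljungbox(resid, lags=[5, 10], boxpierce=True)     # both Q statistics
print(tests)
```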

  5. The Onset Time of the Ownership Sensation in the Moving Rubber Hand Illusion.

    PubMed

    Kalckert, Andreas; Ehrsson, H H

    2017-01-01

    The rubber hand illusion (RHI) is a perceptual illusion whereby a model hand is perceived as part of one's own body. This illusion has been extensively studied, but little is known about the temporal evolution of this perceptual phenomenon, i.e., how long it takes until participants start to experience ownership over the model hand. In the present study, we investigated a version of the rubber hand experiment based on finger movements and measured the average onset time in active and passive movement conditions. This comparison enabled us to further explore the possible role of intentions and motor control processes that are only present in the active movement condition. The results from a large group of healthy participants ( n = 117) showed that the illusion of ownership took approximately 23 s to emerge (active: 22.8; passive: 23.2). The 90th percentile occurs in both conditions within approximately 50 s (active: 50; passive: 50.6); therefore, most participants experience the illusion within the first minute. We found indirect evidence of a facilitatory effect of active movements compared to passive movements, and we discuss these results in the context of our current understanding of the processes underlying the moving RHI.

  6. Compact 3D Camera for Shake-the-Box Particle Tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.

  7. Work-related accidents among the Iranian population: a time series analysis, 2000–2011

    PubMed Central

    Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with 20·942 mean absolute percentage error (MAPE). Conclusions The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774

  8. Work-related accidents among the Iranian population: a time series analysis, 2000-2011.

    PubMed

    Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476·05±458·77, mean±SD). The final ARIMA (p,d,q) (P,D,Q)s model for fitting to data was: ARIMA(1,1,1)×(0,1,1)12 consisting of the first ordering of the autoregressive, moving average and seasonal moving average parameters with 20·942 mean absolute percentage error (MAPE). The final model showed that time series analysis of ARIMA models was useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 explained the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.

  9. Distributed parameter system coupled ARMA expansion identification and adaptive parallel IIR filtering - A unified problem statement. [Auto Regressive Moving-Average

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Balas, M. J.

    1980-01-01

    A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.

  10. Forecasting daily meteorological time series using ARIMA and regression models

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 in four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, all implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
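
    A minimal sketch of one of the approaches named above, an ARIMA model with external regressors in the form of Fourier terms, fitted here with statsmodels on a synthetic daily temperature series; the number of harmonics, the (1, 0, 1) order and the data are illustrative assumptions, not the settings used for the four sites (the paper itself works in R).

```python
# Hedged sketch: ARIMA with Fourier-term external regressors for annual seasonality.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

idx = pd.date_range("1980-01-01", "1989-12-31", freq="D")
temp = pd.Series(
    10 - 12 * np.cos(2 * np.pi * idx.dayofyear / 365.25)
    + np.random.default_rng(6).normal(0, 2, len(idx)),
    index=idx,
)

def fourier_terms(index, period=365.25, harmonics=2):
    """Sine/cosine pairs capturing the annual cycle, used as exogenous regressors."""
    t = index.dayofyear.to_numpy()
    cols = {}
    for k in range(1, harmonics + 1):
        cols[f"sin{k}"] = np.sin(2 * np.pi * k * t / period)
        cols[f"cos{k}"] = np.cos(2 * np.pi * k * t / period)
    return pd.DataFrame(cols, index=index)

exog = fourier_terms(idx)
fit = SARIMAX(temp, exog=exog, order=(1, 0, 1)).fit(disp=False)
```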

  11. A fixed-memory moving, expanding window for obtaining scatter corrections in X-ray CT and other stochastic averages

    NASA Astrophysics Data System (ADS)

    Levine, Zachary H.; Pintar, Adam L.

    2015-11-01

    A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
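
    The sketch below is a simplified, hedged reading of the idea described above, not the published Fortran 95/C/R code: incoming 1D arrays are accumulated into bins whose sizes double, so only O(log N) partial sums are stored while a roughly constant fraction of the most recent samples contributes to the reported average. The class name, the merge rule and the keep_fraction parameter are illustrative assumptions.

```python
# Hedged, simplified sketch of a fixed-memory moving, expanding window average.
import numpy as np

class ExpandingWindowAverage:
    def __init__(self, length):
        self.bins = []          # list of (count, sum_array); counts are powers of two
        self.length = length

    def add(self, sample):
        self.bins.append((1, np.asarray(sample, dtype=float).copy()))
        # merge neighbouring bins of equal count so bin sizes keep doubling
        while len(self.bins) >= 2 and self.bins[-1][0] == self.bins[-2][0]:
            c2, s2 = self.bins.pop()
            c1, s1 = self.bins.pop()
            self.bins.append((c1 + c2, s1 + s2))

    def average(self, keep_fraction=0.5):
        # average over the most recent bins holding at least `keep_fraction` of samples
        total_count = sum(c for c, _ in self.bins)
        kept, acc = 0, np.zeros(self.length)
        for c, s in reversed(self.bins):
            acc += s
            kept += c
            if kept >= keep_fraction * total_count:
                break
        return acc / kept

win = ExpandingWindowAverage(length=8)
for _ in range(1000):
    win.add(np.random.default_rng().normal(size=8))
print(win.average())
```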

  12. Detrending moving average algorithm for multifractals

    NASA Astrophysics Data System (ADS)

    Gu, Gao-Feng; Zhou, Wei-Xing

    2010-07-01

    The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. It is found that the backward MFDMA algorithm also outperforms the multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to analyzing the time series of the Shanghai Stock Exchange Composite Index and its multifractal nature is confirmed.
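
    A hedged, minimal sketch of the one-dimensional backward MFDMA idea (θ = 0) described above: the profile is detrended with a backward moving average, the residual is divided into boxes of size n, and the q-th order fluctuation is averaged over boxes; the slope of log F_q(n) against log n estimates the generalized Hurst exponent. This is a simplification of the published algorithm, and the scales, q and test series are illustrative.

```python
# Hedged sketch: backward (theta = 0) detrending moving average fluctuation function.
import numpy as np

def backward_mfdma_fluctuation(x, n, q=2):
    y = np.cumsum(x - np.mean(x))                            # profile of the series
    kernel = np.ones(n) / n
    trend = np.convolve(y, kernel, mode="full")[: len(y)]    # backward moving average
    resid = (y - trend)[n - 1:]                              # residual where the MA is defined
    n_boxes = len(resid) // n
    boxes = resid[: n_boxes * n].reshape(n_boxes, n)
    f2 = np.mean(boxes ** 2, axis=1)                         # per-box squared fluctuation
    return (np.mean(f2 ** (q / 2))) ** (1 / q)

x = np.random.default_rng(7).normal(size=2 ** 14)            # white noise test series
scales = [16, 32, 64, 128, 256]
fq = [backward_mfdma_fluctuation(x, n) for n in scales]
# the generalized Hurst exponent h(q) is the slope of log F_q(n) versus log n
h = np.polyfit(np.log(scales), np.log(fq), 1)[0]
```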

  13. [Comparison of predictive effect between the single auto regressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model on the incidence of scarlet fever].

    PubMed

    Zhu, Yu; Xia, Jie-lai; Wang, Jing

    2009-09-01

    To apply the 'single auto-regressive integrated moving average (ARIMA) model' and the 'ARIMA-generalized regression neural network (GRNN) combination model' to research on the incidence of scarlet fever. The ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting performance of the ARIMA-GRNN combination model was better than that of the single ARIMA model, making it of practical value for research on time series data such as the incidence of scarlet fever.

  14. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry is an important field because it concerns people's health, and forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. A case study was therefore conducted in a University Health Centre, collecting 68 months of historical demand data for Panadol 650 mg from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. Quantitative (time series) forecasting was used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the data exhibit a trend, and ten forecasting techniques were applied using Risk Simulator Software. The best forecasting technique was then identified as the one with the smallest forecasting error. The ten forecasting techniques were single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winter's additive, seasonal additive, Holt-Winter's multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measure, the best forecasting technique is regression analysis.

  15. Human-animal interactions and safety during dairy cattle handling--Comparing moving cows to milking and hoof trimming.

    PubMed

    Lindahl, C; Pinzke, S; Herlin, A; Keeling, L J

    2016-03-01

    Cattle handling is a dangerous activity on dairy farms, and cows are a major cause of injuries to livestock handlers. Even if dairy cows are generally tranquil and docile, when situations occur that they perceive or remember as aversive, they may become agitated and hazardous to handle. This study aimed to compare human-animal interactions, cow behavior, and handler safety when moving cows to daily milking and moving cows to more rarely occurring and possibly aversive hoof trimming. These processes were observed on 12 Swedish commercial dairy farms. The study included behavioral observations of handler and cows and cow heart rate recordings, as well as recording frequencies of situations and incidents related to an increased injury risk to the handler. At milking, cows were quite easily moved using few interactions. As expected, the cows showed no behavioral signs of stress, fear, or resistance and their heart rate only rose slightly from the baseline (i.e., the average heart rate during an undisturbed period before handling). Moving cows to hoof trimming involved more forceful and gentle interactions compared with moving cows to milking. Furthermore, the cows showed much higher frequencies of behaviors indicative of aversion and fear (e.g., freezing, balking, and resistance), as well as a higher increase in heart rate. The risk of injury to which handlers were exposed also increased when moving cows to hoof trimming rather than to routine milking. Some interactions (such as forceful tactile interactions with an object and pulling a neck strap or halter) appeared to be related to potentially dangerous incidents where the handler was being kicked, head-butted, or run over by a cow. In conclusion, moving cows to hoof trimming resulted in higher frequencies of behaviors indicating fear, more forceful interactions, and increased injury risks to the handler than moving cows to milking. Improving potentially stressful handling procedures (e.g., by better animal handling practices and preparation of cows to cope with such procedures) can increase handler safety, animal welfare, ease of handling, and efficiency. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Industrial Based Migration in India. A Case Study of Dumdum "Dunlop Industrial Zone"

    NASA Astrophysics Data System (ADS)

    Das, Biplab; Bandyopadhyay, Aditya; Sen, Jayashree

    2012-10-01

    Migration is a very important part of our present society. Millions of people moved during the industrial revolution. Some simply moved from a village to a town in the hope of finding work, whilst others moved from one country to another in search of a better way of life. The main reason for moving home during the 19th century was to find work. This involved migration from the countryside to the growing industrial cities, and it also involved many people moving into Britain: in the 1840s Ireland suffered a terrible famine, and faced with the massive cost of feeding the starving population, many local landowners paid for labourers to emigrate. There was a shift away from agriculturally based rural dwelling towards urban habitation to meet the mass demand for labour that new industry required. Great regional differences emerged in population levels and in the structure of their demography, due to rates of migration, emigration, and the social changes that were drastically affecting factors such as marriage, birth and death rates. These social changes, taking place as a result of capitalism, had far-ranging effects, such as lowering the average age of marriage and increasing the size of the average family. There is no serious disagreement as to the extent of the population changes that occurred, but one key question that always arouses debate is whether an expanding population resulted in economic growth or vice versa, i.e. was industrialization a catalyst for population growth? A clear answer is difficult to decipher as the two variables are so closely and fundamentally interlinked, but it seems that both factors provided impetus for each other's take-off. If anything, population and economic growth were complementary to one another rather than simply being causative factors.

  17. SU-F-T-497: Spatiotemporally Optimal, Personalized Prescription Scheme for Glioblastoma Patients Using the Proliferation and Invasion Glioma Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Rockhill, J; Phillips, M

    Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1 + 2 cm margin and an additional 14 Gy in 7 fractions to GTV2 + 2 cm margin. We simulated the tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied from 0–6 cm and 1–3 cm respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small) to 3.5 (large) cm margins to GTV1, and a 1.5 cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole brain therapy is optimal, and then 1.5 cm (average-move) and 1.5–3.0 cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that obtained with the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.

  18. Preliminary Retrospective Analysis of Daily Tomotherapy Output Constancy Checks Using Statistical Process Control.

    PubMed

    Mezzenga, Emilio; D'Errico, Vincenzo; Sarnelli, Anna; Strigari, Lidia; Menghi, Enrico; Marcocci, Francesco; Bianchini, David; Benassi, Marcello

    2016-01-01

    The purpose of this study was to retrospectively evaluate the results from a Helical TomoTherapy Hi-Art treatment system relating to quality controls based on daily static and dynamic output checks using statistical process control methods. Individual value X-charts, exponentially weighted moving average charts, and process capability and acceptability indices were used to monitor the treatment system performance. Daily output values measured from January 2014 to January 2015 were considered. The results showed that, although the process was in control overall, an out-of-control situation occurred in connection with the principal maintenance intervention on the treatment system. In particular, process capability indices showed a decreasing percentage of points in control which was, however, acceptable according to AAPM TG148 guidelines. Our findings underline the importance of restricting the acceptable range of daily output checks and suggest a future line of investigation for a detailed process control of daily output checks for the Helical TomoTherapy Hi-Art treatment system.
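
    A hedged sketch of the two chart types mentioned above, an individuals (X) chart and an exponentially weighted moving average (EWMA) chart, computed for a hypothetical sequence of daily output deviations; the smoothing constant λ = 0.2 and the 3σ limits are conventional textbook choices, not necessarily those used in the study.

```python
# Hedged sketch: individuals (X) chart and EWMA chart for daily output checks.
import numpy as np

rng = np.random.default_rng(8)
output = rng.normal(0.0, 0.6, 250)                       # daily output deviation from reference, %

# Individuals chart: centre line and limits estimated from the moving range
mr = np.abs(np.diff(output))
sigma_hat = np.mean(mr) / 1.128                          # d2 constant for subgroups of size 2
x_limits = (output.mean() - 3 * sigma_hat, output.mean() + 3 * sigma_hat)

# EWMA chart with smoothing constant lambda and time-varying 3-sigma limits
lam = 0.2
ewma = np.zeros_like(output)
ewma[0] = output.mean()
for t in range(1, len(output)):
    ewma[t] = lam * output[t] + (1 - lam) * ewma[t - 1]
steps = np.arange(1, len(output) + 1)
ewma_sigma = sigma_hat * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * steps)))
ewma_limits = (output.mean() - 3 * ewma_sigma, output.mean() + 3 * ewma_sigma)
```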

  19. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.

  20. The Micromechanics of the Moving Contact Line

    NASA Technical Reports Server (NTRS)

    Han, Minsub; Lichter, Seth; Lin, Chih-Yu; Perng, Yeong-Yan

    1996-01-01

    The proposed research is divided into three components concerned with molecular structure, molecular orientation, and continuum averages of discrete systems. In the experimental program, we propose exploring how changes in interfacial molecular structure generate contact line motion. Rather than rely on the electrostatic and electrokinetic fields arising from the molecules themselves, we augment their interactions by an imposed field at the solid/liquid interface. By controlling the field, we can manipulate the molecular structure at the solid/liquid interface. In response to controlled changes in molecular structure, we observe the resultant contact line motion. In the analytical portion of the proposed research we seek to formulate a system of equations governing fluid motion which accounts for the orientation of fluid molecules. In preliminary work, we have focused on describing how molecular orientation affects the forces generated at the moving contact line. Ideally, as assumed above, the discrete behavior of molecules can be averaged into a continuum theory. In the numerical portion of the proposed research, we inquire whether the contact line region is, in fact, large enough to possess a well-defined average. Additionally, we ask what types of behavior distinguish discrete systems from continuum systems. Might the smallness of the contact line region, in itself, lead to behavior different from that in the bulk? Taken together, our proposed research seeks to identify and accurately account for some of the molecular dynamics of the moving contact line, and attempts to formulate a description from which one can compute the forces at the moving contact line.

  1. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Lin, Wang

    2018-01-01

    The moving bed biofilm reactor (MBBR) is a sewage treatment technology based on the fluidized bed; it can also be regarded as an efficient new reactor intermediate between the activated sludge process and the biofilm process. This paper studies the application of the ozone MBBR process in refinery wastewater treatment, focusing on the design of a combined ozone + MBBR process built on the MBBR process. The ozone + MBBR process is applied to the COD of reverse osmosis concentrate discharged from the refinery wastewater treatment plant. The experimental results show that the average COD removal rate is 46.0%~67.3% when treating reverse osmosis concentrate with the ozone MBBR process, and the effluent meets the relevant standard requirements. Compared with the traditional process, the ozone MBBR process is more flexible. The main investment for this process is the ozone generator, blower and similar equipment; these items are relatively inexpensive, and their cost can be offset by the additional investment that traditional activated sludge processes require. The ozone MBBR process also has clear advantages in water quality, stability and other respects.

  2. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted with different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD), while spatial preprocessing of the gray-scale image at each wavelength was conducted with a median filter (MF). Support vector machine (SVM) models using full sample average spectra, pixel-wise spectra, and the optimal wavelengths selected from second derivative spectra all achieved classification accuracy over 80%. Firstly, the SVM models trained on pixel-wise spectra were used to predict the sample average spectra and obtained over 80% classification accuracy. Secondly, the SVM models trained on sample average spectra were used to predict pixel-wise spectra but achieved lower than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and gave good prediction results for both pixel-wise spectra and sample average spectra. The overall results indicate the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of processing data for applications of hyperspectral imaging in the food industry.

  3. Oscillations Excited by Plasmoids Formed During Magnetic Reconnection in a Vertical Gravitationally Stratified Current Sheet

    NASA Astrophysics Data System (ADS)

    Jelínek, P.; Karlický, M.; Van Doorsselaere, T.; Bárta, M.

    2017-10-01

    Using the FLASH code, which solves the full set of the 2D non-ideal (resistive) time-dependent magnetohydrodynamic (MHD) equations, we study processes during the magnetic reconnection in a vertical gravitationally stratified current sheet. We show that during these processes, which correspond to processes in solar flares, plasmoids are formed due to the tearing mode instability of the current sheet. These plasmoids move upward or downward along the vertical current sheet and some of them merge into larger plasmoids. We study the density and temperature structure of these plasmoids and their time evolution in detail. We found that during the merging of two plasmoids, the resulting larger plasmoid starts to oscillate with a period largely determined by L/c_A, where L is the size of the plasmoid and c_A is the Alfvén speed in the lateral parts of the plasmoid. In our model, L/c_A evaluates to ~25 s. Furthermore, the plasmoid moving downward merges with the underlying flare arcade, which causes oscillations of the arcade. In our model, the period of this arcade oscillation is ~35 s, which also corresponds to L/c_A, but here L means the length of the loop and c_A is the average Alfvén speed in the loop. We also show that the merging process of the plasmoid with the flare arcade is a complex process as presented by complex density and temperature structures of the oscillating arcade. Moreover, all these processes are associated with magnetoacoustic waves produced by the motion and merging of plasmoids.

  4. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.

  5. Use of spatiotemporal characteristics of ambient PM2.5 in rural South India to infer local versus regional contributions.

    PubMed

    Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D

    2018-08-01

    This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m⁻³ (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standards (60 μg m⁻³) during 2-5% of observation days. Average concentrations were ∼25 μg m⁻³ higher during winter than during monsoon and ∼8 μg m⁻³ higher during morning hours than the diurnal average. A moving average subtraction method based on 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach. Both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
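    To make the moving average subtraction idea concrete, the sketch below splits a 1-min PM2.5 series into a slowly varying regional baseline and short-lived local excursions. It is a minimal illustration on synthetic data: the 60-min window, the centered moving average, and the rule that only positive excursions count as local are assumptions for the example, not the parameters used in the study.

```python
import numpy as np

def local_regional_split(pm25, window=60):
    """Split a 1-min PM2.5 series into a slowly varying regional baseline
    and short-lived local spikes by subtracting a centered moving average.

    pm25   : 1-D array of 1-min averaged concentrations (ug/m3)
    window : moving-average length in minutes (illustrative value)
    """
    kernel = np.ones(window) / window
    # Centered moving average approximates the regional background.
    baseline = np.convolve(pm25, kernel, mode="same")
    # Positive excursions above the baseline are attributed to local sources.
    local = np.clip(pm25 - baseline, 0.0, None)
    regional = pm25 - local
    return regional, local

# Example with synthetic data: a smooth background plus random local spikes.
rng = np.random.default_rng(0)
t = np.arange(24 * 60)                       # one day of 1-min samples
background = 30 + 5 * np.sin(2 * np.pi * t / (24 * 60))
spikes = (rng.random(t.size) < 0.01) * rng.uniform(20, 80, t.size)
series = background + spikes + rng.normal(0, 1, t.size)

regional, local = local_regional_split(series)
print(f"regional share of the mean concentration: {regional.mean() / series.mean():.2f}")
```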

  6. Removal and fate of micropollutants in a sponge-based moving bed bioreactor.

    PubMed

    Luo, Yunlong; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Long Duc; Hai, Faisal Ibney; Kang, Jinguo; Xia, Siqing; Zhang, Zhiqiang; Price, William Evan

    2014-05-01

    This study investigated the removal of micropollutants using polyurethane sponge as attached-growth carrier. Batch experiments demonstrated that micropollutants could adsorb to non-acclimatized sponge cubes to varying extents. Acclimatized sponge showed significantly enhanced removal of some less hydrophobic compounds (log D<2.5), such as ibuprofen, acetaminophen, naproxen, and estriol, as compared with non-acclimatized sponge. The results for the bench-scale sponge-based moving bed bioreactor (MBBR) system elucidated compound-specific variation in removal, ranging from 25.9% (carbamazepine) to 96.8% (β-Estradiol 17-acetate) on average. In the MBBR system, biodegradation served as a major removal pathway for most compounds. However, sorption to the sludge phase was also a notable removal mechanism for some persistent micropollutants. Particularly, carbamazepine, ketoprofen and pentachlorophenol were found at high concentrations (7.87, 6.05 and 5.55 μg/g, respectively) on suspended biosolids. As a whole, the effectiveness of the MBBR for micropollutant removal was comparable with that of activated sludge processes and MBRs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and the spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
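    The two numerical ingredients named in this record, averaging many speckled reconstructions and Wiener estimation of a spectrum from a few channels, can be sketched as follows. The exponential intensity statistics, the 3-band system matrix, and the exponential smoothness prior are synthetic placeholders, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(10)

# Speckle suppression: average intensity reconstructions from many holograms.
n_holograms, shape = 20, (64, 64)
true_image = np.ones(shape)
recons = [true_image * rng.exponential(1.0, shape) for _ in range(n_holograms)]
averaged = np.mean(recons, axis=0)
print(f"speckle contrast: single {recons[0].std() / recons[0].mean():.2f}, "
      f"averaged {averaged.std() / averaged.mean():.2f}")

# Wiener estimation: recover a spectrum s (N bands) from a 3-channel
# measurement v = A s + n, using W = C_s A^T (A C_s A^T + C_n)^-1.
n_bands = 31
A = rng.random((3, n_bands))                               # placeholder system matrix
lam = np.arange(n_bands)
C_s = np.exp(-np.abs(lam[:, None] - lam[None, :]) / 5.0)   # smoothness prior
C_n = 1e-3 * np.eye(3)
W = C_s @ A.T @ np.linalg.inv(A @ C_s @ A.T + C_n)

s_true = np.exp(-((lam - 15) ** 2) / 40.0)                 # toy transmittance curve
v = A @ s_true + rng.normal(0, 1e-2, 3)
s_est = W @ v
print(f"spectral estimation RMS error: {np.sqrt(np.mean((s_est - s_true) ** 2)):.3f}")
```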

  8. The sensitivity in the IR spectrum of the intact and pathological tissues by laser biophotometry.

    PubMed

    Ravariu, Cristian; Bondarciuc, Ala

    2014-03-01

    In this paper, we use laser biophotometry for in vivo investigations, searching for the most sensitive interactions of the near-infrared spectrum with different tissues. The experimental methods are based on average reflection coefficient (ARC) measurements. For healthy persons, ARC is the average of five values provided by the biophotometer. The probe is applied on dry skin with minimum pilosity, in five regions: left-right shank, left-right forearm, and epigastrium. For the pathological tissues, the emitting terminal is moved over the suspected area, controlling the reflection coefficient level, until a minimum value occurs, recorded as ARC-Pathological. Then, the probe is moved to the symmetrical healthy region of the body to read the complementary coefficient from intact tissue, ARC-Intact, from the same patient. The experimental results show an ARC range between 67 and 59 mW for intact tissues and a lower range, 58-42 mW, for pathological tissues. The method is efficient only in those pathological processes accompanied by variable skin depigmentation, water retention, inflammation, thrombosis, or swelling. Frequently, the ARC ranges overlap for some diseases. This induces uncertain diagnosis. Therefore, a statistical algorithm is adopted for a differential diagnosis. Laser biophotometry provides a quantitative biometric parameter, ARC, suitable for fast diagnosis in internal and emergency medicine. These laser biophotometry measurements are representative of the Romanian clinical trials.

  9. Development of a fuel cell plug-in hybrid electric vehicle and vehicle simulator for energy management assessment

    NASA Astrophysics Data System (ADS)

    Meintz, Andrew Lee

    This dissertation offers a description of the development of a fuel cell plug-in hybrid electric vehicle focusing on the propulsion architecture selection, propulsion system control, and high-level energy management. Two energy management techniques have been developed and implemented for real-time control of the vehicle. The first method is a heuristic method that relies on a short-term moving average of the vehicle power requirements. The second method utilizes an affine function of the short-term and long-term moving average vehicle power requirements. The development process of these methods has required the creation of a vehicle simulator capable of estimating the effect of changes to the energy management control techniques on the overall vehicle energy efficiency. Furthermore, the simulator has allowed for the refinement of the energy management methods and for the stability of the method to be analyzed prior to on-road testing. This simulator has been verified through on-road testing of a constructed prototype vehicle under both highway and city driving schedules for each energy management method. The results of the finalized vehicle control strategies are compared with the simulator predictions and an assessment of the effectiveness of both strategies is discussed. The methods have been evaluated for energy consumption in the form of both hydrogen fuel and stored electricity from grid charging.
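    A minimal sketch of the two rule types described above, assuming the vehicle power demand is sampled at a fixed rate; the window lengths and the affine coefficients A, B, C are illustrative placeholders, not the tuned values from the dissertation.

```python
from collections import deque

class MovingAverage:
    """Fixed-length moving average of a sampled signal."""
    def __init__(self, length):
        self.buf = deque(maxlen=length)

    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)

# Hypothetical window lengths (samples) and affine coefficients; the
# dissertation's tuned values are not given in the abstract.
short_ma = MovingAverage(10)
long_ma = MovingAverage(300)
A, B, C = 0.6, 0.3, 1.0   # illustrative affine weights and offset (kW)

def fuel_cell_setpoint(vehicle_power_kw):
    """Affine energy-management rule: fuel-cell power request as a
    function of short- and long-term moving averages of demand."""
    p_short = short_ma.update(vehicle_power_kw)
    p_long = long_ma.update(vehicle_power_kw)
    request = A * p_short + B * p_long + C
    return max(0.0, request)   # the battery covers the remainder of demand

# Example: constant 20 kW demand; the request converges toward A*20 + B*20 + C.
for _ in range(400):
    p_fc = fuel_cell_setpoint(20.0)
print(f"steady-state fuel-cell request: {p_fc:.1f} kW")
```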

  10. Granger causality for state-space models

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Seth, Anil K.

    2015-04-01

    Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
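    For contrast with the state-space estimator advocated in this record, the sketch below implements the conventional AR-based, time-domain Granger causality (log ratio of restricted to full residual variances), which is the estimator whose sensitivity to moving average components motivates the paper. The lag order and toy data are arbitrary; the SS/Riccati computation itself is not reproduced here.

```python
import numpy as np

def granger_ar(x, y, p=5):
    """Time-domain Granger causality from y to x via AR modeling:
    GC = ln(variance of restricted residuals / variance of full residuals).
    This is the conventional AR estimator whose bias under MA components
    motivates the state-space approach described in the record."""
    n = len(x)
    # Lagged design matrices: restricted uses x's past only, full adds y's past.
    X_r = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    X_f = np.column_stack([X_r] + [y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    res_r = target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]
    res_f = target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# Toy example: y drives x with a one-sample delay, but not vice versa.
rng = np.random.default_rng(1)
y = rng.normal(size=2000)
x = 0.5 * np.roll(y, 1) + rng.normal(scale=0.5, size=2000)
print(f"GC(y -> x) = {granger_ar(x, y):.3f}")
print(f"GC(x -> y) = {granger_ar(y, x):.3f}")
```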

  11. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    PubMed Central

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  12. Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach

    NASA Astrophysics Data System (ADS)

    Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa

    2017-03-01

    Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric systems operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA-GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA, and in the third stage, the non-linearity of the residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years' worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. The results indicate that the SARIMA-GEP model (R² = 78.8, VAF = 78.8, RMSE = 0.89, MAPE = 43.4, CRM = 0.053) outperforms SARIMA and GEP, and that SARIMA-ANN (R² = 68.3, VAF = 66.4, RMSE = 1.12, MAPE = 56.6, CRM = 0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over the SARIMA-ANN model.
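    The four-step hybrid can be sketched as follows, with statsmodels' SARIMAX for the linear stage and a small neural network standing in for the gene expression programming stage, since GEP is not available in common libraries. The model orders and the synthetic inflow series are illustrative assumptions, not those identified in the paper.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor  # stand-in for the GEP stage

# Synthetic monthly inflow with strong seasonality (illustrative only).
rng = np.random.default_rng(2)
n = 360
month = np.arange(n)
inflow = 50 + 30 * np.sin(2 * np.pi * month / 12) + rng.gamma(2.0, 3.0, n)

# Steps 1-2: linear seasonal model; the (p,d,q)(P,D,Q,12) orders are
# illustrative, not the orders identified in the study.
sarima = SARIMAX(inflow, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
residuals = inflow - sarima.fittedvalues

# Steps 3-4: model the residual series with a nonlinear learner using
# lagged residuals as inputs (GEP replaced here by a small MLP).
lags = 3
X = np.column_stack([residuals[lags - k - 1:n - k - 1] for k in range(lags)])
y = residuals[lags:]
nonlinear = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)

hybrid_fit = sarima.fittedvalues[lags:] + nonlinear.predict(X)
rmse = np.sqrt(np.mean((inflow[lags:] - hybrid_fit) ** 2))
print(f"in-sample RMSE of the hybrid fit: {rmse:.2f}")
```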

  13. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  14. Are Math Grades Cyclical?

    ERIC Educational Resources Information Center

    Adams, Gerald J.; Dial, Micah

    1998-01-01

    The cyclical nature of mathematics grades was studied for a cohort of elementary school students from a large metropolitan school district in Texas over six years (average cohort size of 8495). The study used an autoregressive integrated moving average (ARIMA) model. Results indicate that grades do exhibit a significant cyclical pattern. (SLD)

  15. Comparing methods for modelling spreading cell fronts.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E

    2014-07-21

    Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations

    NASA Technical Reports Server (NTRS)

    Roussel-Dupre, D.; Shine, R. A.

    1982-01-01

    Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted with a magnitude varying from 6-20 km/s, and with a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun center and limb profiles. The observations of 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission in regions of downward-moving material, and since there is little evidence for corresponding upward-moving material in these lines.

  17. Robust human detection, tracking, and recognition in crowded urban areas

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach for human detection, the color features are obtained by taking the difference of the R, G, B spectra and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) Track candidate selection by color and intensity feature matching; 2) Three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) Adaptive track gate size selection for reducing false tracking probability; and 4) Forward position prediction based on previous moving speed and direction for continuing tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process. This process can improve target resolution by 3-5 times and can simultaneously process many targets that are tracked. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and to continue tracking the same person from the 2nd camera even though the person moved out of the Field of View (FOV) of the 1st camera, with `Tracking Relay'. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or other GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked human for pin-point targeting and for a large-area total human motion activity top-view. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state-of-the-art.

  18. A real time ECG signal processing application for arrhythmia detection on portable devices

    NASA Astrophysics Data System (ADS)

    Georganis, A.; Doulgeraki, N.; Asvestas, P.

    2017-11-01

    Arrhythmia describes disorders of the normal heart rate, which, depending on the case, can even be fatal for a patient with a severe history of heart disease. The purpose of this work is to develop an application for heart signal visualization, processing and analysis on Android portable devices, e.g., mobile phones, tablets, etc. The application is able to retrieve the signal initially from a file, and at a later stage this signal is processed and analysed within the device so that it can be classified according to the features of the arrhythmia. In the processing and analysis stage, different algorithms are included, among them the moving average and Pan-Tompkins algorithms, as well as wavelets, in order to extract features and characteristics. At the final stage, testing is performed by running our application on real-time records, using the TCP network protocol to communicate between the mobile device and a simulated signal source. The classification of the processed ECG beats is performed by neural networks.
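    A minimal sketch of the moving average smoothing and a Pan-Tompkins-style QRS emphasis stage on a synthetic trace; the window lengths (scaled to the sampling rate) and the crude thresholding are illustrative choices, not the application's actual parameters.

```python
import numpy as np

def moving_average(x, window):
    """Simple moving average used as a low-pass smoothing stage."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def pan_tompkins_like(ecg, fs):
    """Pan-Tompkins-style QRS emphasis: smooth, differentiate, square,
    then integrate over a ~150 ms window. Window lengths scale with the
    sampling rate fs (Hz)."""
    smoothed = moving_average(ecg, max(1, int(0.02 * fs)))
    deriv = np.diff(smoothed, prepend=smoothed[0])
    squared = deriv ** 2
    integrated = moving_average(squared, max(1, int(0.150 * fs)))
    return integrated

# Synthetic ECG-like signal: one sharp "QRS" spike per second plus noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
ecg = 0.002 * rng.normal(size=t.size)
ecg[fs // 2 :: fs] += 1.0

feature = pan_tompkins_like(ecg, fs)
threshold = 0.6 * feature.max()
above = feature > threshold
beats = np.sum(above[1:] & ~above[:-1])   # count upward threshold crossings
print(f"detected beats: {beats}")
```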

  19. Detection of ɛ-ergodicity breaking in experimental data—A study of the dynamical functional sensibility

    NASA Astrophysics Data System (ADS)

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-01

    The ergodicity breaking phenomenon has already been in the area of interest of many scientists, who tried to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming the ɛ-ergodicity based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing proper ɛ for ɛ-ergodicity testing is discussed with respect to especially the variation of the innovation process and the data sample length, with a presentation on two real-life examples.

  20. Detection of ε-ergodicity breaking in experimental data-A study of the dynamical functional sensibility.

    PubMed

    Loch-Olszewska, Hanna; Szwabiński, Janusz

    2018-05-28

    The ergodicity breaking phenomenon has already been in the area of interest of many scientists, who tried to uncover its biological and chemical origins. Unfortunately, testing ergodicity in real-life data can be challenging, as sample paths are often too short for approximating their asymptotic behaviour. In this paper, the authors analyze the minimal lengths of empirical trajectories needed for claiming the ε-ergodicity based on two commonly used variants of an autoregressive fractionally integrated moving average model. The dependence of the dynamical functional on the parameters of the process is studied. The problem of choosing proper ε for ε-ergodicity testing is discussed with respect to especially the variation of the innovation process and the data sample length, with a presentation on two real-life examples.

  1. Statistical process control based chart for information systems security

    NASA Astrophysics Data System (ADS)

    Khan, Mansoor S.; Cui, Lirong

    2015-07-01

    Intrusion detection systems have a highly significant role in securing computer networks and information systems. To assure the reliability and quality of computer networks and information systems, it is highly desirable to develop techniques that detect intrusions into information systems. We put forward the concept of statistical process control (SPC) for intrusions into computer networks and information systems. In this article we propose an exponentially weighted moving average (EWMA) type quality monitoring scheme. Our proposed scheme has only one parameter, which differentiates it from past versions. We construct the control limits for the proposed scheme and investigate their effectiveness. We provide an industrial example for the sake of clarity for practitioners. We give a comparison of the proposed scheme with existing EWMA schemes and the p chart; finally, we provide some recommendations for future work.
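    A textbook EWMA control chart, sketched below for a generic per-interval intrusion metric; the smoothing constant, limit width, and in-control estimation window are standard illustrative choices rather than the authors' proposed settings.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Exponentially weighted moving average control chart.

    x   : observations (e.g., a per-interval intrusion-related metric)
    lam : smoothing constant in (0, 1]
    L   : control-limit width in asymptotic standard deviations
    Returns the EWMA statistic, the control limits, and alarm indices.
    """
    mu, sigma = np.mean(x[:50]), np.std(x[:50])   # in-control estimates
    z = np.empty(len(x))
    z[0] = mu
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    width = L * sigma * np.sqrt(lam / (2 - lam))  # asymptotic EWMA std dev
    ucl, lcl = mu + width, mu - width
    alarms = np.where((z > ucl) | (z < lcl))[0]
    return z, (lcl, ucl), alarms

# Toy example: a mean shift (e.g., anomalous connection counts) at t = 150.
rng = np.random.default_rng(4)
metric = rng.normal(10, 2, 200)
metric[150:] += 3
_, limits, alarms = ewma_chart(metric)
print(f"control limits: {limits[0]:.2f} .. {limits[1]:.2f}")
print(f"first alarm at index: {alarms[0] if alarms.size else None}")
```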

  2. A New Trend-Following Indicator: Using SSA to Design Trading Rules

    NASA Astrophysics Data System (ADS)

    Leles, Michel Carlo Rodrigues; Mozelli, Leonardo Amaral; Guimarães, Homero Nogueira

    Singular Spectrum Analysis (SSA) is a non-parametric approach that can be used to decompose a time-series into trends, oscillations and noise. Trend-following strategies rely on the principle that financial markets move in trends for an extended period of time. Moving Averages (MAs) are the standard indicator used to design such strategies. In this study, SSA is used as an alternative method to enhance trend resolution in comparison with the traditional MA. New trading rules using SSA as the indicator are proposed. This paper shows that for the Dow Jones Industrial Average (DJIA) and Shanghai Securities Composite Index (SSCI) time-series, the SSA trading rules provided, in general, better results in comparison to MA trading rules.
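    Basic SSA trend extraction (embedding, SVD, reconstruction from the leading components by anti-diagonal averaging) can be sketched as below, followed by a naive trend-following rule; the window length, number of components, and the long/flat rule are illustrative assumptions, not the trading rules proposed in the paper.

```python
import numpy as np

def ssa_trend(series, window=30, n_components=2):
    """Extract a trend with basic Singular Spectrum Analysis:
    embed the series into a trajectory matrix, take the SVD, keep the
    leading components, and reconstruct by anti-diagonal averaging."""
    n = len(series)
    k = n - window + 1
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_trend = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Anti-diagonal (Hankel) averaging turns the matrix back into a series.
    trend = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        trend[j:j + window] += X_trend[:, j]
        counts[j:j + window] += 1
    return trend / counts

# Toy "price" series and a naive trend-following rule: long when the
# price is above its SSA trend, flat otherwise (illustrative only).
rng = np.random.default_rng(5)
price = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100
trend = ssa_trend(price)
position = (price > trend).astype(int)
print(f"fraction of days long: {position.mean():.2f}")
```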

  3. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    PubMed

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we have demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, which means that the linear PCA method cannot adequately handle the nonlinearity. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. The KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR using the exponentially weighted moving average (EWMA). The developed detection method, which is called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and utilized in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages brought forward by the proposed EWMA-GLRT fault detection chart with the KPCA model. Thus, it is used to enhance fault detection of the Cad System in the E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
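    As a simplified stand-in for the KPCA-based EWMA-GLRT chart, the sketch below monitors an EWMA of KPCA reconstruction errors against a fixed threshold; the kernel settings, the residual definition, and the threshold rule are assumptions for illustration and do not reproduce the GLRT statistic.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Training data: "normal" operation of a hypothetical 3-variable system.
rng = np.random.default_rng(6)
latent = rng.normal(size=(300, 1))
train = np.hstack([np.sin(latent), latent ** 2, latent]) + 0.05 * rng.normal(size=(300, 3))

# Fit KPCA with an inverse transform so reconstruction errors can serve
# as residuals (kernel and component count are illustrative choices).
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True).fit(train)

def residual(x):
    """Squared reconstruction error of samples under the KPCA model."""
    recon = kpca.inverse_transform(kpca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)

# EWMA of residuals as a simplified monitoring statistic (not the GLRT).
lam = 0.3
r_train = residual(train)
threshold = r_train.mean() + 3 * r_train.std()

test = np.vstack([train[:50], train[:50] + np.array([0.5, 0.0, 0.0])])  # fault in x1
z, alarms = r_train.mean(), []
for i, r in enumerate(residual(test)):
    z = lam * r + (1 - lam) * z
    if z > threshold:
        alarms.append(i)
print(f"first alarm index: {alarms[0] if alarms else None}")
```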

  4. Digital image processing of Seabeam bathymetric data for structural studies of seamounts near the East Pacific Rise

    NASA Technical Reports Server (NTRS)

    Edwards, M. H.; Arvidson, R. E.; Guinness, E. A.

    1984-01-01

    The problem of displaying information on the seafloor morphology is attacked by utilizing digital image processing techniques to generate images for Seabeam data covering three young seamounts on the eastern flank of the East Pacific Rise. Errors in locations between crossing tracks are corrected by interactively identifying features and translating tracks relative to a control track. Spatial interpolation techniques using moving averages are used to interpolate between gridded depth values to produce images in shaded relief and color-coded forms. The digitally processed images clarify the structural control on seamount growth and clearly show the lateral extent of volcanic materials, including the distribution and fault control of subsidiary volcanic constructional features. The image presentations also clearly show artifacts related to both residual navigational errors and to depth or location differences that depend on ship heading relative to slope orientation in regions with steep slopes.
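    The moving-average gridding step can be sketched as a simple distance-window average of scattered soundings onto a regular grid; the search radius, grid spacing, and synthetic seamount below are illustrative and unrelated to the actual Seabeam processing parameters.

```python
import numpy as np

def moving_average_grid(x, y, depth, grid_x, grid_y, radius):
    """Interpolate scattered soundings onto a regular grid by averaging
    all points within `radius` of each grid node (simple moving-average
    gridding; nodes with no neighbors are left as NaN)."""
    grid = np.full((grid_y.size, grid_x.size), np.nan)
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            d2 = (x - gx) ** 2 + (y - gy) ** 2
            near = d2 <= radius ** 2
            if near.any():
                grid[i, j] = depth[near].mean()
    return grid

# Toy example: random soundings over a seamount-like Gaussian bump.
rng = np.random.default_rng(8)
x = rng.uniform(0, 10, 2000)
y = rng.uniform(0, 10, 2000)
depth = -3000 + 800 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 4)
gx = np.linspace(0, 10, 50)
gy = np.linspace(0, 10, 50)
dem = moving_average_grid(x, y, depth, gx, gy, radius=0.5)
print(f"shallowest gridded depth: {np.nanmax(dem):.0f} m")
```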

  5. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.

  6. Family Structure, Residential Mobility, and Environmental Inequality

    PubMed Central

    Downey, Liam; Crowder, Kyle; Kemp, Robert J.

    2016-01-01

    This study combines micro-level data on families with children from the Panel Study of Income Dynamics with neighborhood-level industrial hazard data from the Environmental Protection Agency and neighborhood-level U.S. census data to examine both the association between family structure and residential proximity to neighborhood pollution and the micro-level, residential mobility processes that contribute to differential pollution proximity across family types. Results indicate the existence of significant family structure differences in household proximity to industrial pollution in U.S. metropolitan areas between 1990 and 1999, with single-mother and single-father families experiencing neighborhood pollution levels that are on average 46% and 26% greater, respectively, than those experienced by two-parent families. Moreover, the pollution gap between single-mother and two-parent families persists with controls for household and neighborhood socioeconomic, sociodemographic, and race/ethnic characteristics. Examination of underlying migration patterns reveals that single-mother, single-father, and two-parent families are equally likely to move in response to pollution. However, mobile single-parent families move into neighborhoods with significantly higher pollution levels than do mobile two-parent families. Thus, family structure differences in pollution proximity are maintained more by these destination neighborhood differences than by family structure variations in the likelihood of moving out of polluted neighborhoods. PMID:28348440

  7. Distractor interference during smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R; Kerzel, Dirk

    2006-10-01

    When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show that at 140 ms after distractor onset, horizontal eye velocity is decreased by about 25%. Vertical eye velocity increases or decreases by 1 degree/s in the direction opposite from the distractor. This deviation varies in size with distractor direction, velocity, and contrast. The effect was present during the initiation and steady-state tracking phase of pursuit but only when the observer had prior information about target motion. Neither vector averaging nor winner-take-all models could predict the response to a moving to-be-ignored distractor during steady-state tracking of a predefined target. The contributions of perceptual mislocalization and spatial attention to the vertical deviation in pursuit are discussed. Copyright 2006 APA.

  8. Changes in healthcare use among individuals who move into public housing: a population-based investigation.

    PubMed

    Hinds, Aynslie M; Bechtel, Brian; Distasio, Jino; Roos, Leslie L; Lix, Lisa M

    2018-06-05

    Residence in public housing, a subsidized and managed government program, may affect health and healthcare utilization. We compared healthcare use in the year before individuals moved into public housing with usage during their first year of tenancy. We also described trends in use. We used linked population-based administrative data housed in the Population Research Data Repository at the Manitoba Centre for Health Policy. The cohort consisted of individuals who moved into public housing in 2009 and 2010. We counted the number of hospitalizations, general practitioner (GP) visits, specialist visits, emergency department visits, and prescription drugs dispensed in the twelve 30-day intervals (i.e., months) immediately preceding and following the public housing move-in date. Generalized linear models with generalized estimating equations tested for a period (pre/post-move-in) by month interaction. Odds ratios (ORs), incident rate ratios (IRRs), and means are reported along with 95% confidence intervals (95% CIs). The cohort included 1942 individuals; the majority were female (73.4%), lived in low income areas, and received government assistance (68.1%). On average, the cohort had more than four health conditions. Over the 24 30-day intervals, the percentage of the cohort that visited a GP, specialist, and an emergency department ranged between 37.0% and 43.0%, 10.0% and 14.0%, and 6.0% and 10.0%, respectively, while the percentage of the cohort hospitalized ranged from 1.0% to 5.0%. Generally, these percentages were highest in the few months before the move-in date and lowest in the few months after the move-in date. The period by month interaction was statistically significant for hospitalizations, GP visits, and prescription drug use. The average change in the odds, rate, or mean was smaller in the post-move-in period than in the pre-move-in period. Use of some healthcare services declined after people moved into public housing; however, the decrease was only observed in the first few months and utilization rebounded. Knowledge of healthcare trends before individuals move in is informative for ensuring that appropriate supports are available to new public housing residents. Further study is needed to determine if decreased healthcare utilization following a move is attributable to decreased access.

  9. The change of sleeping and lying posture of Japanese black cows after moving into new environment.

    PubMed

    Fukasawa, Michiru; Komatsu, Tokushi; Higashiyama, Yumi

    2018-04-25

    Environmental change is one of the stressful events in livestock production. A change in environment disturbs cow behavior, and cows need several days to reach a stable behavioral pattern; sleeping posture (SP) and lying posture (LP) in particular have been used as indicators of how relaxed and well-acclimated a cow is to its environment. The aim of this study was to examine how long Japanese black cows require to stabilize their SP and LP after moving into a new environment. Seven pregnant Japanese black cows were used. Cows were moved into a new tie-stall shed, and sleeping and lying posture were measured 17 times during 35 experimental days. SP and LP were detected by accelerometers fixed on the middle occipital region and the hip-cross, respectively. Daily total time, frequency, and average bout of both SP and LP were calculated. Daily SP time was shortest on day 1 and increased to its highest on day 3. It then decreased until day 9, after which it stabilized at about 65 min/day until the end of the experiment. Daily LP time changed in the same manner as daily SP time. The average SP bout was longest on day 1 and decreased to a stable level by day 7. In contrast, the average LP bout was shortest on day 1 and increased to a stable level by day 7. These results showed that pregnant Japanese black cows needed 1 week to stabilize their SP. However, the change patterns of the average SP and LP bouts differed, even though the change patterns of daily SP and LP time were similar.

  10. Move-by-move dynamics of the advantage in chess matches reveals population-level learning of the game.

    PubMed

    Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N

    2013-01-01

    The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.

  11. Move-by-Move Dynamics of the Advantage in Chess Matches Reveals Population-Level Learning of the Game

    PubMed Central

    Ribeiro, Haroldo V.; Mendes, Renio S.; Lenzi, Ervin K.; del Castillo-Mussot, Marcelo; Amaral, Luís A. N.

    2013-01-01

    The complexity of chess matches has attracted broad interest since its invention. This complexity and the availability of a large number of recorded matches make chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player’s advantage from over seventy thousand high level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player’s advantage and find that it is non-Gaussian, has long-ranged anti-correlations and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments. PMID:23382876

  12. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases such as methane from concentrated animal feeding operations requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method to estimate emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with actual released rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; however, it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  13. A novel algorithm for Bluetooth ECG.

    PubMed

    Pandya, Utpal T; Desai, Uday B

    2012-11-01

    In wireless transmission of ECG, data latency will be significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the joint effect of these wireless transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if they occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is implemented everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for acquisition of the signals. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms visually as well as numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing the noise, and its use can be extended to any parameters where peaks are important for diagnostic purposes.
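    The core idea of smoothing everywhere except around detected peaks, so that QRS amplitudes survive, can be sketched as follows; the robust peak detector and exclusion half-width are simplistic placeholders, not the published PRASMMA algorithm.

```python
import numpy as np

def peak_sparing_moving_average(signal, window=7, exclusion=5, k=4.0):
    """Moving average that leaves samples near large peaks untouched,
    so smoothing does not flatten QRS-like complexes.

    window    : moving-average length (samples)
    exclusion : half-width (samples) of the protected region around peaks
    k         : peak threshold in robust standard deviations
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")
    # Crude peak detection via a median/MAD threshold; the published
    # algorithm uses a more elaborate QRS-aware scheme.
    med = np.median(signal)
    mad = np.median(np.abs(signal - med)) + 1e-12
    peaks = np.where(np.abs(signal - med) > k * 1.4826 * mad)[0]
    protect = np.zeros(signal.size, dtype=bool)
    for p in peaks:
        protect[max(0, p - exclusion):p + exclusion + 1] = True
    return np.where(protect, signal, smoothed)

# Example: noisy baseline with sharp spikes that should survive smoothing.
rng = np.random.default_rng(7)
x = 0.05 * rng.normal(size=1000)
x[::100] += 1.0
y = peak_sparing_moving_average(x)
print(f"peak amplitude before: {x.max():.2f}, after: {y.max():.2f}")
```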

  14. Leg kinematics and muscle activity during treadmill running in the cockroach, Blaberus discoidalis: I. Slow running.

    PubMed

    Watson, J T; Ritzmann, R E

    1998-01-01

    We have combined high-speed video motion analysis of leg movements with electromyogram (EMG) recordings from leg muscles in cockroaches running on a treadmill. The mesothoracic (T2) and metathoracic (T3) legs have different kinematics. While in each leg the coxa-femur (CF) joint moves in unison with the femur-tibia (FT) joint, the relative joint excursions differ between T2 and T3 legs. In T3 legs, the two joints move through approximately the same excursion. In T2 legs, the FT joint moves through a narrower range of angles than the CF joint. In spite of these differences in motion, no differences between the T2 and T3 legs were seen in timing or qualitative patterns of depressor coxa and extensor tibia activity. The average firing frequencies of slow depressor coxa (Ds) and slow extensor tibia (SETi) motor neurons are directly proportional to the average angular velocity of their joints during stance. The average Ds and SETi firing frequency appears to be modulated on a cycle-by-cycle basis to control running speed and orientation. In contrast, while the frequency variations within Ds and SETi bursts were consistent across cycles, the variations within each burst did not parallel variations in the velocity of the relevant joints.

  15. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  16. Extremes of Moving Averages of Stable Processes.

    DTIC Science & Technology

    1976-10-01

    For the continuous-time case, a result on sample path continuity of stable processes is obtained.

  17. Theory of slightly fluctuating ratchets

    NASA Astrophysics Data System (ADS)

    Rozenbaum, V. M.; Shapochkina, I. V.; Lin, S. H.; Trakhtenberg, L. I.

    2017-04-01

    We consider a Brownian particle moving in a slightly fluctuating potential. Using the perturbation theory on small potential fluctuations, we derive a general analytical expression for the average particle velocity valid for both flashing and rocking ratchets with arbitrary, stochastic or deterministic, time dependence of potential energy fluctuations. The result is determined by the Green's function for diffusion in the time-independent part of the potential and by the features of correlations in the fluctuating part of the potential. The generality of the result allows describing complex ratchet systems with competing characteristic times; these systems are exemplified by the model of a Brownian photomotor with relaxation processes of finite duration.

  18. FARMWORKERS, A REPRINT FROM THE 1966 MANPOWER REPORT.

    ERIC Educational Resources Information Center

    Manpower Administration (DOL), Washington, DC.

    Although the average standard of living of farm people has been rising steadily, they continue to face severe problems of underemployment and poverty. The average per capita income of farm residents is less than two-thirds that of the nonfarm population. Millions have moved to cities, leaving stagnating rural communities, and increasing the city…

  19. Analysis of offshore platforms lifting with fixed pile structure type (fixed platform) based on ASD89

    NASA Astrophysics Data System (ADS)

    Sugianto, Agus; Indriani, Andi Marini

    2017-11-01

    The GTS (Gathering Testing Satellite) platform is an offshore construction platform of the fixed pile structure type (fixed platform), functioning to support petroleum exploitation. After the fabrication process, the platform was moved onto barges and then shipped to the installation site. The moving process is generally done by pulling or pushing, based on the construction design determined during planning. But when lifting equipment/cranes are available in the work area, the moving process can be done by lifting, so that the moving activity can be completed more quickly. This paper analyzes moving the GTS platform in a different way from what is generally done for this platform type, namely by lifting; the question is whether structural reinforcement is required so that the construction can be moved by lifting. We analyze and check the working stresses in the structure that occur due to the lifting process against the AISC code standard, using the SAP2000 structural analysis program. The analysis results showed that the existing condition cannot be moved by lifting because the stress ratio is above the maximum allowable value of 0.950 (AISC-ASD89). Overstress occurs in members 295 and 324, with stress ratio values of 0.97 and 0.95, so structural reinforcement is required. Box plates were applied to both members, producing stress ratios of 0.78 for member 295 and 0.77 for member 324. These results indicate that, with this structural reinforcement, the construction qualifies for being moved by lifting.

  20. Severe Weather Guide - Mediterranean Ports. 7. Marseille

    DTIC Science & Technology

    1988-03-01

    the afternoon. Upper-level westerlies and the associated storm track move northward during summer, so extratropical cyclones and associated... autumn as the extratropical storm track moves southward. Precipitation amount is the highest of the year, with an average of 3 inches (76 mm) for the... Subject terms: storm haven, Mediterranean meteorology, Marseille port.

  1. Polymer Coatings Degradation Properties

    DTIC Science & Technology

    1985-02-01

    The Box-Jenkins approach first evaluates the partial autocorrelation function and determines the order of the moving average memory function... Tables 15 and 16 show the results of the partial autocorrelation plots. Second-order moving averages with the appropriate lags were... coated films. Kaempf, Guenter; Papenroth, Wolfgang; Kunststoffe, 1982, Volume 72, Number 7, Pages 424-429: Parameters influencing the accelerated...

  2. Simulation of Unsteady Flows Using an Unstructured Navier-Stokes Solver on Moving and Stationary Grids

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.

    2005-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effect of grid and time-step refinement are examined.

  3. Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.

    PubMed

    Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A

    2016-01-26

    Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.

  4. Associations between Changes in City and Address Specific Temperature and QT Interval - The VA Normative Aging Study

    PubMed Central

    Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel

    2014-01-01

    Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated if daily mean temperature and standard deviation of temperature was associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000–2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature as measured from the local monitor, and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 and up to 28 days; a 1.9°C interquartile range increase in 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between 24-hr standard deviation of temperature and QTc were stronger in colder months, and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. These findings may offer insight of an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150

  5. False memory for context activates the parahippocampal cortex.

    PubMed

    Karanian, Jessica M; Slotnick, Scott D

    2014-01-01

    Previous studies have reported greater activity in the parahippocampal cortex during true memory than false memory, which has been interpreted as reflecting greater sensory processing during true memory. However, in these studies, sensory detail and contextual information were confounded. In the present fMRI study, we employed a novel paradigm to dissociate these factors. During encoding, abstract shapes were presented in one of two contexts (i.e., moving or stationary). During retrieval, participants classified shapes as previously "moving" or "stationary." Critically, contextual processing was relatively greater during false memory ("moving" responses to stationary items), while sensory processing was relatively greater during true memory ("moving" responses to moving items). Within the medial temporal lobe, false memory versus true memory produced greater activity in the parahippocampal cortex, whereas true memory versus false memory produced greater activity in the hippocampus. The present results indicate that the parahippocampal cortex mediates contextual processing rather than sensory processing.

  6. Comparative Analysis on Nonlinear Models for Ron Gasoline Blending Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Aguilera, R. Carreño; Yu, Wen; Rodríguez, J. C. Tovar; Mosqueda, M. Elena Acevedo; Ortiz, M. Patiño; Juarez, J. J. Medel; Bautista, D. Pacheco

    The blending process is inherently nonlinear and therefore difficult to model, since it may change significantly depending on the components and the process variables of each refinery. Different components can be blended depending on the existing stock, and the chemical characteristics of each component change dynamically; they are blended until the specification required by the customer is reached for the different properties of interest. One of the most relevant properties is the octane number, which is difficult to control in line (without component storage). Since each refinery process is quite different, a generic gasoline blending model is not useful when in-line blending is to be carried out in a specific process. A mathematical gasoline blending model is presented in this paper for a given process, described in state space as a basic gasoline blending process description. The objective is to adjust the parameters so that the blending model can describe a signal along its trajectory, using both the extreme learning machine neural network method and the nonlinear autoregressive moving average (NARMA) neural network method, so that a comparative study can be developed.

  7. Multiscale analysis of information dynamics for linear multivariate processes.

    PubMed

    Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele

    2016-08-01

    In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
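
    The rescaling step described above can be sketched in a few lines: simulate a bivariate VAR(1) process and average it over non-overlapping windows of length tau before downsampling. The averaging acts as a moving-average filter, which is why the rescaled process becomes VARMA. The coefficients and scale below are arbitrary, and the state-space step is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.5, 0.2],
              [0.0, 0.4]])              # stable VAR(1) coefficient matrix (illustrative)
n, tau = 10000, 4                       # series length and time scale

x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

# Coarse-grain: average over non-overlapping windows of length tau, keep one point per window.
m = (n // tau) * tau
x_scaled = x[:m].reshape(-1, tau, 2).mean(axis=1)
print(x.shape, "->", x_scaled.shape)
```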

  8. Feasibility Study of the Geotextile Waste Filtration Unit.

    DTIC Science & Technology

    2000-02-10

    Treatment Module 3-32 Figure 3-20. THE SCHEMATIC OF THE MOVING BED BIOFILM REACTOR (MBBR) Figure 4-1. The Original Distributed Concept for WFUs...Moving Bed Biofilm Reactor (MBBR) process appears to be one of the most feasible processes available to meet Force Provider liquid waste stream...Moving Bed Biofilm Reactor (MBBR) process was then examined. In this system, both activated sludge and fixed-film processes occur in a bioreactor

  9. A life cycle assessment of environmental performances of two combustion- and gasification-based waste-to-energy technologies.

    PubMed

    Arena, Umberto; Ardolino, Filomena; Di Gregorio, Fabrizio

    2015-07-01

    An attributional life cycle analysis (LCA) was developed to compare the environmental performances of two waste-to-energy (WtE) units, which utilize the predominant technologies among those available for combustion and gasification processes: a moving grate combustor and a vertical shaft gasifier coupled with direct melting. The two units were assumed to be fed with the same unsorted residual municipal waste, having a composition estimated as a European average. Data from several plants in operation were processed by means of mass and energy balances, and on the basis of the flows and stocks of materials and elements inside and throughout the two units, as provided by a specific substance flow analysis. The potential life cycle environmental impacts related to the operations of the two WtE units were estimated by means of the Impact 2002+ methodology. They indicate that both the technologies have sustainable environmental performances, but those of the moving grate combustion unit are better for most of the selected impact categories. The analysis of the contributions from all the stages of each specific technology suggests where improvements in technological solutions and management criteria should be focused to obtain further and remarkable environmental improvements. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Deep Downhole Seismic Testing at the Waste Treatment Plant Site, Hanford, WA. Volume IV S-Wave Measurements in Borehole C4993 Seismic Records, Wave-Arrival Identifications and Interpreted S-Wave Velocity Profile.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stokoe, Kenneth H.; Li, Song Cheng; Cox, Brady R.

    2007-06-06

    In this volume (IV), all S-wave measurements are presented that were performed in Borehole C4993 at the Waste Treatment Plant (WTP) with T-Rex as the seismic source and the Lawrence Berkeley National Laboratory (LBNL) 3-D wireline geophone as the at-depth borehole receiver. S-wave measurements were performed over the depth range of 370 to 1300 ft, typically in 10-ft intervals. However, in some interbeds, 5-ft depth intervals were used, while below about 1200 ft, depth intervals of 20 ft were used. Shear (S) waves were generated by moving the base plate of T-Rex for a given number of cycles at a fixed frequency as discussed in Section 2. This process was repeated so that signal averaging in the time domain was performed using 3 to about 15 averages, with 5 averages typically used. In addition, a second average shear wave record was recorded by reversing the polarity of the motion of the T-Rex base plate. In this sense, all the signals recorded in the field were averaged signals. In all cases, the base plate was moving perpendicular to a radial line between the base plate and the borehole which is in and out of the plane of the figure shown in Figure 1.1. The definition of “in-line”, “cross-line”, “forward”, and “reversed” directions in items 2 and 3 of Section 2 was based on the moving direction of the base plate. In addition to the LBNL 3-D geophone, called the lower receiver herein, a 3-D geophone from Redpath Geophysics was fixed at a depth of 22 ft in Borehole C4993, and a 3-D geophone from the University of Texas (UT) was embedded near the borehole at about 1.5 ft below the ground surface. The Redpath geophone and the UT geophone were properly aligned so that one of the horizontal components in each geophone was aligned with the direction of horizontal shaking of the T-Rex base plate. This volume is organized into 12 sections as follows. Section 1: Introduction, Section 2: Explanation of Terminology, Section 3: Vs Profile at Borehole C4993, Sections 4 to 6: Unfiltered S-wave records of lower horizontal receiver, reaction mass, and reference receiver, respectively, Sections 7 to 9: Filtered S-wave signals of lower horizontal receiver, reaction mass and reference receiver, respectively, Section 10: Expanded and filtered S-wave signals of lower horizontal receiver, and Sections 11 and 12: Waterfall plots of unfiltered and filtered lower horizontal receiver signals, respectively.

  11. Elastic facial movement influences part-based but not holistic processing

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Ge, Liezhong; Lee, Kang

    2013-01-01

    Face processing has been studied for decades. However, most of the empirical investigations have been conducted using static face images as stimuli. Little is known about whether static face processing findings can be generalized to real world contexts, in which faces are constantly moving. The present study investigates the nature of face processing (holistic vs. part-based) in elastic moving faces. Specifically, we focus on whether elastic moving faces, as compared to static ones, can facilitate holistic or part-based face processing. Using the composite paradigm, participants were asked to remember either an elastic moving face (i.e., a face that blinks and chews) or a static face, and then tested with a static composite face. The composite effect was (1) significantly smaller in the dynamic condition than in the static condition, (2) consistently found with different face encoding times (Experiments 1–3), and (3) present for the recognition of both upper and lower face parts (Experiment 4). These results suggest that elastic facial motion facilitates part-based processing, rather than holistic processing. Thus, while previous work with static faces has emphasized an important role for holistic processing, the current work highlights an important role for featural processing with moving faces. PMID:23398253

  12. Hybrid activated sludge/biofilm process for the treatment of municipal wastewater in a cold climate region: a case study.

    PubMed

    Di Trapani, Daniele; Christensso, Magnus; Odegaard, Hallvard

    2011-01-01

    A hybrid activated sludge/biofilm process was investigated for wastewater treatment in a cold climate region. This process, which contains both suspended biomass and biofilm and is usually referred to as an IFAS process, is created by introducing plastic elements as biofilm carrier media into a conventional activated sludge reactor. In the present study, a hybrid process composed of an activated sludge reactor and a moving bed biofilm reactor was used. The aim of this paper has been to investigate the performance of a hybrid process, and in particular to gain insight into the nitrification process, when operated at relatively low MLSS SRT and low temperatures. The results of a pilot-scale study carried out at the Department of Hydraulic and Environmental Engineering at the Norwegian University of Science and Technology in Trondheim are presented. The experimental campaign was divided into two periods. The pilot plant was first operated with a constant HRT of 4.5 hours, while in the second period the influent flow was increased so that the HRT was 3.5 hours. The average temperature was near 11.5°C over the whole experimental campaign. The average mixed liquor SRT was 5.7 days. Batch tests on both carriers and suspended biomass were performed in order to evaluate the nitrification rates of the two different biomasses. The results demonstrated that this kind of reactor can efficiently be used for upgrading a conventional activated sludge plant to achieve year-round nitrification, even at low temperatures and without the need for additional volumes.

  13. Tracking Movements of Individual Anoplophora glabripennis (Coleoptera: Cerambycidae) Adults: Application of Harmonic Radar

    Treesearch

    David W. Williams; Guohong Li; Ruitong Gao

    2004-01-01

    Movements of 55 Anoplophora glabripennis (Motschulsky) adults were monitored on 200 willow trees, Salix babylonica L., at a site appx. 80 km southeast of Beijing, China, for 9-14 d in an individual mark-recapture study using harmonic radar. The average movement distance was appx. 14 m, with many beetles not moving at all and others moving >90 m. The rate of movement...

  14. Beyond Horse Race Comparisons of National Performance Averages: Math Performance Variation within and between Classrooms in 38 Countries

    ERIC Educational Resources Information Center

    Huang, Min-Hsiung

    2009-01-01

    Reports of international studies of student achievement often receive public attention worldwide. However, this attention overly focuses on the national rankings of average student performance. To move beyond the simplistic comparison of national mean scores, this study investigates (a) country differences in the measures of variability as well as…

  15. Drainage evolution in the debris avalanche deposits near Mount Saint Helens, Washington

    NASA Technical Reports Server (NTRS)

    Beach, G. L.; Dzurisin, D.

    1984-01-01

    The 18 May 1980 eruption of Mount St. Helens was initiated by a massive rockslide-debris avalanche which completely transformed the upper 25 km of the North Fork Toutle River valley. The debris was generated by one of the largest gravitational mass movements ever recorded on Earth. Moving at an average velocity of 35 m/s, the debris avalanche buried approximately 60 sq km of terrain to an average depth of 45 m with unconsolidated, poorly sorted volcaniclastic material, all within a period of 10 minutes. Where exposed and unaltered by subsequent lahars and pyroclastic flows, the new terrain surface was characterized predominantly by hummocks, closed depressions, and the absence of an identifiable channel network. Following emplacement of the debris avalanche, a complex interrelationship of fluvial and mass wasting processes immediately began operating to return the impacted area to an equilibrium status through the removal of material (potential energy) and re-establishment of graded conditions. In an attempt to chronicle the morphologic evolution of this unique environmental setting, a systematic series of interpretative maps of several selected areas was produced. These maps, which document the rate and character of active geomorphic processes, are discussed.

  16. Percentiles of the run-length distribution of the Exponentially Weighted Moving Average (EWMA) median chart

    NASA Astrophysics Data System (ADS)

    Tan, K. L.; Chong, Z. L.; Khoo, M. B. C.; Teoh, W. L.; Teh, S. Y.

    2017-09-01

    Quality control is crucial in a wide variety of fields, as it can help to satisfy customers’ needs and requirements by enhancing and improving the products and services to a superior quality level. The EWMA median chart was proposed as a useful alternative to the EWMA \bar{X} chart because the median-type chart is robust against contamination, outliers or small deviations from the normality assumption compared to the traditional \bar{X}-type chart. To provide a complete understanding of the run-length distribution, the percentiles of the run-length distribution should be investigated rather than depending solely on the average run length (ARL) performance measure. This is because interpretation based on the ARL alone can be misleading, as the skewness and shape of the run-length distribution change with the process mean shift, varying from almost symmetric when the magnitude of the mean shift is large to highly right-skewed when the process is in-control (IC) or only slightly out-of-control (OOC). Before computing the percentiles of the run-length distribution, optimal parameters of the EWMA median chart will be obtained by minimizing the OOC ARL, while retaining the IC ARL at a desired value.
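
    For readers unfamiliar with the chart mechanics, the sketch below applies the EWMA recursion to subgroup medians; the smoothing constant, control-limit width, simulated shift, and the normal-theory approximation for the standard deviation of the median are all assumptions for illustration, not the optimal settings discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, L = 0.2, 2.7                       # smoothing constant and control-limit width (illustrative)
mu0, sigma0, n_sub = 0.0, 1.0, 5        # in-control mean, spread, and subgroup size

sigma_med = 1.2533 * sigma0 / np.sqrt(n_sub)           # approx. std dev of a normal sample median
half_width = L * sigma_med * np.sqrt(lam / (2 - lam))  # steady-state EWMA limit half-width

z = mu0
for i in range(60):
    mu = mu0 if i < 25 else mu0 + 0.8 * sigma0         # simulated upward shift after subgroup 25
    med = np.median(rng.normal(mu, sigma0, n_sub))     # robust subgroup statistic
    z = lam * med + (1 - lam) * z                      # EWMA recursion
    if abs(z - mu0) > half_width:
        print(f"out-of-control signal at subgroup {i}")
        break
```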

  17. Time-series modeling and prediction of global monthly absolute temperature for environmental decision making

    NASA Astrophysics Data System (ADS)

    Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun

    2013-03-01

    A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
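
    A minimal sketch of the deterministic part of such a decomposition is given below: a polynomial trend plus annual Fourier harmonics fitted by least squares to a synthetic monthly series, leaving a residual that would then be handed to a SARIMA model (not shown). The series and all coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(240)                                    # 20 years of monthly time steps
y = 14 + 0.002 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

X = np.column_stack([
    np.ones_like(t), t, t**2,                                # polynomial trend terms
    np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12),  # annual cycle
    np.sin(4 * np.pi * t / 12), np.cos(4 * np.pi * t / 12),  # first harmonic
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ beta                               # stochastic remainder for SARIMA modelling
print(np.round(beta, 3))
```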

  18. Using lean Six Sigma to improve hospital based outpatient imaging satisfaction.

    PubMed

    McDonald, Angelic P; Kirk, Randy

    2013-01-01

    Within the hospital-based imaging department at Methodist Willowbrook, outpatient, inpatient, and emergency examinations are all performed on the same equipment with the same staff. The clinical urgency of the patient is the deciding factor in who is imaged first and in what order procedures are performed. After an aggressive adoption of Intentional Tools, the imaging department was finally able to move from a two-year mean Press Ganey outpatient satisfaction score of 91.2 and a UHC percentile ranking of 37th to a mean score of 92.1 and a corresponding UHC ranking of the 60th percentile. It was at the 60th percentile ranking that the department flatlined. Using the Six Sigma DMAIC process, opportunities for further improvement were identified. A two-week focused pilot was conducted specifically on areas identified through the Six Sigma process. The department was able to jump to the 88th percentile ranking and a mean score of 93.7. With pay for performance focusing on outpatient satisfaction and a financial incentive for improving and maintaining the highest scores, it was important to know where the imaging department should apply its financial resources to obtain the greatest impact.

  19. Locomotion of microorganisms near a no-slip boundary in a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Yazdi, Shahrzad; Ardekani, Arezoo M.; Borhan, Ali

    2014-10-01

    Locomotion of microorganisms plays a vital role in most of their biological processes. In many of these processes, microorganisms are exposed to complex fluids while swimming in confined domains, such as spermatozoa in mucus of mammalian reproduction tracts or bacteria in extracellular polymeric matrices during biofilm formation. Thus, it is important to understand the kinematics of propulsion in a viscoelastic fluid near a no-slip boundary. We use a squirmer model with a time-reversible body motion to analytically investigate the swimming kinematics in an Oldroyd-B fluid near a wall. Analysis of the time-averaged motion of the swimmer shows that both pullers and pushers in a viscoelastic fluid swim towards the no-slip boundary if they are initially located within a small domain of "attraction" in the vicinity of the wall. In contrast, neutral swimmers always move towards the wall regardless of their initial distance from the wall. Outside the domain of attraction, pullers and pushers are both repelled from the no-slip boundary. Time-averaged locomotion is most pronounced at a Deborah number of unity. We examine the swimming trajectories of different types of swimmers as a function of their initial orientation and distance from the no-slip boundary.

  20. Detection of small earthquakes with dense array data: example from the San Jacinto fault zone, southern California

    NASA Astrophysics Data System (ADS)

    Meng, Haoran; Ben-Zion, Yehuda

    2018-01-01

    We present a technique to detect small earthquakes not included in standard catalogues using data from a dense seismic array. The technique is illustrated with continuous waveforms recorded in a test day by 1108 vertical geophones in a tight array on the San Jacinto fault zone. Waveforms are first stacked without time-shift in nine non-overlapping subarrays to increase the signal-to-noise ratio. The nine envelope functions of the stacked records are then multiplied with each other to suppress signals associated with sources affecting only some of the nine subarrays. Running a short-term moving average/long-term moving average (STA/LTA) detection algorithm on the product leads to 723 triggers in the test day. Using a local P-wave velocity model derived for the surface layer from Betsy gunshot data, 5 s long waveforms of all sensors around each STA/LTA trigger are beamformed for various incident directions. Of the 723 triggers, 220 are found to have localized energy sources and 103 of these are confirmed as earthquakes by verifying their observation at 4 or more stations of the regional seismic network. This demonstrates the general validity of the method and allows processing further the validated events using standard techniques. The number of validated events in the test day is >5 times larger than that in the standard catalogue. Using these events as templates can lead to additional detections of many more earthquakes.
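
    The STA/LTA stage can be sketched as follows for a single synthetic trace; the sampling rate, window lengths, and trigger threshold are placeholders, and the study applies the detector to the product of nine subarray envelopes rather than to a raw trace.

```python
import numpy as np

def sta_lta(x, fs, sta_sec=0.5, lta_sec=10.0):
    """Ratio of short-term to long-term moving averages of signal energy."""
    sta_n, lta_n = int(sta_sec * fs), int(lta_sec * fs)
    e = x.astype(float) ** 2
    sta = np.convolve(e, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(e, np.ones(lta_n) / lta_n, mode="same")
    return sta / np.maximum(lta, 1e-12)

fs = 100.0                                            # assumed sampling rate in Hz
rng = np.random.default_rng(4)
trace = rng.normal(0.0, 1.0, 60 * int(fs))            # one minute of noise
trace[3000:3100] += rng.normal(0.0, 8.0, 100)         # synthetic impulsive arrival

ratio = sta_lta(trace, fs)
triggers = np.flatnonzero(ratio > 6.0)                # threshold of 6 is arbitrary
print(triggers[:5])
```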

  1. Dengue forecasting in São Paulo city with generalized additive models, artificial neural networks and seasonal autoregressive integrated moving average models.

    PubMed

    Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco

    2018-01-01

    Globally, the number of dengue cases has been on the increase since 1990, and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude, and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics to provide timely results for decision makers. However, it should be noted that predictions are made just one month ahead, and this is a limitation that future studies could try to reduce.

  2. Rapid and safe learning of robotic gastrectomy for gastric cancer: multidimensional analysis in a comparison with laparoscopic gastrectomy.

    PubMed

    Kim, H-I; Park, M S; Song, K J; Woo, Y; Hyung, W J

    2014-10-01

    The learning curve of robotic gastrectomy has not yet been evaluated in comparison with the laparoscopic approach. We compared the learning curves of robotic gastrectomy and laparoscopic gastrectomy based on operation time and surgical success. We analyzed 172 robotic and 481 laparoscopic distal gastrectomies performed by a single surgeon from May 2003 to April 2009. The operation time was analyzed using a moving average and non-linear regression analysis. Surgical success was evaluated by a cumulative sum plot with a target failure rate of 10%. Surgical failure was defined as laparoscopic or open conversion, insufficient lymph node harvest for staging, resection margin involvement, postoperative morbidity, and mortality. Moving average and non-linear regression analyses indicated a stable state for operation time at 95 and 121 cases in robotic gastrectomy, and 270 and 262 cases in laparoscopic gastrectomy, respectively. The cumulative sum plot identified no cut-off point for surgical success in robotic gastrectomy and 80 cases in laparoscopic gastrectomy. Excluding the initial 148 laparoscopic gastrectomies that were performed before the first robotic gastrectomy, the two groups showed a similar number of cases to reach steady state in operation time, and showed no cut-off point in the analysis of surgical success. The experience of laparoscopic surgery could affect the learning process of robotic gastrectomy. An experienced laparoscopic surgeon requires fewer cases of robotic gastrectomy to reach steady state. Moreover, the surgical outcomes of robotic gastrectomy were satisfactory. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Simulations of moving effect of coastal vegetation on tsunami damping

    NASA Astrophysics Data System (ADS)

    Tsai, Ching-Piao; Chen, Ying-Chi; Octaviani Sihombing, Tri; Lin, Chang

    2017-05-01

    A coupled wave-vegetation simulation is presented for the effect of coastal vegetation motion on tsunami wave height damping. The problem is idealized as solitary wave propagation through a group of emergent cylinders. The numerical model is based on the general Reynolds-averaged Navier-Stokes equations with a renormalization group turbulence closure model, using the volume-of-fluid technique. The general moving object (GMO) model developed in the computational fluid dynamics (CFD) code Flow-3D is applied to dynamically simulate the coupled motion of the vegetation with the waves. The damping of wave height and the turbulent kinetic energy along moving and stationary cylinders are discussed. The simulated results show that the damping of wave height and the turbulent kinetic energy by the moving cylinders is clearly less than by the stationary cylinders. The result implies that wave decay by coastal vegetation may be overestimated if the vegetation is represented as stationary.

  4. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.

  5. A monitoring tool for performance improvement in plastic surgery at the individual level.

    PubMed

    Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J

    2013-05-01

    The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
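
    A one-sided tabular CUSUM of operative time, in the spirit of the charts described above, can be sketched as follows; the target time, allowance, decision interval, and simulated drift are illustrative, and the published charts additionally adjust for case mix and surgeon experience.

```python
import numpy as np

rng = np.random.default_rng(5)
target, sigma = 140.0, 25.0                 # reference operative time (minutes) and spread
k, h = 0.5 * sigma, 5.0 * sigma             # allowance and decision interval

times = rng.normal(target, sigma, 120)
times[80:] += 30.0                          # simulated deterioration after case 80

cusum_hi = 0.0
for i, x in enumerate(times):
    cusum_hi = max(0.0, cusum_hi + (x - target - k))   # accumulate upward drift only
    if cusum_hi > h:
        print(f"signal: operative times drifting upward around case {i}")
        break
```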

  6. Optimization and validation of moving average quality control procedures using bias detection curves and moving average validation charts.

    PubMed

    van Rossum, Huub H; Kemperman, Hans

    2017-02-01

    To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for the optimization and validation of MA procedures.
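
    A rough sketch of the simulation behind a bias detection curve is shown below: a fixed bias is added to a stream of synthetic results, results outside the truncation limits are discarded, and the number of results needed before the moving average exceeds a control limit is recorded. Repeating this many times per bias level would give the median values plotted in a bias detection curve; every setting here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def results_to_detection(bias, n_window=20, lo=130.0, hi=150.0, cl=2.0, target=140.0):
    baseline = rng.normal(target, 3.0, 5000)          # synthetic sodium-like results
    stream = baseline + bias                          # introduce the simulated bias
    kept = stream[(stream >= lo) & (stream <= hi)]    # truncation: discard extreme results
    for i in range(n_window, kept.size):
        if abs(kept[i - n_window:i].mean() - target) > cl:
            return i                                  # results needed until the MA signals
    return None                                       # bias never detected in this stream

for bias in (0.5, 1.0, 2.0, 4.0):
    print(bias, results_to_detection(bias))
```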

  7. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
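
    The Poisson statement can be checked directly: for a decade rate lambda, the probability of at least one event is 1 - exp(-lambda). The rates below are back-calculated from the quoted probabilities, so they are illustrative rather than values taken from the paper.

```python
import math

# P(at least one event in a decade) = 1 - exp(-lambda) for a Poisson process with rate lambda.
for label, lam in (("VEI>=4", 7.0), ("VEI>=5", 0.67), ("VEI>=6", 0.20)):
    p = 1.0 - math.exp(-lam)
    print(f"{label}: lambda = {lam:.2f}/decade -> P(>=1 in 10 yr) = {p:.2f}")
```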

  8. Fate of microplastics and mesoplastics carried by surface currents and wind waves: A numerical model approach in the Sea of Japan.

    PubMed

    Iwasaki, Shinsuke; Isobe, Atsuhiko; Kako, Shin'ichiro; Uchida, Keiichi; Tokai, Tadashi

    2017-08-15

    A numerical model was established to reproduce the oceanic transport processes of microplastics and mesoplastics in the Sea of Japan. A particle tracking model, where surface ocean currents were given by a combination of a reanalysis ocean current product and Stokes drift computed separately by a wave model, simulated particle movement. The model results corresponded with the field survey. Modeled results indicated the micro- and mesoplastics are moved northeastward by the Tsushima Current. Subsequently, Stokes drift selectively moves mesoplastics during winter toward the Japanese coast, resulting in increased contributions of mesoplastics south of 39°N. Additionally, Stokes drift also transports micro- and mesoplastics out to the sea area south of the subpolar front where the northeastward Tsushima Current carries them into the open ocean via the Tsugaru and Soya straits. Average transit time of modeled particles in the Sea of Japan is drastically reduced when including Stokes drift in the model. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. The Hurst exponent in energy futures prices

    NASA Astrophysics Data System (ADS)

    Serletis, Apostolos; Rosenberg, Aryeh Adam

    2007-07-01

    This paper extends the work in Elder and Serletis [Long memory in energy futures prices, Rev. Financial Econ., forthcoming, 2007] and Serletis et al. [Detrended fluctuation analysis of the US stock market, Int. J. Bifurcation Chaos, forthcoming, 2007] by re-examining the empirical evidence for random walk type behavior in energy futures prices. In doing so, it uses daily data on energy futures traded on the New York Mercantile Exchange, over the period from July 2, 1990 to November 1, 2006, and a statistical physics approach-the ‘detrending moving average’ technique-providing a reliable framework for testing the information efficiency in financial markets as shown by Alessio et al. [Second-order moving average and scaling of stochastic time series, Eur. Phys. J. B 27 (2002) 197-200] and Carbone et al. [Time-dependent hurst exponent in financial time series. Physica A 344 (2004) 267-271; Analysis of clusters formed by the moving average of a long-range correlated time series. Phys. Rev. E 69 (2004) 026105]. The results show that energy futures returns display long memory and that the particular form of long memory is anti-persistence.
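
    A bare-bones version of the detrending moving average idea is sketched below: the standard deviation of an integrated series around its own n-point moving average scales as n**H, and the slope of a log-log fit estimates the Hurst exponent. The example uses a synthetic random walk (expected H near 0.5) and arbitrary window sizes, not the futures data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.cumsum(rng.standard_normal(20000))              # integrated (price-like) series

windows = np.unique(np.logspace(1, 3, 15).astype(int))
sigma = []
for n in windows:
    ma = np.convolve(y, np.ones(n) / n, mode="valid")  # backward n-point moving average
    detrended = y[n - 1:] - ma                         # series minus its own moving average
    sigma.append(np.sqrt(np.mean(detrended ** 2)))

H, _ = np.polyfit(np.log(windows), np.log(sigma), 1)   # slope of the log-log scaling law
print(f"estimated Hurst exponent: {H:.2f}")
```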

  10. Behavior and Frequency Analysis of Aurelia aurita by Using in situ Target Strength at a Port in Southwestern Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Eun-A.; Hwang, Doo-Jin; Chae, Jinho; Yoon, Won Duk; Lee, Kyounghoon

    2018-03-01

    This study was carried out to determine the in situ target strength (TS) and behavioral characteristics of moon jellyfish (Aurelia aurita) using two frequencies (38 and 120 kHz) that present a 2-frequency-difference method for distinguishing A. aurita from other marine planktonic organisms. The average TS was -71.9 to -67.9 dB at 38 kHz and -75.5 to -66.0 dB at 120 kHz, and the average ΔMVBS(120-38 kHz) was similar, at -1.5 to 3.5 dB. The TS values varied over a range of about 14 dB, from -83.3 to -69.0 dB, depending on the pulsation of A. aurita. The species moved in a range of -0.1 to 1.0 m and mostly moved horizontally, with moving speeds of 0.3 to 0.6 m·s-1. The TS and behavioral characteristics of A. aurita can distinguish the species from others. The acoustic technology can also contribute to understanding the distribution and abundance of the species.

  11. Environmental Assessment: Installation Development at Sheppard Air Force Base, Texas

    DTIC Science & Technology

    2007-05-01

    column, or in topographic depressions. Water is then utilized by plants and is respired, or it moves slowly into groundwater and/or eventually to surface...water bodies where it slowly moves through the hydrologic cycle. Removal of vegetation decreases infiltration into the soil column and thereby...School District JP-4 jet propulsion fuel 4 kts knots Ldn Day-Night Average Sound Level Leq equivalent noise level Lmax maximum sound level lb pound

  12. Quality initiatives: improving patient flow for a bone densitometry practice: results from a Mayo Clinic radiology quality initiative.

    PubMed

    Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K

    2010-03-01

    Lean Six Sigma process improvement methodologies have been used in manufacturing for some time. However, Lean Six Sigma process improvement methodologies also are applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. By using the framework of DMAIC (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current state and for the future were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average time of 20.95 minutes was required for completion of a bone densitometry study. Two process changes (ie, tests of change) were undertaken. First, the location for completion of a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, the patient was instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in the patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.

  13. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of five Asia countries, Malaysia, Thailand, Philippines, Indonesia and China's GDP growth rate. Results reveal that MMA has no noticeable differences in predictive ability compared to the general autoregressive fractional integrated moving average model (ARFIMA) and its predictive ability is sensitive to the effect of financial crisis. MMA could be an alternative forecasting method for samples without recent outliers such as financial crisis.

  14. Two models for identification and predicting behaviour of an induction motor system

    NASA Astrophysics Data System (ADS)

    Kuo, Chien-Hsun

    2018-01-01

    System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification by using ARX (Auto Regressive with eXogeneous input) and ARMAX (Auto Regressive Moving Average with eXogeneous input) models. Through the identified system model, the predicted output could be compared with the measured one to help prevent the motor faults from developing into a catastrophic machine failure and avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
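
    As a concrete illustration of the ARX case, the sketch below identifies an ARX(2,2) model by ordinary least squares from simulated input/output data; the true coefficients are arbitrary. The ARMAX case adds a moving-average noise polynomial and needs an iterative scheme, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
u = rng.standard_normal(n)                            # excitation input
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + 1.0 * u[t-1] + 0.5 * u[t-2] + 0.1 * rng.standard_normal()

# Regressor matrix [y(t-1), y(t-2), u(t-1), u(t-2)] against target y(t), for t = 2 .. n-1.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 3))                             # expected to be close to [1.5, -0.7, 1.0, 0.5]
```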

  15. Nonmetric test of the minimax theory of two-person zerosum games.

    PubMed Central

    O'Neill, B

    1987-01-01

    As an experimental test of the minimax theory for two-person zerosum games, subjects played a game that was especially easy for them to understand and whose minimax-prescribed solution did not depend on quantitative assumptions about their utilities for money. Players' average relative frequencies for the moves and their proportions of wins were almost exactly as predicted by minimax, but subject-to-subject variability was too high. These results suggest that people can deviate somewhat from minimax play since their opponents have limited information-processing ability and are imperfect record keepers, but they do not stray so far that the difference will be noticed and their own payoffs will be diminished. PMID:3470781

  16. Efficiency and multifractality analysis of CSI 300 based on multifractal detrending moving average algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao

    2013-03-01

    We apply the multifractal detrending moving average (MFDMA) to investigate and compare the efficiency and multifractality of 5-min high-frequency China Securities Index 300 (CSI 300). The results show that the CSI 300 market becomes closer to weak-form efficiency after the introduction of CSI 300 future. We find that the CSI 300 is featured by multifractality and there are less complexity and risk after the CSI 300 index future was introduced. With the shuffling, surrogating and removing extreme values procedures, we unveil that extreme events and fat-distribution are the main origin of multifractality. Besides, we discuss the knotting phenomena in multifractality, and find that the scaling range and the irregular fluctuations for large scales in the Fq(s) vs s plot can cause a knot.

  17. Gauging the Nearness and Size of Cycle Maximum

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2003-01-01

    A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number and the maximum value of the 2-mo moving average of monthly mean sunspot number to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that conventional maximum amplitude for cycle 23 would be approx. 124.5, occurring near July 2002 +/-5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23 (120.8 in April 2000).

  18. An algorithm for testing the efficient market hypothesis.

    PubMed

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).

  19. An Algorithm for Testing the Efficient Market Hypothesis

    PubMed Central

    Boboc, Ioana-Andreea; Dinică, Mihai-Cristian

    2013-01-01

    The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
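
    Two of the indicators named above can be sketched for a synthetic price series: an exponential moving average (EMA) and the MACD line with its signal line. The 12/26/9 periods and the crossover rule are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def ema(series, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    alpha = 2.0 / (period + 1.0)
    out = np.empty(len(series))
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1.0 - alpha) * out[i - 1]
    return out

rng = np.random.default_rng(9)
prices = 1.10 + np.cumsum(rng.normal(0.0, 0.001, 500))   # synthetic EUR/USD-like series

macd_line = ema(prices, 12) - ema(prices, 26)            # fast EMA minus slow EMA
signal_line = ema(macd_line, 9)
buy = macd_line[-2] < signal_line[-2] and macd_line[-1] > signal_line[-1]
print("buy signal" if buy else "no signal")
```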

  20. Air quality at night markets in Taiwan.

    PubMed

    Zhao, Ping; Lin, Chi-Chi

    2010-03-01

    In Taiwan, there are more than 300 night markets and they have attracted more and more visitors in recent years. Air quality in night markets has become a public concern. To characterize the current air quality in night markets, four major night markets in Kaohsiung were selected for this study. The results of this study showed that the mean carbon dioxide (CO2) concentrations at fixed and moving sites in night markets ranged from 326 to 427 parts per million (ppm) during non-open hours and from 433 to 916 ppm during open hours. The average carbon monoxide (CO) concentrations at fixed and moving sites in night markets ranged from 0.2 to 2.8 ppm during non-open hours and from 2.1 to 14.1 ppm during open hours. The average 1-hr levels of particulate matter with aerodynamic diameters less than 10 µm (PM10) and less than 2.5 µm (PM2.5) at fixed and moving sites in night markets were high, ranging from 186 to 451 µg/m3 and from 175 to 418 µg/m3, respectively. The levels of PM2.5 accounted for 80-97% of their respective PM10 concentrations. The average formaldehyde (HCHO) concentrations at fixed and moving sites in night markets ranged from 0 to 0.05 ppm during non-open hours and from 0.02 to 0.27 ppm during open hours. The average concentration of individual polycyclic aromatic hydrocarbons (PAHs) was found in the range of 0.09 × 10^4 to 1.8 × 10^4 ng/m3. The total identified PAHs (TIPs) ranged from 7.8 × 10^1 to 20 × 10^1 ng/m3 during non-open hours and from 1.5 × 10^4 to 4.0 × 10^4 ng/m3 during open hours. Of the total analyzed PAHs, the low-molecular-weight PAHs (two to three rings) were the dominant species, corresponding to an average of 97% during non-open hours and 88% during open hours, whereas high-molecular-weight PAHs (four to six rings) represented 3 and 12% of the total detected PAHs in the gas phase during non-open and open hours, respectively.

  1. Using Point Clouds Generated from Unmanned Aerial Vehicles Imagery Processed with Structure from Motion to Address Tsunami vs Storm Wave Boulder Deposition in Watu Karung, Indonesia

    NASA Astrophysics Data System (ADS)

    Uribe, A. T.; Bunds, M. P.; Andreini, J.; Horns, D. M.; Harris, R. A.; Prasetyadi, C.; Yulianto, E.; Putra, P. S.

    2017-12-01

    Tsunamis pose a major hazard to coastal communities along the south coast of much of Indonesia due to its location on the Australian-Sunda arc. Furthermore, tsunamis and high-energy wave events are the principal drivers of geomorphic change in the area and it is difficult to distinguish the effects of each. A potentially useful indicator of past tsunami activity is coastal imbricated boulder deposits. To address whether an imbricated boulder deposit located on a beach in Watu Karung (Java, Indonesia) could have been formed by non-tsunami wave activity and to investigate coastal geomorphic change, we generated three pairs of digital surface models (DSMs) over an approximately one year period using photographs taken from a small unmanned aerial vehicle and structure-from-motion photogrammetry. The first two DSMs were made from photographs taken on 7/30-31/2016 and 8/2/2016, immediately before and after a significant 4.2 m swell struck the beach during a +2.5 m spring high tide. The third DSM pair was made from imagery collected 7/12/2017. Each pair of DSMs consists of a 1 cm pixel DSM of the boulder deposit and a 4 cm DSM of the larger beach area that surrounds the boulders. In addition, prior to the 2016 wave event 21 boulders up to 75 kg were marked and hand-placed shoreward of the boulder deposit; their movement was tracked with RTK GPS measurements. In the 2016 wave event, every hand-placed boulder moved, with an average displacement of 27.6 m. At the same time, approximately 20 of 650 naturally occurring boulders, up to 2 m in length, moved more than 10 cm and up to 5.6 m. Between 2016 and 2017, approximately 300 of 650 naturally occurring boulders with an average length of 1.6 m moved varying distances of at least 10 cm and up to 30 m. In addition, changes in beach sand volume occurred in ten 25 m2 localized zones on the beach with an average volume change of approximately 65 m3. Changes in both boulder position and sand volume occurred during the 2016 to 2017 time period when no tsunamis affected Watu Karung, thus indicating that all changes were the result of storm wave events.

  2. Theoretical results on fractionally integrated exponential generalized autoregressive conditional heteroskedastic processes

    NASA Astrophysics Data System (ADS)

    Lopes, Sílvia R. C.; Prass, Taiane S.

    2014-05-01

    Here we present a theoretical study on the main properties of Fractionally Integrated Exponential Generalized Autoregressive Conditional Heteroskedastic (FIEGARCH) processes. We analyze the conditions for the existence, the invertibility, the stationarity and the ergodicity of these processes. We prove that, if a process is a FIEGARCH(p,d,q) process then, under mild conditions, its log-squared transformation is an ARFIMA(q,d,0) process with correlated innovations, that is, an autoregressive fractionally integrated moving average process. The convergence order for the polynomial coefficients that describe the volatility is presented, and results related to the spectral representation and to the covariance structure of both of these processes are discussed. Expressions for the kurtosis and the asymmetry measures for any stationary FIEGARCH(p,d,q) process are also derived. The h-step ahead forecasts for these processes are given with their respective mean square errors of forecast. The work also presents a Monte Carlo simulation study showing how to generate, estimate and forecast based on six different FIEGARCH models. The forecasting performance of six models belonging to the class of autoregressive conditional heteroskedastic models (namely, ARCH-type models) and radial basis models is compared through an empirical application to a Brazilian stock market exchange index.

  3. Nonconscious memory for motion activates MT+.

    PubMed

    Thakral, Preston P; Slotnick, Scott D

    2014-11-12

    Extrastriate region MT+ is widely thought to reflect conscious motion processing. The primary aim of the present functional MRI study was to assess whether MT+ is activated during nonconscious memory for motion. During the encoding phase, moving and stationary abstract shapes were presented to the left or right of fixation. During the retrieval phase, the same shapes were presented at fixation and participants classified each shape as 'moving-left', 'moving-right', 'stationary-left', or 'stationary-right'. The contrast of moving>stationary shapes at encoding was used to identify the location of MT+. Event-related activity was then extracted from MT+ within each hemisphere. MT+ activity was significantly greater for moving-misses than for stationary-misses, which indicates that nonconscious memory for motion activates MT+. Furthermore, nonconscious memory activity (moving-misses) had an earlier temporal onset than conscious memory activity (moving-hits). The present results are the first, to our knowledge, to demonstrate that MT+ is associated with nonconscious motion processing. Therefore, activity in this region or in other visual-sensory regions should not be assumed to reflect conscious processing.

  4. Evidence for the Effectiveness of Jungian Psychotherapy: A Review of Empirical Studies

    PubMed Central

    Roesler, Christian

    2013-01-01

    Since the 1990s several research projects and empirical studies (process and outcome) on Jungian Psychotherapy have been conducted mainly in Germany and Switzerland. Prospective, naturalistic outcome studies and retrospective studies using standardized instruments and health insurance data as well as several qualitative studies of aspects of the psychotherapeutic process will be summarized. The studies are diligently designed and the results are well applicable to the conditions of outpatient practice. All the studies show significant improvements not only on the level of symptoms and interpersonal problems, but also on the level of personality structure and in everyday life conduct. These improvements remain stable after completion of therapy over a period of up to six years. Several studies show further improvements after the end of therapy, an effect which psychoanalysis has always claimed. Health insurance data show that, after Jungian therapy, patients reduce health care utilization to a level even below the average of the total population. Results of several studies show that Jungian treatment moves patients from a level of severe symptoms to a level where one can speak of psychological health. These significant changes are reached by Jungian therapy with an average of 90 sessions, which makes Jungian psychotherapy an effective and cost-effective method. Process studies support Jungian theories on psychodynamics and elements of change in the therapeutic process. Finally, Jungian psychotherapy has reached the point where it can be called an empirically proven, effective method. PMID:25379256

  5. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

    Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. This data was processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
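
    A per-pixel sketch of the detection chain described above is given below: a first-difference highpass filter, a moving-average filter, rectification, and a threshold. The filter lengths, threshold, and synthetic intensity sequence are placeholders, not the values used in the study; a sustained step in intensity appears as a single spike after the highpass stage, which the moving average then spreads over its window.

```python
import numpy as np

rng = np.random.default_rng(10)
frames = rng.normal(100.0, 0.5, 300)                  # one pixel's intensity over 300 frames
frames[150:] += 15.0                                  # sudden, sustained change (leak-like)

highpass = np.diff(frames, prepend=frames[0])         # first-difference highpass filter
window = 15
smoothed = np.convolve(highpass, np.ones(window) / window, mode="same")  # moving-average filter
detection = np.abs(smoothed) > 0.7                    # binary leak / no-leak decision per frame
print(np.flatnonzero(detection)[:5])
```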

  6. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated by the unconditional least squares method, the structure was determined according to the criterion of residual non-correlation, and the goodness of fit was assessed by the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The mathematical form of the model was (1 - 0.443B)(1 - B)(1 - B^12)Z(t) = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^(2×12))μ(t). The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a predictive model of high precision for short-term forecasting.
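
    A seasonal ARIMA structure of the kind reported above can be fitted with standard time series tooling. The sketch below is only an illustration, assuming the monthly incidence rates are available as a pandas Series; the series name and forecast horizon are hypothetical, and the fitted coefficients will of course depend on the data.

```python
# Minimal sketch: fit a seasonal ARIMA(1,1,1)(0,1,2)_12 and forecast ahead.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_forecast(monthly_incidence: pd.Series, steps: int = 12):
    model = SARIMAX(monthly_incidence,
                    order=(1, 1, 1),              # p, d, q
                    seasonal_order=(0, 1, 2, 12)) # P, D, Q, seasonal period
    result = model.fit(disp=False)
    print(result.aic, result.bic)                 # goodness-of-fit criteria
    return result.get_forecast(steps=steps).predicted_mean
```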

  7. Rate of Oviposition by Culex Quinquefasciatus in San Antonio, Texas, During Three Years

    DTIC Science & Technology

    1988-09-01

    autoregression and zero orders of integration and moving average (ARIMA(1,0,0)). This model was chosen initially because rainfall appeared to...have no trend requiring integration and no obvious requirement for a moving average component (i.e., no regular periodicity). This ARIMA model was...Say in both the northern and southern hemispheres exposes this species to a variety of climatic challenges to its survival. It is able to adjust

  8. Seminar Proceedings Implementation of Nonstructural Measures Held at Ft. Belvoir, Virginia on 15, 16 and 17 November 1983

    DTIC Science & Technology

    1983-11-01

    Approximate household inventory item average chance of being moved (%): High - electric toaster, vacuum cleaner (80), colour television; Medium - record...most readily moved are small items of electrical equipment and valuable items such as colour televisions. However, many respondents reported that...WESSEX WATER AUTHORITY, "Somerset Land Drainage District, land drainage survey report", Wessex Water Authority, Bridgwater, England, 1979.

  9. Tools to Develop or Convert MOVES Inputs

    EPA Pesticide Factsheets

    The following tools are designed to help users develop inputs to MOVES and post-process the output. With the release of MOVES2014, EPA strongly encourages state and local agencies to develop local inputs based on MOVES fleet and activity categories.

  10. "It was five years of hell": Parental experiences of navigating and processing the slow and arduous time to pediatric resective epilepsy surgery.

    PubMed

    Pieters, Huibrie C; Iwaki, Tomoko; Vickrey, Barbara G; Mathern, Gary W; Baca, Christine B

    2016-09-01

    Children with medically refractory epilepsy stand to benefit from surgery and live a life free of seizures. However, a large proportion of potentially eligible children do not receive a timely referral for a surgical evaluation. We aimed to describe experiences during the arduous time before the referral and the parent-reported facilitators that helped them move forward through this slow time. Individual semi-structured interviews with 37 parents of children who had previously undergone epilepsy surgery at UCLA (2006-2011) were recorded, transcribed, and systematically analyzed by two independent coders using thematic analysis. Clinical data were extracted from medical records. Parents, 41.3 years of age on average, were mostly Caucasian, English-speaking, mothers, married, and employed. The mean age at surgery for children was 8.2 years with a mean time from epilepsy onset to surgery of 5.4 years. Parental decision-making was facilitated when parents eventually received a presurgical referral and navigated to a multidisciplinary team that they trusted to care for their child with medically refractory epilepsy. Four themes described the experiences that parents used to feel a sense of moving forward. The first theme, processing, involved working through feelings and was mostly done alone. The second theme, navigating the complex unknowns of the health-care system, was more active and purposeful. Processing co-occurred with navigating in a fluid intersection, the third theme, which was evidenced by deliberate actions. The fourth theme, facilitators, explained helpful ways of processing and navigating; parents utilized these mechanisms to turn vulnerable times following the distress of their child's diagnosis into an experience of productivity. To limit parental distress and remediate the slow and arduous journey to multidisciplinary care at a comprehensive epilepsy center for a surgical evaluation, we suggest multi-pronged interventions to modify barriers associated with parents, providers, and health-care systems. Based on the facilitators that moved parents of our sample forward, we provide practical suggestions such as increased peer support, developing the role of patient navigators and communication strategies with parents before, during, and after referral to a comprehensive epilepsy center and presurgical evaluation. Published by Elsevier Inc.

  11. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.

  12. Plans, Patterns, and Move Categories Guiding a Highly Selective Search

    NASA Astrophysics Data System (ADS)

    Trippen, Gerhard

    In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.

  13. Reflections on the history of pre-mRNA processing and highlights of current knowledge: A unified picture

    PubMed Central

    Darnell, James E.

    2013-01-01

    Several strong conclusions emerge concerning pre-mRNA processing from both old and newer experiments. The RNAPII complex is involved with pre-mRNA processing through binding of processing proteins to the CTD (carboxyl terminal domain) of the largest RNAPII subunit. These interactions are necessary for efficient processing, but whether factor binding to the CTD and delivery to splicing sites is obligatory or facilitatory is unsettled. Capping, addition of an m7Gppp residue (cap) to the initial transcribed residue of a pre-mRNA, occurs within seconds. Splicing of pre-mRNA by spliceosomes at particular sites is most likely committed during transcription by the binding of initiating processing factors and ∼50% of the time is completed in mammalian cells before completion of the primary transcript. This fact has led to an outpouring in the literature about “cotranscriptional splicing.” However splicing requires several minutes for completion and can take longer. The RNAPII complex moves through very long introns and also through regions dense with alternating exons and introns at an average rate of ∼3 kb per min and is, therefore, not likely detained at each splice site for more than a few seconds, if at all. Cleavage of the primary transcript at the 3′ end and polyadenylation occurs within 30 sec or less at recognized polyA sites, and the majority of newly polyadenylated pre-mRNA molecules are much larger than the average mRNA. Finally, it seems quite likely that the nascent RNA most often remains associated with the chromosomal locus being transcribed until processing is complete, possibly acquiring factors related to the transport of the new mRNA to the cytoplasm. PMID:23440351

  14. Development of S-ARIMA Model for Forecasting Demand in a Beverage Supply Chain

    NASA Astrophysics Data System (ADS)

    Mircetic, Dejan; Nikolicic, Svetlana; Maslaric, Marinko; Ralevic, Nebojsa; Debelic, Borna

    2016-11-01

    Demand forecasting is one of the key activities in planning the freight flows in supply chains, and accordingly it is essential for planning and scheduling of logistic activities within the observed supply chain. Accurate demand forecasting models directly influence the decrease of logistics costs, since they provide an assessment of customer demand. Customer demand is a key component for planning all logistic processes in a supply chain, and therefore determining levels of customer demand is of great interest for supply chain managers. In this paper we deal with exactly this kind of problem, and we develop the seasonal Autoregressive Integrated Moving Average (SARIMA) model for forecasting demand patterns of a major product of an observed beverage company. The model is easy to understand, flexible to use and appropriate for assisting the expert in the decision-making process about consumer demand in particular periods.

  15. Mean first passage time of active Brownian particle in one dimension

    NASA Astrophysics Data System (ADS)

    Scacchi, A.; Sharma, A.

    2018-02-01

    We investigate the mean first passage time of an active Brownian particle in one dimension using numerical simulations. The activity in one dimension is modelled as a two-state model; the particle moves with a constant propulsion strength but its orientation switches from one state to the other as in a random telegraphic process. We study the influence of a finite resetting rate r on the mean first passage time to a fixed target of a single free active Brownian particle and map this result using an effective diffusion process. As in the case of a passive Brownian particle, we can find an optimal resetting rate r* for an active Brownian particle for which the target is found with the minimum average time. In the case of the presence of an external potential, we find good agreement between the theory and numerical simulations using an effective potential approach.
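
    The model described above can be simulated directly. The following sketch is not the authors' code; it assumes a 1-D particle with propulsion speed v0, telegraphic orientation switching at rate gamma, resetting to the origin at rate r, and a target at x = L, with all parameter values chosen only for illustration.

```python
# Schematic Monte Carlo estimate of the mean first passage time (MFPT) of a
# 1-D run-and-tumble particle with stochastic resetting.
import numpy as np

def mean_first_passage_time(v0=1.0, gamma=1.0, r=0.5, L=2.0,
                            dt=1e-3, n_runs=500, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    times = np.empty(n_runs)
    for i in range(n_runs):
        x, sigma, t = 0.0, 1, 0.0        # position, orientation (+1/-1), time
        while x < L:
            if rng.random() < gamma * dt:  # telegraphic switch of orientation
                sigma = -sigma
            if rng.random() < r * dt:      # resetting event: back to the origin
                x, sigma = 0.0, 1          # orientation after reset set to +1 here (a modelling choice)
            x += v0 * sigma * dt
            t += dt
        times[i] = t
    return times.mean()

# Scanning r over a grid would reveal an optimal resetting rate r* that
# minimizes the MFPT, as described in the abstract.
```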

  16. Reproducibility in Natural Language Processing: A Case Study of Two R Libraries for Mining PubMed/MEDLINE.

    PubMed

    Cohen, K Bretonnel; Xia, Jingbo; Roeder, Christophe; Hunter, Lawrence E

    2016-05-01

    There is currently a crisis in science related to highly publicized failures to reproduce large numbers of published studies. The current work proposes, by way of case studies, a methodology for moving the study of reproducibility in computational work to a full stage beyond that of earlier work. Specifically, it presents a case study in attempting to reproduce the reports of two R libraries for doing text mining of the PubMed/MEDLINE repository of scientific publications. The main findings are that a rational paradigm for reproduction of natural language processing papers can be established; the advertised functionality was difficult, but not impossible, to reproduce; and reproducibility studies can produce additional insights into the functioning of the published system. Additionally, the work on reproducibility led to the production of novel user-centered documentation that has been accessed 260 times since its publication, an average of once a day per library.

  17. Processes Driving Natural Acidification of Western Pacific Coral Reef Waters

    NASA Astrophysics Data System (ADS)

    Shamberger, K. E.; Cohen, A. L.; Golbuu, Y.; McCorkle, D. C.; Lentz, S. J.; Barkley, H. C.

    2013-12-01

    Rising levels of atmospheric carbon dioxide (CO2) are acidifying the oceans, reducing seawater pH, aragonite saturation state (Ωar) and the availability of carbonate ions (CO32-) that calcifying organisms use to build coral reefs. Today's most extensive reef ecosystems are located where open ocean CO32- concentration ([CO32-]) and Ωar exceed 200 μmol kg-1 and 3.3, respectively. However, high rates of biogeochemical cycling and long residence times of water can result in carbonate chemistry conditions within coral reef systems that differ greatly from those of nearby open ocean waters. In the Palauan archipelago, water moving across the reef platform is altered by both biological and hydrographic processes that combine to produce seawater pH, Ωar, and [CO32-] significantly lower than those of open ocean source water. Just inshore of the barrier reefs, average Ωar values are 0.2 to 0.3 lower, and pH values 0.02 to 0.03 lower, than they are offshore, declining further as water moves across the back reef, lagoon and into the meandering bays and inlets that characterize the Rock Islands. In the Rock Island bays, coral communities inhabit seawater with average Ωar values of 2.7 or less, and as low as 1.9. Levels of Ωar as low as these are not predicted to occur in the western tropical Pacific open ocean until near the end of the century. Calcification by coral reef organisms is the principal biological process responsible for lowering Ωar and pH, accounting for 68 - 99 % of the difference in Ωar between offshore source water and reef water at our sites. However, in the Rock Island bays where Ωar is lowest, CO2 production by net respiration contributes between 17 - 30 % of the difference in Ωar between offshore source water and reef water. Furthermore, the residence time of seawater in the Rock Island bays is much longer than at the well flushed exposed sites, enabling calcification and respiration to drive Ωar to very low levels despite lower net ecosystem calcification rates in the Rock Island bays than on the barrier reef.

  18. A Generation at Risk: When the Baby Boomers Reach Golden Pond.

    ERIC Educational Resources Information Center

    Butler, Robert N.

    The 20th century has seen average life expectancy in the United States move from under 50 years to over 70 years. Coupled with this increase in average life expectancy is the aging of the 76.4 million persons born between 1946 and 1964. As they approach retirement, these baby-boomers will have to balance their own needs with those of living…

  19. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503

  20. Comparison of 3-D multi-lag cross- correlation and speckle brightness aberration correction algorithms on static and moving targets.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Smith, Stephen W

    2009-10-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively.

  1. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  2. A case study of energy expenditure based on walking speed reduction during walking upstairs situation at a staircase in FKAAS, UTHM, Johor building

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Ali, M. F. M.; Talib, S. H. A.

    2018-04-01

    Walking velocity is a vector quantity that can be determined from the time taken and the displacement of a moving object. In Malaysia, few studies have been done to determine the walking velocity of citizens for comparison with other countries. The study of walking upstairs during an evacuation is important when an emergency happens: if there are people in underground garages, they have to walk upstairs to the exits and look for shelter, so the walking velocity of pedestrians in such cases needs to be analysed. Therefore, the objectives of this study are to determine the walking speed of pedestrians walking upstairs, to find the relationship between pedestrian walking speed and pedestrian characteristics, and to analyse the energy reduction by comparing the walking speed of pedestrians at the beginning and at the end of the staircase. In this case study, an experiment was done to determine the average walking speed of pedestrians. The pedestrians were selected from different genders, physical characters, and ages. Based on the data collected, the average normal walking speed of male pedestrians was 1.03 m/s, while that of females was 1.08 m/s. During walking upstairs, the walking speed of pedestrians decreased as the number of floors increased. The average speed for the first stairwell was 0.90 m/s, decreasing to 0.73 m/s for the second stairwell. From the reduction of speed, the energy used was calculated, and the average kinetic energy used was 1.69 J. Hence, the data collected can be used for further research on staircase design and evacuation planning.

  3. Performance Assessment of Two Whole-Lake Acoustic Positional Telemetry Systems - Is Reality Mining of Free-Ranging Aquatic Animals Technologically Possible?

    PubMed Central

    Baktoft, Henrik; Zajicek, Petr; Klefoth, Thomas; Svendsen, Jon C.; Jacobsen, Lene; Pedersen, Martin Wæver; March Morla, David; Skov, Christian; Nakayama, Shinnosuke; Arlinghaus, Robert

    2015-01-01

    Acoustic positional telemetry systems (APTs) represent a novel approach to study the behaviour of free-ranging aquatic animals in the wild at unprecedented detail. System manufacturers promise remarkably high temporal and spatial resolution. However, the performance of APTs has rarely been rigorously tested at the level of entire ecosystems. Moreover, the effect of habitat structure on system performance has only been poorly documented. Two APTs were deployed to cover two small lakes and a series of standardized stationary tests were conducted to assess system performance. Furthermore, a number of tow tests were conducted to simulate moving fish. Based on these data, we quantified system performance in terms of data yield, accuracy and precision as a function of structural complexity in relation to vegetation. Mean data yield of the two systems was 40 % (Lake1) and 60 % (Lake2). Average system accuracy (acc) and precision (prec) were Lake1: acc = 3.1 m, prec = 1.1 m; Lake2: acc = 1.0 m, prec = 0.2 m. System performance was negatively affected by structural complexity, i.e., open water habitats yielded far better performance than structurally complex vegetated habitats. Post-processing greatly improved data quality, and sub-meter accuracy and precision were, on average, regularly achieved in Lake2 but remained the exception in the larger and structurally more complex Lake1. Moving transmitters were tracked well by both systems. Whereas overestimation of moved distance is inevitable for stationary transmitters due to accumulation of small tracking errors, moving transmitters can result in both over- and underestimation of distances depending on circumstances. Both deployed APTs were capable of providing high resolution positional data at the scale of entire lakes and are suitable systems to mine the reality of free-ranging fish in their natural environment. This opens important opportunities to advance several fields of study such as movement ecology and animal social networks in the wild. It is recommended that thorough performance tests are conducted in any study utilizing APTs. The APTs tested here appear best suited for studies in structurally simple ecosystems or for studying pelagic species. In such situations, the data quality provided by the APTs is exceptionally high. PMID:26000459

  4. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    PubMed Central

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain. PMID:22046178

  5. Exponentially Weighted Moving Average Change Detection Around the Country (and the World)

    NASA Astrophysics Data System (ADS)

    Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.

    2014-12-01

    With continuous, freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels which stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as might be expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
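
    The combination described above (a per-pixel harmonic regression whose residuals feed an EWMA control chart) can be sketched as follows. This is a simplified illustration of the idea, not the published EWMACD implementation; the smoothing weight, control limit multiplier, harmonic order, and training length are hypothetical choices.

```python
# Per-pixel sketch: harmonic regression residuals fed into an EWMA control chart.
import numpy as np

def ewmacd_flags(values, day_of_year, n_train, lambda_=0.3, L=3.0):
    """values, day_of_year: 1-D arrays for one pixel; n_train: length of stable training period."""
    t = 2 * np.pi * day_of_year / 365.25
    X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])   # first-order harmonics
    beta, *_ = np.linalg.lstsq(X[:n_train], values[:n_train], rcond=None)
    resid = values - X @ beta                    # departures from the fitted seasonal curve
    sigma = resid[:n_train].std(ddof=1)
    z = np.zeros_like(resid)
    for i in range(1, len(resid)):               # EWMA of standardized residuals
        z[i] = (1 - lambda_) * z[i - 1] + lambda_ * resid[i] / sigma
    limit = L * np.sqrt(lambda_ / (2 - lambda_))  # asymptotic EWMA control limit
    return np.abs(z) > limit                      # True on dates where the chart signals change
```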

  6. Lateral information processing by spiking neurons: a theoretical model of the neural correlate of consciousness.

    PubMed

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on "autopilot"). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the "conscious pilot") suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious "auto-pilot" cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways "gap junctions" in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain.

  7. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers

    USGS Publications Warehouse

    Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.

    2011-01-01

    Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
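
    The quadrature combination of the two error sources mentioned above is a simple calculation; the sketch below is only an illustration, and the two error values are hypothetical placeholders rather than figures from the study.

```python
# Total uncertainty from two independent error sources combined in quadrature.
import math

def total_uncertainty(time_averaging_error, spatial_sampling_error):
    return math.sqrt(time_averaging_error ** 2 + spatial_sampling_error ** 2)

print(total_uncertainty(0.10, 0.08))   # e.g. 10% and 8% relative errors -> ~12.8%
```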

  8. Perceptual integration of motion and form information: evidence of parallel-continuous processing.

    PubMed

    von Mühlenen, A; Müller, H J

    2000-04-01

    In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).

  9. Method and apparatus for a combination moving bed thermal treatment reactor and moving bed filter

    DOEpatents

    Badger, Phillip C.; Dunn, Jr., Kenneth J.

    2015-09-01

    A moving bed gasification/thermal treatment reactor includes a geometry in which moving bed reactor particles serve as both a moving bed filter and a heat carrier to provide thermal energy for thermal treatment reactions, such that the moving bed filter and the heat carrier are one and the same to remove solid particulates or droplets generated by thermal treatment processes or injected into the moving bed filter from other sources.

  10. Hybrid Stochastic Forecasting Model for Management of Large Open Water Reservoir with Storage Function

    NASA Astrophysics Data System (ADS)

    Kozel, Tomas; Stary, Milos

    2017-12-01

    The main advantage of stochastic forecasting is that it yields a fan of possible values, which deterministic forecasting methods cannot provide. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge at a measurement profile can be categorized as a random process. The content of this article is the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, which forecast values of average monthly flow from input values of average monthly flow, the trained neural network, and random numbers. Part of the data was sorted into one moving zone, which is created around the last measured average monthly flow, and the correlation matrix was assembled only from the data belonging to the zone. The model was compiled for forecasts of 1 to 12 months, using backward monthly flows (NS inputs) from 2 to 11 months for model construction. The data were freed of asymmetry with the help of the Box-Cox rule (Box, Cox, 1964), with the value r found by optimization, and in the next step the data were transformed to a standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration of the model (the matrix of input-output relationships), and the last 15 years were used only for validation. The outputs of the model were compared with the real flow series. For the comparison between the real flow series (100% successful forecast) and the forecasts, an application to the management of an artificial reservoir was used. The course of water reservoir management using a genetic algorithm (GE) + the real flow series was compared with a fuzzy model (Fuzzy) + the forecast made by the moving zone model. During the evaluation process, the best size of the zone was sought. The results show that the highest number of inputs did not give the best results, and the ideal size of the zone lies in the interval from 25 to 35, within which the course of management was almost the same for all zone sizes. The resulting course of management was compared with the course obtained using GE + the real flow series. The comparison showed that the fuzzy model with forecasted values was able to manage the main malfunctions, and the disorders introduced artificially by the model were found to be essential after the water volumes during management were evaluated. The forecasting model in combination with the fuzzy model provides very good results in the management of a water reservoir with a storage function and can be recommended for this purpose.
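
    The Box-Cox and standardization preprocessing step mentioned above can be sketched as follows. This is a minimal illustration under the assumption that the monthly flows are a positive 1-D array; it is not the authors' code, and the variable names are hypothetical.

```python
# Remove asymmetry with a Box-Cox transform, then standardize the flows.
import numpy as np
from scipy import stats

def normalize_flows(monthly_flows):
    transformed, lam = stats.boxcox(np.asarray(monthly_flows, dtype=float))  # flows must be positive
    mean, std = transformed.mean(), transformed.std(ddof=1)
    standardized = (transformed - mean) / std        # approximately standard normal inputs
    return standardized, lam, mean, std              # keep lam/mean/std to invert forecasts later
```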

  11. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Effect of Rolling Massage on Particle Moving Behaviour in Blood Vessels

    NASA Astrophysics Data System (ADS)

    Yi, Hou-Hui; Fan, Li-Juan; Yang, Xiao-Feng; Chen, Yan-Yan

    2008-09-01

    The rolling massage manipulation is a classic Chinese massage, which is expected to eliminate many diseases. Here the effect of rolling massage on the moving behaviour of particles in blood vessels is studied by lattice Boltzmann simulation. The simulation results show that the particle moving behaviour depends on the rolling velocity and on the distance between the particle position and the rolling position. The average values, including the particle translational velocity and angular velocity, increase almost linearly as the rolling velocity increases. The result is helpful for understanding the mechanism of the massage and developing the rolling techniques.

  12. Experimental comparisons of hypothesis test and moving average based combustion phase controllers.

    PubMed

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2016-11-01

    For engine control, combustion phase is the most effective and direct parameter to improve fuel efficiency. In this paper, the statistical control strategy based on hypothesis test criterion is discussed. Taking location of peak pressure (LPP) as combustion phase indicator, the statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z and T tests. For comparison, moving average based control strategy is also presented and implemented in this study. The experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to set point while maintaining the rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
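
    The moving-average-based strategy used for comparison above can be sketched as a simple feedback rule: average the measured location of peak pressure (LPP) over the last few cycles and correct the spark timing in proportion to the deviation from the set point. This is a schematic illustration, not the authors' controller; the set point, window length, gain, and sign convention are hypothetical.

```python
# Schematic moving-average LPP feedback controller.
from collections import deque

class MovingAverageLPPController:
    def __init__(self, setpoint_deg=8.0, window=10, gain=0.2):
        self.setpoint = setpoint_deg          # desired LPP, degrees after top dead center
        self.window = deque(maxlen=window)    # most recent LPP measurements
        self.gain = gain
        self.correction = 0.0                 # accumulated spark-timing correction

    def update(self, measured_lpp_deg):
        self.window.append(measured_lpp_deg)
        avg_lpp = sum(self.window) / len(self.window)
        error = avg_lpp - self.setpoint
        self.correction += self.gain * error  # integral-like adjustment of spark timing
        return self.correction
```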

  13. Neonatal heart rate prediction.

    PubMed

    Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth

    2009-01-01

    Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms that is vital to saving the lives of these infants is continuous monitoring and early diagnosis. Continuous monitoring collects huge amounts of data with a great deal of information embedded in them. By using statistical analysis this information can be extracted and used to aid diagnosis and to understand development. In this study we have a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.

  14. Structural equation modeling of the inflammatory response to traffic air pollution

    PubMed Central

    Baja, Emmanuel S.; Schwartz, Joel D.; Coull, Brent A.; Wellenius, Gregory A.; Vokonas, Pantel S.; Suh, Helen H.

    2015-01-01

    Several epidemiological studies have reported conflicting results on the effect of traffic-related pollutants on markers of inflammation. In a Bayesian framework, we examined the effect of traffic pollution on inflammation using structural equation models (SEMs). We studied measurements of C-reactive protein (CRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble intracellular adhesion molecule-1 (sICAM-1) for 749 elderly men from the Normative Aging Study. Using repeated measures SEMs, we fit a latent variable for traffic pollution that is reflected by levels of black carbon, carbon monoxide, nitrogen monoxide and nitrogen dioxide to estimate its effect on a latent variable for inflammation that included sICAM-1, sVCAM-1 and CRP. Exposure periods were assessed using 1-, 2-, 3-, 7-, 14- and 30-day moving averages previsit. We compared our findings using SEMs with those obtained using linear mixed models. Traffic pollution was related to increased inflammation for 3-, 7-, 14- and 30-day exposure periods. An inter-quartile range increase in traffic pollution was associated with a 2.3% (95% posterior interval (PI): 0.0–4.7%) increase in inflammation for the 3-day moving average, with the most significant association observed for the 30-day moving average (23.9%; 95% PI: 13.9–36.7%). Traffic pollution adversely impacts inflammation in the elderly. SEMs in a Bayesian framework can comprehensively incorporate multiple pollutants and health outcomes simultaneously in air pollution–cardiovascular epidemiological studies. PMID:23232970
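
    The exposure windows described above are rolling means of the daily pollutant series over the days preceding each visit. The sketch below is only an illustration, assuming a daily pandas DataFrame of pollutant measurements; the column names and window set are hypothetical.

```python
# Build 1- to 30-day moving-average exposure metrics from daily pollutant data.
import pandas as pd

def exposure_windows(daily: pd.DataFrame,
                     pollutants=("BC", "CO", "NO", "NO2"),
                     windows=(1, 2, 3, 7, 14, 30)):
    out = {}
    for p in pollutants:
        for w in windows:
            # Moving average of the w days up to and including each date.
            out[f"{p}_ma{w}"] = daily[p].rolling(window=w, min_periods=w).mean()
    return pd.DataFrame(out)
```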

  15. Learning curves for single incision and conventional laparoscopic right hemicolectomy: a multidimensional analysis.

    PubMed

    Park, Yoonah; Yong, Yuen Geng; Yun, Seong Hyeon; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung

    2015-05-01

    This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. The postoperative complication rate was higher in the SIL group than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase.
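
    Two of the learning-curve summaries named above, the moving average of operative times and the CUSUM, are straightforward to compute. The sketch below is an illustration rather than the study's exact analysis; the window length is a hypothetical choice.

```python
# Moving average and CUSUM of consecutive operative times (minutes).
import numpy as np

def learning_curves(operative_minutes, window=5):
    x = np.asarray(operative_minutes, dtype=float)
    kernel = np.ones(window) / window
    moving_avg = np.convolve(x, kernel, mode="valid")   # a plateau marks the end of the learning phase
    cusum = np.cumsum(x - x.mean())                      # the turning point marks attained proficiency
    return moving_avg, cusum
```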

  16. An Estimate of the Likelihood for a Climatically Significant Volcanic Eruption Within the Present Decade (2000-2009)

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)

    2000-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
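
    The Poisson probabilities quoted above follow from the rule that the chance of at least one event in a decade is 1 - exp(-lambda), where lambda is the expected number of events per decade. The rates used below are illustrative round numbers chosen to reproduce the quoted probabilities, not values taken from the paper.

```python
# Probability of one or more Poisson events per decade for a given expected rate.
import math

def prob_at_least_one(lam):
    return 1.0 - math.exp(-lam)

for lam in (7.0, 0.67, 0.20):   # illustrative rates for VEI>=4, VEI>=5, VEI>=6 events
    print(f"lambda = {lam:4.2f}: P(>=1 event per decade) = {prob_at_least_one(lam):.2f}")
```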

  17. Job Surfing: Move On to Move Up.

    ERIC Educational Resources Information Center

    Martin, Justin

    1997-01-01

    Looks at the process of switching jobs and changing careers. Discusses when to consider options and make the move as well as the need to be flexible and open minded. Provides a test for determining the chances of promotion and when to move on. (JOW)

  18. Microvolt T-Wave Alternans

    PubMed Central

    Verrier, Richard L.; Klingenheben, Thomas; Malik, Marek; El-Sherif, Nabil; Exner, Derek V.; Hohnloser, Stefan H.; Ikeda, Takanori; Martínez, Juan Pablo; Narayan, Sanjiv M.; Nieminen, Tuomo; Rosenbaum, David S.

    2014-01-01

    This consensus guideline was prepared on behalf of the International Society for Holter and Noninvasive Electrocardiology and is cosponsored by the Japanese Circulation Society, the Computers in Cardiology Working Group on e-Cardiology of the European Society of Cardiology, and the European Cardiac Arrhythmia Society. It discusses the electrocardiographic phenomenon of T-wave alternans (TWA) (i.e., a beat-to-beat alternation in the morphology and amplitude of the ST-segment or T-wave). This statement focuses on its physiological basis and measurement technologies and its clinical utility in stratifying risk for life-threatening ventricular arrhythmias. Signal processing techniques including the frequency-domain Spectral Method and the time-domain Modified Moving Average method have demonstrated the utility of TWA in arrhythmia risk stratification in prospective studies in >12,000 patients. The majority of exercise-based studies using both methods have reported high relative risks for cardiovascular mortality and for sudden cardiac death in patients with preserved as well as depressed left ventricular ejection fraction. Studies with ambulatory electrocardiogram-based TWA analysis with Modified Moving Average method have yielded significant predictive capacity. However, negative studies with the Spectral Method have also appeared, including 2 interventional studies in patients with implantable defibrillators. Meta-analyses have been performed to gain insights into this issue. Frontiers of TWA research include use in arrhythmia risk stratification of individuals with preserved ejection fraction, improvements in predictivity with quantitative analysis, and utility in guiding medical as well as device-based therapy. Overall, although TWA appears to be a useful marker of risk for arrhythmic and cardiovascular death, there is as yet no definitive evidence that it can guide therapy. PMID:21920259
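
    The time-domain Modified Moving Average idea referred to above can be sketched schematically: even- and odd-numbered beats are averaged separately with a bounded recursive update, and the alternans estimate is taken from the difference between the two averaged complexes. This is only a conceptual sketch, not a clinical implementation; the 1/8 update fraction and the use of the maximum difference are illustrative assumptions.

```python
# Schematic even/odd beat averaging for a T-wave alternans estimate.
import numpy as np

def mma_twa(beats):
    """beats: array of shape (n_beats, n_samples) of aligned T-wave segments (microvolts)."""
    avg = {0: beats[0].astype(float), 1: beats[1].astype(float)}   # even and odd running averages
    for i, beat in enumerate(beats[2:], start=2):
        parity = i % 2
        # Bounded recursive update: move only a fraction of the way toward the new beat,
        # which keeps the average robust to occasional aberrant beats.
        avg[parity] += (beat - avg[parity]) / 8.0
    return np.max(np.abs(avg[0] - avg[1]))   # alternans magnitude from the averaged complexes
```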

  19. Substorm-related plasma sheet motions as determined from differential timing of plasma changes at the ISEE satellites

    NASA Technical Reports Server (NTRS)

    Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Asbridge, J. R.; Paschmann, G.; Sckopke, N.; Russell, C. T.

    1981-01-01

    From an ISEE survey of substorm dropouts and recoveries during the period February 5 to May 25, 1978, 66 timing events observed by the Los Alamos Scientific Laboratory/Max-Planck-Institut Fast Plasma Experiments were studied in detail. Near substorm onset, both the average timing velocity and the bulk flow velocity at the edge of the plasma sheet are inward, toward the center. Measured normal to the surface of the plasma sheet, the timing velocity is 23 ± 18 km/s and the proton flow velocity is 20 ± 8 km/s. During substorm recovery, the plasma sheet reappears moving outward with an average timing velocity of 133 ± 31 km/s; however, the corresponding proton flow velocity is only 3 ± 7 km/s in the same direction. It is suggested that the difference between the average timing velocity for the expansion of the plasma sheet and the plasma bulk flow perpendicular to the surface of the sheet during substorm recovery is most likely the result of surface waves moving past the position of the satellites.

  20. The vacuum friction paradox and related puzzles

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Sonnleitner, Matthias

    2018-04-01

    The frequency of light emitted by a moving source is shifted by a factor proportional to its velocity. We find that this Doppler shift requires the existence of a paradoxical effect: that a moving atom radiating in otherwise empty space feels a net or average force acting against its direction of motion and proportional in magnitude to its speed. Yet there is no preferred rest frame, either in relativity or in Newtonian mechanics, so how can there be a vacuum friction force?

  1. Hydrogeology and leachate movement near two chemical-waste sites in Oswego County, New York

    USGS Publications Warehouse

    Anderson, H.R.; Miller, Todd S.

    1986-01-01

    Forty-five observation wells and test holes were installed at two chemical waste disposal sites in Oswego County, New York, to evaluate the hydrogeologic conditions and the rate and direction of leachate migration. At the site near Oswego, groundwater moves northward at an average velocity of 0.4 ft/day through unconsolidated glacial deposits and discharges into White Creek and Wine Creek, which border the site and discharge to Lake Ontario. Leaking barrels of chemical wastes have contaminated the groundwater within the site, as evidenced by the detection of 10 'priority pollutant' organic compounds and elevated values of specific conductance, chloride, arsenic, lead, and mercury. At the site near Fulton, where 8,000 barrels of chemical wastes are buried, groundwater in the sandy surficial aquifer bordering the landfill on the south and east moves southward and eastward at an average velocity of 2.8 ft/day and discharges to Bell Creek, which discharges to the Oswego River, or moves beneath the landfill. Leachate is migrating eastward, southeastward, and southwestward, as evidenced by elevated values of specific conductance, temperature, and concentrations of several trace metals at wells east, southeast, and southwest of the site. (USGS)

  2. The Accuracy of Talking Pedometers when Used during Free-Living: A Comparison of Four Devices

    ERIC Educational Resources Information Center

    Albright, Carolyn; Jerome, Gerald J.

    2011-01-01

    The purpose of this study was to determine the accuracy of four commercially available talking pedometers in measuring accumulated daily steps of adult participants while they moved independently. Ten young sighted adults (average age 24.1 ± 4.6 years), 10 older sighted adults (average age 73 ± 5.5…

  3. Comparison of estimators for rolling samples using Forest Inventory and Analysis data

    Treesearch

    Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski

    2003-01-01

    The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...

  4. High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.

    PubMed

    Haxton, Thomas K

    2015-03-10

    We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.

  5. Pulsation Detection from Noisy Ultrasound-Echo Moving Images of Newborn Baby Head Using Fourier Transform

    NASA Astrophysics Data System (ADS)

    Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi

    1995-05-01

    In order to detect pulsation from a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at the video rate and processing the recorded series of images was constructed. The time-sequence variations of each pixel value in a series of moving images were analyzed and then an algorithm based on Fourier transform was developed for the pulsation detection, noting that the pulsation associated with blood flow was periodically changed by heartbeat. Pulsation detection for pediatric diagnosis was successfully made from a series of noisy ultrasound-echo moving images of newborn baby's head by using the image processing system and the pulsation detection algorithm developed here.

  6. HELIOSHEATH MAGNETIC FIELDS BETWEEN 104 AND 113 AU IN A REGION OF DECLINING SPEEDS AND A STAGNATION REGION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burlaga, L. F.; Ness, N. F., E-mail: lburlagahsp@verizon.net, E-mail: nfnudel@yahoo.com

    2012-04-10

    We examine the relationships between the magnetic field and the radial velocity component V_R observed in the heliosheath by instruments on Voyager 1 (V1). No increase in the magnetic field strength B was observed in a region where V_R decreased linearly from 70 km/s to 0 km/s as plasma moved outward past V1. An unusually broad transition from positive to negative polarity was observed during a ≈26 day interval when the heliospheric current sheet (HCS) moved below the latitude of V1 and the speed of V1 was comparable to the radial speed of the heliosheath flow. When V1 moved through a region where V_R ≈ 0 (the 'stagnation region'), B increased linearly with time by a factor of two, and the average of B was 0.14 nT. Nothing comparable to this was observed previously. The magnetic polarity was negative throughout the stagnation region for ≈580 days until 2011 DOY 235, indicating that the HCS was below the latitude of V1. The average passage times of the magnetic holes and proton boundary layers were the same during 2009 and 2011, because the plasma moved past V1 during 2009 at the same speed that V1 moved through the stagnation region during 2011. The microscale fluctuations of B in the stagnation region during 2011 are qualitatively the same as those observed in the heliosheath during 2009. These results suggest that the stagnation region is a part of the heliosheath, rather than a 'transition region' associated with the heliopause.

  7. Moving Hands, Moving Entities

    ERIC Educational Resources Information Center

    Setti, Annalisa; Borghi, Anna M.; Tessari, Alessia

    2009-01-01

    In this study we investigated, with a priming paradigm, whether uni- and bimanual actions presented as primes differently affected language processing. Animals' (self-moving entities) and plants' (non-self-moving entities) names were used as targets. As primes we used grasping hands, presented both as static images and videos. The results showed an…

  8. Fast shuttling of a particle under weak spring-constant noise of the moving trap

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Jing; Ruschhaupt, A.; Muga, J. G.

    2018-05-01

    We investigate the excitation of a quantum particle shuttled in a harmonic trap with weak spring-constant colored noise. The Ornstein-Uhlenbeck model for the noise correlation function describes a wide range of possible noises, in particular for short correlation times the white-noise limit examined by Lu et al. [Phys. Rev. A 89, 063414 (2014)], 10.1103/PhysRevA.89.063414 and, by averaging over correlation times, "1/f flicker noise." We find expressions for the excitation energy in terms of static (independent of trap motion) and dynamical sensitivities, with opposite behavior with respect to shuttling time, and demonstrate that the excitation can be reduced by proper process timing and design of the trap trajectory.
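
    The Ornstein-Uhlenbeck noise model mentioned here is simple to simulate numerically. The sketch below is a hedged illustration of generating colored spring-constant noise with a chosen correlation time; the function name `ou_noise` and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: exact discretisation of an Ornstein-Uhlenbeck process with
# correlation time tau and stationary standard deviation sigma. In the
# tau -> 0 limit this approaches the white-noise case discussed above.
# All parameter values below are illustrative, not values from the paper.
import numpy as np

def ou_noise(n_steps, dt, tau, sigma, rng):
    """Generate an OU noise trace with zero mean and stationary std sigma."""
    x = np.zeros(n_steps)
    decay = np.exp(-dt / tau)
    kick = sigma * np.sqrt(1.0 - decay ** 2)   # preserves the stationary variance
    for i in range(1, n_steps):
        x[i] = decay * x[i - 1] + kick * rng.standard_normal()
    return x

rng = np.random.default_rng(2)
noise = ou_noise(n_steps=10_000, dt=1e-3, tau=0.05, sigma=0.1, rng=rng)
print(noise.std())   # close to sigma for long traces
```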

  9. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterparts, that is, as a weighted nearest-neighbor moving average with zero phase shift, using convolute integer (universal number) weighting coefficients.
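
    The idea can be illustrated with a small two-dimensional convolution. The sketch below is a hedged example of a zero-phase, weighted nearest-neighbor moving average applied to 2-D data; the uniform 3×3 kernel is an illustrative stand-in, not the specific convolute-integer coefficient tables derived in the paper.

```python
# Illustrative sketch: a zero-phase, weighted nearest-neighbour moving average
# applied to two-dimensional data by convolution. The uniform 3x3 kernel is a
# stand-in for the paper's convolute-integer coefficient tables.
import numpy as np
from scipy.signal import convolve2d

def smooth_2d(data, weights):
    """Apply a normalised 2-D smoothing kernel with symmetric edge handling."""
    kernel = weights / weights.sum()          # unit gain, zero phase shift
    return convolve2d(data, kernel, mode="same", boundary="symm")

rng = np.random.default_rng(0)
signal = np.outer(np.hanning(64), np.hanning(64))     # smooth 2-D "spectrum"
noisy = signal + 0.1 * rng.standard_normal((64, 64))
smoothed = smooth_2d(noisy, np.ones((3, 3)))          # low-pass filtering
```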

  10. On-line algorithms for forecasting hourly loads of an electric utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vemuri, S.; Huang, W.L.; Nelson, D.J.

    A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages in comparison with the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
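
    As a rough illustration of the identification step, the sketch below fits a finite-order autoregressive model to a simulated hourly load series by ordinary least squares and produces a one-step forecast. It is a simplified stand-in: the paper's method uses a sequential (recursive) least-squares estimator and a subsequent ARMA reduction, neither of which is reproduced here, and the load series is synthetic.

```python
# Minimal sketch: identify an AR(p) model for hourly loads by least squares
# and use it for a one-step forecast. The actual method uses a *sequential*
# least-squares estimator and then reduces the AR model to a parsimonious
# ARMA model; neither step is reproduced here. The load series is synthetic.
import numpy as np

def fit_ar(y, p):
    """Fit y[t] = c + a1*y[t-1] + ... + ap*y[t-p] by ordinary least squares."""
    rows = [y[t - p:t][::-1] for t in range(p, len(y))]
    X = np.column_stack([np.ones(len(rows)), np.asarray(rows)])
    coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coeffs                      # [c, a1, ..., ap]

def forecast_one_step(y, coeffs):
    p = len(coeffs) - 1
    return coeffs[0] + coeffs[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)             # 60 days of hourly data
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
coeffs = fit_ar(load, p=24)
print("next-hour forecast:", forecast_one_step(load, coeffs))
```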

  11. MOVES sensitivity study

    DOT National Transportation Integrated Search

    2012-01-01

    Purpose: To determine ranking of important parameters and the overall sensitivity to values of variables in MOVES; to allow a greater understanding of the MOVES modeling process for users; continued support by FHWA to transportation modeling comm...

  12. Holistic processing of static and moving faces.

    PubMed

    Zhao, Mintao; Bülthoff, Isabelle

    2017-07-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the sources of information supporting holistic face processing interact with one another, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. MOVES2014: Evaporative Emissions Report

    EPA Science Inventory

    Vehicle evaporative emissions are now modeled in EPA’s MOVES according to physical processes: permeation, tank vapor venting, liquid leaks, and refueling emissions. With this update, the following improvements are being incorporated into MOVES evaporative emissions methodology, a...

  14. Intelligent transportation systems infrastructure initiative

    DOT National Transportation Integrated Search

    1997-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  15. Comparing the performance of FA, DFA and DMA using different synthetic long-range correlated time series

    PubMed Central

    Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier

    2012-01-01

    Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
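
    A minimal sketch of one of the compared estimators, the centred detrending moving average, is given below. It is applied to an ordinary random walk (expected Hurst index near 0.5) rather than to the fractional Gaussian noise and fractional Brownian motion generators used in the paper; the function name and window choices are illustrative.

```python
# Minimal sketch of a centred detrending moving average (CDMA) estimator of
# the Hurst index. The test series is an ordinary random walk (expected
# H ~ 0.5); the paper uses dedicated fGn/fBm generators instead.
import numpy as np

def cdma_hurst(x, windows):
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for n in windows:
        kernel = np.ones(n) / n
        trend = np.convolve(profile, kernel, mode="same")   # centred moving average
        resid = profile - trend
        half = n // 2
        resid = resid[half:len(resid) - half]                # drop distorted edges
        fluctuations.append(np.sqrt(np.mean(resid ** 2)))
    slope, _ = np.polyfit(np.log(windows), np.log(fluctuations), 1)
    return slope                                             # Hurst index estimate

rng = np.random.default_rng(42)
increments = rng.standard_normal(100_000)
windows = np.unique(np.logspace(1, 3, 20).astype(int))
print("estimated H:", cdma_hurst(increments, windows))
```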

  16. CLASSICAL AREAS OF PHENOMENOLOGY: Lattice Boltzmann simulation of behaviour of particles moving in blood vessels under the rolling massage

    NASA Astrophysics Data System (ADS)

    Yi, Hou-Hui; Yang, Xiao-Feng; Wang, Cai-Feng; Li, Hua-Bing

    2009-07-01

    The rolling massage is one of the most important manipulations in Chinese massage, which is expected to eliminate many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels is studied by lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage, and that it depends on the relative rolling velocity, the rolling depth, and the distance between the particle position and the rolling position. Both particles' translational average velocities increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity for the leading particle is smaller than that of the trailing one. The result is helpful for understanding the mechanism of the massage and for further developing the rolling techniques.

  17. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
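
    The first compression stage, fitting an ARMA (IIR) filter to a measured impulse response, can be sketched with a basic implementation of Prony's method, as below. This is a hedged illustration on a synthetic impulse response; the Legendre-polynomial stage and real HRIR data are not reproduced, and the filter orders are arbitrary.

```python
# Minimal sketch of Prony's method: fit ARMA (IIR) filter coefficients to a
# finite impulse response. The test response below is synthetic, not an HRIR.
import numpy as np
from scipy.signal import lfilter

def prony(h, nb, na):
    """Return numerator b (len nb+1) and denominator a (len na+1, a[0]=1)
    such that the IIR filter b/a approximates the impulse response h."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    # Denominator from the linear-prediction equations on the tail of h
    rows = [h[n - 1:n - na - 1:-1] if n - na - 1 >= 0 else
            np.concatenate([h[n - 1::-1], np.zeros(na - n)])
            for n in range(nb + 1, N)]
    H = np.asarray(rows)
    a_tail, *_ = np.linalg.lstsq(H, -h[nb + 1:N], rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # Numerator from the first nb+1 samples: b[n] = sum_k a[k] h[n-k]
    b = np.convolve(h, a)[:nb + 1]
    return b, a

# Synthetic "measured" impulse response from a known filter
b_true, a_true = [0.5, 0.3], [1.0, -0.6, 0.2]
impulse = np.zeros(64)
impulse[0] = 1.0
h = lfilter(b_true, a_true, impulse)
b_est, a_est = prony(h, nb=1, na=2)
print(np.round(b_est, 3), np.round(a_est, 3))   # recovers the true coefficients
```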

  18. Direct determination approach for the multifractal detrending moving average analysis

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing

    2017-11-01

    In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.

  19. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of signal curves and square wave curves to determine the optimal value of the normalized peak identification parameters, combined with the absolute peak retention times and peak windows. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width, or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
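
    The basic idea of iterated moving-average smoothing followed by peak picking can be sketched as below. This is a hedged simplification on a synthetic chromatogram; the paper's normalized identification parameters, square-wave curves, and retention-time windows are not reproduced.

```python
# Simplified sketch: smooth a noisy chromatogram with repeated moving
# averages, then pick peaks. The normalised identification parameters,
# square-wave curves and retention-time windows of the paper are not shown.
import numpy as np
from scipy.signal import find_peaks

def iterated_moving_average(y, window=5, iterations=3):
    kernel = np.ones(window) / window
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode="same")
    return y

# Synthetic chromatogram: two Gaussian peaks plus noise
t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(7)
signal = np.exp(-(t - 3) ** 2 / 0.02) + 0.6 * np.exp(-(t - 6) ** 2 / 0.05)
noisy = signal + 0.05 * rng.standard_normal(t.size)

smoothed = iterated_moving_average(noisy)
peaks, _ = find_peaks(smoothed, height=0.2, distance=100)
print("peak retention times:", t[peaks])
```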

  20. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.

  1. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.

  2. Finite-size effect and the components of multifractality in transport economics volatility based on multifractal detrending moving average method

    NASA Astrophysics Data System (ADS)

    Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia

    2016-11-01

    Analysis of freight rate volatility characteristics has attracted more attention since 2008 due to the effects of the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period March 1st, 1999 to February 26th, 2015. In this paper, the degree of multifractality with different fluctuation sizes is calculated. Besides, a multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both the Capesize and Panamax freight rate index time series are of a multifractal nature. The origin of multifractality for the bulk freight rate market series is found to be mostly due to nonlinear correlation.

  3. Micromechanical Characterization of Polysilicon Films through On-Chip Tests

    PubMed Central

    Mirzazadeh, Ramin; Eftekhar Azam, Saeed; Mariani, Stefano

    2016-01-01

    When the dimensions of polycrystalline structures become comparable to the average grain size, some reliability issues can be reported for the moving parts of inertial microelectromechanical systems (MEMS). Not only the overall behavior of the device turns out to be affected by a large scattering, but also the sensitivity to imperfections gets enhanced. In this work, through on-chip tests, we experimentally investigate the behavior of thin polysilicon samples using standard electrostatic actuation/sensing. The discrepancy between the target and actual responses of each sample has then been exploited to identify: (i) the overall stiffness of the film and, according to standard continuum elasticity, a morphology-based value of its Young’s modulus; (ii) the relevant over-etch induced by the fabrication process. To properly account for the aforementioned stochastic features at the micro-scale, the identification procedure has been based on particle filtering. A simple analytical reduced-order model of the moving structure has been also developed to account for the nonlinearities in the electrical field, up to pull-in. Results are reported for a set of ten film samples of constant slenderness, and the effects of different actuation mechanisms on the identified micromechanical features are thoroughly discussed. PMID:27483268

  4. Micromechanical Characterization of Polysilicon Films through On-Chip Tests.

    PubMed

    Mirzazadeh, Ramin; Eftekhar Azam, Saeed; Mariani, Stefano

    2016-07-28

    When the dimensions of polycrystalline structures become comparable to the average grain size, some reliability issues can be reported for the moving parts of inertial microelectromechanical systems (MEMS). Not only the overall behavior of the device turns out to be affected by a large scattering, but also the sensitivity to imperfections gets enhanced. In this work, through on-chip tests, we experimentally investigate the behavior of thin polysilicon samples using standard electrostatic actuation/sensing. The discrepancy between the target and actual responses of each sample has then been exploited to identify: (i) the overall stiffness of the film and, according to standard continuum elasticity, a morphology-based value of its Young's modulus; (ii) the relevant over-etch induced by the fabrication process. To properly account for the aforementioned stochastic features at the micro-scale, the identification procedure has been based on particle filtering. A simple analytical reduced-order model of the moving structure has been also developed to account for the nonlinearities in the electrical field, up to pull-in. Results are reported for a set of ten film samples of constant slenderness, and the effects of different actuation mechanisms on the identified micromechanical features are thoroughly discussed.

  5. A simple and effective process for noise reduction of multichannel cortical field potential recordings in freely moving rats.

    PubMed

    Shaw, Fu-Zen; Yen, Chen Tung; Chen, Ruei Feng

    2003-04-15

    Simple and useful steps, i.e. placing a grounded plate under the recording chamber as well as using multiple reference electrodes, are introduced here for obtaining reliable low-noise recordings of brain activity in freely moving rats. A general circuit model was built to analyze the electrical interference of both single-grounded and two-reference ground-free recording configurations. In both simulated and realistic conditions under two recording states, 60-Hz magnitude was in the microvolt range. Moreover, the noise was significantly reduced by shortening the distance between the subject and the grounded plate under the recording chamber. Furthermore, in chronically implanted rats, average 60-Hz interference of multichannel electroencephalograms of two-reference ground-free recordings (3.74 +/- 0.18 microV) was significantly smaller than that of the single-grounded condition (9.03 +/- 1.98 microV). Thus, we demonstrated that a lower-noise recording can be achieved by a two-reference configuration and a closely-placed metal grounded plate in an open-field circumstance. As compared to the use of a Faraday cage, this simple procedure is of benefit for long-term behavioral tracking with a video camera and for pharmacological experiments.

  6. Backing up and Moving forward in Fractional Understanding

    ERIC Educational Resources Information Center

    Barlow, Angela T.; Lischka, Alyson E.; Willingham, James C.; Hartland, Kristin S.

    2017-01-01

    This article describes a process called "Backing Up" which is a way to preassess student understanding of a topic and gauge student readiness to move forward in the learning process. This process of backing up begins with using responses to a word problem to identify categories of students' understandings in relation to the expectations…

  7. Long-term follow-up after maxillary distraction osteogenesis in growing children with cleft lip and palate.

    PubMed

    Huang, Chiung-Shing; Harikrishnan, Pandurangan; Liao, Yu-Fang; Ko, Ellen W C; Liou, Eric J W; Chen, Philip K T

    2007-05-01

    To evaluate the changes in maxillary position after maxillary distraction osteogenesis in six growing children with cleft lip and palate. Retrospective, longitudinal study on maxillary changes at A point, anterior nasal spine, posterior nasal spine, central incisor, and first molar. The University Hospital Craniofacial Center. Cephalometric radiographs were used to measure the maxillary position immediately after distraction, at 6 months, and more than 1 year after distraction. After maxillary distraction with a rigid external distraction device, the maxilla (A point) on average moved forward 9.7 mm and downward 3.5 mm immediately after distraction, moved backward 0.9 mm and upward 2.0 mm by 6 months postoperatively, and then moved further backward 2.3 mm and downward 6.8 mm after more than 1 year, relative to the predistraction position. In most cases, the maxilla moved forward at distraction and started to move backward until 1 year after distraction, but remained forward of the predistraction position. The maxilla also moved downward during distraction and upward within 6 months, but started descending again by 1 year. There also was no further forward growth of the maxilla after distraction in growing children with clefts.

  8. Modelling and simulation of a moving interface problem: freeze drying of black tea extract

    NASA Astrophysics Data System (ADS)

    Aydin, Ebubekir Sıddık; Yucel, Ozgun; Sadikoglu, Hasan

    2017-06-01

    The moving interface separates the material subjected to the freeze-drying process into dried and frozen regions. Therefore, accurately modeling the moving interface reduces the process time and energy consumption by improving the heat and mass transfer predictions during the process. To describe the dynamic behavior of the drying stages of freeze-drying, a case study of brewed black tea extract in storage trays, including the moving interface, was modeled; the heat and mass transfer equations were solved using an orthogonal collocation method based on Jacobi polynomial approximation. Transport parameters and physical properties describing the freeze-drying of black tea extract were evaluated by fitting the experimental data using the Levenberg-Marquardt algorithm. Experimental results showed good agreement with the theoretical predictions.

  9. Spatial patterns of erosion in a bedrock gorge

    NASA Astrophysics Data System (ADS)

    Beer, Alexander. R.; Turowski, Jens M.; Kirchner, James W.

    2017-01-01

    Understanding the physical processes driving bedrock channel formation is essential for interpreting and predicting the evolution of mountain landscapes. Here we analyze bedrock erosion patterns measured at unprecedented spatial resolution (mm) over 2 years in a natural bedrock gorge. These spatial patterns show that local bedrock erosion rates depend on position in the channel cross section, height above the streambed, and orientation relative to the main streamflow and sediment path. These observations are consistent with the expected spatial distribution of impacting particles (the tools effect) and shielding by sediment on the bed (the cover effect). Vertical incision by bedrock abrasion averaged 1.5 mm/a, lateral abrasion averaged 0.4 mm/a, and downstream directed abrasion of flow obstacles averaged 2.6 mm/a. However, a single plucking event locally exceeded these rates by orders of magnitude (˜100 mm/a), and accounted for one third of the eroded volume in the studied gorge section over the 2 year study period. Hence, if plucking is spatially more frequent than we observed in this study period, it may contribute substantially to long-term erosion rates, even in the relatively massive bedrock at our study site. Our observations demonstrate the importance of bedrock channel morphology and the spatial distribution of moving and static sediment in determining local erosion rates.

  10. Proceedings of the Annual Conference on Manual Control (18th) Held at Dayton, Ohio on 8-10 June 1982

    DTIC Science & Technology

    1983-01-01

    frequency of the disturbance the probability to cross the borderline becomes larger, and corrective action (moving average value further away from the... pupillometer. The prototypical data was the average of 10 records from 5 normal subjects who showed similar responses. The different amplitudes of light... following orders touch, position, temperature, and pain. Our subjects sometimes reported numbness in the fingertips, dulled pinprick sensations

  11. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, patterns, and sizes, and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hourglass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns, and sizes activated multiple brain areas, mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to the static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus as compared to moving stimuli. Visual stimulation of various shapes, patterns, and sizes used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in the processing of the static visual stimulus.

  12. Generalized seasonal autoregressive integrated moving average models for count data with application to malaria time series with low case numbers.

    PubMed

    Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope

    2013-01-01

    With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
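
    For context, the Gaussian seasonal ARIMA models that the authors argue are inadequate at low counts can be fitted with standard libraries, as in the hedged sketch below; the GSARIMA models themselves (negative-binomial, observation-driven) are not available there, and the monthly counts used are simulated rather than the Sri Lankan malaria series.

```python
# For contrast only: fit a Gaussian seasonal ARIMA of the kind the paper
# argues is inadequate when case counts are low. The GSARIMA models of the
# paper are not reproduced; the monthly counts below are simulated.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
months = 120
seasonal = 5 + 4 * np.sin(2 * np.pi * np.arange(months) / 12)
cases = rng.poisson(np.maximum(seasonal, 0.5))      # low, seasonal counts

model = SARIMAX(cases, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))                       # next-year forecast
```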

  13. Video-Assisted Thoracic Surgical Lobectomy for Lung Cancer: Description of a Learning Curve.

    PubMed

    Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Cao, Shiqi; Cao, Yongke

    2017-07-01

    Video-assisted thoracic surgical (VATS) lobectomy is gaining popularity in the treatment of lung cancer. The aim of this study is to investigate the learning curve of VATS lobectomy by using multidimensional methods and to compare the learning curve groups with respect to perioperative clinical outcomes. We retrospectively reviewed a prospective database to identify 67 consecutive patients who underwent VATS lobectomy for lung cancer by a single surgeon. The learning curve was analyzed by using the moving average and cumulative sum (CUSUM) methods. With the moving average and CUSUM analyses for the operation time, patients were stratified into two groups, with chronological order defining early and late experiences. Perioperative clinical outcomes were compared between the two learning curve groups. According to the moving average method, the peak point for operation time occurred at the 26th case. The CUSUM method also showed the operation time peak point at the 26th case. When results were compared between early- and late-experience periods, the operation time, duration of chest drainage, and postoperative hospital stay were significantly longer in the early-experience group (cases 1 to 26). The intraoperative estimated blood loss was significantly less in the late-experience group (cases 27 to 67). CUSUM charts showed a decreasing duration of chest drainage after the 36th case and a shortening postoperative hospital stay after the 37th case. Multidimensional statistical analyses suggested that the learning curve for VATS lobectomy for lung cancer required ∼26 cases. Favorable intraoperative and postoperative care parameters for VATS lobectomy were observed in the late-experience group.
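
    The two learning-curve statistics used here, a moving average of operation time and a CUSUM about the overall mean, are straightforward to compute, as in the minimal sketch below; the operation times shown are hypothetical, not the reported 67-case series.

```python
# Minimal sketch of the two learning-curve statistics: a moving average of
# operation time and a CUSUM about the overall mean. The operation times
# below are hypothetical, not the study's 67-case series.
import numpy as np

def moving_average(x, window=5):
    return np.convolve(x, np.ones(window) / window, mode="valid")

def cusum(x):
    return np.cumsum(x - np.mean(x))

rng = np.random.default_rng(11)
# times that drift downward as experience accumulates, plus noise
op_times = 240 - 1.2 * np.arange(67) + rng.normal(0, 20, 67)

ma = moving_average(op_times)
cs = cusum(op_times)
print("moving average for the latest 5 cases:", round(ma[-1], 1))
print("case at CUSUM peak (learning-phase boundary):", int(np.argmax(cs)) + 1)
```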

  14. Learning curves for single incision and conventional laparoscopic right hemicolectomy: a multidimensional analysis

    PubMed Central

    Park, Yoonah; Yong, Yuen Geng; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung

    2015-01-01

    Purpose This study aimed to compare the learning curves and early postoperative outcomes for conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). Methods This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Results Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, learning phase of SIL and CL RHC was completed between 26 and 30 cases, and 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase, respectively. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance, respectively. Postoperative complications rate was higher in SIL than in CL group, but the difference was not statistically significant (17.1% vs. 3.4%). Conclusion The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of both techniques were comparable. However, SIL RHC had a statistically insignificant higher complication rate than CL RHC during the learning phase. PMID:25960990

  15. Generalized Seasonal Autoregressive Integrated Moving Average Models for Count Data with Application to Malaria Time Series with Low Case Numbers

    PubMed Central

    Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

    2013-01-01

    Introduction: With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods: Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results: The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions: G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448

  16. The influence of plasma flows bringing the magnetotail back to a more symmetric configuration

    NASA Astrophysics Data System (ADS)

    Reistad, J. P.; Østgaard, N.; Laundal, K.; Tenfjord, P.; Snekvik, K.; Haaland, S.; Milan, S. E.; Ohma, A.; Grocott, A.; Oksavik, K.

    2017-12-01

    Research from the past decades, most importantly conjugate studies, has shown extensive evidence of the Earth's closed magnetotail being highly displaced from the quiet-day configuration in response to the IMF interacting with the magnetosphere. By displaced we here refer to the mapping of magnetic field lines from one hemisphere to the other. The large-scale ionospheric convection related to such displaced closed field lines has also been studied, showing that the footprint in one hemisphere tends to move faster to reduce the displacement, a process we refer to as the restoring of symmetry. Although the appearance and occurrence of the plasma flows related to the restoring of symmetry have been shown to be under strong Interplanetary Magnetic Field (IMF) control, their dynamics and relation to internal magnetospheric processes are unknown. Using multiple years of line-of-sight measurements of the ionospheric plasma convection from the Super Dual Auroral Radar Network, binned by IMF, season, and SML index, we have found that the restoring-symmetry flows dominate the average convection pattern in the nightside ionosphere during low levels of magnetotail activity, as quantified by the SML index. For increasing magnetotail activity, signatures of the restoring-symmetry process become less and less pronounced in the global average convection maps. This effect is seen for all clock angles away from IMF By = 0. These results are relevant for better understanding the dynamic evolution of flux tubes in the asymmetric magnetosphere.

  17. Circular SAR GMTI

    NASA Astrophysics Data System (ADS)

    Page, Douglas; Owirka, Gregory; Nichols, Howard; Scarborough, Steven

    2014-06-01

    We describe techniques for improving ground moving target indication (GMTI) performance in multi-channel synthetic aperture radar (SAR) systems. Our approach employs a combination of moving reference processing (MRP) to compensate for defocus of moving-target SAR responses and space-time adaptive processing (STAP) to mitigate the effects of strong clutter interference. Using simulated moving target and clutter returns, we demonstrate focusing of the target return using MRP and discuss the effect of MRP on the clutter response. We also describe the formation of adaptive degrees of freedom (DOFs) for STAP filtering of MRP-processed data. For the simulated moving-target-in-clutter example, we demonstrate improvement in the signal-to-interference-plus-noise ratio (SINR) loss compared to more standard algorithm configurations. In addition to MRP and STAP, the use of tracker feedback, false alarm mitigation, and parameter estimation techniques is also described. A change detection approach for reducing false alarms from clutter discretes is outlined, and processing of a measured-data coherent processing interval (CPI) from a continuously orbiting platform is described. The results demonstrate detection and geolocation of a high-value target under track. The endo-clutter target is not clearly visible in single-channel SAR chips centered on the GMTI track prediction. Detections are compared to truth data before and after geolocation using measured angle of arrival (AOA).

  18. Estimating Perturbation and Meta-Stability in the Daily Attendance Rates of Six Small High Schools

    NASA Astrophysics Data System (ADS)

    Koopmans, Matthijs

    This paper discusses the daily attendance rates in six small high schools over a ten-year period and evaluates how stable those rates are. “Stability” is approached from two vantage points: pulse models are fitted to estimate the impact of sudden perturbations and their reverberation through the series, and Autoregressive Fractionally Integrated Moving Average (ARFIMA) techniques are used to detect dependencies over the long range of the series. The analyses are meant to (1) exemplify the utility of time series approaches in educational research, which lacks a time series tradition, (2) discuss some time series features that seem to be particular to daily attendance rate trajectories such as the distinct downward pull coming from extreme observations, and (3) present an analytical approach to handle the important yet distinct patterns of variability that can be found in these data. The analysis also illustrates why the assumption of stability that underlies the habitual reporting of weekly, monthly and yearly averages in the educational literature is questionable, as it reveals dynamical processes (perturbation, meta-stability) that remain hidden in such summaries.
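
    The "fractionally integrated" part of the ARFIMA models used here amounts to applying the binomial expansion of (1 - B)^d to the series. The sketch below illustrates that filter for an arbitrary d on an illustrative attendance-rate series; the value of d and the series are assumptions for demonstration only, and the AR, MA, and pulse-model components are omitted.

```python
# Sketch of fractional differencing, the "FI" part of ARFIMA: apply the
# truncated binomial expansion of (1 - B)^d to a series. The value of d and
# the attendance series below are illustrative, not estimates from the study.
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Apply the truncated expansion of (1 - B)^d to x."""
    w = np.ones(n_weights)
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k      # pi_k coefficients of (1 - B)^d
    out = np.empty(len(x))
    for t in range(len(x)):
        m = min(t + 1, n_weights)
        out[t] = w[:m] @ x[t::-1][:m]          # sum_k pi_k * x[t-k]
    return out

rng = np.random.default_rng(13)
attendance = np.clip(0.9 + 0.05 * rng.standard_normal(500), 0, 1)
filtered = frac_diff(attendance, d=0.3)
print(filtered[:5])
```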

  19. Girls Thrive Emotionally, Boys Falter After Move to Better Neighborhood

    MedlinePlus

    ... averaging 34 percent, compared to 50 percent for control group families. Mental illness is more prevalent among youth ... compared to 3.5 percent among boys in control group families who did not receive vouchers. Rates of ...

  20. Rippling Dune Front in Herschel Crater on Mars

    NASA Image and Video Library

    2011-11-17

    A rippled dune front in Herschel Crater on Mars moved an average of about two meters (about two yards) between March 3, 2007 and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.

  1. Rippling Dune Front in Herschel Crater on Mars

    NASA Image and Video Library

    2011-11-17

    A rippled dune front in Herschel Crater on Mars moved an average of about one meter (about one yard) between March 3, 2007 and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.

  2. Shifting Sand in Herschel Crater

    NASA Image and Video Library

    2011-11-17

    The eastern margin of a rippled dune in Herschel Crater on Mars moved an average distance of three meters (about three yards) between March 3, 2007 and December 1, 2010, in one of two images taken by NASA's Mars Reconnaissance Orbiter.

  3. A case study on pseudo 3-D Chirp sub-bottom profiler (SBP) survey for the detection of a fault trace in shallow sedimentary layers at gas hydrate site in the Ulleung Basin, East Sea

    NASA Astrophysics Data System (ADS)

    Kim, Young-Jun; Koo, Nam-Hyung; Cheong, Snons; Kim, Jung-Ki; Chun, Jong-Hwa; Shin, Sung-Ryul; Riedel, Michael; Lee, Ho-Young

    2016-10-01

    A pseudo 3-D Chirp sub-bottom profiler (SBP) survey was conducted to define the extension of a fault that was previously identified on low-resolution 2-D seismic data with an emphasis on the shallow sedimentary layers and to determine if the fault extends to the seafloor. The geophysical survey was conducted as part of an environmental impact assessment for a proposed gas hydrate production test in the Ulleung Basin, East Sea. The Chirp SBP raw data were acquired over an area of 1 km × 1 km with an average line spacing of 20 m. To produce a 3-D Chirp SBP volume, we developed an optimal processing sequence that was divided into two steps. The first phase of 2-D data processing included a sweep signature estimation, correlation, deconvolution, swell effect correction, and migration. The second phase of 3-D data processing was composed of a bin design, bin gathering of the final processed 2-D data set, amplitude normalization, and residual statics correction. The 3-D Chirp SBP volume provides enhanced imaging especially due to the residual static processing using a moving average method and shows better continuity of the sedimentary layers and consistency of the reflection events than the individual 2-D lines. Deformation of the seafloor as a result of the fault was detected, and the fault offset increases in the deeper sedimentary layers. We also determined that the fault strikes northwest-southeast. However, the shallow sub-seafloor sediments have high porosities and therefore do not exhibit brittle fault-behavior but rather deform continuously due to fault movement.

  4. Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data

    PubMed Central

    Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha

    2016-01-01

    Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where currently bed allocation is carried out by a manager relying on past experiences and looking at demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid in efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. While the autoregressive integrated moving average model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges in estimating next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
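
    The comparison can be sketched in spirit with simulated data, as below: a 7-day moving-average baseline against a random forest. The features used here (day of week plus recent lags) are a toy stand-in for the paper's 20 patient-level and 88 ward-level features, and the discharge counts are simulated.

```python
# Sketch in spirit only: a moving-average baseline versus a random forest on
# simulated daily discharge counts. The features (day of week plus recent
# lags) stand in for the paper's patient-level and ward-level predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
days = 1826
dow = np.arange(days) % 7
discharges = rng.poisson(12 + 4 * (dow < 5))         # weekday/weekend pattern

lags = 7
X = np.column_stack([np.roll(discharges, k) for k in range(1, lags + 1)] + [dow])
X, y = X[lags:], discharges[lags:]
split = len(y) - 365                                  # hold out the last year

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
rf_mae = mean_absolute_error(y[split:], rf.predict(X[split:]))

baseline = X[split:, :lags].mean(axis=1)              # 7-day moving average
base_mae = mean_absolute_error(y[split:], baseline)
print(f"moving average MAE {base_mae:.2f}  vs  random forest MAE {rf_mae:.2f}")
```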

  5. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object-information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object-information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes the pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described to detect moving objects of a specific color and shape; experiments indicate that such objects can be detected even against an interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is established. In this system, the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing, the video information can be captured, processed, and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can watch a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulative experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model is fully demonstrated in the engineering system, and it lays a solid foundation for future study of detection technology imitating biological vision.

  6. Computer simulation of concentrated solid solution strengthening

    NASA Technical Reports Server (NTRS)

    Kuo, C. T. K.; Arsenault, R. J.

    1976-01-01

    The interaction forces between a straight edge dislocation and the solute atoms it encounters while moving through a three-dimensional block containing a random array of solute atoms were determined. The yield stress at 0 K was obtained by determining the average maximum solute-dislocation interaction force encountered by the edge dislocation, and an expression relating the yield stress to the length of the dislocation and the solute concentration is provided. The magnitude of the solid solution strengthening due to solute atoms can be determined directly from the numerical results, provided the dislocation line length that moves as a unit is specified.

  7. Methylmercury production in and export from agricultural wetlands in California, USA: the need to account for physical transport processes into and out of the root zone

    USGS Publications Warehouse

    Bachand, Philip A.M.; Bachand, Sandra M.; Fleck, Jacob A.; Alpers, Charles N.; Stephenson, Mark; Windham-Myers, Lisamarie

    2014-01-01

    Concentration and mass balance analyses were used to quantify methylmercury (MeHg) loads from conventional (white) rice, wild rice, and fallowed fields in northern California's Yolo Bypass. These analyses were standardized against chloride to distinguish transport pathways and net ecosystem production (NEP). During summer, chloride loads were both exported with surface water and moved into the root zone at a 2:1 ratio. MeHg and dissolved organic carbon (DOC) behaved similarly with surface water and root zone exports at ~ 3:1 ratio. These trends reversed in winter with DOC, MeHg, and chloride moving from the root zone to surface waters at rates opposite and exceeding summertime root zone fluxes. These trends suggest that summer transpiration advectively moves constituents from surface water into the root zone, and winter diffusion, driven by concentration gradients, subsequently releases those constituents into surface waters. The results challenge a number of paradigms regarding MeHg. Specifically, biogeochemical conditions favoring microbial MeHg production do not necessarily translate to synchronous surface water exports; MeHg may be preserved in the soils allowing for release at a later time; and plants play a role in both biogeochemistry and transport. Our calculations show that NEP of MeHg occurred during both summer irrigation and winter flooding. Wild rice wet harvesting and winter flooding of white rice fields were specific practices that increased MeHg export, both presumably related to increased labile organic carbon and disturbance. Outflow management during these times could reduce MeHg exports. Standardizing MeHg outflow:inflow concentration ratios against natural tracers (e.g. chloride, EC) provides a simple tool to identify NEP periods. Summer MeHg exports averaged 0.2 to 1 μg m⁻² for the different agricultural wetland fields, depending upon flood duration. Average winter MeHg exports were estimated at 0.3 μg m⁻². These exports are within the range reported for other shallow aquatic systems.

  8. Methylmercury production in and export from agricultural wetlands in California, USA: the need to account for physical transport processes into and out of the root zone.

    PubMed

    Bachand, P A M; Bachand, S M; Fleck, J A; Alpers, C N; Stephenson, M; Windham-Myers, L

    2014-02-15

    Concentration and mass balance analyses were used to quantify methylmercury (MeHg) loads from conventional (white) rice, wild rice, and fallowed fields in northern California's Yolo Bypass. These analyses were standardized against chloride to distinguish transport pathways and net ecosystem production (NEP). During summer, chloride loads were both exported with surface water and moved into the root zone at a 2:1 ratio. MeHg and dissolved organic carbon (DOC) behaved similarly with surface water and root zone exports at ~3:1 ratio. These trends reversed in winter with DOC, MeHg, and chloride moving from the root zone to surface waters at rates opposite and exceeding summertime root zone fluxes. These trends suggest that summer transpiration advectively moves constituents from surface water into the root zone, and winter diffusion, driven by concentration gradients, subsequently releases those constituents into surface waters. The results challenge a number of paradigms regarding MeHg. Specifically, biogeochemical conditions favoring microbial MeHg production do not necessarily translate to synchronous surface water exports; MeHg may be preserved in the soils allowing for release at a later time; and plants play a role in both biogeochemistry and transport. Our calculations show that NEP of MeHg occurred during both summer irrigation and winter flooding. Wild rice wet harvesting and winter flooding of white rice fields were specific practices that increased MeHg export, both presumably related to increased labile organic carbon and disturbance. Outflow management during these times could reduce MeHg exports. Standardizing MeHg outflow:inflow concentration ratios against natural tracers (e.g. chloride, EC) provides a simple tool to identify NEP periods. Summer MeHg exports averaged 0.2 to 1 μg m(-2) for the different agricultural wetland fields, depending upon flood duration. Average winter MeHg exports were estimated at 0.3 μg m(-2). These exports are within the range reported for other shallow aquatic systems.

  9. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of those that are remain limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The evaluation uses two different sets of cameras and two different scenes. The methods were implemented in MATLAB, and the results are compared on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
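    As a concrete illustration of the first of the four methods compared above, the following is a minimal sketch (in Python with OpenCV, not the paper's MATLAB code) of background-subtraction-based moving object detection with per-frame timing; the video file name, running-average rate, and threshold values are illustrative assumptions.

    ```python
    # Minimal background-subtraction sketch with per-frame timing (OpenCV).
    # Input file and parameters are illustrative, not from the paper.
    import time
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("traffic.avi")      # hypothetical test video
    background = None
    frame_times = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t0 = time.perf_counter()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
        cv2.accumulateWeighted(gray, background, 0.05)   # running-average background model
        diff = cv2.absdiff(gray, background)             # deviation from background
        _, mask = cv2.threshold(diff.astype(np.uint8), 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 300]
        frame_times.append(time.perf_counter() - t0)
    cap.release()
    if frame_times:
        print(f"mean processing time per frame: {1e3 * sum(frame_times) / len(frame_times):.1f} ms")
    ```

    Swapping cv2.createBackgroundSubtractorMOG2 (a Gaussian mixture model) or cv2.calcOpticalFlowFarneback (dense optical flow) into the same timed loop gives the kind of processing-time comparison described above.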

  10. Multiscale Granger causality

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele

    2017-10-01

    In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
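    The state-space multiscale estimator itself is not reproduced here; as a baseline, the following sketch (Python, statsmodels) estimates ordinary single-scale Granger causality on a simulated bivariate AR(1) process, the kind of starting point the multiscale extension builds on. The coefficients are illustrative assumptions.

    ```python
    # Single-scale Granger causality on a simulated bivariate AR(1) process
    # (statsmodels). Coefficients are illustrative.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 2000
    x = np.zeros(n)
    y = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.7 * x[t - 1] + rng.normal()
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # x drives y at lag 1

    # Test whether the second column (x) Granger-causes the first column (y).
    results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
    ```

    Applying a moving-average filter and downsampling to such a process before fitting a pure AR model introduces exactly the MA component discussed above, which is why the paper moves to an ARMA/state-space representation for time scales greater than one.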

  11. Managing the construction bidding process : a move to simpler construction plan sets

    DOT National Transportation Integrated Search

    2001-01-31

    This project was conducted to determine whether construction plan sets could be significantly simplified to speed the process of moving projects to construction. The work steps included a literature review, a telephone survey of highway agencies in s...

  12. Modern Sorters for Soil Segregation on Large Scale Remediation Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shonka, J.J.; Kelley, J.E.; O'Brien, J.M.

    2008-01-15

    In the mid-1940s, Dr. C. Lapointe developed a Geiger-tube-based uranium ore scanner and picker to replace hand-cobbing. In the 1990s, a modern version of the Lapointe Picker for soil sorting was developed around the need to clean the Johnston Atoll of plutonium. It worked well with sand, but these systems are ineffective with soil, especially under wet conditions. Additionally, several other constraints limited throughput. Slow-moving belts and thin layers of material on the belt, coupled with the use of multiple small detectors and small sorting gates, make these systems ineffective for high throughput. Soil sorting of clay-bearing soils and building debris requires a new look at both the material handling equipment and the radiation detection methodology. A new class of Super-Sorters has attained throughput one hundred times that of the old designs. Higher throughput means shorter schedules, which reduces costs substantially. The planning, cost, implementation, and other site considerations for these new Super-Sorters are discussed. Modern soil segregation was developed by Ed Bramlitt of the Defense Nuclear Agency for cleanup at Johnston Atoll. The process eventually became the Segmented Gate System (SGS). This system uses an array of small sodium iodide (NaI) detectors, each viewing a small volume (segment) and controlling a gate. The volume in the gate is approximately one kg. This system works well when the material to be processed is sand; however, when the material is wet and sticky (soils with clays), the system has difficulty moving the material through the gates. Super-Sorters are a new class of machine designed to take advantage of high-throughput aggregate processing conveyors, large acquisition volumes, and large NaI detectors using gamma spectroscopy. By using commercially available material handling equipment, the system can attain processing rates of up to 400 metric tons/hr with spectrum acquisition approximately every 0.5 sec, so the acquisition volume is 50 kilograms or less. Smaller sorting volumes can be obtained with lower throughput or by re-sorting the diverted material. This equipment can also handle large objects. The use of spectroscopy systems allows several regions of interest to be set. Super-Sorters can bring waste processing charges down to less than $30/metric ton on smaller jobs and can save hundreds of dollars per metric ton in disposal charges. The largest effect on the overall project cost occurs during planning and implementation. The overall goal is reduction of the length of the project, which dictates the most efficient soil processing. With all sorting systems, the parameters that need to be accounted for are matrix type, soil feed rate, soil pre-processing, site conditions, and regulatory issues. The soil matrix and its ability to flow are extremely crucial to operations. It is also important to consider that as conditions change (e.g., moisture), the flowability of the soil matrix will change. Many soil parameters have to be considered: cohesive strength, internal and wall friction, permeability, and bulk density as a function of consolidating pressure. Clay-bearing soils have very low permeability and high cohesive strength, which makes them difficult to process, especially when wet. Soil feed speed depends on the equipment present and the ability to move the soil in the Super-Sorter processing area. When a Super-Sorter is running at 400 metric tons per hour, it is difficult to feed the system.
As an example, front-end loaders with large buckets move approximately 5-10 metric tons of material, so 400 metric tons per hour would require 50-100 bucket-loads per hour. Because the flowability of the soil matrix is important, poor material is often pre-processed before it is added to the feed hopper of the 'survey' conveyor. This pre-processing can consist of a 'grizzly' to remove large objects from the soil matrix, followed by a screening plant to prepare the soil so that it feeds well. Hydrated lime can be added to improve material properties. Site conditions (site area, typical weather conditions, etc.) also play a large part in project planning. Downtime lengthens the project schedule and increases costs. The system must be configured to handle weather conditions or other variables that affect throughput. The largest single factor that plays into the project design is the regulatory environment. Before a sorter can be utilized, an averaging mass must be established by the regulator(s). There currently are no standards or guidelines in this area. The differences between acquisition mass and averaging mass are very important. The acquisition mass is defined based on the acquisition time and the geometry of the detectors. The averaging mass can then be as small as the acquisition mass or as large as several hundred tons (the averaging mass is simply the sum of a number of acquisitions). It is important to define volumetric limits and any required point-source limits; Super-Sorters handle both of these types of limits simultaneously. The minimum detectable activity for Super-Sorters is a function of speed. The detection confidence level for a 0.1 μCi point source of Ra-226 versus the alarm point was characterized for three different sorter process rates. The minimum detectable activity and diversion volume for a Super-Sorter are also functions of the acquisition mass. The curves were collected using a 0-15 kg acquisition mass. Diversion volumes ranged from 20-30 kg for a point-source diversion. Soil Super-Sorters should be considered for every D&D project where it is desirable to reduce the waste stream. A volume reduction of 1:1000 can be gained for each pass through a modern sorter, resulting in significant savings in disposal costs.

  13. Certified safe farm: identifying and removing hazards on the farm.

    PubMed

    Rautiainen, R H; Grafft, L J; Kline, A K; Madsen, M D; Lange, J L; Donham, K J

    2010-04-01

    This article describes the development of the Certified Safe Farm (CSF) on-farm safety review tools, characterizes the safety improvements among participating farms during the study period, and evaluates differences in background variables between low and high scoring farms. Average farm review scores on 185 study farms improved from 82 to 96 during the five-year study (0-100 scale, 85 required for CSF certification). A total of 1292 safety improvements were reported at an estimated cost of $650 per farm. A wide range of improvements were made, including adding 9 rollover protective structures (ROPS), 59 power take-off (PTO) master shields, and 207 slow-moving vehicle (SMV) emblems; improving lighting on 72 machines; placing 171 warning decals on machinery; shielding 77 moving parts; locking up 17 chemical storage areas; adding 83 lockout/tagout improvements; and making general housekeeping upgrades in 62 farm buildings. The local, trained farm reviewers and the CSF review process overall were well received by participating farmers. In addition to our earlier findings where higher farm review scores were associated with lower self-reported health outcome costs, we found that those with higher farm work hours, younger age, pork production in confinement, beef production, poultry production, and reported exposure to agrichemicals had higher farm review scores than those who did not have these characteristics. Overall, the farm review process functioned as expected, encouraging physical improvements in the farm environment and contributing to the multi-faceted CSF intervention program.

  14. Underestimating the effects of spatial heterogeneity due to individual movement and spatial scale: infectious disease as an example

    USGS Publications Warehouse

    Cross, Paul C.; Caillaud, Damien; Heisey, Dennis M.

    2013-01-01

    Many ecological and epidemiological studies occur in systems with mobile individuals and heterogeneous landscapes. Using a simulation model, we show that the accuracy of inferring an underlying biological process from observational data depends on movement and spatial scale of the analysis. As an example, we focused on estimating the relationship between host density and pathogen transmission. Observational data can result in highly biased inference about the underlying process when individuals move among sampling areas. Even without sampling error, the effect of host density on disease transmission is underestimated by approximately 50 % when one in ten hosts move among sampling areas per lifetime. Aggregating data across larger regions causes minimal bias when host movement is low, and results in less biased inference when movement rates are high. However, increasing data aggregation reduces the observed spatial variation, which would lead to the misperception that a spatially targeted control effort may not be very effective. In addition, averaging over the local heterogeneity will result in underestimating the importance of spatial covariates. Minimizing the bias due to movement is not just about choosing the best spatial scale for analysis, but also about reducing the error associated with using the sampling location as a proxy for an individual’s spatial history. This error associated with the exposure covariate can be reduced by choosing sampling regions with less movement, including longitudinal information of individuals’ movements, or reducing the window of exposure by using repeated sampling or younger individuals.

  15. Fluvial sediments a summary of source, transportation, deposition, and measurement of sediment discharge

    USGS Publications Warehouse

    Colby, B.R.

    1963-01-01

    This paper presents a broad but undetailed picture of fluvial sediments in streams, reservoirs, and lakes and includes a discussion of the processes involved in the movement of sediment by flowing water. Sediment is fragmental material that originates from the chemical or physical disintegration of rocks. The disintegration products may have many different shapes and may range in size from large boulders to colloidal particles. In general, they retain about the same mineral composition as the parent rocks. Rock fragments become fluvial sediment when they are entrained in a stream of water. The entrainment may occur as sheet erosion from land surfaces, particularly for the fine particles, or as channel erosion after the surface runoff has accumulated in streams. Fluvial sediments move in streams as bedload (particles moving within a few particle diameters of the streambed) or as suspended sediment in the turbulent flow. The discharge of bedload varies with several factors, which may include particle size and a type of effective shear on the surface of the streambed. The discharge of suspended sediment depends partly on concentration of moving sediment near the streambed and hence on discharge of bedload. However, the concentration of fine sediment near the streambed varies widely, even for equal flows, and, therefore, the discharge of fine sediment normally cannot be computed theoretically. The discharge of suspended sediment also depends on velocity, turbulence, depth of flow, and fall velocity of the particles. In general, the coarse sediment transported by a stream moves intermittently and is discharged at a rate that depends on properties of the flow and of the sediment. If an ample supply of coarse sediment is available at the surface of the streambed, the discharge of the coarse sediment, such as sand, can be roughly computed from properties of the available sediment and of the flow. On the other hand, much of the fine sediment in a stream usually moves nearly continuously at about the velocity of the flow, and even low flows can transport large amounts of fine sediment. Hence, the discharge of fine sediments, being largely dependent on the availability of fine sediment upstream rather than on the properties of the sediment and of the flow at a cross section, can seldom be computed from properties, other than concentrations based directly on samples, that can be observed at the cross section. Sediment particles continually change their positions in the flow; some fall to the streambed, and others are removed from the bed. Sediment deposits form locally or over large areas if the volume rate at which particles settle to the bed exceeds the volume rate at which particles are removed from the bed. In general, large particles are deposited more readily than small particles, whether the point of deposition is behind a rock, on a flood plain, within a stream channel, or at the entrance to a reservoir, a lake, or the ocean. Most samplers used for sediment observations collect a water-sediment mixture from the water surface to within a few tenths of a foot of the streambed. They thus sample most of the suspended sediment, especially if the flow is deep or if the sediment is mostly fine; but they exclude the bedload and some of the suspended sediment in a layer near the streambed where the suspended-sediment concentrations are highest. Measured sediment discharges are usually based on concentrations that are averages of several individual sediment samples for a cross section. 
If enough average concentrations for a cross section have been determined, the measured sediment discharge can be computed by interpolating sediment concentrations between sampling times. If only occasional samples were collected, an average relation between sediment discharge and flow can be used with a flow-duration curve to compute roughly the average or the total sediment discharges for any periods of time for which the flow-duration curve applies.

  16. Weather explains high annual variation in butterfly dispersal

    PubMed Central

    Rytteri, Susu; Heikkinen, Risto K.; Heliölä, Janne; von Bagh, Peter

    2016-01-01

    Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation of dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark–release–recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation in both dispersal and average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and, together with adjusting variables, explained 79–91% of the variation observed in dispersal. Different weather variables were selected in the models explaining variation in the three dispersal measures, apparently because of the notable intercorrelations among them. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model annually varying dispersal dynamics of species affected by global warming. PMID:27440662

  17. Non-universal tracer diffusion in crowded media of non-inert obstacles.

    PubMed

    Ghosh, Surya K; Cherstvy, Andrey G; Metzler, Ralf

    2015-01-21

    We study the diffusion of a tracer particle, which moves in continuum space through a lattice of excluded-volume, immobile, non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of partitioning of the tracer diffusion modes between trapping states when bound to obstacles and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding triggers a transient anomalous diffusion. From a very narrow spread of recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
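    The ensemble- versus time-averaged comparison above rests on the standard definition of the time-averaged mean squared displacement (TAMSD); the following short sketch (Python/NumPy, not the authors' code) computes it for a single recorded trajectory, with ordinary Brownian motion as toy input.

    ```python
    # Time-averaged MSD of a single trajectory x(t) (standard definition):
    # TAMSD(lag) = mean over t of |x(t + lag) - x(t)|^2.
    import numpy as np

    def time_averaged_msd(traj, max_lag):
        """traj: (T, d) array of positions; returns lags 1..max_lag and the TAMSD."""
        lags = np.arange(1, max_lag + 1)
        tamsd = np.empty(len(lags))
        for i, lag in enumerate(lags):
            disp = traj[lag:] - traj[:-lag]
            tamsd[i] = np.mean(np.sum(disp**2, axis=1))
        return lags, tamsd

    # Example: 2D Brownian motion; the TAMSD should grow roughly as 4*D*lag.
    rng = np.random.default_rng(1)
    D, dt, steps = 0.5, 1.0, 10_000
    traj = np.cumsum(rng.normal(scale=np.sqrt(2 * D * dt), size=(steps, 2)), axis=0)
    lags, tamsd = time_averaged_msd(traj, 100)
    ```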

  18. VARIATIONS IN PEAK EXPIRATORY FLOW MEASUREMENTS ASSOCIATED TO AIR POLLUTION AND ALLERGIC SENSITIZATION IN CHILDREN IN SAO PAULO, BRAZIL

    PubMed Central

    de M Correia-Deur, Joya Emilie; Claudio, Luz; Imazawa, Alice Takimoto; Eluf-Neto, Jose

    2012-01-01

    Background: In the last 20 years, there has been an increase in the incidence of allergic respiratory diseases worldwide, and exposure to air pollution has been discussed as one of the factors associated with this increase. The objective of this study was to investigate the effects of air pollution on peak expiratory flow (PEF) and FEV1 in children with and without allergic sensitization. Methods: Ninety-six children were followed from April to July 2004 with spirometry measurements. They were tested for allergic sensitization (IgE, skin prick test, eosinophilia) and asked about allergic symptoms. Air pollution, temperature and relative humidity data were available. Results: Decrements in PEF were observed with previous 24-h average exposure to air pollution, as well as with 3- to 10-day average exposures, and were associated mainly with PM10, NO2 and O3 in all three categories of allergic sensitization. Even though allergic-sensitized children tended to present larger decrements in the PEF measurements, these were not statistically different from those of the non-allergic-sensitized children. Decrements in FEV1 were observed mainly with the previous 24-h average exposure and the 3-day moving average. Conclusions: Decrements in PEF associated with air pollution were observed in children independent of their allergic sensitization status. Their daily exposure to air pollution can be responsible for a chronic inflammatory process that might impair their lung growth and later their lung function in adulthood. PMID:22544523
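    The exposure metrics referred to above (the previous 24-h average and multi-day moving averages of pollutant concentrations) can be constructed as in the following sketch (Python/pandas); the file and column names are illustrative assumptions.

    ```python
    # Sketch of the exposure metrics described above: previous-24-h average and
    # multi-day moving averages of a pollutant series. File and column names are
    # illustrative assumptions, not the study data.
    import pandas as pd

    air = pd.read_csv("sao_paulo_air.csv", parse_dates=["date"]).set_index("date")  # hypothetical
    daily = air["pm10"].resample("D").mean()          # daily mean PM10

    lag1 = daily.shift(1)                             # previous 24-h average
    ma3 = daily.rolling(window=3).mean().shift(1)     # 3-day moving average ending yesterday
    ma10 = daily.rolling(window=10).mean().shift(1)   # 10-day moving average

    exposures = pd.DataFrame({"lag1": lag1, "ma3": ma3, "ma10": ma10})
    ```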

  19. Conventional and advanced time series estimation: application to the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database, 1993-2006.

    PubMed

    Moran, John L; Solomon, Patricia J

    2011-02-01

    Time series analysis has seen limited application in the biomedical literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction and patient type (medical and surgical), were generated from the Australian and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, series structural time changes, mortality series volatility using autoregressive moving average and Generalized Autoregressive Conditional Heteroscedasticity models in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, a declining mortality trend and substantial autocorrelation beyond 24 lags. Mortality increased in winter months (July-August); the medical series featured annual cycling, whereas the surgical series demonstrated long and short (3-4 months) cycling. Series structural breaks were apparent in January 1995 and December 2002. The covariance stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual Autoregressive Conditional Heteroscedasticity effects entailed a Generalized Autoregressive Conditional Heteroscedasticity model, preferred by information criterion and mean model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality and ventilation fraction}. A system approach to understanding series time-dependence may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.
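    As an illustration of the seasonal autoregressive moving average modelling mentioned above, the following sketch fits a seasonal ARIMA to a synthetic monthly mortality-fraction series with statsmodels; the model orders and the toy series are illustrative assumptions, not the published specification, and the GARCH volatility step is omitted.

    ```python
    # Sketch: seasonal ARIMA fit to a synthetic monthly mortality-fraction series.
    # The (p,d,q)(P,D,Q,12) orders and the toy data are illustrative only.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(2)
    idx = pd.date_range("1993-01", periods=168, freq="MS")              # 14 years, monthly
    season = 0.02 * np.sin(2 * np.pi * idx.month.to_numpy() / 12)       # toy winter peak
    trend = np.linspace(0.18, 0.14, len(idx))                           # declining mortality
    y = pd.Series(trend + season + rng.normal(0, 0.01, len(idx)), index=idx)

    model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
    fit = model.fit(disp=False)
    print(fit.summary().tables[1])
    forecast = fit.get_forecast(steps=12).predicted_mean                # 1-year-ahead forecast
    ```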

  20. A computationally fast, reduced model for simulating landslide dynamics and tsunamis generated by landslides in natural terrains

    NASA Astrophysics Data System (ADS)

    Mohammed, F.

    2016-12-01

    Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can successfully simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately with the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides. The models are obtained by reducing the fully three-dimensional Navier-Stokes equations, with the internal rheological definition of the landslide material and the water body and appropriate scaling assumptions, to depth-averaged two-dimensional form. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit, finite-volume, shock-capturing algorithm. The well-balanced, positivity-preserving algorithm accurately accounts for the wet-dry interface transition for the landslide runout, the landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a General-Purpose computing on Graphics Processing Units (GPGPU) based suite of models, either coupled or run independently within the suite. The GPGPU implementation provides up to 1000 times speedup over a CPU-based serial computation. This enables simulations of multiple scenarios of hazard realizations, which provides a basis for probabilistic hazard assessment. The models have been successfully validated against experiments, past studies, and field data for landslides and tsunamis.

  1. Advanced Recording and Preprocessing of Physiological Signals. [data processing equipment for flow measurement of blood flow by ultrasonics

    NASA Technical Reports Server (NTRS)

    Bentley, P. B.

    1975-01-01

    The measurement of the volume flow-rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by use of a radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured and no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods by application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.

  2. The tangled bank of amino acids

    PubMed Central

    Pollock, David D.

    2016-01-01

    Abstract The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. PMID:27028523

  3. Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters

    NASA Astrophysics Data System (ADS)

    Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon

    2018-04-01

    In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the diverse illumination conditions encountered throughout the day, which leads to better vehicle detection performance than a fixed user-defined threshold. In the second process, the paper proposes a novel vehicle tracking algorithm, Fuzzy-based Vehicle Analysis (FBA), to reduce false vehicle tracking estimates caused by the uneven edges of large vehicles and by vehicles changing lanes. The FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm, applying fuzzy rule-based reasoning to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22% and a low false detection rate of about 3.92%.
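    The abstract does not give the GATE rule itself, so the following sketch (Python/OpenCV) shows one plausible gradient-based adaptive threshold for Sobel edge detection, taking the threshold as the mean plus a multiple of the standard deviation of the gradient magnitude; this rule, the file name, and the constant k are assumptions for illustration, not the published algorithm.

    ```python
    # Hedged sketch of a gradient-based adaptive threshold for Sobel edges.
    # "threshold = mean + k * std of the gradient magnitude" is an assumption;
    # the paper's GATE algorithm may differ.
    import cv2
    import numpy as np

    def adaptive_sobel_edges(gray, k=1.5):
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy)
        thresh = mag.mean() + k * mag.std()        # adapts to overall illumination
        return (mag > thresh).astype(np.uint8) * 255

    frame = cv2.imread("road_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
    edges = adaptive_sobel_edges(frame)
    edge_density = edges.mean() / 255.0            # average edge density of the frame
    ```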

  4. Striking the right chord: moving music increases psychological transportation and behavioral intentions.

    PubMed

    Strick, Madelijn; de Bruin, Hanka L; de Ruiter, Linde C; Jonkers, Wouter

    2015-03-01

    Three experiments among university students (N = 372) investigated the persuasive power of moving (i.e., intensely emotional and "chills"-evoking) music in audio-visual advertising. Although advertisers typically aim to increase elaborate processing of the message, these studies illustrate that the persuasive effect of moving music is based on increased narrative transportation ("getting lost" in the ad's story), which reduces critical processing. In Experiment 1, moving music increased transportation and some behavioral intentions (e.g., to donate money). Experiment 2 experimentally increased the salience of the advertiser's manipulative intent, and showed that moving music reduces inferences of manipulative intent, leading in turn to increased behavioral intentions. Experiment 3 tested boundary effects, and showed that moving music fails to increase behavioral intentions when the salience of manipulative intent is either extremely high (which precludes transportation) or extremely low (which precludes reduction of inferences of manipulative intent). Moving music did not increase memory performance, beliefs, or explicit attitudes, suggesting that the influence is affect-based rather than cognition-based. Together, these studies illustrate that moving music reduces inferences of manipulation and increases behavioral intentions by transporting viewers into the story of the ad. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  5. Driving-forces model on individual behavior in scenarios considering moving threat agents

    NASA Astrophysics Data System (ADS)

    Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia

    2017-09-01

    The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks caused by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model considering scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. Then the model is compared with an advanced Elliptical Specification II social force model, by calculating the fitting errors between the simulated and experimental trajectories, and being applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.

  6. A study of video frame rate on the perception of moving imagery detail

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimum acceptable frame rate for life scientists viewing white rats within a small enclosure. Two twenty-five-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow- and fast-moving animal scenes. The highest single-trial frame rate averaged across all subjects for the slow and the fast scene was 6.2 and 4.8 fps, respectively. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.

  7. System and method of reducing motion-induced noise in the optical detection of an ultrasound signal in a moving body of material

    DOEpatents

    Habeger, Jr., Charles C.; LaFond, Emmanuel F.; Brodeur, Pierre; Gerhardstein, Joseph P.

    2002-01-01

    The present invention provides a system and method to reduce motion-induced noise in the detection of ultrasonic signals in a moving sheet or body of material. An ultrasonic signal is generated in a sheet of material and a detection laser beam is moved along the surface of the material. By moving the detection laser in the same direction as the movement of the sheet of material, the amount of noise induced in the detection of the ultrasonic signal is reduced. The scanner is moved at approximately the same speed as the moving material. The system and method may be used for many applications, such as in a paper-making or steel-making process. The detection laser may be directed by a scanner. The movement of the scanner is synchronized with the anticipated arrival of the ultrasonic signal under the scanner. A photodetector may be used to determine when an ultrasonic pulse has been directed to the moving sheet of material so that the scanner may be synchronized with the anticipated arrival of the ultrasonic signal.

  8. RHIC BPM SYSTEM MODIFICATIONS AND PERFORMANCE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SATOGATA, T.; CALAGA, R.; CAMERON, P.

    2005-05-16

    The RHIC beam position monitor (BPM) system provides independent average orbit and turn-by-turn (TBT) position measurements. In each ring, there are 162 measurement locations per plane (horizontal and vertical) for a total of 648 BPM planes in the RHIC machine. During the 2003 and 2004 shutdowns, BPM processing electronics were moved from the RHIC tunnel to controls alcoves to reduce radiation impact, and the analog signal paths of several dozen modules were modified to eliminate gain-switching relays and improve signal stability. This paper presents results of improved system performance, including stability for interaction region beam-based alignment efforts. We also summarize performance of the recently-added DSP profile scan capability, and improved million-turn TBT acquisition channels for 10 Hz triplet vibration, nonlinear dynamics, and echo studies.

  9. Positron emission particle tracking and its application to granular media

    NASA Astrophysics Data System (ADS)

    Parker, D. J.

    2017-05-01

    Positron emission particle tracking (PEPT) is a technique for tracking a single radioactively labelled particle. Accurate 3D tracking is possible even when the particle is moving at high speed inside a dense opaque system. In many cases, tracking a single particle within a granular system provides sufficient information to determine the time-averaged behaviour of the entire granular system. After a general introduction, this paper describes the detector systems (PET scanners and positron cameras) used to record PEPT data, the techniques used to label particles, and the algorithms used to process the data. This paper concentrates on the use of PEPT for studying granular systems: the focus is mainly on work at Birmingham, but reference is also made to work from other centres, and options for wider diversification are suggested.

  10. Moving in the Anthropocene: Global reductions in terrestrial mammalian movements

    USGS Publications Warehouse

    Tucker, Marlee A.; Böhning-Gaese, Katrin; Fagan, William F.; Fryxell, John; Van Moorter, Bram; Alberts, Susan C; Ali, Abdullahi H.; Allen, Andrew M.; Attias, Nina; Avgar, Tal; Bartlam-Brooks, Hattie; Bayarbaatar, Buuveibaatar; Belant, Jerrold L.; Bertassoni, Alessandra; Beyer, Dean; Bidner, Laura; M. van Beest, Floris; Blake, Stephen; Blaum, Niels; Bracis, Chloe; Brown, Danielle; Nico de Bruyn, P. J.; Cagnacci, Francesca; Calabrese, J.M.; Camilo-Alves, Constança; Chamaillé-Jammes, Simon; Chiaradia, Andre; Davidson, Sarah C.; Dennis, Todd; DeStefano, Stephen; Diefenbach, Duane R.; Douglas-Hamilton, Iain; Fennessy, Julian; Fichtel, Claudia; Fiedler, Wolfgang; Fischer, Christina; Fischhoff, Ilya; Fleming, Christen H.; Ford, Adam T.; Fritz, Susanne A.; Gehr, Benedikt; Goheen, Jacob R.; Gurarie, Eliezer; Hebblewhite, Mark; Heurich, Marco; Mark Hewison, A.J.; Hof, Christian; Hurme, Edward; Isbell, Lynne A.; Janssen, René; Jeltsch, Florian; Kaczensky, Petra; Kane, Adam; Kappeler, Peter M.; Kauffman, Matthew J.; Kays, Roland; Kimuyu, Duncan; Koch, Flavia; Kranstauber, Bart; LaPoint, Scott; Leimgruber, Peter; Linnell, John D. C.; López-López, Pascual; Markham, A. Catherine; Mattisson, Jenny; Medici, Emilia Patricia; Mellone, Ugo; Merrill, E.; de Miranda Mourão, Guilherme; Morato, Ronaldo G.; Morellet, Nicolas; Morrison, Thomas A.; Díaz-Muñoz, Samuel L.; Mysterud, Atle; Nandintsetseg, Dejid; Nathan, Ran; Niamir, Aidin; Odden, John; O'Hara, Robert B.; Oliveira-Santos, Luiz G. R.; Olson, Kirk A.; Patterson, Bruce D.; Cunha de Paula, Rogerio; Pedrotti, Luca; Reineking, Björn; Rimmler, Martin; Rogers, T.L.; Rolandsen, Christer Moe; Rosenberry, Christopher S.; Rubenstein, Daniel I.; Safi, Kamran; Saïd, Sonia; Sapir, Nir; Sawyer, Hall; Schmidt, Niels Martin; Selva, Nuria; Sergiel, Agnieszka; Shiilegdamba, Enkhtuvshin; Silva, João Paulo; Singh, N.; Solberg, Erling J.; Spiegel, Orr; Strand, Olav; Sundaresan, S.R.; Ullmann, Wiebke; Voigt, Ulrich; Wall, J.; Wattles, David W.; Wikelski, Martin; Wilmers, Christopher C.; Wilson, Jon W.; Wittemyer, George; Zięba, Filip; Zwijacz-Kozica, Tomasz; Mueller, Thomas

    2018-01-01

    Animal movement is fundamental for ecosystem functioning and species survival, yet the effects of the anthropogenic footprint on animal movements have not been estimated across species. Using a unique GPS-tracking database of 803 individuals across 57 species, we found that movements of mammals in areas with a comparatively high human footprint were on average one-half to one-third the extent of their movements in areas with a low human footprint. We attribute this reduction to behavioral changes of individual animals and to the exclusion of species with long-range movements from areas with higher human impact. Global loss of vagility alters a key ecological trait of animals that affects not only population persistence but also ecosystem processes such as predator-prey interactions, nutrient cycling, and disease transmission.

  11. Model Studies on the Effectiveness of MBBR Reactors for the Restoration of Small Water Reservoirs

    NASA Astrophysics Data System (ADS)

    Nowak, Agata; Mazur, Robert; Panek, Ewa; Chmist, Joanna

    2018-02-01

    The authors present a Moving Bed Biofilm Reactor (MBBR) model with quasi-continuous flow for the restoration of small water reservoirs characterized by high concentrations of organic pollutants. To determine the efficiency of wastewater treatment, laboratory analyses of physico-chemical parameters were conducted for the model at a semi-technical scale of 1:3. The wastewater treatment process was carried out over 24 h on 1 m3 of raw sewage. The startup period was 2 weeks for all biofilters (biological beds). An approximately 50% reduction in COD and BOD5 was obtained on average for the studied bioreactors. Significant improvements were achieved in the clarity of the treated wastewater, with the suspended matter reduced by 60%. The oxygen profile improved significantly within 7 to 9 hours of the process, and a marked reduction in the oxidation-reduction potential was recorded. A preliminary model of biological treatment effectiveness was determined based on the conducted studies. In the final stages, the operation mode was set under the real conditions of polluted water reservoirs.

  12. Reproducibility in Natural Language Processing: A Case Study of Two R Libraries for Mining PubMed/MEDLINE

    PubMed Central

    Cohen, K. Bretonnel; Xia, Jingbo; Roeder, Christophe; Hunter, Lawrence E.

    2018-01-01

    There is currently a crisis in science related to highly publicized failures to reproduce large numbers of published studies. The current work proposes, by way of case studies, a methodology for moving the study of reproducibility in computational work to a full stage beyond that of earlier work. Specifically, it presents a case study in attempting to reproduce the reports of two R libraries for doing text mining of the PubMed/MEDLINE repository of scientific publications. The main findings are that a rational paradigm for reproduction of natural language processing papers can be established; the advertised functionality was difficult, but not impossible, to reproduce; and reproducibility studies can produce additional insights into the functioning of the published system. Additionally, the work on reproducibility led to the production of novel user-centered documentation that has been accessed 260 times since its publication—an average of once a day per library. PMID:29568821

  13. Forecast of severe fever with thrombocytopenia syndrome incidence with meteorological factors.

    PubMed

    Sun, Ji-Min; Lu, Liang; Liu, Ke-Ke; Yang, Jun; Wu, Hai-Xia; Liu, Qi-Yong

    2018-06-01

    Severe fever with thrombocytopenia syndrome (SFTS) is an emerging disease, and some studies have reported that SFTS incidence is associated with meteorological factors, but no SFTS forecast models have been reported to date. In this study, we constructed and compared three forecast models using an autoregressive integrated moving average (ARIMA) model, a negative binomial regression model (NBM), and a quasi-Poisson generalized additive model (GAM). The data from 2011 to 2015 were used for model construction, and the data from 2016 were used for external validity assessment. All three models fitted the SFTS cases reasonably well during the training and forecast processes, while the NBM forecasted better than the other two models. Moreover, we demonstrated that temperature and relative humidity played key roles in explaining the temporal dynamics of SFTS occurrence. Our study contributes to a better understanding of SFTS dynamics and provides predictive tools for the control and prevention of SFTS. Copyright © 2018 Elsevier B.V. All rights reserved.
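    Of the three model families compared above, the negative binomial regression is the simplest to sketch; the following example (Python/statsmodels) regresses weekly case counts on lagged meteorological covariates. The lag, covariate names, and synthetic data are illustrative assumptions, not the published model.

    ```python
    # Sketch: negative binomial regression of weekly case counts on lagged
    # meteorological covariates (statsmodels). All values are toy data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    weeks = 260
    df = pd.DataFrame({
        "temp": 15 + 10 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 2, weeks),
        "rh": 60 + rng.normal(0, 8, weeks),
    })
    mu = np.exp(-1.0 + 0.08 * df["temp"] + 0.01 * df["rh"])      # toy data-generating process
    df["cases"] = rng.poisson(mu)

    X = sm.add_constant(df[["temp", "rh"]].shift(2).dropna())    # 2-week lagged exposure
    y = df["cases"].iloc[2:]
    nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(nb_fit.summary())
    ```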

  14. Fluidic Vectoring of a Planar Incompressible Jet Flow

    NASA Astrophysics Data System (ADS)

    Mendez, Miguel Alfonso; Scelzo, Maria Teresa; Enache, Adriana; Buchlin, Jean-Marie

    2018-06-01

    This paper presents an experimental, numerical, and theoretical analysis of the performance of a fluidic vectoring device for controlling the direction of a turbulent, two-dimensional, low-Mach-number (incompressible) jet flow. The investigated design is co-flow secondary injection with a Coanda surface, which allows vectoring angles up to 25° with no need for moving mechanical parts. A simple empirical model of the vectoring process is presented and validated against experimental and numerical data. The experiments consist of flow visualization and image processing for automatic detection of the jet centerline; the numerical simulations solve the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations closed with the k-ω SST turbulence model, using the PisoFoam solver from OpenFOAM. The experimental validation on three different geometrical configurations has shown that the model is capable of providing a fast and reliable evaluation of the device performance as a function of the operating conditions.

  15. The Meandering Margin of the Meteorological Moist Tropics

    NASA Astrophysics Data System (ADS)

    Mapes, Brian E.; Chung, Eui Seok; Hannah, Walter M.; Masunaga, Hirohiko; Wimmers, Anthony J.; Velden, Christopher S.

    2018-01-01

    Bimodally distributed column water vapor (CWV) indicates a well-defined moist regime in the Tropics, above a margin value near 48 kg m-2 in current climate (about 80% of column saturation). Maps reveal this margin as a meandering, sinuous synoptic contour bounding broad plateaus of the moist regime. Within these plateaus, convective storms of distinctly smaller convective and mesoscales occur sporadically. Satellite data composites across the poleward most margin reveal its sharpness, despite the crude averaging: precipitation doubles within 100 km, marked by both enhancement and deepening of cloudiness. Transported patches and filaments of the moist regime cause consequential precipitation events within and beyond the Tropics. Distinguishing synoptic flows that cross the margin from flows that move the margin is made possible by a novel satellite-based Lagrangian CWV tendency estimate. Climate models do not reliably reproduce the observed bimodal distribution, so studying the moist mode's maintenance processes and the margin-zone air mass transformations, guided by the Lagrangian tendency product, might importantly constrain model moist process treatments.

  16. Air Entrainment in Steady Breaking Waves

    NASA Astrophysics Data System (ADS)

    Li, C. Y.; Duncan, J. H.; Wenz, A.; Full, O. E.

    1997-11-01

    Air entrainment due to steady breaking waves generated by fully submerged hydrofoils moving at constant speed and angle of attack is investigated experimentally. Three hydrofoils with the same shape (NACA 0012) but different chords (15, 20 and 30 cm) are used with Froude scaled operating conditions to generate the breaking waves. In this way, the effect of scale due to the combined influence of surface tension and viscosity on the bubble entrainment process is investigated. The bubbles are measured from plan-view and side-view 35-mm photographs of the wake. It is found that the number and average size of the bubbles increases dramatically with scale. High-speed movies of the turbulent breaking region that rides on the forward face of the wave are also used to observe bubble entrainment events. It is found that the bubbles are entrained periodically when the leading edge of the breaking region rushes forward and plunges over a pocket of air. This plunging process appears to become more frequent and more violent as the scale of the breaker increases.

  17. REVIEW ARTICLE: Hither and yon: a review of bi-directional microtubule-based transport

    NASA Astrophysics Data System (ADS)

    Gross, Steven P.

    2004-06-01

    Active transport is critical for cellular organization and function, and impaired transport has been linked to diseases such as neuronal degeneration. Much long distance transport in cells uses opposite polarity molecular motors of the kinesin and dynein families to move cargos along microtubules. It is increasingly clear that many cargos are moved by both sets of motors, and frequently reverse course. This review compares this bi-directional transport to the better-studied uni-directional transport. It discusses some bi-directionally moving cargos, and critically evaluates three different physical models for how such transport might occur. It then considers the evidence for the number of active motors per cargo, and how the net or average direction of transport might be controlled. The likelihood of a complex linking the activities of kinesin and dynein is also discussed. The paper concludes by reviewing elements of apparent universality between different bi-directionally moving cargos and by briefly considering possible reasons for the existence of bi-directional transport.

  18. Random walk of passive tracers among randomly moving obstacles.

    PubMed

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.

  19. On the Trend of the Annual Mean, Maximum, and Minimum Temperature and the Diurnal Temperature Range in the Armagh Observatory, Northern Ireland, Dataset, 1844 -2012

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2013-01-01

    Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
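    The averaging windows named above (10-year moving averages, decadal averages, and sunspot-cycle-length averages) can be computed as in the following sketch (Python/pandas) on a synthetic annual series; the real Armagh data are not reproduced, and the fixed 11-year blocks are only a rough stand-in for cycle-length averaging.

    ```python
    # Sketch of the averaging windows named above on a synthetic annual series.
    # The values are toy data, not the Armagh record.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)
    years = pd.Index(range(1844, 2013), name="year")
    tmean = pd.Series(9.0 + 0.004 * (years - 1844) + rng.normal(0, 0.4, len(years)),
                      index=years)

    ma10 = tmean.rolling(window=10, center=True).mean()      # 10-year moving average
    decadal = tmean.groupby((years // 10) * 10).mean()       # decadal averages
    # Fixed 11-year blocks as a rough stand-in for sunspot-cycle-length averaging.
    sc_like = tmean.groupby((years - 1844) // 11).mean()
    ```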

  20. Biased Brownian motion mechanism for processivity and directionality of single-headed myosin-VI.

    PubMed

    Iwaki, Mitsuhiro; Iwane, Atsuko Hikikoshi; Ikebe, Mitsuo; Yanagida, Toshio

    2008-01-01

    The conventional form of a vesicle transporter is not a 'single molecule' but two coordinated molecules, and this coordination makes it complicated to reveal the underlying mechanism. To overcome the difficulty, we adopted single-headed myosin-VI as a model protein. Myosin-VI is an intracellular vesicle and organelle transporter that moves along actin filaments in a direction opposite to most other known myosin classes. Myosin-VI was expected to form a dimer and move processively along actin filaments with a hand-over-hand mechanism like other myosin organelle transporters. However, wild-type myosin-VI was demonstrated to be monomeric and single-headed, casting doubt on its processivity. Using single molecule techniques, we show that green fluorescent protein (GFP)-fused single-headed myosin-VI does not move processively. However, when coupled to a 200 nm polystyrene bead (comparable to an intracellular vesicle in size) at a ratio of one head per bead, single-headed myosin-VI moves processively with large (40 nm) steps. Furthermore, we found that a single-headed myosin-VI-bead complex moved more processively in a high-viscosity solution (40-fold higher than water) similar to the cellular environment. Because diffusion of the bead is 60-fold slower than that of myosin-VI heads alone in water, we propose a model in which the bead acts as a diffusional anchor for the myosin-VI, enhancing the head's rebinding following detachment and supporting processive movement of the bead-monomer complex. This investigation will help us understand how molecular motors utilize Brownian motion in cells.

  1. A 12-Year Analysis of Nonbattle Injury Among US Service Members Deployed to Iraq and Afghanistan.

    PubMed

    Le, Tuan D; Gurney, Jennifer M; Nnamani, Nina S; Gross, Kirby R; Chung, Kevin K; Stockinger, Zsolt T; Nessen, Shawn C; Pusateri, Anthony E; Akers, Kevin S

    2018-05-30

    Nonbattle injury (NBI) among deployed US service members increases the burden on medical systems and results in high rates of attrition, affecting the available force. The possible causes and trends of NBI in the Iraq and Afghanistan wars have, to date, not been comprehensively described. To describe NBI among service members deployed to Iraq and Afghanistan, quantify absolute numbers of NBIs and proportion of NBIs within the Department of Defense Trauma Registry, and document the characteristics of this injury category. In this retrospective cohort study, data from the Department of Defense Trauma Registry on 29 958 service members injured in Iraq and Afghanistan from January 1, 2003, through December 31, 2014, were obtained. Injury incidence, patterns, and severity were characterized by battle injury and NBI. Trends in NBI were modeled using time series analysis with autoregressive integrated moving average and the weighted moving average method. Statistical analysis was performed from January 1, 2003, to December 31, 2014. Primary outcomes were proportion of NBIs and the changes in NBI over time. Among 29 958 casualties (battle injury and NBI) analyzed, 29 003 were in men and 955 were in women; the median age at injury was 24 years (interquartile range, 21-29 years). Nonbattle injury caused 34.1% of total casualties (n = 10 203) and 11.5% of all deaths (206 of 1788). Rates of NBI were higher among women than among men (63.2% [604 of 955] vs 33.1% [9599 of 29 003]; P < .001) and in Operation New Dawn (71.0% [298 of 420]) and Operation Iraqi Freedom (36.3% [6655 of 18 334]) compared with Operation Enduring Freedom (29.0% [3250 of 11 204]) (P < .001). A higher proportion of NBIs occurred in members of the Air Force (66.3% [539 of 810]) and Navy (48.3% [394 of 815]) than in members of the Army (34.7% [7680 of 22 154]) and Marine Corps (25.7% [1584 of 6169]) (P < .001). Leading mechanisms of NBI included falls (2178 [21.3%]), motor vehicle crashes (1921 [18.8%]), machinery or equipment accidents (1283 [12.6%]), blunt objects (1107 [10.8%]), gunshot wounds (728 [7.1%]), and sports (697 [6.8%]), causing predominantly blunt trauma (7080 [69.4%]). The trend in proportion of NBIs did not decrease over time, remaining at approximately 35% (by weighted moving average) after 2006 and approximately 39% by autoregressive integrated moving average. Assuming stable battlefield conditions, the autoregressive integrated moving average model estimated that the proportion of NBIs from 2015 to 2022 would be approximately 41.0% (95% CI, 37.8%-44.3%). In this study, approximately one-third of injuries during the Iraq and Afghanistan wars resulted from NBI, and the proportion of NBIs was steady for 12 years. Understanding the possible causes of NBI during military operations may be useful to target protective measures and safety interventions, thereby conserving fighting strength on the battlefield.
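    The weighted moving average smoothing of the annual NBI proportion referred to above can be illustrated as follows (Python/NumPy); the weights and the toy proportion series are assumptions, not the registry data, and the ARIMA component is analogous to the seasonal ARIMA sketch shown earlier in this compilation.

    ```python
    # Sketch of a trailing weighted moving average for an annual proportion series.
    # Weights and values are illustrative, not the registry data.
    import numpy as np

    nbi_prop = np.array([0.30, 0.33, 0.36, 0.35, 0.34, 0.36, 0.37, 0.35,
                         0.36, 0.38, 0.37, 0.39])              # toy annual proportions
    weights = np.array([1, 2, 3], dtype=float)                 # most recent year weighted most
    weights /= weights.sum()

    # Trailing 3-year weighted moving average (defined from the third year onward);
    # reversing the kernel makes np.convolve apply weights[0] to the oldest year.
    wma = np.convolve(nbi_prop, weights[::-1], mode="valid")
    ```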

  2. In-Line Monitoring of a Pharmaceutical Pan Coating Process by Optical Coherence Tomography.

    PubMed

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Buchsbaum, Andreas; Pescod, Russel; Baele, Thomas; Khinast, Johannes G

    2015-08-01

    This work demonstrates a new in-line measurement technique for monitoring the coating growth of randomly moving tablets in a pan coating process. In-line quality control is performed by an optical coherence tomography (OCT) sensor allowing nondestructive and contact-free acquisition of cross-section images of film coatings in real time. The coating thickness can be determined directly from these OCT images and no chemometric calibration models are required for quantification. Coating thickness measurements are extracted from the images by a fully automated algorithm. Results of the in-line measurements are validated using off-line OCT images, thickness calculations from tablet dimension measurements, and weight gain measurements. Validation measurements are performed on sample tablets periodically removed from the process during production. Reproducibility of the results is demonstrated by three batches produced under the same process conditions. OCT enables a multiple direct measurement of the coating thickness on individual tablets rather than providing the average coating thickness of a large number of tablets. This gives substantially more information about the coating quality, that is, intra- and intertablet coating variability, than standard quality control methods. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  3. Commercial vehicle fleet management and information systems. Phase 1 : interim report

    DOT National Transportation Integrated Search

    1998-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...

  4. Holistic Processing of Static and Moving Faces

    ERIC Educational Resources Information Center

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  5. GEMAS: Spatial pattern analysis of Ni by using digital image processing techniques on European agricultural soil data

    NASA Astrophysics Data System (ADS)

    Jordan, Gyozo; Petrik, Attila; De Vivo, Benedetto; Albanese, Stefano; Demetriades, Alecos; Sadeghi, Martiya

    2017-04-01

    Several studies have investigated the spatial distribution of chemical elements in topsoil (0-20 cm) within the framework of the EuroGeoSurveys Geochemistry Expert Group's 'Geochemical Mapping of Agricultural and Grazing Land Soil' project. Most of these studies used geostatistical analyses, interpolated concentration maps, and exploratory and compositional data analysis to identify anomalous patterns. The objective of our investigation is to demonstrate the use of digital image processing techniques for reproducible spatial pattern recognition and quantitative spatial feature characterisation. A single element (Ni) concentration in agricultural topsoil is used to perform the detailed spatial analysis, and to relate these features to possible underlying processes. In this study, simple univariate statistical methods were implemented first, and Tukey's inner-fence criterion was used to delineate statistical outliers. Linear and triangular irregular network (TIN) interpolation was used on the outlier-free Ni data points, which were resampled to a 10*10 km grid. Successive moving average smoothing was applied to generalise the TIN model, suppressing small-scale features and at the same time enhancing significant large-scale features of the Ni concentration spatial distribution patterns in European topsoil. The TIN map smoothed with a moving average filter revealed the spatial trends and patterns without losing much detail, and it was used as the input into digital image processing, such as local maxima and minima determination, digital cross sections, gradient magnitude and gradient direction calculation, second derivative profile curvature calculation, edge detection, local variability assessment, lineament density and directional variogram analyses. The detailed image processing analysis revealed several NE-SW, E-W and NW-SE oriented elongated features, which coincide with different spatial parameter classes and alignment with local maxima and minima. The NE-SW oriented linear pattern is the dominant feature to the south of the last glaciation limit. Some of these linear features are parallel to the suture zone of the Iapetus Ocean, while the others follow the Alpine and Carpathian Chains. The highest variability zones of Ni concentration in topsoil are located in the Alps and in the Balkans where mafic and ultramafic rocks outcrop. The predominant NE-SW oriented pattern is also captured by the strong anisotropy in the semi-variograms in this direction. A single major E-W oriented north-facing feature runs along the southern border of the last glaciation zone. This zone also coincides with a series of local maxima in Ni concentration along the glaciofluvial deposits. The NW-SE elongated spatial features are less dominant and are located in the Pyrenees and Scandinavia. This study demonstrates the efficiency of systematic image processing analysis in identifying and characterising spatial geochemical patterns that often remain uncovered by the usual visual map interpretation techniques.
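
    As a hedged illustration of the smoothing step described above, the sketch below applies successive moving-average (boxcar) passes to a synthetic gridded concentration surface and computes a simple gradient magnitude; the grid, window size and number of passes are assumptions, not the GEMAS processing parameters.

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
# Synthetic stand-in for a resampled concentration grid (not project data)
ni_grid = rng.lognormal(mean=3.0, sigma=0.5, size=(120, 160))

smoothed = ni_grid.copy()
for _ in range(3):                       # successive passes suppress small-scale features
    smoothed = uniform_filter(smoothed, size=5, mode="nearest")

# Simple gradient magnitude, a stand-in for the later image-processing steps
gy, gx = np.gradient(smoothed)
grad_mag = np.hypot(gx, gy)
print(smoothed.shape, float(grad_mag.max()))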

  6. Natural, but not artificial, facial movements elicit the left visual field bias in infant face scanning

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Wheeler, Andrea; Pascalis, Olivier; Lee, Kang

    2014-01-01

    A left visual field (LVF) bias has been consistently reported in eye movement patterns when adults look at face stimuli, which reflects hemispheric lateralization of face processing and eye movements. However, the emergence of the LVF attentional bias in infancy is less clear. The present study investigated the emergence and development of the LVF attentional bias in infants from 3 to 9 months of age with moving face stimuli. We specifically examined the naturalness of facial movements in infants’ LVF attentional bias by comparing eye movement patterns in naturally and artificially moving faces. Results showed that 3- to 5-month-olds exhibited the LVF attentional bias only in the lower half of naturally moving faces, but not in artificially moving faces. Six- to 9-month-olds showed the LVF attentional bias in both the lower and upper face halves only in naturally moving, but not in artificially moving faces. These results suggest that the LVF attentional bias for face processing may emerge around 3 months of age and is driven by natural facial movements. The LVF attentional bias reflects the role of natural face experience in real life situations that may drive the development of hemispheric lateralization of face processing in infancy. PMID:25064049

  7. Montane forest ecotones moved downslope in northeastern USA in spite of warming between 1984 and 2011.

    PubMed

    Foster, Jane R; D'Amato, Anthony W

    2015-12-01

    Ecotones are transition zones that form, in forests, where distinct forest types meet across a climatic gradient. In mountains, ecotones are compressed and act as potential harbingers of species shifts that accompany climate change. As the climate warms in New England, USA, high-elevation boreal forests are expected to recede upslope, with northern hardwood species moving up behind. Yet recent empirical studies present conflicting findings on this dynamic, reporting both rapid upward ecotonal shifts and concurrent increases in boreal species within the region. These discrepancies may result from the limited spatial extent of observations. We developed a method to model and map the montane forest ecotone using Landsat imagery to observe change at scales not possible for plot-based studies, covering mountain peaks over 39 000 km². Our results show that ecotones shifted downward or stayed stable on most mountains between 1991 and 2010, but also shifted upward in some cases (13-15% slopes). On average, upper ecotone boundaries moved down -1.5 m yr⁻¹ in the Green Mountains, VT, and -1.3 m yr⁻¹ in the White Mountains, NH. These changes agree with remeasured forest inventory data from Hubbard Brook Experimental Forest, NH, and suggest that processes of boreal forest recovery from prior red spruce decline, or human land use and disturbance, may swamp out any signal of climate-mediated migration in this ecosystem. This approach represents a powerful framework for evaluating similar ecotonal dynamics in other mountainous regions of the globe. © 2015 John Wiley & Sons Ltd.

  8. North Atlantic Basin Tropical Cyclone Activity in Relation to Temperature and Decadal- Length Oscillation Patterns

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2009-01-01

    Yearly frequencies of North Atlantic basin tropical cyclones, their locations of origin, peak wind speeds, average peak wind speeds, lowest pressures, and average lowest pressures for the interval 1950-2008 are examined. The effects of El Nino and La Nina on the tropical cyclone parametric values are investigated. Yearly and 10-year moving average (10-yma) values of tropical cyclone parameters are compared against those of temperature and decadal-length oscillation, employing both linear and bi-variate analysis, and first differences in the 10-yma are determined. Discussion of the 2009 North Atlantic basin hurricane season, updating earlier results, is given.
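
    A minimal sketch of the 10-year moving average (10-yma) and first-difference calculations mentioned above, applied to a simulated yearly count series rather than the actual cyclone record; whether the average is centered or trailing is an assumption here.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = pd.Index(range(1950, 2009), name="year")
counts = pd.Series(rng.poisson(11, len(years)), index=years)  # simulated yearly counts

yma10 = counts.rolling(window=10, center=True).mean()   # 10-yma (assumed centered)
first_diff = yma10.diff()                               # first differences of the 10-yma
print(pd.DataFrame({"count": counts, "10-yma": yma10, "first diff": first_diff}).tail())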

  9. Chess-playing epilepsy: a case report with video-EEG and back averaging.

    PubMed

    Mann, M W; Gueguen, B; Guillou, S; Debrand, E; Soufflet, C

    2004-12-01

    A patient suffering from juvenile myoclonic epilepsy experienced myoclonic jerks, fairly regularly, while playing chess. The myoclonus appeared particularly when he had to plan his strategy, to choose between two solutions or while raising the arm to move a chess figure. Video-EEG-polygraphy was performed, with back averaging of the myoclonus registered during a chess match and during neuropsychological testing with Kohs cubes. The EEG spike wave complexes were localised in the fronto-central region. [Published with video sequences].

  10. NGEE Arctic Plant Traits: Shrub Transects, Kougarok Road Mile Marker 64, Seward Peninsula, Alaska, 2016

    DOE Data Explorer

    Verity Salmon; Colleen Iversen; Peter Thornton; Ma

    2017-03-01

    Transect data are from point-center-quarter surveys for shrub density performed in July 2016 at the Kougarok hill slope located at Kougarok Road, Mile Marker 64. For each sample point along the transects, moving averages for shrub density and shrub basal area are provided along with GPS coordinates, average shrub height and active layer depth. The individual height, basal area, and species of surveyed shrubs are also included. Data upload will be completed January 2017.

  11. A Posteriori Quantification of Rate-Controlling Effects from High-Intensity Turbulence-Flame Interactions Using 4D Measurements

    DTIC Science & Technology

    2016-11-22

    ...compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. 5. We developed analytical techniques for pseudo-Lagrangian ...condition in a constant density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e

  12. Considerations for monitoring raptor population trends based on counts of migrants

    USGS Publications Warehouse

    Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.

    1989-01-01

    Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data recording and missing data hamper coding of data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed including regression, non-parametric rank correlation trend analysis, and moving averages.

  13. Acute effects of PM2.5 on lung function parameters in schoolchildren in Nanjing, China: a panel study.

    PubMed

    Xu, Dandan; Zhang, Yi; Zhou, Lian; Li, Tiantian

    2018-03-17

    The association between exposure to ambient particulate matter (PM) and reduced lung function parameters has been reported in many works. However, few studies have been conducted in developing countries with high levels of air pollution like China, and little attention has been paid to the acute effects of short-term exposure to air pollution on lung function. The study design consisted of a panel comprising 86 children from the same school in Nanjing, China. Four measurements of lung function were performed. A mixed-effects regression model with study participant as a random effect was used to investigate the relationship between PM2.5 and lung function. An increase in the current day, 1-day and 2-day moving average PM2.5 concentration was associated with decreases in lung function indicators. The greatest effect of PM2.5 on lung function was detected at 1-day moving average PM2.5 exposure. An increase of 10 μg/m³ in the 1-day moving average PM2.5 concentration was associated with a 23.22 mL decrease (95% CI: 13.19, 33.25) in Forced Vital Capacity (FVC), an 18.93 mL decrease (95% CI: 9.34, 28.52) in 1-s Forced Expiratory Volume (FEV1), a 29.38 mL/s decrease (95% CI: -0.40, 59.15) in Peak Expiratory Flow (PEF), and a 27.21 mL/s decrease (95% CI: 8.38, 46.04) in forced expiratory flow 25-75% (FEF25-75%). The effects of PM2.5 on lung function had significant lag effects. After an air pollution event, the health effects last for several days and we still need to pay attention to health protection.
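
    The following sketch illustrates, on simulated data only, how n-day moving-average PM2.5 exposures and a random-intercept (mixed-effects) regression of FVC on exposure might be set up; the column names, window definitions and all values are assumptions, not the Nanjing panel data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
days = pd.date_range("2016-10-01", periods=60, freq="D")
pm25 = pd.Series(rng.gamma(4, 15, len(days)), index=days)   # simulated daily PM2.5

# Trailing moving averages (current day plus preceding day(s)); definition assumed
pm25_ma1 = pm25.rolling(window=2).mean()
pm25_ma2 = pm25.rolling(window=3).mean()

# Fake panel: 86 children with 4 lung-function visits each (FVC in mL), noise only
n_children, n_visits = 86, 4
visit_days = days.to_numpy()[rng.integers(10, len(days), size=n_children * n_visits)]
panel = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_visits),
    "fvc": 2500 + rng.normal(0, 200, n_children * n_visits),
    "pm25_ma1": pm25_ma1.loc[visit_days].to_numpy(),
})

# Random-intercept model: each child gets their own intercept
fit = smf.mixedlm("fvc ~ pm25_ma1", panel, groups=panel["child"]).fit()
print(fit.params["pm25_ma1"])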

  14. Short-Term Exposure to Air Pollution and Biomarkers of Oxidative Stress: The Framingham Heart Study.

    PubMed

    Li, Wenyuan; Wilker, Elissa H; Dorans, Kirsten S; Rice, Mary B; Schwartz, Joel; Coull, Brent A; Koutrakis, Petros; Gold, Diane R; Keaney, John F; Lin, Honghuang; Vasan, Ramachandran S; Benjamin, Emelia J; Mittleman, Murray A

    2016-04-28

    Short-term exposure to elevated air pollution has been associated with higher risk of acute cardiovascular diseases, with systemic oxidative stress induced by air pollution hypothesized as an important underlying mechanism. However, few community-based studies have assessed this association. Two thousand thirty-five Framingham Offspring Cohort participants living within 50 km of the Harvard Boston Supersite who were not current smokers were included. We assessed circulating biomarkers of oxidative stress including blood myeloperoxidase at the seventh examination (1998-2001) and urinary creatinine-indexed 8-epi-prostaglandin F2α (8-epi-PGF2α) at the seventh and eighth (2005-2008) examinations. We measured fine particulate matter (PM2.5), black carbon, sulfate, nitrogen oxides, and ozone at the Supersite and calculated 1-, 2-, 3-, 5-, and 7-day moving averages of each pollutant. Measured myeloperoxidase and 8-epi-PGF2α were natural-log transformed. We used linear regression models and linear mixed-effects models with random intercepts for myeloperoxidase and indexed 8-epi-PGF2α, respectively. Models were adjusted for demographic variables, individual- and area-level measures of socioeconomic position, clinical and lifestyle factors, weather, and temporal trend. We found positive associations of PM2.5 and black carbon with myeloperoxidase across multiple moving averages. Additionally, 2- to 7-day moving averages of PM2.5 and sulfate were consistently positively associated with 8-epi-PGF2α. Stronger positive associations of black carbon and sulfate with myeloperoxidase were observed among participants with diabetes than in those without. Our community-based investigation supports an association of select markers of ambient air pollution with circulating biomarkers of oxidative stress. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  15. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate some regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. Therefore, we call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1) with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two methods. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature between nutrient deficiency and non-nutrient deficiency areas determined by the segmentation results, an important finding is that the gray value fluctuation in the nutrient deficiency area is much more severe than that in the non-nutrient deficiency area.
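
    The full MF-DMA method operates pixel by pixel on image surfaces and over a range of q values; the sketch below is only a simplified one-dimensional, q = 2 (monofractal), backward (θ = 0) version of the detrended moving average idea, run on synthetic noise.

import numpy as np

def dma_hurst(series, scales):
    # Profile (cumulative sum of the mean-removed series)
    profile = np.cumsum(series - np.mean(series))
    fluctuations = []
    for n in scales:
        # Backward moving average of the profile over a window of n points
        moving_avg = np.convolve(profile, np.ones(n) / n, mode="full")[:len(profile)]
        residual = (profile - moving_avg)[n - 1:]   # drop points without a full window
        fluctuations.append(np.sqrt(np.mean(residual ** 2)))
    # Scaling exponent = slope of log F(n) versus log n
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(3)
noise = rng.standard_normal(4096)   # uncorrelated noise, expected exponent close to 0.5
print(round(dma_hurst(noise, scales=[8, 16, 32, 64, 128, 256]), 2))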

  16. Generation of poleward moving auroral forms (PMAFs) during periods of dayside auroral oval expansions/contractions and periods when the dayside auroral oval is expanded and stable

    NASA Astrophysics Data System (ADS)

    Fasel, G. J.; Flicker, J.; Sibeck, D. G.; Alyami, M.; Angelo, A.; Aylward, R. J.; Bender, S.; Christensen, M.; Kim, J.; Kristensen, H.; Orellana, Y.; Sahin, O.; Yoon, J.; Green, D.; Sigernes, F.; Lorentzen, D. A.

    2013-12-01

    The latitude of the equatorial edge of the dayside auroral oval has been shown to vary with the direction of the IMF Bz-component. The equatorward/poleward edge of the dayside auroral oval shifts equatorward/poleward when the IMF Bz-component is negative/positive [Burch, 1973; Akasofu, 1977; Horwitz and Akasofu, 1977; Sandholt et al., 1986, 1988]. Past studies have shown that poleward-moving auroral forms (PMAFs) are a common feature during equatorward expansions of the dayside auroral oval. Horwitz and Akasofu [1977] noted a one-to-one correspondence of luminous PMAFs associated with an equatorward expansion of the dayside auroral oval. During the southward turning of the IMF Bz-component the merging rate on the dayside increases [Newell and Meng, 1987] leading to the erosion of the dayside magnetopause. The field line merging process is thought to be most efficient when the interplanetary magnetic field (IMF) Bz-component turns southward. Both Vorobjev et al. [1975] and Horwitz and Akasofu [1977] attributed these PMAFs to magnetic flux being eroded away from the dayside magnetopause and transported antisunward. Dayside poleward-moving auroral forms are also observed during periods of an expanded and stable dayside auroral oval for both northern and southern hemisphere observations [Sandholt et al., 1986, 1989, 1990; Rairden and Mende, 1989; Mende et al., 1990]. Poleward-moving auroral forms have also been observed during some dayside oval contractions but have not been discussed much in the literature. This study examines the dayside auroral oval during periods of expansion, contraction, and during periods of an expanded and stable dayside auroral oval. This statistical study will provide the following results: number of poleward-moving auroral forms that are generated during dayside auroral oval expansions/contractions and during periods of a stable and expanded dayside auroral oval, the average initial and final elevation angle of the dayside auroral oval, time for dayside auroral oval to expand or contract, and the solar wind parameters (IMF Bx, By, Bz, speed, and pressure) associated with each interval (expansion, contraction, or stable and expanded).

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coruh, M; Ewell, L; Demez, N

    Purpose: To estimate the dose delivered to a moving lung tumor by proton therapy beams of different modulation types, and compare with Monte Carlo predictions. Methods: A Radiology Support Devices (RSD) phantom was irradiated with therapeutic proton radiation beams using two different types of modulation: uniform scanning (US) and double scattered (DS). The Eclipse© dose plan was designed to deliver 1.00 Gy to the isocenter of a static ∼3×3×3 cm (27 cc) tumor in the phantom with 100% coverage. The peak to peak amplitude of tumor motion varied from 0.0 to 2.5 cm. The radiation dose was measured with an ion-chamber (CC-13) located within the tumor. The time required to deliver the radiation dose varied from an average of 65 s for the DS beams to an average of 95 s for the US beams. Results: The amount of radiation dose varied from 100% (both US and DS) to the static tumor down to approximately 92% for the moving tumor. The ratio of US dose to DS dose ranged from approximately 1.01 for the static tumor, down to 0.99 for the 2.5 cm moving tumor. A Monte Carlo simulation using TOPAS included a lung tumor with 4.0 cm of peak to peak motion. In this simulation, the dose received by the tumor varied by ∼40% as the period of this motion varied from 1 s to 4 s. Conclusion: The radiation dose deposited to a moving tumor was less than for a static tumor, as expected. At large (2.5 cm) amplitudes, the DS proton beams gave a dose closer to the desired dose than the US beams, but equal within experimental uncertainty. TOPAS Monte Carlo simulation can give insight into the moving tumor-dose relationship. This work was supported in part by the Philips corporation.

  18. 32 CFR 989.29 - Force structure and unit move proposals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Force structure and unit move proposals. 989.29 Section 989.29 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE ENVIRONMENTAL PROTECTION ENVIRONMENTAL IMPACT ANALYSIS PROCESS (EIAP) § 989.29 Force structure and unit move...

  19. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper extends the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving objects indexing algorithm is proposed to efficiently process indexing requests and adapt itself to the limitations of the location based service environment. The prominent feature of this structure is viewing a moving object's behavior as the mobile agent's span; the unique mapping between the geographical position of moving objects and the span point of the mobile agent is built to maintain their close relationship, and is a significant clue for mobile agent-based moving objects indexing to track moving objects.

  20. Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.

    PubMed

    Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B

    2015-06-01

    Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
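
    A small sketch of the moving-average type-token ratio (MATTR) idea discussed above: compute the type-token ratio in a sliding window over the token sequence and average the window values. The window length and toy sample are arbitrary illustrative choices.

def mattr(tokens, window=50):
    # Moving-average type-token ratio over a sliding window of fixed length
    tokens = [t.lower() for t in tokens]
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)   # fall back to plain TTR for short samples
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

sample = ("the cat sat on the mat and the dog sat on the rug "
          "while the cat watched the dog").split()
print(round(mattr(sample, window=10), 3))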

  1. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.

    2014-09-12

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
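
    As a hedged sketch of the modelling step, the snippet below fits an ARIMA(6,1,0) to a simulated monthly series with statsmodels and produces a 12-month forecast plus an in-sample RMSE; the series is invented, not the Malaysian solid waste data.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
months = pd.period_range("2007-01", periods=84, freq="M")
# Simulated monthly generation series (tonnes), a stand-in for the real data
waste = pd.Series(1000 + np.cumsum(rng.normal(2, 15, len(months))), index=months)

fit = ARIMA(waste, order=(6, 1, 0)).fit()
forecast = fit.forecast(steps=12)                     # next 12 months
rmse = float(np.sqrt(np.mean(fit.resid ** 2)))        # in-sample RMSE
print(forecast.round(1).head())
print("in-sample RMSE:", round(rmse, 3))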

  2. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    PubMed Central

    Yu, Ying; Wang, Yirui; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data by the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. It also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
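
    A rough sketch of the hybrid idea described above: fit a seasonal ARIMA and then model its residuals with a second learner. Because the dendritic neural network has no standard library implementation, a generic multilayer perceptron is used here as a stand-in, and the monthly series is simulated.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(120)
# Simulated monthly demand: trend + annual seasonality + noise
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, len(t))

# Stage 1: seasonal ARIMA captures trend and seasonality (order is an assumption)
sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
resid = y - sarima.fittedvalues

# Stage 2: lagged residuals feed a second-stage regressor (MLP as a stand-in)
lags = 12
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X, target)
print(round(nn.score(X, target), 2))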

  3. The Use of an Autoregressive Integrated Moving Average Model for Prediction of the Incidence of Dysentery in Jiangsu, China.

    PubMed

    Wang, Kewei; Song, Wentao; Li, Jinping; Lu, Wu; Yu, Jiangang; Han, Xiaofeng

    2016-05-01

    The aim of this study is to forecast the incidence of bacillary dysentery with a prediction model. We collected the annual and monthly laboratory data of confirmed cases from January 2004 to December 2014. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast bacillary dysentery incidence in Jiangsu, China. The ARIMA (1, 1, 1) × (1, 1, 2)₁₂ model fitted exactly with the number of cases during January 2004 to December 2014. The fitted model was then used to predict bacillary dysentery incidence during the period January to August 2015, and the number of cases fell within the model's CI for the predicted number of cases during January-August 2015. This study shows that the ARIMA model fits the fluctuations in bacillary dysentery frequency, and it can be used for future forecasting when applied to bacillary dysentery prevention and control. © 2016 APJPH.
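
    The sketch below fits a seasonal ARIMA of the same form reported above, ARIMA(1,1,1)×(1,1,2) with period 12, to a simulated monthly count series and produces an eight-step forecast with 95% prediction intervals; it is illustrative only and does not use the Jiangsu surveillance data.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
idx = pd.period_range("2004-01", periods=132, freq="M")
# Simulated monthly case counts with annual seasonality (not the real data)
cases = pd.Series(200 + 50 * np.sin(2 * np.pi * np.arange(132) / 12)
                  + rng.normal(0, 20, 132), index=idx)

model = SARIMAX(cases, order=(1, 1, 1), seasonal_order=(1, 1, 2, 12))
res = model.fit(disp=False)
pred = res.get_forecast(steps=8)                 # January to August of the next year
print(pred.predicted_mean.round(1))
print(pred.conf_int(alpha=0.05).round(1))        # 95% prediction interval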

  4. Medium term municipal solid waste generation prediction by autoregressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan

    2014-09-01

    Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.

  5. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    PubMed

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend and then train the residual data by the dendritic neural network model and make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. It also proved that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.

  6. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.

  7. Analysis Monthly Import of Palm Oil Products Using Box-Jenkins Model

    NASA Astrophysics Data System (ADS)

    Ahmad, Nurul F. Y.; Khalid, Kamil; Saifullah Rusiman, Mohd; Ghazali Kamardan, M.; Roslan, Rozaini; Che-Him, Norziha

    2018-04-01

    The palm oil industry has been an important component of the national economy, especially the agriculture sector. The aim of this study is to identify the pattern of imports of palm oil products, to model the time series using the Box-Jenkins approach and to forecast the monthly imports of palm oil products. The approach includes statistical tests for verifying model adequacy and statistical measures for comparing three models, namely the Autoregressive (AR) model, the Moving Average (MA) model and the Autoregressive Moving Average (ARMA) model. The identified model differs by product: AR(1) was found to be the best model for imports of palm oil, MA(3) was found to be the best model for imports of palm kernel oil, and MA(4) was found to be the best model for palm kernel. The forecasts for the next four months of imports of palm oil, palm kernel oil and palm kernel showed a marked decrease compared with the actual data.

  8. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  9. Acoustic power of a moving point source in a moving medium

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Sarris, I. I.

    1976-01-01

    The acoustic power output of a moving point-mass source in an acoustic medium which is in uniform motion and infinite in extent is examined. The acoustic medium is considered to be a homogeneous fluid having both zero viscosity and zero thermal conductivity. Two expressions for the acoustic power output are obtained based on a different definition cited in the literature for the average energy-flux vector in an acoustic medium in uniform motion. The acoustic power output of the source is found by integrating the component of acoustic intensity vector in the radial direction over the surface of an infinitely long cylinder which is within the medium and encloses the line of motion of the source. One of the power expressions is found to give unreasonable results even though the flow is uniform.

  10. Human speed perception is contrast dependent

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Thompson, Peter

    1992-01-01

    When two parallel gratings moving at the same speed are presented simultaneously, the lower-contrast grating appears slower. This misperception is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g. a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). On average, a 70 percent contrast grating must be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, the effect is largely independent of the absolute contrast level and is a quasi-linear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, relative orientation is important. Finally, the misperception of relative speed appears lessened when the stimuli to be matched are presented sequentially.

  11. Free choice in residential care for older people - A philosophical reflection.

    PubMed

    Nord, Catharina

    2016-04-01

    Free choice in elderly care services is a debated issue. Using the theoretical support of philosophers of free will, this paper explores free choice in relocation to residential care. The three dominant perspectives within this field of philosophy, libertarianism, determinism and compatibilism, are applied from the perspective of the older individual to the process of moving. Empirical data were collected through qualitative interviews with 13 older individuals who had recently moved into residential care. These individuals had made the choice to move following either a health emergency or incremental health problems. In a deterministic perspective they had no alternative to moving, which was the inevitable solution to their various personal problems. A network of people important to them assisted in the move, making the choice possible. However, post-move, the interviewees' perspective had changed to a libertarian or compatibilist interpretation, whereby, although the circumstances had conferred little freedom regarding the move, the interviewees reported a high degree of self-determination in the process. It appeared that in order to restore self-respect and personal agency, the older individuals had transformed their restricted choice into a choice made of free will or freer will. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Space shuttle exhaust plumes in the lower thermosphere: Advective transport and diffusive spreading

    NASA Astrophysics Data System (ADS)

    Stevens, Michael H.; Lossow, Stefan; Siskind, David E.; Meier, R. R.; Randall, Cora E.; Russell, James M.; Urban, Jo; Murtagh, Donal

    2014-02-01

    The space shuttle main engine plume deposited between 100 and 115 km altitude is a valuable tracer for global-scale dynamical processes. Several studies have shown that this plume can reach the Arctic or Antarctic to form bursts of polar mesospheric clouds (PMCs) within a few days. The rapid transport of the shuttle plume is currently not reproduced by general circulation models and is not well understood. To help delineate the issues, we present the complete satellite datasets of shuttle plume observations by the Sounding of the Atmosphere using Broadband Emission Radiometry instrument and the Sub-Millimeter Radiometer instrument. From 2002 to 2011 these two instruments observed 27 shuttle plumes in over 600 limb scans of water vapor emission, from which we derive both advective meridional transport and diffusive spreading. Each plume is deposited at virtually the same place off the United States east coast so our results are relevant to northern mid-latitudes. We find that the advective transport for the first 6-18 h following deposition depends on the local time (LT) of launch: shuttle plumes deposited later in the day (~13-22 LT) typically move south whereas they otherwise typically move north. For these younger plumes rapid transport is most favorable for launches at 6 and 18 LT, when the displacement is 10° in latitude corresponding to an average wind speed of 30 m/s. For plumes between 18 and 30 h old some show average sustained meridional speeds of 30 m/s. For plumes between 30 and 54 h old the observations suggest a seasonal dependence to the meridional transport, peaking near the beginning of year at 24 m/s. The diffusive spreading of the plume superimposed on the transport is on average 23 m/s in 24 h. The plume observations show large variations in both meridional transport and diffusive spreading so that accurate modeling requires knowledge of the winds specific to each case. The combination of transport and spreading from the STS-118 plume in August 2007 formed bright PMCs between 75 and 85°N a day after launch. These are the highest latitude Arctic PMCs formed by shuttle exhaust reported to date.

  13. [Children's Rights Council--Move Away in Divorce Panel.

    ERIC Educational Resources Information Center

    Zapf, Charles Z.

    This paper deals with the psychological processes and emotional experiences of children whose parents are going through the process of divorce and the complications posed by divorced or divorcing parents moving to new locations. The paper begins with a discussion of the implications of divorce, claiming that divorce is a change that affects…

  14. 78 FR 37645 - Self-Regulatory Organizations; The Options Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... the same day (T+3) irrespective of the move to Friday night processing and expiration dates. According... operational process and should run on Friday night for all Standard Expiration Contracts. \\8\\ For contracts... expiration dates falling after February 1, 2015. In connection with moving from Saturday to Friday night...

  15. Work, Skills and Training in the Australian Red Meat Processing Sector. A National Vocational Education and Training Research and Evaluation Program Report

    ERIC Educational Resources Information Center

    Norton, Kent; Rafferty, Mike

    2010-01-01

    Work practices in the meat-processing industry have changed in recent years. The industry has moved away from workers dressing a whole carcass towards a chain-based system, with each worker performing a single task along a moving production line. The nature of the meat-processing workforce has also changed. It is no longer dominated by seasonal…

  16. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    NASA Astrophysics Data System (ADS)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

    Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting the solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually considered in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWPs. In particular, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a one-month set of three-days-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. In particular, the relative improvement (in terms of the RMSE) for the DNI during summer is about 20%.
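
    As an illustrative sketch of the post-processing idea, the snippet below fits an ARMA model with external regressors (ARMAX) to simulated NWP forecast residuals, using previous-day cloud fraction and aerosol optical depth as the exogenous variables; the ARMAX order and all values are assumptions.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
n = 200
cloud_prev = rng.uniform(0, 1, n)          # previous-day cloud fraction (simulated)
aod_prev = rng.uniform(0.05, 0.6, n)       # previous-day aerosol optical depth (simulated)
# Simulated forecast residuals (W/m^2) that depend on the exogenous variables
resid = 80 * cloud_prev + 120 * aod_prev + rng.normal(0, 25, n)

exog = np.column_stack([cloud_prev, aod_prev])
armax = SARIMAX(resid, exog=exog, order=(1, 0, 1)).fit(disp=False)

# Predict the residual for a new day and use it to correct the raw NWP forecast
new_exog = np.array([[0.3, 0.2]])
predicted_residual = armax.forecast(steps=1, exog=new_exog)
print(float(predicted_residual[0]))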

  17. Particulate matter speciation profiles for light-duty gasoline vehicles in the United States.

    PubMed

    Sonntag, Darrell B; Baldauf, Richard W; Yanca, Catherine A; Fulper, Carl R

    2014-05-01

    Representative profiles for particulate matter particles less than or equal to 2.5 microm (PM2.5) are developed from the Kansas City Light-Duty Vehicle Emissions Study for use in the US. Environmental Protection Agency (EPA) vehicle emission model, the Motor Vehicle Emission Simulator (MOVES), and for inclusion in the EPA SPECIATE database for speciation profiles. The profiles are compatible with the inputs of current photochemical air quality models, including the Community Multiscale Air Quality Aerosol Module Version 6 (AE6). The composition of light-duty gasoline PM2.5 emissions differs significantly between cold start and hot stabilized running emissions, and between older and newer vehicles, reflecting both impacts of aging/deterioration and changes in vehicle technology. Fleet-average PM2.5 profiles are estimated for cold start and hot stabilized running emission processes. Fleet-average profiles are calculated to include emissions from deteriorated high-emitting vehicles that are expected to continue to contribute disproportionately to the fleet-wide PM2.5 emissions into the future. The profiles are calculated using a weighted average of the PM2.5 composition according to the contribution of PM2.5 emissions from each class of vehicles in the on-road gasoline fleet in the Kansas City Metropolitan Statistical Area. The paper introduces methods to exclude insignificant measurements, correct for organic carbon positive artifact, and control for contamination from the testing infrastructure in developing speciation profiles. The uncertainty of the PM2.5 species fraction in each profile is quantified using sampling survey analysis methods. The primary use of the profiles is to develop PM2.5 emissions inventories for the United States, but the profiles may also be used in source apportionment, atmospheric modeling, and exposure assessment, and as a basis for light-duty gasoline emission profiles for countries with limited data. PM2.5 speciation profiles were developed from a large sample of light-duty gasoline vehicles tested in the Kansas City area. Separate PM2.5 profiles represent cold start and hot stabilized running emission processes to distinguish important differences in chemical composition. Statistical analysis was used to construct profiles that represent PM2.5 emissions from the U.S. vehicle fleet based on vehicles tested from the 2005 calendar year Kansas City metropolitan area. The profiles have been incorporated into the EPA MOVES emissions model, as well as the EPA SPECIATE database, to improve emission inventories and provide the PM2.5 chemical characterization needed by CMAQv5.0 for atmospheric chemistry modeling.

  18. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    NASA Astrophysics Data System (ADS)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in suggesting better-performing sorting algorithms. For this purpose many existing sorting algorithms were observed in terms of the efficiency of their algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques in algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.

  19. Organic matter degradation in a greywater recycling system using a multistage moving bed biofilm reactor (MBBR).

    PubMed

    Saidi, Assia; Masmoudi, Khaoula; Nolde, Erwin; El Amrani, Btissam; Amraoui, Fouad

    2017-12-01

    Greywater is an important non-conventional water resource which can be treated and recycled in buildings. A decentralized greywater recycling system for 223 inhabitants started operating in 2006 in Berlin, Germany. High load greywater undergoes advanced treatment in a multistage moving bed biofilm reactor (MBBR) followed by sand filtration and UV disinfection. The treated water is used safely as service water for toilet flushing. Monitoring of the organic matter degradation was pursued to describe the degradation processes in each stage and optimize the system. Results showed that organic matter reduction was achieved for the most part in the first three reactors, whereas the highest reduction rate was observed in the third reactor in terms of COD (chemical oxygen demand), dissolved organic carbon and BOD7 (biological oxygen demand). The results also showed that the average loading rate entering the system was 3.7 kg COD/d, while the removal rate was 3.4 kg COD/d in a total bioreactor volume of 11.7 m³. In terms of BOD, the loading rate was 2.8 kg BOD/d and it was almost totally removed. This system requires little space (0.15 m²/person) and maintenance work of less than one hour per month and it shows operational stability under peak loads.

  20. Robust skin color-based moving object detection for video surveillance

    NASA Astrophysics Data System (ADS)

    Kaliraj, Kalirajan; Manimaran, Sudha

    2016-07-01

    Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages, which include preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In the feature classification, histograms of both skin and nonskin regions are constructed and the features are classified into foregrounds and backgrounds based on a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. Finally, the localized foreground skin regions are confirmed as a target by verifying the region properties, and nontarget regions are rejected using the Euler method. The target is then tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. It evidently shows that the proposed algorithm works well against slowly varying illumination, target rotations, scaling, and fast and abrupt motion changes.

  1. ARMA-Based SEM When the Number of Time Points T Exceeds the Number of Cases N: Raw Data Maximum Likelihood.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2003-01-01

    Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)

  2. Are U.S. Military Interventions Contagious over Time? Intervention Timing and Its Implications for Force Planning

    DTIC Science & Technology

    2013-01-01

    3.5. ARIMA Models, Temporal Clustering of Conflicts ... 3.9. ARIMA Models ...variance across a distribution. Autoregressive integrated moving average (ARIMA) models are used with time-series data sets and are designed to capture

  3. Use of Time-Series, ARIMA Designs to Assess Program Efficacy.

    ERIC Educational Resources Information Center

    Braden, Jeffery P.; And Others

    1990-01-01

    Illustrates use of time-series designs for determining efficacy of interventions with fictitious data describing drug-abuse prevention program. Discusses problems and procedures associated with time-series data analysis using Auto Regressive Integrated Moving Averages (ARIMA) models. Example illustrates application of ARIMA analysis for…

  4. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.

  5. Neural Substrates of Processing Path and Manner Information of a Moving Event

    ERIC Educational Resources Information Center

    Wu, Denise H.; Morganti, Anne; Chatterjee, Anjan

    2008-01-01

    Languages consistently distinguish the path and the manner of a moving event in different constituents, even if the specific constituents themselves vary across languages. Children also learn to categorize moving events according to their path and manner at different ages. Motivated by these linguistic and developmental observations, we employed…

  6. Geochemistry and geohydrology of the West Decker and Big Sky coal-mining areas, southeastern Montana

    USGS Publications Warehouse

    Davis, R.E.

    1984-01-01

    In the West Decker Mine area, water levels west of the mine at post-mining equilibrium may be almost 12 feet higher than pre-mining levels. Dissolved-solids concentration in water from coal aquifers is about 1,400 milligrams per liter and from mine spoils is about 2,500 milligrams per liter. About 13 years will be required for ground water moving at an average velocity of 2 feet per day to flow from the spoils to the Tongue River Reservoir. The increase in dissolved-solids load to the reservoir due to mining will be less than 1 percent. In the Big Sky Mine area, water levels at post-mining equilibrium will closely resemble pre-mining levels. Dissolved-solids concentration in water from coal aquifers is about 2,700 milligrams per liter and from spoils is about 3,700 milligrams per liter. About 36 to 60 years will be required for ground water moving at an average velocity of 1.2 feet per day to flow from the spoils to Rosebud Creek. The average annual increase in dissolved-solids load to the creek due to mining will be about 2 percent, although a greater increase probably will occur during summer months when flow in the creek is low. (USGS)

  7. Weather explains high annual variation in butterfly dispersal.

    PubMed

    Kuussaari, Mikko; Rytteri, Susu; Heikkinen, Risto K; Heliölä, Janne; von Bagh, Peter

    2016-07-27

    Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation of dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark-release-recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and in average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and, together with adjusting variables, explained 79-91% of the variation observed in dispersal. Different weather variables were selected in the models explaining variation in the three dispersal measures, apparently because of notable intercorrelations among them. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model annually varying dispersal dynamics of species affected by global warming. © 2016 The Author(s).

  8. Adult survival of Black-legged Kittiwakes Rissa tridactyla in a Pacific colony

    USGS Publications Warehouse

    Hatch, Scott A.; Roberts, Bay D.; Fadely, Brian S.

    1993-01-01

    Breeding Black-legged Kittiwakes Rissa tridactyla survived at a mean annual rate of 0.926 in four years at a colony in Alaska. Survival rates observed in sexed males (0.930) and females (0.937) did not differ significantly. The rate of return among nonbreeding Kittiwakes (0.839) was lower than that of known breeders, presumably because more nonbreeders moved away from the study plots where they were marked. Individual nonbreeders frequented sites up to 5 km apart on the same island, while a few established breeders moved up to 2.5 km between years. Mate retention in breeding Kittiwakes averaged 69% in three years. Among pairs that split, the cause of changing mates was about equally divided between death (46%) and divorce (54%). Average adult life expectancy was estimated at 13.0 years. Combined with annual productivity averaging 0.17 chick per nest, the observed survival was insufficient for maintaining population size. Rather, an irregular decline observed in the study colony since 1981 is consistent with the model of a closed population with little or no recruitment. Compared to their Atlantic counterparts, Pacific Kittiwakes have low productivity and high survival. The question arises whether differences reflect phenotypic plasticity or genetically determined variation in population parameters.

  9. Moving as a gift: relocation in older adulthood.

    PubMed

    Perry, Tam E

    2014-12-01

    While discussions of accessibility, mobility and activities of daily living frame relocation studies, in older adulthood, the paper explores the emotional motivation of gift giving as a rationale for moving. This ethnographic study investigates the processes of household disbandment and decision-making of older adults in the Midwestern United States relocating in post-Global Financial Crisis contexts. In this study, relationships are created and sustained through the process of moving, linking older adults (n=81), their kin (n=49), and professionals (n=46) in the Midwestern United States. Using Marcel Mauss' The Gift (1925/1990) as a theoretical lens, relocation in older adulthood is conceptualized as a gift in two ways: to one's partner, and one's kin. Partners may consider gift-giving in terms of the act of moving to appease and honor their partner. Kin who were not moving themselves were also recipients of the gift of moving. These gifts enchain others in relationships of reciprocity. However these gifts, like all gifts, are not without costs or danger, so this paper examines some of the challenges that emerge along with gift-giving. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Encephalolexianalyzer

    DOEpatents

    Altschuler, E.L.; Dowla, F.U.

    1998-11-24

    The encephalolexianalyzer uses digital signal processing techniques on electroencephalograph (EEG) brain waves to determine whether or not someone is thinking about moving, e.g., tapping their fingers, or, alternatively, whether someone is actually moving, e.g., tapping their fingers, or at rest, i.e., not moving and not thinking of moving. The mu waves measured by a pair of electrodes placed over the motor cortex are signal processed to determine the power spectrum. At rest, the peak value of the power spectrum in the 8-13 Hz range is high, while when moving or thinking of moving, the peak value of the power spectrum in the 8-13 Hz range is low. This measured change in signal power spectrum is used to produce a control signal. The encephalolexianalyzer can be used to communicate either directly using Morse code, or via a cursor controlling a remote control; the encephalolexianalyzer can also be used to control other devices. The encephalolexianalyzer will be of great benefit to people with various handicaps and disabilities, and also has enormous commercial potential, as well as being an invaluable tool for studying the brain. 14 figs.
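
    An illustrative sketch (not the patented implementation) of the mu-band measurement described: estimate the peak 8-13 Hz power of a motor-cortex EEG channel and threshold it to form a control signal; the sampling rate and threshold are assumptions.

        import numpy as np
        from scipy.signal import welch

        def mu_band_peak_power(eeg, fs=250.0):
            # Power spectral density of the motor-cortex channel (Welch estimate).
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            band = (freqs >= 8.0) & (freqs <= 13.0)
            return psd[band].max()                        # peak of the spectrum in the mu band

        def control_signal(eeg, fs=250.0, threshold=1e-11):
            # High mu peak -> rest; low mu peak -> movement or imagined movement.
            # The threshold is an assumed, subject-specific calibration value.
            return mu_band_peak_power(eeg, fs) < threshold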

  11. Encephalolexianalyzer

    DOEpatents

    Altschuler, Eric L.; Dowla, Farid U.

    1998-01-01

    The encephalolexianalyzer uses digital signal processing techniques on electroencephalograph (EEG) brain waves to determine whether or not someone is thinking about moving, e.g., tapping their fingers, or, alternatively, whether someone is actually moving, e.g., tapping their fingers, or at rest, i.e., not moving and not thinking of moving. The mu waves measured by a pair of electrodes placed over the motor cortex are signal processed to determine the power spectrum. At rest, the peak value of the power spectrum in the 8-13 Hz range is high, while when moving or thinking of moving, the peak value of the power spectrum in the 8-13 Hz range is low. This measured change in signal power spectrum is used to produce a control signal. The encephalolexianalyzer can be used to communicate either directly using Morse code, or via a cursor controlling a remote control; the encephalolexianalyzer can also be used to control other devices. The encephalolexianalyzer will be of great benefit to people with various handicaps and disabilities, and also has enormous commercial potential, as well as being an invaluable tool for studying the brain.

  12. Moving as a Gift: Relocation in Older Adulthood

    PubMed Central

    2014-01-01

    While discussions of accessibility, mobility and activities of daily living frame relocation studies, in older adulthood, the paper explores the emotional motivation of gift giving as a rationale for moving. This ethnographic study investigates the processes of household disbandment and decision-making of older adults in the Midwestern United States relocating in post-Global Financial Crisis contexts. In this study, relationships are created and sustained through the process of moving, linking older adults (n=81), their kin (n=49), and professionals (n=46) in the Midwestern United States. Using Marcel Mauss’ The Gift (1925/1990) as a theoretical lens, relocation in older adulthood is conceptualized as a gift in two ways: to one’s partner, and one’s kin. Partners may consider gift-giving in terms of the act of moving to appease and honor their partner. Kin who were not moving themselves were also recipients of the gift of moving. These gifts enchain others in relationships of reciprocity. However these gifts, like all gifts, are not without costs or danger, so this paper examines some of the challenges that emerge along with gift-giving. PMID:25456616

  13. Moving standard deviation and moving sum of outliers as quality tools for monitoring analytical precision.

    PubMed

    Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-01

    An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e., noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and the recently described moving sum of outliers (movSO) of patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. By contrast, the movSD and movSO approaches detected an increased CVa at significantly lower ANPed, particularly for measurands with a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large; however, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased imprecision adds only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
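
    A minimal sketch of a moving standard deviation (movSD) monitor over a stream of patient results, using pandas; the window size and control limit are assumptions that would in practice be tuned per measurand by simulation.

        import pandas as pd

        def movsd_flags(results, window=50, upper_limit=1.5):
            s = pd.Series(results)
            movsd = s.rolling(window).std()               # moving SD of the last `window` patient results
            return movsd, movsd > upper_limit             # flag windows whose SD exceeds the control limit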

  14. Ants determine their next move at rest: motor planning and causality in complex systems.

    PubMed

    Hunt, Edmund R; Baddeley, Roland J; Worley, Alan; Sendova-Franks, Ana B; Franks, Nigel R

    2016-01-01

    To find useful work to do for their colony, individual eusocial animals have to move, somehow staying attentive to relevant social information. Recent research on individual Temnothorax albipennis ants moving inside their colony's nest found a power-law relationship between a movement's duration and its average speed; and a universal speed profile for movements showing that they mostly fluctuate around a constant average speed. From this predictability it was inferred that movement durations are somehow determined before the movement itself. Here, we find similar results in lone T. albipennis ants exploring a large arena outside the nest, both when the arena is clean and when it contains chemical information left by previous nest-mates. This implies that these movement characteristics originate from the same individual neural and/or physiological mechanism(s), operating without immediate regard to social influences. However, the presence of pheromones and/or other cues was found to affect the inter-event speed correlations. Hence we suggest that ants' motor planning results in intermittent response to the social environment: movement duration is adjusted in response to social information only between movements, not during them. This environmentally flexible, intermittently responsive movement behaviour points towards a spatially allocated division of labour in this species. It also prompts more general questions on collective animal movement and the role of intermittent causation from higher to lower organizational levels in the stability of complex systems.

  15. Sulfate-reducing anaerobic ammonium oxidation as a potential treatment method for high nitrogen-content wastewater.

    PubMed

    Rikmann, Ergo; Zekker, Ivar; Tomingas, Martin; Tenno, Taavo; Menert, Anne; Loorits, Liis; Tenno, Toomas

    2012-07-01

    After sulfate-reducing ammonium oxidation (SRAO) was first postulated in 2001, several works have been published describing this process in laboratory-scale bioreactors or in nature. In this paper, the SRAO process was performed using reject water as a substrate for microorganisms and a source of NH4+, with SO4(2-) added as an electron acceptor. At a moderate temperature of 20°C, sulfate reduction along with ammonium oxidation was established in a moving bed biofilm reactor (MBBR). In an upflow anaerobic sludge blanket reactor (UASBR) the SRAO process took place at 36°C. Average volumetric TN removal rates of 0.03 kg-N/m³/day in the MBBR and 0.04 kg-N/m³/day in the UASBR were achieved, with moderate long-term average removal efficiencies. Uncultured bacteria clone P4 and uncultured planctomycete clone Amx-PAn30 were detected in the biofilm of the MBBR; uncultured Verrucomicrobiales bacterium clone De2102 and uncultured bacterium clone ATB-KS-1929 were found in the sludge of the UASBR. The stoichiometric ratio of NH4+ removal was significantly higher than could be expected from the extent of SO4(2-) reduction. This phenomenon can primarily be attributed to complex interactions between nitrogen and sulfur compounds and organic matter present in the wastewater. The high NH4+ removal ratio can be attributed to sulfur-utilizing denitrification/denitritation, providing evidence that SRAO occurs independently and is not a result of sulfate reduction and anammox. HCO3- concentrations exceeding 1,000 mg/l were found to have an inhibiting effect on the SRAO process. Small amounts of hydrazine were naturally present in the reaction medium, indicating occurrence of the anammox process. Injections of the anammox intermediates hydrazine and hydroxylamine had a positive effect on SRAO process performance, particularly in the UASBR.

  16. Method and apparatus for coating thin foil with a boron coating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lacy, Jeffrey L.

    An apparatus and a process are disclosed for applying a boron coating to a thin foil. Preferably, the process is a continuous, in-line process for applying a coating to a thin foil comprising wrapping the foil around a rotating and translating mandrel, cleaning the foil with glow discharge in an etching chamber as the mandrel with the foil moves through the chamber, sputtering the foil with boron carbide in a sputtering chamber as the mandrel moves through the sputtering chamber, and unwinding the foil off the mandrel after it has been coated. The apparatus for applying a coating to a thin foil comprises an elongated mandrel. Foil preferably passes from a reel to the mandrel by passing through a seal near the initial portion of an etching chamber. The mandrel has a translation drive system for moving the mandrel forward and a rotational drive system for rotating the mandrel as it moves forward. The etching chamber utilizes glow discharge on a surface of the foil as the mandrel moves through said etching chamber. A sputtering chamber, downstream of the etching chamber, applies a thin layer comprising boron onto the surface of the foil as said mandrel moves through said sputtering chamber. Preferably, the coated foil passes from the mandrel to a second reel by passing through a seal near the terminal portion of the sputtering chamber.

  17. User Interactive Software for Analysis of Human Physiological Data

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Toscano, William; Taylor, Bruce C.; Acharya, Soumydipta

    2006-01-01

    Ambulatory physiological monitoring has been used to study human health and performance in space and in a variety of Earth-based environments (e.g., military aircraft, armored vehicles, small groups in isolation, and patients). Large, multi-channel data files are typically recorded in these environments, and these files often require the removal of contaminated data prior to processing and analyses. Physiological data processing can now be performed with user-friendly, interactive software developed by the Ames Psychophysiology Research Laboratory. This software, which runs on a Windows platform, contains various signal-processing routines for both time- and frequency- domain data analyses (e.g., peak detection, differentiation and integration, digital filtering, adaptive thresholds, Fast Fourier Transform power spectrum, auto-correlation, etc.). Data acquired with any ambulatory monitoring system that provides text or binary file format are easily imported to the processing software. The application provides a graphical user interface where one can manually select and correct data artifacts utilizing linear and zero interpolation and adding trigger points for missed peaks. Block and moving average routines are also provided for data reduction. Processed data in numeric and graphic format can be exported to Excel. This software, PostProc (for post-processing) requires the Dadisp engineering spreadsheet (DSP Development Corp), or equivalent, for implementation. Specific processing routines were written for electrocardiography, electroencephalography, electromyography, blood pressure, skin conductance level, impedance cardiography (cardiac output, stroke volume, thoracic fluid volume), temperature, and respiration
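
    The block-average and moving-average reduction routines mentioned above can be illustrated with a short NumPy sketch; PostProc itself runs on the Dadisp spreadsheet, so this is only an approximation of the operations, with assumed block and window sizes.

        import numpy as np

        def block_average(x, block=10):
            n = (len(x) // block) * block                 # drop the incomplete final block
            return x[:n].reshape(-1, block).mean(axis=1)  # one value per block of samples

        def moving_average(x, window=10):
            kernel = np.ones(window) / window
            return np.convolve(x, kernel, mode='valid')   # simple unweighted moving average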

  18. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance.

    PubMed

    Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker's face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information.

  19. Nitrification of an industrial wastewater in a moving-bed biofilm reactor: effect of salt concentration.

    PubMed

    Vendramel, Simone; Dezotti, Marcia; Sant'Anna, Geraldo L

    2011-01-01

    Nitrification of wastewaters from chemical industries can pose some challenges due to the presence of inhibitory compounds. Some wastewaters, besides their organic complexity, present variable levels of salt concentration. In order to investigate the effect of salt (NaCl) content on the nitrification of a conventional biologically treated industrial wastewater, a bench-scale moving-bed biofilm reactor was operated in sequencing batch mode. The wastewater, with a chloride content of 0.05 g/l, was supplemented with NaCl up to 12 g Cl-/l. The reactor operating cycle was: filling (5 min), aeration (12 or 24 h), settling (5 min) and drawing (5 min). Each experimental run was conducted for 3 to 6 months to address problems related to the inherent wastewater variability and process stabilization. A PLC system assured automatic operation and control of the pertinent process variables. Data obtained from selected batch experiments were fitted by a kinetic model, which considered ammonia, nitrite and nitrate variations. The average performance results indicated that nitrification efficiency was not influenced by chloride content in the range of 0.05 to 6 g Cl-/l and remained around 90%. When the chloride content was 12 g Cl-/l, a significant drop in the nitrification efficiency was observed, even operating with a reaction period of 24 h. Also, a negative effect of the wastewater organic matter content on nitrification efficiency was observed, which was probably caused by growth of heterotrophs to the detriment of autotrophs and by nitrification inhibition by residual chemicals.

  20. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  1. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  2. A Novel Mu Rhythm-based Brain Computer Interface Design that uses a Programmable System on Chip.

    PubMed

    Joshi, Rohan; Saraswat, Prateek; Gajendran, Rudhram

    2012-01-01

    This paper describes the system design of a portable and economical mu rhythm based Brain Computer Interface which employs Cypress Semiconductors Programmable System on Chip (PSoC). By carrying out essential processing on the PSoC, the use of an extra computer is eliminated, resulting in considerable cost savings. Microsoft Visual Studio 2005 and PSoC Designer 5.01 are employed in developing the software for the system, the hardware being custom designed. In order to test the usability of the BCI, preliminary testing is carried out by training three subjects who were able to demonstrate control over their electroencephalogram by moving a cursor present at the center of the screen towards the indicated direction with an average accuracy greater than 70% and a bit communication rate of up to 7 bits/min.

  3. Development of an Integrated Chip for Automatic Tracking and Positioning Manipulation for Single Cell Lysis

    PubMed Central

    Young, Chao-Wang; Hsieh, Jia-Ling; Ay, Chyung

    2012-01-01

    This study adopted a microelectromechanical fabrication process to design a chip integrated with electroosmotic flow and dielectrophoresis force for single cell lysis. Human histiocytic lymphoma U937 cells were driven rapidly by electroosmotic flow and precisely moved to a specific area for cell lysis. By varying the frequency of AC power, 15 V AC at 1 MHz of frequency configuration achieved 100% cell lysing at the specific area. The integrated chip could successfully manipulate single cells to a specific position and lysis. The overall successful rate of cell tracking, positioning, and cell lysis is 80%. The average speed of cell driving was 17.74 μm/s. This technique will be developed for DNA extraction in biomolecular detection. It can simplify pre-treatment procedures for biotechnological analysis of samples. PMID:22736957

  4. Development of an integrated chip for automatic tracking and positioning manipulation for single cell lysis.

    PubMed

    Young, Chao-Wang; Hsieh, Jia-Ling; Ay, Chyung

    2012-01-01

    This study adopted a microelectromechanical fabrication process to design a chip integrated with electroosmotic flow and dielectrophoresis force for single cell lysis. Human histiocytic lymphoma U937 cells were driven rapidly by electroosmotic flow and precisely moved to a specific area for cell lysis. By varying the frequency of AC power, 15 V AC at 1 MHz of frequency configuration achieved 100% cell lysing at the specific area. The integrated chip could successfully manipulate single cells to a specific position and lysis. The overall successful rate of cell tracking, positioning, and cell lysis is 80%. The average speed of cell driving was 17.74 μm/s. This technique will be developed for DNA extraction in biomolecular detection. It can simplify pre-treatment procedures for biotechnological analysis of samples.

  5. Vision-Based Traffic Data Collection Sensor for Automotive Applications

    PubMed Central

    Llorca, David F.; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel. A.

    2010-01-01

    This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry. PMID:22315572

  6. Vision-based traffic data collection sensor for automotive applications.

    PubMed

    Llorca, David F; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel A

    2010-01-01

    This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry.

  7. Statistical modelling of subdiffusive dynamics in the cytoplasm of living cells: A FARIMA approach

    NASA Astrophysics Data System (ADS)

    Burnecki, K.; Muszkieta, M.; Sikora, G.; Weron, A.

    2012-04-01

    Golding and Cox (Phys. Rev. Lett., 96 (2006) 098102) tracked the motion of individual fluorescently labelled mRNA molecules inside live E. coli cells. They found that in the set of 23 trajectories from 3 different experiments, the automatically recognized motion is subdiffusive, and they published an intriguing microscopy video. Here, we extract the corresponding time series from this video by an image segmentation method and present a detailed statistical analysis. We find that this trajectory was not included in the data set already studied and has different statistical properties. It is best fitted by a fractional autoregressive integrated moving average (FARIMA) process with normal-inverse Gaussian (NIG) noise and negative memory. In contrast to earlier studies, this shows that fractional Brownian motion is not the best model for the dynamics documented in this video.
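
    A hedged sketch of simulating a FARIMA(0, d, 0) series from its moving-average expansion, with Gaussian innovations standing in for the NIG noise used in the paper; a negative d gives the negative memory mentioned above, and the truncation length is an assumption.

        import numpy as np

        def farima_0d0(n, d=-0.3, n_weights=500, seed=0):
            # MA weights of (1 - B)^(-d): psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
            psi = np.empty(n_weights)
            psi[0] = 1.0
            for k in range(1, n_weights):
                psi[k] = psi[k - 1] * (k - 1 + d) / k
            eps = np.random.default_rng(seed).normal(size=n + n_weights)   # Gaussian stand-in for NIG noise
            return np.array([psi @ eps[t:t + n_weights][::-1] for t in range(n)])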

  8. A Novel Mu Rhythm-based Brain Computer Interface Design that uses a Programmable System on Chip

    PubMed Central

    Joshi, Rohan; Saraswat, Prateek; Gajendran, Rudhram

    2012-01-01

    This paper describes the system design of a portable and economical mu rhythm based Brain Computer Interface which employs Cypress Semiconductors Programmable System on Chip (PSoC). By carrying out essential processing on the PSoC, the use of an extra computer is eliminated, resulting in considerable cost savings. Microsoft Visual Studio 2005 and PSoC Designer 5.01 are employed in developing the software for the system, the hardware being custom designed. In order to test the usability of the BCI, preliminary testing is carried out by training three subjects who were able to demonstrate control over their electroencephalogram by moving a cursor present at the center of the screen towards the indicated direction with an average accuracy greater than 70% and a bit communication rate of up to 7 bits/min. PMID:23493871

  9. Breakthrough bargaining.

    PubMed

    Kolb, D M; Williams, J

    2001-02-01

    Unspoken, subtle parts of a bargaining process--also known as the shadow negotiation--can set the tone for a successful negotiation. Deborah Kolb and Judith Williams, whose book The Shadow Negotiation was the starting point for this article, say there are three strategies businesspeople can use to guide these hidden interactions. Power moves are used when two negotiating parties hold unequal power--for instance, subordinates and bosses; new and existing employees; and people of different races, ages, or genders. These strategies, such as casting the status quo in an unfavorable light, can help parties realize that they must negotiate: they will be better off if they do and worse off if they don't. Process moves affect how negotiation issues are received by both sides in the process, even though they do not address substantive issues. Working outside of the actual bargaining process, one party can suggest ideas or marshal support that can shape the agenda and influence how others view the negotiation. Appreciative moves alter the tone or atmosphere so that a more collaborative exchange is possible. They shift the dynamics of the shadow negotiation away from the adversarial--helping parties to save face--and thus build trust and encourage dialogue. These strategic moves don't guarantee that all bargainers will walk away winners, but they help to get stalled negotiations moving--out of the dark of unspoken power plays and into the light of true dialogue.

  10. Processing Deficits of Motion of Contrast-Modulated Gratings in Anisometropic Amblyopia

    PubMed Central

    Liu, Zhongjian; Hu, Xiaopeng; Yu, Yong-Qiang; Zhou, Yifeng

    2014-01-01

    Several studies have indicated substantial processing deficits for static second-order stimuli in amblyopia. However, less is known about the perception of second-order moving gratings. To investigate this issue, we measured the contrast sensitivity for second-order (contrast-modulated) moving gratings in seven anisometropic amblyopes and ten normal controls. The measurements were performed with non-equated carriers and a series of equated carriers. For comparison, the sensitivity for first-order motion and static second-order stimuli was also measured. Most of the amblyopic eyes (AEs) showed reduced sensitivity for second-order moving gratings relative to their non-amblyopic eyes (NAEs) and the dominant eyes (CEs) of normal control subjects, even when the detectability of the noise carriers was carefully controlled, suggesting substantial processing deficits of motion of contrast-modulated gratings in anisometropic amblyopia. In contrast, the non-amblyopic eyes of the anisometropic amblyopes were relatively spared. As a group, NAEs showed statistically comparable performance to CEs. We also found that contrast sensitivity for static second-order stimuli was strongly impaired in AEs and part of the NAEs of anisometropic amblyopes, consistent with previous studies. In addition, some amblyopes showed impaired performance in perception of static second-order stimuli but not in that of second-order moving gratings. These results may suggest a dissociation between the processing of static and moving second-order gratings in anisometropic amblyopia. PMID:25409477

  11. Managed Moves: School and Local Authority Staff Perceptions of Processes, Success and Challenges

    ERIC Educational Resources Information Center

    Bagley, Christopher; Hallam, Susan

    2015-01-01

    The current research aimed to increase understanding of the processes of managed moves for children at risk of exclusion from school, particularly exploring what contributed to success and the nature of the challenges experienced. The study was conducted in one English local authority where 11 school staff and 5 local authority staff were…

  12. The Importance of Opening Moves in Classroom Interaction

    ERIC Educational Resources Information Center

    Ginting, Siti Aisyah

    2017-01-01

    The analysis of classroom interaction is very important in teaching learning process in order to reach the learning objectives. The purpose of this study was to describe the types of opening moves used by the teacher through the learning process. The research was carried out in Senior High School. The design of the research was descriptive…

  13. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are high detection accuracy and adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking often requires longer processing times while quicker responses are more prone to errors, so a trade-off between accuracy and speed is required. This paper aims to meet both requirements by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that can automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process that relies on a colour segmentation method (Delta E) together with the CHT algorithm to find the proper colour of the tracked target; the target was attached to the six degree-of-freedom (DOF) robot end-effector that performs a pick-and-place task. Two eye-to-hand cameras with image averaging filters are used to obtain clear and steady images. The paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system, named Controllable Region of Interest based on the Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results show the applicability of the proposed approach for tracking the moving robot with an overall tracking error of 0.25 mm, and the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
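
    A rough Python/OpenCV sketch of detecting the spherical target with the Circular Hough Transform inside a restricted search window, in the spirit of the CRCHT idea; the function name, Hough parameters, and ROI handling are assumptions, not the authors' code.

        import cv2

        def find_target(frame_bgr, roi=None):
            # Restrict the search to a region of interest to cut processing time (CRCHT-style).
            x0, y0 = 0, 0
            if roi is not None:
                x0, y0, w, h = roi
                frame_bgr = frame_bgr[y0:y0 + h, x0:x0 + w]
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                       param1=100, param2=30, minRadius=5, maxRadius=60)
            if circles is None:
                return None
            cx, cy, r = circles[0][0]                     # strongest circle candidate
            return (cx + x0, cy + y0, r)                  # centre and radius in full-frame coordinates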

  14. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.

    PubMed

    Ochiai, Derek H; Adib, Farshad

    2015-04-01

    Operating room efficiency (ORE) and utilization are considered one of the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body. This saves the operative time expended through the traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon to close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved. The money saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.

  15. The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century

    PubMed Central

    Ochiai, Derek H.; Adib, Farshad

    2015-01-01

    Operating room efficiency (ORE) and utilization are considered one of the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body. This saves the operative time expended through the traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon to close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved. The money saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE. PMID:26052490

  16. ARC-1986-AC86-7015

    NASA Image and Video Library

    1986-01-21

    4.17 million kilometers (2.59 million miles). Resolution: 40 km (25 mi). P-29498C. This false-color Voyager 2 composite view of all nine Uranian rings was made from six 15-second exposures through the narrow-angle camera. The special computer processing used to extract color information from the extremely dark and faint rings also causes the even fainter, pastel lines seen between the rings. Two images in each of the green, clear, and violet filters were added together and averaged to find the proper color difference between the rings; the final image was made from these three color averages and represents an enhanced, false-color view. The image shows that the brightest, or Epsilon, ring, at top, is neutral in color, with the eight other, fainter rings showing color differences between them. Moving down toward Uranus, we see the Delta, Gamma, and Eta rings in shades of blue and green; the Beta and Alpha rings in somewhat lighter tones; and finally a set of three, known simply as the 4, 5, and 6 rings, in faint off-white tones. Scientists will use this color information to try to understand the nature and origin of the ring material.

  17. Quantitative determination of engine water ingestion

    NASA Technical Reports Server (NTRS)

    Parikh, P.; Hernan, M.; Sarohia, V.

    1986-01-01

    A nonintrusive optical technique is described for determination of the liquid mass flux in a droplet-laden airstream. The technique was developed for quantitative determination of engine water ingestion resulting from heavy rain or wheel spray. Independent measurements of the liquid water content (LWC) of the droplet-laden airstream and of the droplet velocities were made at the simulated nacelle inlet plane for the liquid mass flux determination. The LWC was measured by illuminating and photographing the droplets contained within a thin slice of the flow field by means of a sheet of light from a pulsed laser. A fluorescent dye introduced into the water enhanced the droplet image definition. The droplet velocities were determined from double-exposed photographs of the moving droplet field. The technique was initially applied to a steady spray generated in a wind tunnel. It was found that although the spray was initially steady, the aerodynamic breakup process was inherently unsteady. This resulted in a wide variation of the instantaneous LWC of the droplet-laden airstream. The standard deviation of ten separate LWC measurements was 31% of the average. However, the liquid mass flux calculated from the average LWC and droplet velocities came within 10% of the known water ingestion rate.

  18. A univariate model of river water nitrate time series

    NASA Astrophysics Data System (ADS)

    Worrall, F.; Burt, T. P.

    1999-01-01

    Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
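
    As an illustration of the kind of empirical model described above, the following hedged sketch fits a seasonal ARIMA to a synthetic monthly nitrate-like series with statsmodels and produces a 12-month forecast; the series, the model order, and the use of SARIMAX are assumptions standing in for the paper's own detrended, deseasoned AR/ARMA construction.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Toy monthly series standing in for a river-water nitrate record.
        idx = pd.date_range("1990-01-31", periods=120, freq="M")
        rng = np.random.default_rng(1)
        nitrate = pd.Series(20 + 5 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 1, 120), index=idx)

        # Seasonal ARIMA standing in for the detrended/deseasoned ARMA in the paper.
        res = SARIMAX(nitrate, order=(1, 0, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
        print(res.get_forecast(steps=12).predicted_mean)  # 12-month-ahead forecast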

  19. Real-time quantum cascade laser-based infrared microspectroscopy in-vivo

    NASA Astrophysics Data System (ADS)

    Kröger-Lui, N.; Haase, K.; Pucci, A.; Schönhals, A.; Petrich, W.

    2016-03-01

    Infrared microscopy can be used to observe dynamic processes on a microscopic scale. Fourier-transform infrared spectroscopy-based microscopes are subject to limitations in time resolution, which hampers their potential for imaging fast-moving systems. In this manuscript we present a quantum cascade laser-based infrared microscope which overcomes these limitations and readily achieves standard video frame rates. The capabilities of our setup are demonstrated by observing dynamical processes at their specific time scales: fermentation, the slow-moving Amoeba proteus, and the fast-moving Caenorhabditis elegans. Mid-infrared sampling times between 30 min and 20 ms are demonstrated.

  20. How Do Changes in Speed Affect the Perception of Duration?

    ERIC Educational Resources Information Center

    Matthews, William J.

    2011-01-01

    Six experiments investigated how changes in stimulus speed influence subjective duration. Participants saw rotating or translating shapes in three conditions: constant speed, accelerating motion, and decelerating motion. The distance moved and average speed were the same in all three conditions. In temporal judgment tasks, the constant-speed…

  1. Unpacking the "Black Box" of Social Programs and Policies: Introduction

    ERIC Educational Resources Information Center

    Solmeyer, Anna R.; Constance, Nicole

    2015-01-01

    Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention is given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…

  2. Tax Breaks for Law Students.

    ERIC Educational Resources Information Center

    Button, Alan L.

    1981-01-01

    A guide to federal income tax law as it affects law students is presented. Some costs that may constitute valuable above-the-line deductions are identified: moving expenses, educational expenses, job-seeking expenses, and income averaging. Available from Washington and Lee University School of Law, Lexington, VA 24450, $5.50 sc) (MLW)

  3. Alabama's Education Report Card, 2009-2010

    ERIC Educational Resources Information Center

    Alabama Department of Education, 2011

    2011-01-01

    In a more consistent and viable manner than ever before, education in Alabama is moving toward its ultimate goal of providing every student with a quality education, thereby preparing them for work, college, and life after high school. Alabama's graduation rates from 2002 to 2008 increased significantly, tripling the national average increase and…

  4. Moving toward climate-informed agricultural decision support - can we use PRISM data for more than just monthly averages?

    USDA-ARS?s Scientific Manuscript database

    Decision support systems/models for agriculture are varied in target application and complexity, ranging from simple worksheets to near real-time forecast systems requiring significant computational and manpower resources. Until recently, most such decision support systems have been constructed with...

  5. A MOVING AVERAGE BAYESIAN MODEL FOR SPATIAL SURFACE AND COVERAGE PREDICTION FROM ENVIRONMENTAL POINT-SOURCE DATA

    EPA Science Inventory

    This paper addresses the general problem of estimating at arbitrary locations the value of an unobserved quantity that varies over space, such as ozone concentration in air or nitrate concentrations in surface groundwater, on the basis of approximate measurements of the quantity ...

  6. On the Nature of SEM Estimates of ARMA Parameters.

    ERIC Educational Resources Information Center

    Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.

    2002-01-01

    Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…

  7. A Computer Program for the Generation of ARIMA Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

    The autoregressive integrated moving averages model (ARIMA) has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore statistical properties of ARIMA data and simulate systems producing time dependent observations.…
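
    A minimal sketch, assuming statsmodels is available, of generating ARIMA data of a known order in the spirit of the program described above; the AR and MA coefficients are illustrative.

        import numpy as np
        from statsmodels.tsa.arima_process import ArmaProcess

        ar = np.array([1.0, -0.7])    # AR lag polynomial 1 - 0.7B  (phi = 0.7)
        ma = np.array([1.0, 0.4])     # MA lag polynomial 1 + 0.4B  (theta = 0.4)
        arma = ArmaProcess(ar, ma).generate_sample(nsample=500)

        arima = np.cumsum(arma)       # integrate once to obtain an ARIMA(1,1,1) realisation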

  8. The Mathematical Analysis of Style: A Correlation-Based Approach.

    ERIC Educational Resources Information Center

    Oppenheim, Rosa

    1988-01-01

    Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…

  9. Inhalant Use among Indiana School Children, 1991-2004

    ERIC Educational Resources Information Center

    Ding, Kele; Torabi, Mohammad R.; Perera, Bilesha; Jun, Mi Kyung; Jones-McKyer, E. Lisako

    2007-01-01

    Objective: To examine the prevalence and trend of inhalant use among Indiana public school students. Methods: The Alcohol, Tobacco, and Other Drug Use among Indiana Children and Adolescents surveys conducted annually between 1991 and 2004 were reanalyzed using 2-way moving average, Poisson regression, and ANOVA tests. Results: The prevalence had…

  10. Average pollutant concentration in soil profile simulated with Convective-Dispersive Equation. Model and Manual

    USDA-ARS?s Scientific Manuscript database

    Different parts of soil solution move with different velocities, and therefore chemicals are leached gradually from soil with infiltrating water. Solute dispersivity is the soil parameter characterizing this phenomenon. To characterize the dispersivity of soil profile at field scale, it is desirable...

  11. A review of metropolitan area early deployment plans and congestion management systems for the development of intelligent transportation systems

    DOT National Transportation Integrated Search

    1997-01-01

    The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...
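
    A tiny illustrative helper, assuming equal weights, for computing a three-quarter moving composite index as the weighted average of three consecutive quarterly indices; the weights actually used by the index are not given here.

        def three_quarter_moving_index(indices, weights=(1/3, 1/3, 1/3)):
            # Weighted average of each run of three consecutive quarterly indices.
            return [sum(w * v for w, v in zip(weights, indices[i:i + 3]))
                    for i in range(len(indices) - 2)]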

  12. The Mobility of Universities

    ERIC Educational Resources Information Center

    Tanaka, Masahiro

    2009-01-01

    This paper notes that universities are mobile. That is, models of universities are transferred or borrowed or move around the world and in the process of moving or being moved they tend to change or be changed from the kind of university they were--either in practice or as ideals at the point of origin. To explore these themes the article…

  13. Motives for Residential Mobility in Later Life: Post-Move Perspectives of Elders and Family Members

    ERIC Educational Resources Information Center

    Sergeant, Julie F.; Ekerdt, David J.

    2008-01-01

    This qualitative study delineates motives for residential mobility, describes dynamics between the elder and family members during the move decision process, and locates the move decision within ecological layers of the aging context. Interviews were conducted with 30 individuals and couples (ages 60-87) who experienced a community-based move…

  14. Counterintuitive and Alternative Moves Choice in the Water Jug Task

    ERIC Educational Resources Information Center

    Carder, Hassina P.; Handley, Simon J.; Perfect, Timothy J.

    2008-01-01

    MOVE problems, like the Tower of London (TOL) or the Water Jug (WJ) task, are planning tasks that appear structurally similar and are assumed to involve similar cognitive processes. Carder et al. [Carder, H.P., Handley, S.J., & Perfect, T.J. ( 2004). Deconstructing the Tower of London: Alternative moves and conflict resolution as predictors of…

  15. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average or inverse distance weighting (IDW), and nearest point, without the necessary recourse to their uncertainties. This study compared the results of popular interpolation procedures as implemented in two commonly used GIS packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream, and along a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). The data were first examined for their spatial distributions and associated variogram parameters (nugget, sill, and range) using PAleontological STatistics (PAST3), before the mean values of the variables were interpolated in the selected GIS software using each of the simple kriging, moving average, and nearest point approaches. Further, the determined variogram parameters were substituted with the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary from 120.1 to 219.5 µS cm-1 with kriging, it varied from 105.6 to 220.0 µS cm-1 and from 135.0 to 173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2). It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00, when it was rarely so (Figure 3). The study concluded that the choice of interpolation procedure may affect modelling decisions and conclusions.
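
    To make the contrast between interpolators concrete, here is a minimal inverse distance weighting (IDW) sketch in NumPy; the power parameter p is an assumption, and kriging or moving-average interpolation would weight the samples differently.

        import numpy as np

        def idw(xy_known, values, xy_query, p=2.0):
            xy_known, values, xy_query = map(np.asarray, (xy_known, values, xy_query))
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)                      # avoid division by zero at sample points
            w = 1.0 / d**p                                # inverse-distance weights, power p
            return (w @ values) / w.sum(axis=1)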

  16. Cloud motion in relation to the ambient wind field

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Scoggins, J. R.

    1975-01-01

    Trajectories of convective clouds were computed from a mathematical model and compared with trajectories observed by radar. The ambient wind field was determined from the AVE IIP data. The model includes gradient, Coriolis, drag, lift, and lateral forces. The results show that rotational effects may account for large differences between the computed and observed trajectories, and that convective clouds may move 10 to 20 degrees to the right or left of the average wind vector and at speeds 5 to 10 m/s faster or slower than the average ambient wind speed.
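    A minimal sketch of this kind of trajectory calculation follows. It is not the model used in the report: only a Coriolis rotation of the velocity deviation from the ambient wind and a linear drag toward the wind are included, and every parameter value is an assumption chosen for illustration. It merely shows how a simple force model can make a cloud drift to one side of, and travel at a different speed from, the mean wind.

```python
# Minimal illustrative sketch, not the model from the report: a cloud is advected by a
# constant ambient wind while the deviation of its velocity from that wind feels a
# Coriolis rotation and a linear drag. All parameter values are assumed.
import numpy as np

f = 1.0e-4                      # Coriolis parameter (1/s), typical mid-latitude value
k_drag = 2.0e-4                 # drag coefficient toward the ambient wind (1/s), assumed
wind = np.array([15.0, 0.0])    # ambient wind (m/s), assumed constant westerly
dt, steps = 60.0, 120           # 60 s time step, 2 h of integration

pos = np.zeros(2)               # cloud position (m)
vel = np.array([20.0, 0.0])     # cloud starts 5 m/s faster than the mean wind

for _ in range(steps):
    rel = vel - wind
    # Coriolis turns the velocity deviation clockwise (northern hemisphere); drag damps it.
    acc = np.array([f * rel[1], -f * rel[0]]) - k_drag * rel
    vel = vel + dt * acc
    pos = pos + dt * vel

offset = pos - wind * dt * steps    # displacement relative to pure advection by the wind
print(f"offset from mean-wind trajectory after 2 h: {offset / 1000.0} km")
print(f"final speed: {np.linalg.norm(vel):.1f} m/s vs wind speed {np.linalg.norm(wind):.1f} m/s")
```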

  17. Finding the average speed of a light-emitting toy car with a smartphone light sensor

    NASA Astrophysics Data System (ADS)

    Kapucu, Serkan

    2017-07-01

    This study aims to demonstrate how the average speed of a light-emitting toy car may be determined using a smartphone’s light sensor. The freely available Android application ‘AndroSensor’ was used for the experiment. The classroom experiment combines complementary physics knowledge of optics and kinematics to find the average speed of a moving object. The speed of the toy car is found by determining the distances between the light-emitting toy car and the smartphone and the times taken to travel those distances. To check that the average speed calculated with the help of AndroSensor was correct, the average speed was also obtained by analyzing video recordings of the toy car. The speeds found with these different methods were in good agreement with each other. Hence, it can be concluded that reliable measurements of the average speed of light-emitting objects can be made with the light sensor of an Android smartphone.
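    The underlying arithmetic is simply average speed = distance travelled divided by elapsed time. The short sketch below illustrates the calculation; the distance and time values are invented and are not taken from the study or from AndroSensor's output format.

```python
# Illustrative average-speed calculation from distance/time pairs.
# The positions and timestamps below are invented, not measured data.
distances_m = [0.50, 1.00, 1.50, 2.00]   # distance from the smartphone (m), assumed
times_s     = [0.0, 1.1, 2.3, 3.4]       # time at which the car reaches each distance (s), assumed

total_distance = distances_m[-1] - distances_m[0]
total_time = times_s[-1] - times_s[0]
average_speed = total_distance / total_time   # v = d / t

print(f"average speed: {average_speed:.2f} m/s")
```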

  18. Amodal completion of moving objects by pigeons.

    PubMed

    Nagasaka, Yasuo; Wasserman, Edward A

    2008-01-01

    In a series of four experiments, we explored whether pigeons complete partially occluded moving shapes. Four pigeons were trained to discriminate between a complete moving shape and an incomplete moving shape in a two-alternative forced-choice task. In testing, the birds were presented with a partially occluded moving shape. In experiment 1, none of the pigeons appeared to complete the testing stimulus; instead, they appeared to perceive the testing stimulus as incomplete fragments. However, in experiments 2, 3, and 4, three of the birds appeared to complete the partially occluded moving shapes. These rare positive results suggest that motion may facilitate amodal completion by pigeons, perhaps by enhancing the figure-ground segregation process.

  19. Relative distance between tracers as a measure of diffusivity within moving aggregates

    NASA Astrophysics Data System (ADS)

    Pönisch, Wolfram; Zaburdaev, Vasily

    2018-02-01

    Tracking particles, be they passive tracers or actively moving bacteria in a growing bacterial colony, is a powerful technique for probing the physical properties of the particles' environment. One of the most common measures of particle motion driven by fluctuations and random forces is the diffusivity, which is routinely obtained by measuring the mean squared displacement of the particles. Often, however, the tracer particles move within a domain or aggregate that itself experiences regular or random motion, which masks the diffusivity of the tracers. Here we provide a method for assessing the diffusivity of tracer particles within mobile aggregates by measuring the so-called mean squared relative distance (MSRD) between two tracers. We provide analytical expressions for both the ensemble-averaged and time-averaged MSRD, allowing diffusivities to be identified directly from experimental data.
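    A minimal sketch of the idea follows, using synthetic data rather than the authors' expressions or code: because both tracers share the aggregate's motion, that common drift cancels in their separation vector, and for two independent tracers with diffusivity D in d dimensions the MSRD grows as 4dDt, so D can be read off the slope of the time-averaged MSRD.

```python
# Synthetic illustration of the MSRD idea (not the authors' code): two tracers diffuse
# inside an aggregate that itself drifts randomly; the drift cancels in the relative
# coordinate, and MSRD(t) ~ 4*d*D*t gives back the tracer diffusivity D.
import numpy as np

rng = np.random.default_rng(1)
D, dt, n, dim = 0.5, 0.01, 20000, 2          # true diffusivity, time step, steps, dimensions

drift = np.cumsum(rng.normal(0, 0.5, size=(n, dim)), axis=0)   # common aggregate motion
step = np.sqrt(2 * D * dt)                                     # per-component Brownian step
r1 = drift + np.cumsum(rng.normal(0, step, size=(n, dim)), axis=0)
r2 = drift + np.cumsum(rng.normal(0, step, size=(n, dim)), axis=0)

def time_averaged_msrd(a, b, lag):
    """Time-averaged mean squared change of the separation vector over a given lag."""
    sep = a - b
    disp = sep[lag:] - sep[:-lag]
    return np.mean(np.sum(disp**2, axis=1))

lags = np.arange(1, 200)
msrd = np.array([time_averaged_msrd(r1, r2, k) for k in lags])
D_est = np.polyfit(lags * dt, msrd, 1)[0] / (4 * dim)   # slope = 4*d*D
print(f"true D = {D}, estimated D = {D_est:.3f}")
```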

  20. Laser ablation for the synthesis of carbon nanotubes

    DOEpatents

    Holloway, Brian C; Eklund, Peter C; Smith, Michael W; Jordan, Kevin C; Shinn, Michelle

    2012-11-27

    Single-walled carbon nanotubes are produced in a novel apparatus by the laser-induced ablation of a moving carbon target. The laser used is of high average power and ultra-fast pulsing. According to various preferred embodiments, the laser produces an output above about 50 watts/cm² at a repetition rate above about 15 MHz and exhibits a pulse duration below about 10 picoseconds. The carbon (or carbon/catalyst) target and the laser beam are moved relative to one another, and a focused flow of "side pumped", preheated inert gas is introduced near the point of ablation to minimize or eliminate interference from the ablated plume by removing the plume and presenting new target area for incidence with the laser beam. When the target is moved relative to the laser beam, rotational or translational movement may be imparted thereto, but rotation of the target is preferred.
