NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the control parameters of the moving average method in order to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If an external event has a unique vibration frequency, the control parameters of the moving average method must be optimized to detect the event efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light-receiving part consisting of a photo-detector and a high-speed data acquisition system. The moving average method is governed by three control parameters: the total number of raw traces, M; the number of averaged traces, N; and the moving step size, n. Raw traces were obtained with the phase-sensitive OTDR while sound signals were generated by a speaker, and the relations among the control parameters were analyzed using these trace data. The results show that if the event signal contains a single frequency, optimal values of N and n exist for detecting the event efficiently.
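The sliding-average scheme over raw traces can be sketched as below. The parameter names M, N, and n follow the abstract, but the implementation details (traces as equal-length lists, uniform averaging) are illustrative assumptions, not the authors' code.

```python
def moving_average_traces(traces, N, n):
    """Average N consecutive raw traces, sliding the window by a step of n.

    traces: list of M equal-length raw OTDR traces (lists of floats).
    Returns the list of averaged traces.
    """
    M = len(traces)
    L = len(traces[0])
    out = []
    for start in range(0, M - N + 1, n):
        window = traces[start:start + N]
        out.append([sum(t[i] for t in window) / N for i in range(L)])
    return out
```

With M raw traces, this yields floor((M - N) / n) + 1 averaged traces, so N and n jointly trade noise suppression against temporal resolution of the vibration event.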
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy of existing methods is low when the moving forces enter and exit the bridge deck, because the structural responses have low sensitivity to the forces in these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining Tikhonov regularization with moving-average concepts. First, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Second, this signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (‖x‖₂²) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The results show that the moving forces can be accurately identified with strong robustness. Related issues, such as the selection of the moving window length and the effects of different penalty functions and car speeds, are discussed as well.
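For reference, the classical Tikhonov step that the improved penalty modifies can be sketched in closed form for a two-parameter problem; this shows only the standard ‖x‖₂² penalty, not the paper's moving-average-based improvement.

```python
def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 for a two-column design matrix A.

    Classical Tikhonov solution: x = (A^T A + lam I)^{-1} A^T b,
    written out as a pure-Python 2x2 closed form for illustration.
    """
    # Normal-equation matrix M = A^T A + lam I (2x2, symmetric)
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    # Right-hand side A^T b
    c0 = sum(r[0] * y for r, y in zip(A, b))
    c1 = sum(r[1] * y for r, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return [(m11 * c0 - m01 * c1) / det, (m00 * c1 - m01 * c0) / det]
```

As lam grows, the solution is pulled toward zero; the paper's improvement instead pulls the identified force toward its moving average, which suits signals with a stable mean.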
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
NASA Astrophysics Data System (ADS)
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell them the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy in which fuzzy logic rules are used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four elements: the type of moving average (four types are considered), the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. We apply genetic algorithms to identify an optimal fuzzy logic rule set and use crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experimental data. Each experiment is repeated 20 times. The results show that, first, the fuzzy moving average strategy obtains a more stable rate of return than plain moving average strategies; second, the holding-amount series is highly sensitive to the price series; third, simple moving average methods are more efficient; and finally, the fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
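The underlying crossover signal that such a strategy starts from can be sketched as below; the fuzzy rating of signal strength and the genetic-algorithm rule search from the paper are not reproduced, and the window lengths are illustrative.

```python
def sma(prices, window):
    """Simple moving average; entry i covers prices[i-window+1 .. i]."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signals(prices, short=3, long=5):
    """+1 when the short SMA crosses above the long SMA (buy),
    -1 when it crosses below (sell). Index is relative to the long SMA."""
    s, l = sma(prices, short), sma(prices, long)
    s = s[len(s) - len(l):]  # align: both series end at the last price
    signals = []
    for i in range(1, len(l)):
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, +1))
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, -1))
    return signals
```

The fuzzy layer described in the abstract would then map each raw +1/-1 signal to a trading volume via the rule set's rating level.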
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to project the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the country's major industrial crops, for the period 1990 to 2012 were analyzed using time-series methods. The autocorrelation function (ACF) and partial autocorrelation function (PACF) were calculated for the data, and an appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques. The forecasting power of the autoregressive moving average (ARMA) model was then used to forecast coconut production for the next eight years.
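The ACF used in the Box-Jenkins identification step can be computed directly; this is the standard sample autocorrelation, not code from the study.

```python
def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag, as inspected
    (together with the PACF) in Box-Jenkins model identification."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # lag-0 autocovariance
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n
        out.append(ck / c0)
    return out
```

A slowly decaying ACF suggests differencing (the "I" in ARIMA); a sharp cutoff at lag q suggests an MA(q) component.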
NASA Astrophysics Data System (ADS)
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the cropping pattern of paddy fields is important for maintaining a sustainable paddy field area, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time-series data. Such data are difficult to classify directly because of temporal noise, so filtering methods such as the wavelet transform and the moving average are needed. The objective of this study is to recognize paddy cropping patterns in West Java with the wavelet transform and the moving average using MODIS imagery (MOD13Q1) from 2001 to 2015, and to compare the two methods. The results show that both methods yield almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods show that the majority of cropping patterns in West Java follow a paddy-fallow-paddy-fallow sequence with varying planting times. Differences in the planting schedule are caused by the availability of irrigation water.
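The moving-average filter used as one of the two smoothing methods can be sketched as a centered window with shrinking edges; the actual processing of the MOD13Q1 vegetation-index series is not reproduced here.

```python
def centered_moving_average(series, window):
    """Centered moving average for temporal smoothing; window must be odd.
    Near the edges the window shrinks so the output has the same length."""
    if window % 2 == 0:
        raise ValueError("window must be odd")
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out
```

Applied to a noisy vegetation-index series, this suppresses isolated cloudy-composite spikes while preserving the seasonal planting cycle.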
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improving the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher-electrophoretic-mobility analytes exhibit higher-frequency peaks, whereas lower-mobility analytes exhibit lower-frequency peaks. On the basis of these data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. With the developed algorithm, each data point is processed according to the migration time of the analyte. The implemented migration velocity-adaptive moving average method improved the signal-to-noise ratio by up to 11 times at a sampling frequency of 4.6 Hz and up to 22 times at a sampling frequency of 25 Hz. This paper could potentially serve as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
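The idea of a migration-time-adaptive window can be sketched as below; the linear growth law is an assumption for illustration, since the paper derives the window size from each analyte's migration velocity.

```python
def adaptive_moving_average(signal, base_window=3, growth=0.01):
    """Moving average whose window grows with migration time (sample index).

    Later-eluting analytes produce broader, lower-frequency peaks, so a
    larger window can be used without distorting them. The linear `growth`
    law here is an illustrative assumption, not the paper's adaptation rule.
    """
    out = []
    for i in range(len(signal)):
        w = base_window + int(growth * i)  # window size at this point
        half = w // 2
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out
```

A fixed window tuned for the last (broadest) peak would over-smooth the early sharp peaks; letting the window track migration time avoids that trade-off.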
[A new kinematics method of determining the elbow rotation axis and evaluation of its feasibility].
Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y
2016-04-18
To study a new method for positioning the rotation axis of elbow external fixation, and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in this experiment. Kinematic data from five elbow flexions were collected with an optical positioning system. The rotation axes of the elbow joints were fitted by the least-squares method, and the kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, yielding the rotation axes of the new kinematic method. Using standard clinical methods, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes obtained with the traditional positioning method. The entrance-point deviation, exit-point deviation, and angle deviation of the two located rotation axes were then compared. For the volunteers, the indicators representing the circularity and coplanarity of the elbow flexion movement trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance-point, exit-point, and angle deviations between the rotation axes determined by the two methods were 1.6972 mm, 1.8383 mm, and 1.3217°, respectively. All deviations were small and within a clinically acceptable range.
The values representing the circularity and coplanarity of each volunteer's single-curvature elbow movement trajectory are very small. The results show that single-curvature elbow movement can be regarded as approximately fixed-axis movement, and that the new method can match the accuracy of the traditional method while making up for the deficiencies of the traditional fixed-axis approach.
Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is the amplification of demand variance from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon, and simulations can be developed to study them. Previous studies have simulated the bullwhip effect in several ways, including mathematical modelling, information-control modelling, and computer programs. In this study, a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show how the bullwhip effect ratio changes with the forecasting method and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error; the forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The results show that decreasing the moving-average period, increasing the smoothing parameter, or increasing the signalling factor creates a bigger bullwhip effect ratio, while the safety stock factor has no impact on the bullwhip effect.
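The core measurement in such simulations, the bullwhip ratio under a moving-average forecast, can be sketched as follows; the order-up-to policy and the parameter values are illustrative assumptions, not the Bullwhip Explorer spreadsheet logic.

```python
def bullwhip_ratio(demand, p=4, lead_time=2):
    """Variance ratio var(orders)/var(demand) for a simple order-up-to policy
    whose demand forecast is a moving average over the last p periods."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    orders = []
    for t in range(p + 1, len(demand)):
        f_now = sum(demand[t - p:t]) / p            # forecast made at t
        f_prev = sum(demand[t - p - 1:t - 1]) / p   # forecast made at t-1
        # Order = observed demand + lead-time-scaled base-stock adjustment
        orders.append(demand[t] + lead_time * (f_now - f_prev))
    return var(orders) / var(demand[p + 1:])
```

In this sketch a shorter moving-average period p makes the forecast jumpier and the ratio larger, matching the trend reported in the abstract.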
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
Buckingham-Jeffery, Elizabeth; Morbey, Roger; House, Thomas; Elliot, Alex J; Harcourt, Sally; Smith, Gillian E
2017-05-19
As service provision and patient behaviour vary by day, healthcare data used for public health surveillance can exhibit large day of the week effects. These regular effects are further complicated by the impact of public holidays. Real-time syndromic surveillance requires the daily analysis of a range of healthcare data sources, including consultations with family doctors (known in the UK as general practitioners, or GPs). Failure to adjust for such reporting biases during analysis of syndromic GP surveillance data could lead to misinterpretations, including false alarms or delays in the detection of outbreaks. The simplest smoothing method for removing a day of the week effect from daily time series data is a 7-day moving average. Public Health England developed the working day moving average in an attempt to also remove public holiday effects from daily GP data. However, neither of these methods adequately accounts for the combination of day of the week and public holiday effects. The extended working day moving average was therefore developed: a further data-driven method for adding a smooth trend curve to a time series graph of daily healthcare data that aims to take both public holiday and day of the week effects into account. It is based on the assumption that the number of people seeking healthcare services is a combination of illness levels/severity and the ability or desire of patients to seek healthcare each day. The extended working day moving average was compared to the seven-day and working day moving averages through application to data from two syndromic indicators from the GP in-hours syndromic surveillance system managed by Public Health England. The extended working day moving average successfully smoothed the syndromic healthcare data by taking into account the combined day of the week and public holiday effects.
In comparison, the seven-day and working day moving averages were unable to account for all these effects, which led to misleading smoothing curves. The results from this study make it possible to identify trends and unusual activity in syndromic surveillance data from GP services in real-time independently of the effects caused by day of the week and public holidays, thereby improving the public health action resulting from the analysis of these data.
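The baseline 7-day moving average cancels day of the week effects exactly because every window contains each weekday once; a minimal sketch of that baseline (not Public Health England's extended working day method):

```python
def seven_day_moving_average(daily):
    """Trailing 7-day moving average of daily counts. Each window holds
    every weekday exactly once, so a purely day-of-week effect with a
    stable weekly profile is averaged out exactly."""
    return [sum(daily[i - 6:i + 1]) / 7 for i in range(6, len(daily))]
```

Public holidays break this cancellation because they shift consultations to neighbouring days without following the weekly profile, which is the gap the working day and extended working day variants address.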
Quantifying rapid changes in cardiovascular state with a moving ensemble average.
Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T
2018-04-01
MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
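The moving ensemble idea can be sketched as a pointwise average over a window of neighbouring beats; the window width and the simple uniform weighting are illustrative assumptions, not MEAP's implementation.

```python
def moving_ensemble_average(beats, width=5):
    """Moving ensemble average: each beat is replaced by the pointwise mean
    of the `width` beats centered on it, preserving within-beat waveform
    shape while suppressing beat-to-beat noise. A fixed-window ensemble
    would instead collapse a whole task block into one averaged waveform."""
    half = width // 2
    n_beats, n_samp = len(beats), len(beats[0])
    out = []
    for b in range(n_beats):
        lo, hi = max(0, b - half), min(n_beats, b + half + 1)
        out.append([sum(beats[k][i] for k in range(lo, hi)) / (hi - lo)
                    for i in range(n_samp)])
    return out
```

Because the window slides beat by beat, indices such as preejection period can be estimated continuously, which is what lets the method track fast changes like the Valsalva baroreflex response.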
Detrending moving average algorithm for multifractals
NASA Astrophysics Data System (ADS)
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique for quantifying the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces; it contains a parameter θ that determines the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, generalizing the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. The backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, whose multifractal nature is confirmed.
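A minimal sketch of the backward (θ=0) DMA fluctuation function at a single scale n, using the standard definition (cumulative profile minus its trailing moving average); the multifractal q-order generalization developed in the paper is omitted.

```python
def dma_fluctuation(x, n):
    """Backward (theta = 0) DMA fluctuation at scale n: subtract the trailing
    n-point moving average from the cumulative profile of the series, then
    take the root-mean-square of the residual. The scaling F(n) ~ n^alpha
    gives the DMA estimate of the Hurst exponent alpha."""
    # Cumulative profile of the series
    y, s = [], 0.0
    for v in x:
        s += v
        y.append(s)
    resid = []
    for i in range(n - 1, len(y)):
        ma = sum(y[i - n + 1:i + 1]) / n
        resid.append(y[i] - ma)
    return (sum(r * r for r in resid) / len(resid)) ** 0.5
```

Fitting log F(n) against log n over a range of scales yields the scaling exponent; the MFDMA extension replaces the RMS with q-order moments.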
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively using the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability of the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths, to examine the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick
2009-08-01
Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method, recently used by the USEPA, employs a vertical radial plume mapping (VRPM) algorithm based on optical remote sensing. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of three ground-level PICs and two above-ground PICs. Three- to ten-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC data and the meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability. The maximum error was 22% when 3-cycle moving average PICs were used, decreasing to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ~30 μg m⁻³ (geometric standard deviation: ~1.5). Concentrations exceeded the Indian National Ambient Air Quality standard (60 μg m⁻³) on 2-5% of observation days. Average concentrations were ~25 μg m⁻³ higher during winter than during monsoon and ~8 μg m⁻³ higher during morning hours than the diurnal average. A moving average subtraction method based on the 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach; both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
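The general idea of a moving average subtraction method can be sketched as splitting the series into a slowly varying regional baseline and a spiky local residual; the window length and the use of a plain centered mean are illustrative assumptions, not the exact method applied in the study.

```python
def split_local_regional(pm, window=61):
    """Decompose a 1-min PM2.5 series into a slowly varying regional baseline
    (centered moving average) plus a local residual capturing sharp spikes
    from nearby sources. The 61-min window is an illustrative choice."""
    half = window // 2
    regional, local = [], []
    for i in range(len(pm)):
        lo, hi = max(0, i - half), min(len(pm), i + half + 1)
        base = sum(pm[lo:hi]) / (hi - lo)
        regional.append(base)
        local.append(pm[i] - base)
    return regional, local
```

Summing the positive residuals relative to the total then gives a rough local-fraction estimate of the kind compared across the three villages.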
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
A new image segmentation method based on multifractal detrended moving average analysis
NASA Astrophysics Data System (ADS)
Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le
2015-08-01
In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and treated as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined following the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5), and forward (θ = 1) windows with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. Interestingly, D(h(-10)) outperforms other parameters both for the MF-DMS-based method in the centered case and for the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, we find that the gray-value fluctuation in nutrient-deficient areas is much more severe than that in non-deficient areas.
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small minority of the pixels in HD video taken by a UAV, and the background of each frame is usually moving because of the motion of the UAV itself. The high computational cost of detection algorithms prevents them from running at full frame resolution. To solve the problem of moving-target detection in UAV video, we propose a heterogeneous CPU-GPU detection algorithm: background registration eliminates the impact of the moving background, and frame differencing detects small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for the method. Experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
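The frame-differencing stage can be sketched as a thresholded inter-frame difference; this omits the background-registration and CPU-GPU partitioning parts of the proposed method.

```python
def frame_difference_mask(prev, curr, threshold):
    """Binary motion mask: 1 where the absolute inter-frame difference
    exceeds the threshold. In the paper this is applied after background
    registration has aligned the two frames."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

On registered frames, the mask isolates the few pixels belonging to small moving targets, which is why the per-pixel work is a natural candidate for the GPU side of the framework.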
Motile and non-motile sperm diagnostic manipulation using optoelectronic tweezers.
Ohta, Aaron T; Garcia, Maurice; Valley, Justin K; Banie, Lia; Hsu, Hsan-Yin; Jamshidi, Arash; Neale, Steven L; Lue, Tom; Wu, Ming C
2010-12-07
Optoelectronic tweezers (OET) were used to manipulate human spermatozoa to determine whether their response to OET predicts sperm viability among non-motile sperm. We review the electro-physical basis for how live and dead human spermatozoa respond to OET. The maximal velocity at which non-motile spermatozoa could be induced to move by attraction to or repulsion from a moving OET field was measured. Viable sperm are attracted to OET fields and can be induced to move at an average maximal velocity of 8.8 ± 4.2 µm s⁻¹, while non-viable sperm are repelled by OET fields and are induced to move at an average maximal velocity of -0.8 ± 1.0 µm s⁻¹. Manipulation of sperm using OET does not appear to result in increased DNA fragmentation, making this a potential method for identifying viable non-motile sperm for assisted reproductive technologies.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. 
For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
Mechanistic approach to generalized technical analysis of share prices and stock market indices
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2002-05-01
Classical technical analysis methods of stock evolution are recalled, i.e., the notions of moving averages and momentum indicators. Moving averages lead to the definition of death and gold crosses and of resistance and support lines. Momentum indicators lead the price trend, and thus give signals before the trend turns over. The classical technical-analysis investment strategy is thereby sketched. Next, we present a generalization of these tricks drawing on physical principles, i.e., taking into account not only the price of a stock but also the volume of transactions, which becomes a time-dependent generalized mass. The notions of pressure, acceleration, and force are deduced, and a generalized (kinetic) energy is easily defined. The momentum indicators take into account the sign of the fluctuations, while the energy is geared toward their absolute value; the two have different patterns, which are checked by searching for the crossing points of their respective moving averages. The evolution of IBM over 1990-2000 is used for illustration.
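The volume-as-mass analogy can be sketched directly: with traded volume as a generalized mass m(t) and the one-step price change as a velocity v(t), momentum keeps the sign of the fluctuation while kinetic energy depends on its square. The discrete one-step definitions below are an illustrative reading of the abstract, not the authors' exact formulas.

```python
def generalized_momentum_energy(prices, volumes):
    """Physics-style indicators: momentum m(t)*v(t) and kinetic energy
    0.5*m(t)*v(t)^2, with volume as a time-dependent generalized mass and
    the price change as a velocity. Momentum is signed; energy is not."""
    momentum, energy = [], []
    for t in range(1, len(prices)):
        v = prices[t] - prices[t - 1]  # price "velocity"
        m = volumes[t]                 # volume as generalized mass
        momentum.append(m * v)
        energy.append(0.5 * m * v * v)
    return momentum, energy
```

Taking moving averages of the two series and locating their crossing points then mirrors the cross-based signals described in the abstract.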
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, a state space model of the proposed health index is constructed, and a general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil-sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
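The pipeline above fits a state space model by sequential Monte Carlo; as a much simpler hedged sketch of the same idea (smooth a noisy health index with a moving average, then extrapolate a fitted trend to an alert threshold), one might write:

```python
def moving_average(x, w):
    """Trailing moving average of window w (output is shorter by w-1 samples)."""
    return [sum(x[i - w + 1:i + 1]) / w for i in range(w - 1, len(x))]

def rul_linear(index, threshold):
    """Remaining useful life by linear least-squares extrapolation of a smoothed
    health index: number of future steps until the fitted line reaches threshold."""
    n = len(index)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(index) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, index)) \
        / sum((ti - tbar) ** 2 for ti in t)
    intercept = ybar - slope * tbar
    if slope <= 0:
        return float('inf')  # no degradation trend detected
    t_fail = (threshold - intercept) / slope
    return max(0.0, t_fail - (n - 1))
```

This replaces the particle-filter state estimation with a plain linear fit, so it only illustrates the extrapolate-to-threshold step, not the published method.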
Gauging the Nearness and Size of Cycle Maximum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2003-01-01
A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number, together with the maximum value of the 2-mo moving average of monthly mean sunspot number, to effect the estimation. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that the conventional maximum amplitude for cycle 23 would be approx. 124.5, occurring near July 2000 +/- 5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23: 120.8 in April 2000.
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center
2009-06-01
use just the previous month; moving average 2-month period (MA2) uses the average from the previous two months; moving average 3-month period (MA3)... ED prior to discharge by provider) MA2/MA3/MA4 - moving averages of 2 to 4 months in length; MAD - mean absolute deviation (measure of accuracy for...
Forecasting daily meteorological time series using ARIMA and regression models
NASA Astrophysics Data System (ADS)
Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir
2018-04-01
The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving average methods, the autoregressive integrated moving average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied to the N-LUT for the first time, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were found to be reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
Video-Assisted Thoracic Surgical Lobectomy for Lung Cancer: Description of a Learning Curve.
Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Cao, Shiqi; Cao, Yongke
2017-07-01
Video-assisted thoracic surgical (VATS) lobectomy is gaining popularity in the treatment of lung cancer. The aim of this study is to investigate the learning curve of VATS lobectomy by using multidimensional methods and to compare the learning curve groups with respect to perioperative clinical outcomes. We retrospectively reviewed a prospective database to identify 67 consecutive patients who underwent VATS lobectomy for lung cancer by a single surgeon. The learning curve was analyzed by using the moving average and cumulative sum (CUSUM) methods. With the moving average and CUSUM analyses of the operation time, patients were stratified into two groups, with chronological order defining early and late experiences. Perioperative clinical outcomes were compared between the two learning curve groups. According to the moving average method, the peak point for operation time occurred at the 26th case. The CUSUM method also showed the operation time peak point at the 26th case. When results were compared between the early- and late-experience periods, the operation time, duration of chest drainage, and postoperative hospital stay were significantly longer in the early-experience group (cases 1 to 26). The intraoperative estimated blood loss was significantly less in the late-experience group (cases 27 to 67). CUSUM charts showed a decreasing duration of chest drainage after the 36th case and a shortening postoperative hospital stay after the 37th case. Multidimensional statistical analyses suggested that the learning curve for VATS lobectomy for lung cancer required ∼26 cases. Favorable intraoperative and postoperative care parameters for VATS lobectomy were observed in the late-experience group.
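The CUSUM learning-curve analysis can be sketched as follows (an illustrative simplification, not the authors' exact implementation: a cumulative sum of deviations from the overall mean operation time, whose peak marks the transition from the early- to the late-experience period):

```python
def cusum(values):
    """Cumulative sum of deviations from the overall mean. The chart rises while
    observations sit above the mean (long early operation times) and falls once
    they drop below it (shorter late operation times)."""
    mean = sum(values) / len(values)
    s, chart = 0.0, []
    for v in values:
        s += v - mean
        chart.append(s)
    return chart

def peak_case(values):
    """1-based index of the CUSUM maximum, read as the learning-curve turning point."""
    chart = cusum(values)
    return max(range(len(chart)), key=lambda i: chart[i]) + 1
```

Applied to a surgeon's sequence of operation times, `peak_case` returns the case number at which the chart peaks (the 26th case in the study above).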
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction: With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods: Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results: The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions: G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced-order adaptive controllers are discussed. The first is based on an input-output system description and the second on a state-variable description. The implementation of a single-input, single-output autoregressive moving average system is considered.
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89% to 100% for moving average tracking, 26% to 100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7 to 1.1 mm for real-time tracking, and 3.7 to 7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4% to 30% for moving average tracking, 0% to 23% for real-time tracking, and 10% to 47% for no compensation. 
The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation in all cases. The geometric and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
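A hedged sketch of the moving-average tracking idea in one dimension (the aperture follows a moving average of the measured target position, trading geometric accuracy for fewer abrupt moves; a toy model, not the clinical system):

```python
def track_moving_average(trace, window):
    """Aperture position follows the moving average of the last `window` samples
    of target motion (shorter history is used at the start of the trace)."""
    pos = []
    for i in range(len(trace)):
        lo = max(0, i - window + 1)
        pos.append(sum(trace[lo:i + 1]) / (i + 1 - lo))
    return pos

def rms_error(target, aperture):
    """Root-mean-square geometric tracking error."""
    n = len(target)
    return (sum((t - a) ** 2 for t, a in zip(target, aperture)) / n) ** 0.5
```

For an oscillating target, the moving-average aperture gives a smaller RMS error than leaving the aperture fixed (no compensation), while a hypothetical perfect real-time tracker would give zero error; this mirrors the ordering of the three strategies reported above.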
TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach.
Elgendi, Mohamed
2016-11-02
Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method, referred to here for the first time as two event-related moving averages ("TERMA"), detects events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve high-accuracy detection of biomedical events. The results recommend that the window sizes of the two moving averages (W1 and W2) follow the inequality (8 × W1) ≥ W2 ≥ (2 × W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions.
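A minimal sketch of the TERMA idea (a short moving average tracking peaks and a long one tracking events; blocks where the short average exceeds the long one yield one peak each, with windows chosen to satisfy 8×W1 ≥ W2 ≥ 2×W1). This is simplified relative to the published framework, whose thresholds and block-size checks are omitted:

```python
def centered_ma(x, w):
    """Centered moving average with edge truncation; w should be odd."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

def terma_peaks(signal, w1, w2, beta=0.0):
    """Two event-related moving averages: where the short (peak) average exceeds
    the long (event) average plus an offset, a block of interest is formed, and
    the sample with the largest amplitude in each block is reported as a peak."""
    ma_peak, ma_event = centered_ma(signal, w1), centered_ma(signal, w2)
    offset = beta * (sum(signal) / len(signal))
    peaks, block = [], []
    for i in range(len(signal)):
        if ma_peak[i] > ma_event[i] + offset:
            block.append(i)
        elif block:
            peaks.append(max(block, key=lambda j: signal[j]))
            block = []
    if block:
        peaks.append(max(block, key=lambda j: signal[j]))
    return peaks
```

With W1 = 3 and W2 = 7 the window inequality above holds (6 ≤ 7 ≤ 24).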
THE SEDIMENTATION PROPERTIES OF THE SKIN-SENSITIZING ANTIBODIES OF RAGWEED-SENSITIVE PATIENTS
Andersen, Burton R.; Vannier, Wilton E.
1964-01-01
The sedimentation coefficients of the skin-sensitizing antibodies to ragweed were evaluated by the moving partition cell method and the sucrose density gradient method. The most reliable results were obtained by sucrose density gradient ultracentrifugation which showed that the major portion of skin-sensitizing antibodies to ragweed sediment with an average value of 7.7S (7.4 to 7.9S). This is about one S unit faster than γ-globulins (6.8S). The data from the moving partition cell method are in agreement with these results. Our studies failed to demonstrate heterogeneity of the skin-sensitizing antibodies with regard to sedimentation rate. PMID:14194391
NASA Astrophysics Data System (ADS)
Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan
2018-06-01
In this paper, we studied the parametric resonance of an axially moving viscoelastic nanobeam with varying velocity. Based on nonlocal strain gradient theory, we established the transversal vibration equation of the axially moving nanobeam and the corresponding boundary conditions. By applying the averaging method, we obtained a set of self-governing ordinary differential equations when the excitation frequency of the moving parameters is twice an intrinsic frequency or near the sum of certain second-order intrinsic frequencies. On the plane of parametric excitation frequency and excitation amplitude, we can obtain the instability region generated by the resonance, and through numerical simulation we analyze the influence of the scale effect and system parameters on the instability region. The results indicate that viscoelastic damping shrinks the resonance instability region, while the average velocity and stiffness shift the instability region to the left- and right-hand sides, respectively. Meanwhile, the scale effect of the system is obvious: the nonlocal parameter exhibits not only a stiffness-softening effect but also a damping-weakening effect, while the material characteristic length parameter exhibits a stiffness-hardening effect and a damping-reinforcement effect.
A New Trend-Following Indicator: Using SSA to Design Trading Rules
NASA Astrophysics Data System (ADS)
Leles, Michel Carlo Rodrigues; Mozelli, Leonardo Amaral; Guimarães, Homero Nogueira
Singular Spectrum Analysis (SSA) is a non-parametric approach that can be used to decompose a time series into trends, oscillations and noise. Trend-following strategies rely on the principle that financial markets move in trends for extended periods of time. Moving Averages (MAs) are the standard indicator used to design such strategies. In this study, SSA is used as an alternative method to enhance trend resolution in comparison with the traditional MA. New trading rules using SSA as the indicator are proposed. This paper shows that for the Dow Jones Industrial Average (DJIA) and Shanghai Securities Composite Index (SSCI) time series, the SSA trading rules provided, in general, better results than MA trading rules.
Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; Essa, K. S.
2015-02-01
We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
Robust Semi-Active Ride Control under Stochastic Excitation
2014-01-01
broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models... 3) Down Up 4) Down Down. These four cases can be written in compact form as Eq. (20), where ... is the Heaviside
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on which method is best and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series. PMID:23150785
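The centred detrending moving average estimator compared above can be sketched as follows (a bare-bones version: detrend the integrated profile with a centred moving average and read the Hurst index from the log-log slope of the fluctuation function; the scale choices here are ours, not from the paper):

```python
import math

def profile(x):
    """Cumulative sum (integrated series) of the mean-subtracted signal."""
    mean = sum(x) / len(x)
    y, s = [], 0.0
    for v in x:
        s += v - mean
        y.append(s)
    return y

def cdma_fluctuation(x, n):
    """Centred detrending moving average fluctuation F(n): RMS deviation of the
    profile from its centred moving average of (odd) window n."""
    y = profile(x)
    h = n // 2
    devs = [(y[i] - sum(y[i - h:i + h + 1]) / n) ** 2
            for i in range(h, len(y) - h)]
    return (sum(devs) / len(devs)) ** 0.5

def hurst_cdma(x, scales=(5, 9, 17, 33)):
    """Hurst exponent as the least-squares log-log slope of F(n) versus n."""
    pts = [(math.log(n), math.log(cdma_fluctuation(x, n))) for n in scales]
    mx = sum(a for a, _ in pts) / len(pts)
    my = sum(b for _, b in pts) / len(pts)
    return sum((a - mx) * (b - my) for a, b in pts) / sum((a - mx) ** 2 for a, _ in pts)
```

For uncorrelated noise the estimate should sit near H = 0.5; persistent (anti-persistent) series push it above (below) that value.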
[A peak recognition algorithm designed for chromatographic peaks of transformer oil].
Ou, Linjun; Cao, Jian
2014-09-01
In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of the signal curves and square-wave curves to determine the optimal values of the normalized peak identification parameters, combined with the absolute peak retention times and the peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise or to changes in chromatographic peak width or peak shape. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
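The iterative moving-average step can be sketched like this (illustrative only: repeated centred smoothing yields a baseline, and a square wave marks samples rising above it; the normalization and retention-time logic of the paper are omitted):

```python
def iterated_ma(x, w, passes):
    """Apply a centred moving average repeatedly; repeated passes approximate a
    Gaussian smoother and give a progressively smoother baseline estimate."""
    h = w // 2
    for _ in range(passes):
        x = [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
             for i in range(len(x))]
    return x

def square_wave(x, baseline):
    """1 where the signal rises above its smoothed baseline, else 0; runs of 1s
    delimit candidate chromatographic peaks."""
    return [1 if xi > bi else 0 for xi, bi in zip(x, baseline)]
```

Runs of ones in the square wave can then be intersected with the expected peak windows to assign peaks to components.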
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during the month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and an autoregressive moving average model (Model 3). At the same time, the results point out that the minimum daily temperature (tdmin) at the meteorological stations studied followed a normal distribution each month, with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
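The second model can be sketched as follows, assuming, as the study found, that daily minimum temperatures within a month are normally distributed (the function and symbol names are ours): the expected number of frost days is the number of days times the probability that the daily minimum falls below 0 °C.

```python
import math

def expected_frost_days(mu, sigma, days_in_month):
    """Expected number of frost days if the daily minimum temperature is
    Normal(mu, sigma) in degC: days * P(T < 0), with the normal CDF
    Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    p_frost = 0.5 * (1.0 + math.erf((0.0 - mu) / (sigma * math.sqrt(2.0))))
    return days_in_month * p_frost
```

Feeding the ARIMA-forecast monthly mean of minimum temperature and the station's monthly standard deviation into this formula yields the FD forecast.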
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, both outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. 
For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.
Permeation of limonene through disposable nitrile gloves using a dextrous robot hand
Banaee, Sean; S Que Hee, Shane
2017-01-01
Objectives: The purpose of this study was to investigate the permeation of the low-volatility solvent limonene through different disposable, unlined, unsupported nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). Methods: This study utilized a moving and a static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). Results: The average post-permeation thicknesses (before reconditioning) for all gloves for both the moving and static hand were more than 10% greater than the pre-permeation ones (P≤0.05), although this was not so after reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm2/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Conclusions: Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation and that the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving. PMID:28111415
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2012 CFR
2012-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2014 CFR
2014-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2013 CFR
2013-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
26 CFR 1.448-2 - Nonaccrual of certain amounts by service providers.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-experience method is not allowed. (3) Safe harbor 3: modified Black Motor method. A taxpayer may use a... accounts receivable balance at the end of the current taxable year by a percentage (modified Black Motor... modified Black Motor moving average percentage is computed by dividing the total bad debts sustained...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
We present that EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution while underestimates the emissions in most cases because of ignoring the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed thismore » issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on a sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising the accuracy. The developed technique in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation network by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed such as signalized intersections. Lastly, experimental results show error difference ranging from 2% to 8% for most pollutants except PM 10.« less
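The dynamic time warping similarity at the heart of HC-DTW can be sketched with the classic dynamic-programming recurrence. The function below is an illustrative, minimal implementation (not the authors' code); it treats each speed profile as a plain list of floats and uses absolute difference as the local cost.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two speed profiles (lists of floats)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])            # local cost of pairing a[i-1], b[j-1]
            cost[i][j] = d + min(cost[i - 1][j],    # stretch a
                                 cost[i][j - 1],    # stretch b
                                 cost[i - 1][j - 1])  # match and advance both
    return cost[n][m]
```

Hierarchical clustering would then feed the pairwise `dtw_distance` values into an agglomerative linkage step to group similar speed profiles into candidate LDSs.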
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with the large amount of discharge noise introduced by the nonlinear nature of hospital procedures, and in the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the ARIMA model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014.
Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
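The four error measures used to score the forecasts above are standard; a compact sketch of how they might be computed (function and variable names are ours, not from the paper) is:

```python
import math

def forecast_errors(actual, predicted):
    """Mean forecast error, mean absolute error, symmetric MAPE (%), and RMSE."""
    n = len(actual)
    errs = [p - a for a, p in zip(actual, predicted)]
    mfe = sum(errs) / n                                  # signed bias
    mae = sum(abs(e) for e in errs) / n                  # average magnitude
    smape = 100.0 / n * sum(2 * abs(p - a) / (abs(a) + abs(p))
                            for a, p in zip(actual, predicted))
    rmse = math.sqrt(sum(e * e for e in errs) / n)       # penalizes large misses
    return mfe, mae, smape, rmse
```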
TERMA Framework for Biomedical Signal Analysis: An Economic-Inspired Approach
Elgendi, Mohamed
2016-01-01
Biomedical signals contain features that represent physiological events, and each of these events has peaks. The analysis of biomedical signals for monitoring or diagnosing diseases requires the detection of these peaks, making event detection a crucial step in biomedical signal processing. Many researchers have difficulty detecting these peaks to investigate, interpret and analyze their corresponding events. To date, there is no generic framework that captures these events in a robust, efficient and consistent manner. A new method, referred to for the first time as two event-related moving averages ("TERMA"), uses event-related moving averages to detect events in biomedical signals. The TERMA framework is flexible and universal and consists of six independent LEGO building bricks to achieve highly accurate detection of biomedical events. The results recommend that the window sizes of the two moving averages (W1 and W2) follow the inequality (8×W1)≥W2≥(2×W1). Moreover, TERMA is a simple yet efficient event detector that is suitable for wearable devices, point-of-care devices, fitness trackers and smart watches, compared to more complex machine learning solutions. PMID:27827852
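As an illustration of the two-moving-average idea, the hedged sketch below flags samples where a short (event-related, window W1) moving average rises above a longer (cycle-related, window W2) one. The edge-padding scheme and the threshold offset `beta` are simplifying assumptions of ours, not the published TERMA specification.

```python
def moving_average(x, w):
    """Centered moving average with odd window w; edges padded by repetition (assumption)."""
    half = w // 2
    padded = x[:1] * half + x + x[-1:] * half
    return [sum(padded[i:i + w]) / w for i in range(len(x))]

def terma_blocks(signal, w1=3, w2=9, beta=0.0):
    """Indices where the event-related MA exceeds the cycle-related MA.
    w1 and w2 follow the recommended inequality 2*W1 <= W2 <= 8*W1."""
    assert 2 * w1 <= w2 <= 8 * w1
    ma_peak = moving_average(signal, w1)
    ma_event = moving_average(signal, w2)
    mean = sum(signal) / len(signal)
    thresh = [e + beta * mean for e in ma_event]   # offset threshold (beta assumed)
    return [i for i in range(len(signal)) if ma_peak[i] > thresh[i]]
```

On a flat signal the two averages coincide and nothing is flagged; a localized peak lifts the short average above the long one and marks a block of interest.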
High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.
Haxton, Thomas K
2015-03-10
We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
Techniques and computations for mapping plot clusters that straddle stand boundaries
Charles T. Scott; William A. Bechtold
1995-01-01
Many regional (extensive) forest surveys use clusters of subplots or prism points to reduce survey costs. Two common methods of handling clusters that straddle stand boundaries entail: (1) moving all subplots into a single forest cover type, or (2) "averaging" data across multiple conditions without regard to the boundaries. These methods result in biased...
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identify more transients without unfavorable effects are other merits of the proposed identifier.
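The "integrated" (I) step that converts a non-stationary series into a stationary one is plain differencing; a minimal sketch:

```python
def difference(series, d=1):
    """Apply d-th order differencing: each pass replaces the series by its
    successive increments, removing polynomial trends of degree d."""
    for _ in range(d):
        series = [series[i] - series[i - 1] for i in range(1, len(series))]
    return series
```

A linear trend vanishes after one pass, a quadratic trend after two — which is why ARIMA fits its AR/MA terms to the differenced series.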
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
Work-related accidents among the Iranian population: a time series analysis, 2000–2011
Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Background: Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives: To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods: In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results: There was an average of 1476 accidents per month (1476.05±458.77, mean±SD). The final ARIMA(p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)12, consisting of first-order autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. Conclusions: The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 reflected the stability of the occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
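The abstract does not reproduce the AMA′ formula itself; as an analogous illustration of a time-varying-volatility moving average, the Kaufman-style adaptive moving average below adjusts its smoothing speed with an efficiency ratio (trend strength over path length). The constants `fast`, `slow`, and the lookback `n` are illustrative assumptions, not the paper's calibrated values.

```python
def adaptive_ma(prices, n=10, fast=2, slow=30):
    """Kaufman-style adaptive moving average: the efficiency ratio (in [0, 1])
    speeds the EMA up in clean trends and slows it down in choppy ranges."""
    ama = [prices[0]]
    for t in range(1, len(prices)):
        lo = max(0, t - n)
        direction = abs(prices[t] - prices[lo])                 # net move over window
        volatility = sum(abs(prices[i] - prices[i - 1])         # path length
                         for i in range(lo + 1, t + 1))
        er = direction / volatility if volatility else 0.0
        # squared smoothing constant interpolated between fast and slow EMAs
        sc = (er * (2 / (fast + 1) - 2 / (slow + 1)) + 2 / (slow + 1)) ** 2
        ama.append(ama[-1] + sc * (prices[t] - ama[-1]))
    return ama
```

In a steady trend the efficiency ratio approaches 1 and the average tracks price closely; in a sideways range it approaches 0, which is the mechanism that suppresses whipsaw signals.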
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB radar involves the core task of separating the waves reflected by the static background from those reflected by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in UWB radar-based moving target detection. Robust PCA models are criticized for being batch-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames that is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods prove the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
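The overlapping-windows idea, independent of the RPCA solver itself, can be sketched as a fixed-size buffer that is re-decomposed as each frame arrives. Here a simple running mean stands in for the low-rank background and the residual for the sparse foreground; both are placeholders for illustration, not the IALM decomposition.

```python
from collections import deque

def process_stream(frames, window=20):
    """Online overlapping-window processing: keep the most recent `window`
    frames; each new frame evicts the oldest, and the decomposition
    (placeholder: buffer mean as background) is re-run on the buffer."""
    buf = deque(maxlen=window)
    foregrounds = []
    for frame in frames:
        buf.append(frame)                       # slide the window by one frame
        background = sum(buf) / len(buf)        # stand-in for the low-rank part
        foregrounds.append(frame - background)  # stand-in for the sparse part
    return foregrounds
```

In the paper's setting each `frame` would be a radar scan vector and the buffer a matrix handed to RPCA; the sliding-buffer bookkeeping is the same.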
Kinesin-microtubule interactions during gliding assays under magnetic force
NASA Astrophysics Data System (ADS)
Fallesen, Todd L.
Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside-down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface over which the microtubules are gliding. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is also possible to calculate the fraction of motors able to move microtubules, which is determined to be ~5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to apply a uniform magnetic field to a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that an average of 2-7 kinesins act against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc. motors acting together.
When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods led us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also use the data that other authors used in other models and compare the results. This also proved that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
Class III correction using an inter-arch spring-loaded module
2014-01-01
Background: A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods: Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results: All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm, together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm, of which 92% was due to skeletal contribution and 8% to dental contribution. The average molar correction was 5.2 mm, of which 69% was due to skeletal contribution and 31% to dental contribution. Conclusions: Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction resulted from forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars and distalization of the mandibular molars, together with a rotation of the occlusal plane. PMID:24934153
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated in a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with LSA implemented using the moving averaging method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
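The output-smoothing step can be illustrated with a centered majority vote over neighboring 1-min windows, a simple stand-in for the paper's moving average output filter (the window length `w` and label strings are our assumptions):

```python
from collections import Counter

def smooth_predictions(labels, w=5):
    """Smooth per-window class predictions by majority vote over a centered
    moving window, suppressing isolated misclassifications in a trip."""
    half = w // 2
    out = []
    for i in range(len(labels)):
        window = labels[max(0, i - half): i + half + 1]   # truncated at the edges
        out.append(Counter(window).most_common(1)[0][0])
    return out
```

A single spurious "bicycle" window in the middle of a car trip is voted away by its neighbors, which is exactly the effect that raised cross-validated accuracy from 89.8% to 91.9% in the paper.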
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)
1999-01-01
A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed. Thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders it suitable for portable, on-site analysis.
Wu, Yan; Aarts, Ronald M.
2018-01-01
A recurring problem regarding the use of conventional comb filter approaches for the elimination of periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filter that provides effective attenuation in the stopbands and gain close to unity in the passbands. In this paper, we present a novel comb filtering implementation whereby iterative application of FIR moving-average filtering is exploited in order to enhance the comb filter's selectivity. Our results indicate that the proposed approach can be used to effectively approximate the FIR moving average filter characteristics to those of an ideal filter. A cascaded implementation using the proposed approach is shown to further increase the attenuation in the filter stopbands. Moreover, broadening of the bandwidth of the comb filter stopbands around −3 dB according to the fundamental frequency of the stopband can be achieved by the novel method, which constitutes an important characteristic to account for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity. PMID:29599955
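The selectivity gain from cascading can be checked numerically: an L-point moving average has spectral nulls at multiples of 1/L (normalized frequency), and applying it k times raises the magnitude response to the k-th power, deepening the stopbands while leaving the DC gain at unity. A minimal sketch (not the paper's implementation):

```python
import cmath
import math

def ma_comb_response(L, k, f):
    """Magnitude response at normalized frequency f (cycles/sample) of an
    L-point moving-average FIR applied k times in cascade."""
    z = cmath.exp(-2j * math.pi * f)
    # Frequency response of one L-point moving average: (1/L) * sum_{n} z^{-n}
    h = sum(z ** n for n in range(L)) / L
    return abs(h) ** k    # cascading k identical stages multiplies responses
```

Since the single-stage response is below unity everywhere off DC, the cascaded response `|H|^k` is strictly smaller near the notches, which is the selectivity-enhancement mechanism the abstract describes.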
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) over different window lengths. However, most real systems are nonlinear, which the linear PCA method cannot handle to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR using an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean.
The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection of the Cad System in E. coli model by monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
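The exponential weighting of residuals referred to above follows the standard EWMA recursion z_t = λ·r_t + (1 − λ)·z_{t−1}, which discounts older residuals geometrically. A minimal sketch (the smoothing constant λ and the residual stream are illustrative, not the paper's tuned values):

```python
def ewma(residuals, lam=0.2):
    """Exponentially weighted moving average of a residual sequence:
    z_t = lam * r_t + (1 - lam) * z_{t-1}, started from z_0 = 0.
    Recent residuals dominate, giving the chart its 'decreasing
    exponential' memory."""
    z = 0.0
    out = []
    for r in residuals:
        z = lam * r + (1 - lam) * z
        out.append(z)
    return out
```

A detector would flag a fault when `z` crosses a control limit; because `z` pools many recent residuals, a sustained small shift is caught with fewer samples (smaller ARL1) than a memoryless test would need.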
Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin
2014-01-01
Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow- or fast-moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow- or fast-moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average-speed people relative to slow- or fast-moving people. PMID:24421882
NASA Astrophysics Data System (ADS)
Meintz, Andrew Lee
This dissertation offers a description of the development of a fuel cell plug-in hybrid electric vehicle focusing on the propulsion architecture selection, propulsion system control, and high-level energy management. Two energy management techniques have been developed and implemented for real-time control of the vehicle. The first method is a heuristic method that relies on a short-term moving average of the vehicle power requirements. The second method utilizes an affine function of the short-term and long-term moving average vehicle power requirements. The development process of these methods has required the creation of a vehicle simulator capable of estimating the effect of changes to the energy management control techniques on the overall vehicle energy efficiency. Furthermore, the simulator has allowed for the refinement of the energy management methods and for the stability of the method to be analyzed prior to on-road testing. This simulator has been verified through on-road testing of a constructed prototype vehicle under both highway and city driving schedules for each energy management method. The results of the finalized vehicle control strategies are compared with the simulator predictions and an assessment of the effectiveness of both strategies is discussed. The methods have been evaluated for energy consumption in the form of both hydrogen fuel and stored electricity from grid charging.
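The second energy-management method above, an affine function of short- and long-term moving averages of power demand, can be sketched as follows. The window lengths and weights here are illustrative assumptions, not the dissertation's calibrated values.

```python
import numpy as np

def moving_avg(x, w):
    """Trailing moving average over the last w samples (shorter at the start)."""
    return np.array([x[max(0, i - w + 1):i + 1].mean() for i in range(len(x))])

def fuel_cell_request(power, w_short=10, w_long=60, a=0.7, b=0.3):
    """Affine combination of short- and long-term average power demand,
    in the spirit of the second energy-management method above.
    The windows and weights (w_short, w_long, a, b) are assumed values.
    """
    return a * moving_avg(power, w_short) + b * moving_avg(power, w_long)

demand = np.abs(np.sin(np.linspace(0.0, 6.0, 300))) * 40.0  # kW, synthetic cycle
fc = fuel_cell_request(demand)      # slowly varying fuel-cell command
battery = demand - fc               # battery absorbs the transient remainder
```

The design intent of such a split is that the fuel cell tracks the smoothed demand while the battery buffers fast transients, which the simulator described above would then evaluate for overall efficiency.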
Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Thomas, James L.
2009-01-01
An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.
Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline
2014-01-01
Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
Ultra-Short-Term Wind Power Prediction Using a Hybrid Model
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking electricity demand in Northeast China as an example. The data were obtained from historical wind power records for an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted wind power values. The hybrid model combines the persistence method, MLR, and LS, and supports two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA), and artificial neural network (ANN). Comparison of the results of these models confirms the validity of the proposed hybrid model in terms of error and correlation coefficient, and shows that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable wind power and the correlations between actual and predicted wind power.
Examination of the Armagh Observatory Annual Mean Temperature Record, 1844-2004
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
The long-term annual mean temperature record (1844-2004) of the Armagh Observatory (Armagh, Northern Ireland, United Kingdom) is examined for evidence of systematic variation, in particular as related to solar/geomagnetic forcing and secular variation; both are apparent in the temperature record. Ten-year moving averages of temperature are found to correlate strongly with 10-year moving averages of both the aa geomagnetic index and sunspot number, with correlation coefficients of approximately 0.7, implying that nearly half the variance in the 10-year moving average of temperature can be explained by solar/geomagnetic forcing. The residuals appear episodic in nature, with cooling seen in the 1880s and again near 1980. Seven of the last 10 years of the temperature record have exceeded 10 C, unprecedented in the overall record. Sunspot-cycle averages and 2-cycle moving averages of temperature associate strongly with similar averages for the solar/geomagnetic cycle, with the residuals displaying an apparent 9-cycle variation and a steep rise in temperature associated with cycle 23. Hale-cycle averages of temperature for even-odd pairs of sunspot cycles correlate with similar averages for the solar/geomagnetic cycle and, especially, with the length of the Hale cycle. Indications are that the annual mean temperature will likely exceed 10 C over the next decade.
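The "correlation of 10-year moving averages, so r squared near half the variance" reasoning can be sketched as below. The series here are synthetic stand-ins with assumed numbers, not the Armagh or aa-index data.

```python
import numpy as np

def moving_average(x, n=10):
    """Trailing n-year moving average (series shortens by n - 1 points)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# synthetic stand-ins for the aa index and the temperature record
# (illustrative trend-plus-noise series, not the observatory data)
rng = np.random.default_rng(1)
years = np.arange(1844, 2005)
aa = 15.0 + 0.06 * (years - 1844) + rng.normal(0.0, 2.0, years.size)
temp = 9.0 + 0.05 * aa + rng.normal(0.0, 0.2, years.size)

r = np.corrcoef(moving_average(temp), moving_average(aa))[0, 1]
r2 = r ** 2  # fraction of MA-temperature variance attributable to forcing
```

With a correlation coefficient of about 0.7, as reported above, r squared is about 0.49, which is the "nearly half the variance" statement in the abstract.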
Inhalant Use among Indiana School Children, 1991-2004
ERIC Educational Resources Information Center
Ding, Kele; Torabi, Mohammad R.; Perera, Bilesha; Jun, Mi Kyung; Jones-McKyer, E. Lisako
2007-01-01
Objective: To examine the prevalence and trend of inhalant use among Indiana public school students. Methods: The Alcohol, Tobacco, and Other Drug Use among Indiana Children and Adolescents surveys conducted annually between 1991 and 2004 were reanalyzed using 2-way moving average, Poisson regression, and ANOVA tests. Results: The prevalence had…
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z and T tests. For comparison, a moving average based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis-test based controller is able to regulate LPP close to the set point while maintaining rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
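A Z-test based update of this kind can be sketched as follows: the actuator command moves only when the test rejects "mean LPP equals the set point", so cycle-to-cycle combustion noise does not drive the controller. The gain, critical value, and all numbers are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def z_test_update(lpp_samples, setpoint, sigma, u, gain=0.5, z_crit=1.96):
    """Hypothesis-test based combustion phase control step (sketch).

    sigma is the assumed known cyclic standard deviation of LPP;
    gain and z_crit (two-sided test at alpha = 0.05) are assumed values.
    """
    err = np.mean(lpp_samples) - setpoint
    z = err / (sigma / np.sqrt(len(lpp_samples)))
    if abs(z) > z_crit:          # reject H0: mean LPP == set point
        u -= gain * err          # move the command toward the set point
    return u

# 16 cycles with LPP 4 degrees late: the test fires and the command moves
u_new = z_test_update(np.full(16, 12.0), setpoint=8.0, sigma=2.0, u=10.0)
```

A moving-average controller would instead react to every smoothed deviation; the hypothesis-test variant acts only on statistically significant ones, which is what constrains the LPP variance.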
NASA Astrophysics Data System (ADS)
Yoon, Eun-A.; Hwang, Doo-Jin; Chae, Jinho; Yoon, Won Duk; Lee, Kyounghoon
2018-03-01
This study was carried out to determine the in situ target strength (TS) and behavioral characteristics of moon jellyfish (Aurelia aurita) using two frequencies (38 and 120 kHz) in a 2-frequency-difference method for distinguishing A. aurita from other marine planktonic organisms. The average TS was -71.9 to -67.9 dB at 38 kHz and -75.5 to -66.0 dB at 120 kHz, and the average ΔMVBS120-38 kHz was similar at -1.5 to 3.5 dB. The TS values varied over a range of about 14 dB, from -83.3 to -69.0 dB, depending on the pulsation of A. aurita. The species moved in a range of -0.1 to 1.0 m, mostly horizontally, with moving speeds of 0.3 to 0.6 m/s. The TS and behavioral characteristics of A. aurita can distinguish the species from others, and the acoustic technology can also contribute to understanding the distribution and abundance of the species.
Collell, Guillem; Prelec, Drazen; Patil, Kaustubh R
2018-01-31
Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data; examples include bootstrap averaging (bagging) combined with either undersampling or oversampling of the minority class examples. However, rebalancing methods entail asymmetric changes to the examples of different classes, which in turn can introduce their own biases. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is to employ the threshold-moving technique, which applies a threshold to the continuous output of a model, offering the possibility to adapt to a performance measure a posteriori, i.e., as a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold-moving. In this paper, we study this combination and demonstrate its competitiveness. Contrary to other resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark data sets, using both decision trees and neural networks as base classifiers, and we perform analyses that provide insights into the proposed method.
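The bagging-plus-threshold-moving combination can be sketched with plain arrays: average the member scores, then pick the cutoff a posteriori instead of using 0.5. The ensemble scores and minority rate below are hypothetical; a quantile-matched cutoff is just one simple way to "move" the threshold.

```python
import numpy as np

def bagging_scores(member_scores):
    """Average the member scores of a bagging ensemble (scores in [0, 1])."""
    return np.mean(member_scores, axis=0)

def moved_threshold(scores, minority_rate):
    """Threshold-moving: choose the cutoff a posteriori so the predicted
    positive rate matches the (imbalanced) class prior, instead of 0.5."""
    return np.quantile(scores, 1.0 - minority_rate)

# 3 hypothetical ensemble members scoring 10 examples; 20% minority class
member_scores = np.array([
    [0.10, 0.20, 0.15, 0.30, 0.25, 0.20, 0.10, 0.45, 0.60, 0.55],
    [0.05, 0.25, 0.20, 0.35, 0.20, 0.15, 0.10, 0.50, 0.70, 0.50],
    [0.10, 0.30, 0.10, 0.25, 0.30, 0.20, 0.15, 0.40, 0.65, 0.60],
])
avg = bagging_scores(member_scores)
t = moved_threshold(avg, minority_rate=0.2)
pred = avg >= t   # a 0.5 cutoff would have predicted no positives at all
```

Because the ensemble is trained on the natural class distribution, `avg` stays a calibrated score, and any performance measure can be optimized later purely by choosing `t`.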
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
Tani, Yuji; Ogasawara, Katsuhiko
2012-01-01
This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, which had not been utilized thus far. We examined the performance of a prediction method using the auto-regressive integrated moving-average (ARIMA) model on business data obtained from the Radiology Department. We built the model using the number of radiological examinations over the past 9 years and predicted the number of radiological examinations in the final year, then compared the actual values with the forecast values. We established that the performance prediction method was simple and cost-effective using free software, and we were able to build a simple model by pre-processing the data to remove trend components. The difference between predicted and actual values was 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, our method was highly versatile and adaptable to general time-series data. Therefore, other healthcare organizations can use our method for the analysis and forecasting of their business data.
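The difference-forecast-integrate logic of such an ARIMA workflow can be sketched in a few lines. This is a minimal ARIMA(1,1,0)-style stand-in on synthetic counts, not the study's fitted model; a real analysis would use a dedicated library (e.g., statsmodels).

```python
import numpy as np

def arima_110_forecast(series, horizon):
    """Minimal ARIMA(1,1,0)-style stand-in: difference once, fit an AR(1)
    to the demeaned differences by least squares, forecast the differences,
    and integrate back. Illustrative only."""
    d = np.diff(series)
    mu = d.mean()                                 # drift of the differences
    c = d - mu
    phi = (c[1:] @ c[:-1]) / (c[:-1] @ c[:-1])    # AR(1) coefficient
    last, level, out = c[-1], series[-1], []
    for _ in range(horizon):
        last = phi * last
        level = level + mu + last
        out.append(level)
    return np.array(out)

# synthetic monthly examination counts over 9 years: trend plus noise
rng = np.random.default_rng(3)
counts = 1000.0 + 5.0 * np.arange(108) + rng.normal(0.0, 20.0, 108)
fc = arima_110_forecast(counts, horizon=12)   # forecast the 10th year
```

Differencing removes the trend component, which mirrors the pre-processing step the abstract credits for keeping the model simple.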
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovy, Jo; Hogg, David W., E-mail: jo.bovy@nyu.ed
2010-07-10
The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.
Isolating and moving single atoms using silicon nanocrystals
Carroll, Malcolm S.
2010-09-07
A method is disclosed for isolating single atoms of an atomic species of interest by locating the atoms within silicon nanocrystals. This can be done by implanting, on the average, a single atom of the atomic species of interest into each nanocrystal, and then measuring an electrical charge distribution on the nanocrystals with scanning capacitance microscopy (SCM) or electrostatic force microscopy (EFM) to identify and select those nanocrystals having exactly one atom of the atomic species of interest therein. The nanocrystals with the single atom of the atomic species of interest therein can be sorted and moved using an atomic force microscope (AFM) tip. The method is useful for forming nanoscale electronic and optical devices including quantum computers and single-photon light sources.
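The selection step above can be motivated with a short Poisson calculation (implantation statistics are assumed Poisson, as is standard for ion implantation; this arithmetic is illustrative, not from the patent text).

```python
from math import exp, factorial

def poisson_pmf(k, mean=1.0):
    """P(k atoms in a nanocrystal) under Poisson implantation at the given mean dose."""
    return mean ** k * exp(-mean) / factorial(k)

# At an average dose of one atom per nanocrystal, only about 1/e (~36.8%)
# of nanocrystals end up with exactly one atom, which is why the
# post-implant SCM/EFM charge measurement is needed to select them.
p_one = poisson_pmf(1)
p_none = poisson_pmf(0)
```

At a mean dose of one, "exactly zero" and "exactly one" are equally likely, so roughly a quarter of nanocrystals carry two or more atoms and must also be screened out.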
An efficient estimator to monitor rapidly changing forest conditions
Raymond L. Czaplewski; Michael T. Thompson; Gretchen G. Moisen
2012-01-01
Extensive expanses of forest often change at a slow pace. In this common situation, FIA produces informative estimates of current status with the Moving Average (MA) method and post-stratification with a remotely sensed map of forest-nonforest cover. However, MA "smoothes out" estimates over time, which confounds analyses of temporal trends; and post-...
Opportunities to improve monitoring of temporal trends with FIA panel data
Raymond Czaplewski; Michael Thompson
2009-01-01
The Forest Inventory and Analysis (FIA) Program of the Forest Service, Department of Agriculture, is an annual monitoring system for the entire United States. Each year, an independent "panel" of FIA field plots is measured. To improve accuracy, FIA uses the "Moving Average" or "Temporally Indifferent" method to combine estimates from...
The Prediction of Teacher Turnover Employing Time Series Analysis.
ERIC Educational Resources Information Center
Costa, Crist H.
The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei
2016-03-01
Two-photon fluorescence microscopy (TPFM) is well suited to monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most noise was eliminated and host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noise. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memoryless block is approximated with arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving average noise as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained that includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noise. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Short-term forecasts gain in accuracy. [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compares with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of autocorrelations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. The major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited to seasonal patterns, which makes forecasts of load demand as fine-grained as hourly possible. Accurate up to two years out, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Rockhill, J; Phillips, M
Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46 Gy in 23 fractions to GTV1+2cm margin and an additional 14 Gy in 7 fractions to GTV2+2cm margin. We simulated tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2 cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small) to 3.5 (large) cm margins to GTV1, and a 1.5 cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0 cm margin for GTV1, suggesting that whole-brain therapy is optimal, and then 1.5 cm (average-move) and 1.5–3.0 cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that resulting from the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on the individual tumor characteristics.
A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
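The train-test charting approach described above can be sketched with an EWMA control chart: fit the center line and limits on an earlier "training" period, then monitor later cases. The smoothing constant, limit width, and synthetic operative times below are conventional textbook choices, not the paper's values.

```python
import numpy as np

def ewma_chart(baseline, monitored, lam=0.1, L=3.0):
    """EWMA chart: learn control limits on a baseline period, then flag
    monitored cases whose smoothed value drifts beyond L-sigma limits.
    lam and L are assumed textbook values, not the study's tuning."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    z, flags = mu, []
    for i, t in enumerate(monitored, start=1):
        z = lam * t + (1 - lam) * z
        half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        flags.append(abs(z - mu) > half)
    return np.array(flags)

rng = np.random.default_rng(4)
early = rng.normal(182.0, 15.0, 60)  # minutes, synthetic early-career cases
late = rng.normal(118.0, 15.0, 60)   # faster cases after experience accrues
flags = ewma_chart(early, late)      # the chart should signal the change
```

In the paper's setting the same machinery is applied to case-mix-adjusted operative times, so a flag indicates a genuine performance change rather than a difficult patient.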
Consistent and efficient processing of ADCP streamflow measurements
Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a common method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements independent of the ADCP used to collect the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, W; Jiang, M; Yin, F
Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The motion of a moving organ can change considerably due to large variations in respiration at different periods. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training sample is updated over time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120-150 s respiration positions using the 0-120 s training signals. At the same time, MLP 2 was trained using the 30-150 s training signals and then used to predict the 150-180 s respiration positions from them. The respiration position was predicted in this way until the end of the signal. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average).
Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve the respiration forecast accuracy.
Direct determination approach for the multifractal detrending moving average analysis
NASA Astrophysics Data System (ADS)
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ (q ) is related to the partition function and the multifractal spectrum f (α ) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p -model, the two-dimensional p -model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f (α ) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
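The detrending moving average step underlying MF-DMA can be sketched as follows. This is a monofractal fluctuation function on synthetic white noise, not the paper's canonical-measure formalism: the MF-DMA of the paper additionally raises the local fluctuations to a moment order q, which this sketch omits.

```python
import numpy as np

def dma_fluctuation(x, n):
    """Backward detrending moving average: subtract a window-n moving-average
    trend from the cumulative profile and return the r.m.s. fluctuation F(n)."""
    y = np.cumsum(x - x.mean())                        # profile of the series
    trend = np.convolve(y, np.ones(n) / n, mode="valid")
    resid = y[n - 1:] - trend                          # detrended profile
    return np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(5)
noise = rng.normal(size=4096)                          # uncorrelated increments
F = [dma_fluctuation(noise, n) for n in (8, 16, 32, 64)]
slope = np.polyfit(np.log([8, 16, 32, 64]), np.log(F), 1)[0]
# for white noise the scaling exponent should be near 0.5 (monofractal)
```

A q-independent slope, as here, signals a monofractal series; the multifractal measures in the paper produce a slope that varies with q, from which f(α) is determined.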
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because it contains many parameters and the estimate must be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously by either approach alone. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
Optimized nested Markov chain Monte Carlo sampling: theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D
2009-01-01
Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
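A minimal sketch of the composite acceptance step, assuming the usual nested Markov chain form in which the reference-chain bias is divided out at the endpoints; the isothermal-isobaric version in the paper also carries pressure-volume and Jacobian terms not shown here.

```python
import numpy as np

def composite_accept(delta_full, delta_ref, beta_full, beta_ref, rng):
    """Modified Metropolis criterion for a composite move (sketch).

    Accept with probability min(1, exp(-beta_full*dU_full + beta_ref*dU_ref)):
    the full-energy change at the chain endpoints is penalized while the
    reference-chain acceptance already 'paid' for dU_ref is credited back.
    """
    log_acc = -beta_full * delta_full + beta_ref * delta_ref
    return rng.random() < np.exp(min(0.0, log_acc))
```

Raising the reference temperature (lowering beta_ref) is one of the "thermodynamic variables" knobs the abstract describes for tuning the average acceptance of such composite moves.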
NASA Astrophysics Data System (ADS)
Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia
2016-11-01
Analysis of freight rate volatility characteristics attracts more attention after year 2008 due to the effect of credit crunch and slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax for the period: March 1st 1999-February 26th 2015. In this paper, the degree of the multifractality with different fluctuation sizes is calculated. Besides, multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both Capesize and Panamax freight rate index time series are of multifractal nature. The origin of multifractality for the bulk freight rate market series is found mostly due to nonlinear correlation.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aa_M during the declining portion of cycle 23, R_M for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aa_m for cycle 24 (greater than or equal to 14.6), R_M for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
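As an illustration of the smoothing used throughout the abstract, a 12-month running mean and the resulting correlation can be computed as below; the monthly series here are synthetic stand-ins, not the actual aa-index or solar wind data.

```python
import numpy as np

def moving_average(x, w=12):
    # Simple w-point running mean (the abstract's exact smoothing
    # convention is not specified; a plain 12-month mean is assumed)
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Hypothetical monthly series standing in for the aa index and the
# solar wind speed over one activity cycle
months = np.arange(132)
aa = 20.0 + 12.0 * np.sin(2 * np.pi * months / 132)
vsw = 450.0 + 90.0 * np.sin(2 * np.pi * months / 132)

# Correlation of the two smoothed series
r = np.corrcoef(moving_average(aa), moving_average(vsw))[0, 1]
```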
Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.
2010-01-01
Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that these charts are not suitable if the control limits derived for independent variables are used unchanged. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residuals of the process; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
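The unmodified (textbook) EWMA chart that the paper builds on can be sketched as follows; the Markov-chain ARL computation and the modification for autocorrelated data are not reproduced here.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA statistic with exact (time-varying) control limits.

    lam: smoothing weight, L: limit width multiplier. Assumes
    independent observations; for autocorrelated data the chart is
    typically applied to residuals of a fitted time series model.
    """
    z = np.empty(len(x))
    ucl = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1.0 - lam) * prev
        z[i] = prev
        # exact EWMA variance factor at observation i+1
        var = (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * (i + 1)))
        ucl[i] = mu0 + L * sigma * np.sqrt(var)
    lcl = 2.0 * mu0 - ucl  # limits are symmetric about mu0
    return z, lcl, ucl
```

A sustained mean shift drives the EWMA statistic across the control limit after a few observations, which is the out-of-control signal whose average run length the paper studies.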
Concept for an off-line gain stabilisation method.
Pommé, S; Sibbens, G
2004-01-01
Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
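The 'exponentially moving average' used to follow the gain-shift indicator as a function of time can be sketched as below; the smoothing constant is an assumed value, not one from the paper.

```python
def exp_moving_average(values, alpha=0.02):
    """Exponentially moving average following a slowly varying
    indicator event by event (sketch; alpha is an assumed constant)."""
    out = []
    s = values[0]  # initialise at the first indicator value
    for v in values:
        s = alpha * v + (1.0 - alpha) * s
        out.append(s)
    return out
```

Applied to the per-event Stieltjes-integral indicator, the smoothed trace reveals the gain shift versus time, after which events can be relocated point by point.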
Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.
Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin
2016-10-01
Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
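Croston's method, which the study found most accurate for lumpy, erratic, and slow-moving demand, smooths nonzero demand sizes and inter-demand intervals separately; a minimal sketch (alpha is an assumed smoothing constant):

```python
def croston(demand, alpha=0.1):
    """Croston's intermittent-demand forecast (sketch).

    Smooths nonzero demand sizes (z) and inter-demand intervals (p)
    separately; the per-period forecast is z / p.
    """
    z = None      # smoothed demand size
    p = None      # smoothed inter-demand interval
    q = 1         # periods since last demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:           # initialise at first demand
                z, p = float(d), float(q)
            else:
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts
```

For smooth demand the abstract instead recommends a Simple Moving Average, i.e. the plain mean of the last few periods.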
Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2014-01-01
Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated if daily mean temperature and standard deviation of temperature was associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000–2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature as measured from the local monitor, and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 and up to 28 days; a 1.9°C interquartile range increase in 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between 24-hr standard deviation of temperature and QTc were stronger in colder months, and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. 
These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented in which drift, the random fluctuation of the signal intensity, is compensated for based on an estimate of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models signal distributions of dissolved signals, fails in some cases when using standards and samples affected by drift, but the present method was shown to improve accuracy again. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method; the signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases when signal intensity is increased because of drift. The accuracy was improved in many cases when flicker correction was used, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful for extracting results from experimental runs that would otherwise have to be repeated. Graphical Abstract: A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removal of particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where this would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
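The core idea, rescaling the signal by a moving-average estimate of the drift relative to a reference time, can be sketched as follows; matching of local standard deviations and the 3 × sigma removal of particle events are omitted, and the window length is an assumed value.

```python
import numpy as np

def correct_drift(signal, window=101):
    """Rescale a signal so its local (moving) average matches the
    average at a reference time, here taken as the start of the run.
    Edge bins of the centred moving average are only approximate."""
    local_mean = np.convolve(signal, np.ones(window) / window, mode="same")
    ref = np.mean(signal[:window])       # reference-time average
    return signal * (ref / local_mean)
```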
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
...: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Proposed rule. SUMMARY: This proposed rule..., especially the teaching status adjustment factor. Therefore, we implemented a 3-year moving average approach... moving average to calculate the facility-level adjustment factors. For FY 2011, we issued a notice to...
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation of White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we may conclude that the confidence intervals from the continuous-time estimation model were much smaller than those of the other estimations. We present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia. We decompose the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
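A sketch of the first-order autoregressive fit with the standard-normal confidence interval (one of the three interval methods the abstract compares); White's and the continuous-time estimators are not reproduced here.

```python
import numpy as np

def ar1_ci(x, z=1.96):
    """Least-squares AR(1) coefficient and its asymptotic 95% CI
    using the standard normal approximation (sketch)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # lag-1 regression slope
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    # asymptotic standard error sqrt((1 - phi^2) / n)
    se = np.sqrt((1.0 - phi ** 2) / len(x))
    return phi, (phi - z * se, phi + z * se)
```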
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra was improved, and a method of optical camouflage effect detection using the Lab color space in moving-object detection is presented. A binary image of the moving object is extracted by this measurement technique, and in the sequence diagram characteristic parameters such as degree of dispersion, eccentricity, complexity, and moment invariants are used to construct a feature vector space. The Euclidean distance of the moving target after digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without digital camouflage. The measurement results showed a good camouflage effect. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation. This reflects the adaptability of target and background under dynamic conditions. In view of existing infrared camouflage technology, the next step is to carry out camouflage effect measurement of moving targets in the infrared band.
A Maze Game on Android Using Growing Tree Method
NASA Astrophysics Data System (ADS)
Hendrawan, Y. F.
2018-01-01
A maze is a type of puzzle game in which a player moves through complex and branched passages to find a particular target or location. One method to create a maze is the Growing Tree method. The method grows a tree whose branches become the paths of the maze. This research explored three types of Growing Tree method implementations for maze generation on Android mobile devices. The layouts produced could be played in first- and third-person perspectives. The experiment results showed that it took 17.3 seconds on average to generate 20 cells x 20 cells dynamic maze layouts.
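A minimal Growing Tree generator: keep a list of active cells, repeatedly pick one (the `pick` policy is what distinguishes the method's variants), carve a passage to an unvisited neighbour, and retire cells that have none left. The sketch below is illustrative, not the app's implementation.

```python
import random

def growing_tree_maze(w, h, pick=lambda cells: cells[-1], seed=0):
    """Growing Tree maze generator (sketch).

    `pick` chooses the next active cell: picking the newest cell gives
    recursive-backtracker behaviour; picking at random gives Prim-like
    mazes. Returns the set of carved passages (edges between cells).
    """
    rng = random.Random(seed)
    passages = set()
    visited = {(0, 0)}
    active = [(0, 0)]
    while active:
        cell = pick(active)
        x, y = cell
        # unvisited orthogonal neighbours inside the grid
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < w and 0 <= y + dy < h
                and (x + dx, y + dy) not in visited]
        if not nbrs:
            active.remove(cell)   # dead end: retire this cell
            continue
        nxt = rng.choice(nbrs)
        passages.add(frozenset((cell, nxt)))  # carve the passage
        visited.add(nxt)
        active.append(nxt)
    return passages
```

Because every cell is reached exactly once, the result is always a spanning tree: w·h − 1 passages and a unique path between any two cells.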
Moving in the Right Direction: Helping Children Cope with a Relocation
ERIC Educational Resources Information Center
Kruse, Tricia
2012-01-01
According to national figures, 37.1 million people moved in 2009 (U.S. Census Bureau, 2010). In fact, the average American will move 11.7 times in their lifetime. Why are Americans moving so much? There are a variety of reasons. Regardless of the reason, moving is a common experience for children. If one looks at the developmental characteristics…
Method and apparatus for ultrasonic characterization through the thickness direction of a moving web
Jackson, Theodore; Hall, Maclin S.
2001-01-01
A method and apparatus for determining the caliper and/or the ultrasonic transit time through the thickness direction of a moving web of material using ultrasonic pulses generated by a rotatable wheel ultrasound apparatus. The apparatus includes a first liquid-filled tire and either a second liquid-filled tire forming a nip or a rotatable cylinder that supports a thin moving web of material such as a moving web of paper and forms a nip with the first liquid-filled tire. The components of ultrasonic transit time through the tires and fluid held within the tires may be resolved and separately employed to determine the separate contributions of the two tire thicknesses and the two fluid paths to the total path length that lies between two ultrasonic transducer surfaces contained within the tires in support of caliper measurements. The present invention provides the benefit of obtaining a transit time and caliper measurement at any point in time as a specimen passes through the nip of rotating tires and eliminates inaccuracies arising from nonuniform tire circumferential thickness by accurately retaining point-to-point specimen transit time and caliper variation information, rather than an average obtained through one or more tire rotations. Moreover, ultrasonic transit time through the thickness direction of a moving web may be determined independent of small variations in the wheel axle spacing, tire thickness, and liquid and tire temperatures.
NASA Astrophysics Data System (ADS)
Gligor, M.; Ausloos, M.
2007-05-01
The statistical distances between countries, calculated for various moving average time windows, are mapped into the ultrametric subdominant space as in classical Minimal Spanning Tree methods. The Moving Average Minimal Length Path (MAMLP) algorithm allows a decoupling of fluctuations with respect to the mass center of the system from the movement of the mass center itself. A Hamiltonian representation given by a factor graph is used and plays the role of cost function. The present analysis pertains to 11 macroeconomic (ME) indicators, namely the GDP (x1), Final Consumption Expenditure (x2), Gross Capital Formation (x3), Net Exports (x4), Consumer Price Index (y1), Rates of Interest of the Central Banks (y2), Labour Force (z1), Unemployment (z2), GDP/hour worked (z3), GDP/capita (w1) and Gini coefficient (w2). The target group of countries is composed of 15 EU countries, data taken between 1995 and 2004. By two different methods (the Bipartite Factor Graph Analysis and the Correlation Matrix Eigensystem Analysis) it is found that the strongly correlated countries with respect to the macroeconomic indicators fluctuations can be partitioned into stable clusters.
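The statistical distance feeding such Minimal Spanning Tree constructions is commonly taken as d_ij = sqrt(2(1 − rho_ij)), where rho_ij is the correlation of the indicator fluctuations over the moving window; the abstract does not spell out its exact formula, so this form is an assumption.

```python
import numpy as np

def correlation_distance(series):
    """Distance matrix d_ij = sqrt(2 (1 - rho_ij)) between time series
    (rows of `series`), the usual input to MST clustering (sketch)."""
    rho = np.corrcoef(series)
    # clip guards against tiny negative values from floating-point rho > 1
    return np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
```

d ranges from 0 (perfectly correlated countries) to 2 (perfectly anti-correlated), and satisfies the triangle inequality, so the subdominant ultrametric of the MST is well defined.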
A sharp interface Cartesian grid method for viscous simulation of shocked particle-laden flows
NASA Astrophysics Data System (ADS)
Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.
2017-09-01
A Cartesian grid-based sharp interface method is presented for viscous simulations of shocked particle-laden flows. The moving solid-fluid interfaces are represented using level sets. A moving least-squares reconstruction is developed to apply the no-slip boundary condition at solid-fluid interfaces and to supply viscous stresses to the fluid. The algorithms developed in this paper are benchmarked against similarity solutions for the boundary layer over a fixed flat plate and against numerical solutions for moving interface problems such as shock-induced lift-off of a cylinder in a channel. The framework is extended to 3D and applied to calculate low Reynolds number steady supersonic flow over a sphere. Viscous simulation of the interaction of a particle cloud with an incident planar shock is demonstrated; the average drag on the particles and the vorticity field in the cloud are compared to the inviscid case to elucidate the effects of viscosity on momentum transfer between the particle and fluid phases. The methods developed will be useful for obtaining accurate momentum and heat transfer closure models for macro-scale shocked particulate flow applications such as blast waves and dust explosions.
NASA Astrophysics Data System (ADS)
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate some artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters are derived from the normal evolution of the financial system itself or not. We also propose several reasonable recommendations.
Application of image processing to calculate the number of fish seeds using raspberry-pi
NASA Astrophysics Data System (ADS)
Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.
2018-03-01
Many fish cultivators in Indonesia suffer losses because the number of fish seeds bought and sold does not match the agreed amount. The loss is due to fish seed still being counted manually. To overcome this problem, this study designed an automatic, real-time fish counting system using image processing based on the Raspberry Pi. Image processing was chosen because it can count moving objects and eliminate noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, and the approach used is the "double difference image". The "double difference" approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using these methods and approaches, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector width of 96 pixels and a slope of the test plane of 1 degree.
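The "double difference image" test can be sketched as follows: a pixel is flagged as moving only if the current frame differs from both the previous and the next frame. The threshold is an assumed value, not one from the study.

```python
import numpy as np

def double_difference_mask(prev, curr, nxt, thresh=10):
    """'Double difference image' motion mask (sketch).

    A pixel counts as moving only when the current frame differs from
    BOTH the previous and the next frame by more than `thresh`, which
    suppresses noise that appears in only one frame pair.
    """
    d1 = np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > thresh
    d2 = np.abs(nxt.astype(np.int32) - curr.astype(np.int32)) > thresh
    return d1 & d2
```

Counting then amounts to tallying connected blobs of the mask as they cross the virtual detector line.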
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.
2016-11-01
Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.
Kokki, Tommi; Sipilä, Hannu T; Teräs, Mika; Noponen, Tommi; Durand-Schaefer, Nicolas; Klén, Riku; Knuuti, Juhani
2010-01-01
In PET imaging, respiratory and cardiac contraction motions interfere with imaging of the heart. The aim was to develop and evaluate a dual gating method for improving the detection of small targets in the heart. The method utilizes two independent triggers which are sent periodically into the list mode data based on the respiratory and ECG cycles. An algorithm for generating dual gated segments from list mode data was developed. The test measurements showed that rotational and axial movements of a point source can be separated spatially into different segments with well-defined borders. The effect of dual gating on the detection of small moving targets was tested with a moving heart phantom. Dual gated images showed 51% elimination (3.6 mm out of 7.0 mm) of the contraction motion of a hot spot (diameter 3 mm) and 70% elimination (14 mm out of 20 mm) of respiratory motion. The averaged activity value of the hot spot increases by 89% compared to non-gated images. A patient study of suspected cardiac sarcoidosis shows a sharper spatial myocardial uptake profile and improved detection of small myocardial structures such as the papillary muscles. The dual gating method improves detection of small moving targets in a phantom and is feasible in clinical situations.
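The sorting of list-mode events into dual-gated segments can be sketched as a product of respiratory and cardiac phase bins; the 4 × 8 gate counts below are illustrative assumptions, not the paper's settings.

```python
def dual_gate_segment(resp_phase, cardiac_phase, n_resp=4, n_card=8):
    """Map a list-mode event to a dual-gated segment index from its
    respiratory and cardiac phases, each given in [0, 1) relative to
    the most recent trigger (sketch)."""
    r = min(int(resp_phase * n_resp), n_resp - 1)
    c = min(int(cardiac_phase * n_card), n_card - 1)
    return r * n_card + c
```

Events sharing a segment index are reconstructed together, so each of the n_resp × n_card images is nearly motion-frozen in both cycles.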
Weather explains high annual variation in butterfly dispersal
Rytteri, Susu; Heikkinen, Risto K.; Heliölä, Janne; von Bagh, Peter
2016-01-01
Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation of dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark–release–recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and together with adjusting variables explained 79–91% of the variation observed in dispersal. Different weather variables became selected in the models explaining variation in three dispersal measures apparently because of the notable intercorrelations. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model annually varying dispersal dynamics of species affected by global warming. PMID:27440662
Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow
NASA Astrophysics Data System (ADS)
Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen
2015-11-01
The study presents an analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving-frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2; computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves of the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
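The moving-average type-token ratio named above can be sketched in a few lines. The window length of 50 tokens is a common choice in the LD literature but is an assumption here, not the study's setting:

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: the mean type-token ratio
    over every fixed-length window slid one token at a time, which
    removes the dependence of raw TTR on sample length."""
    if len(tokens) < window:
        window = len(tokens)  # fall back for short samples
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)
```

For example, the token sequence a b a b a with a window of 3 yields windows with 2 unique types out of 3 tokens each, so the MATTR is 2/3 regardless of how long the alternating sample runs.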
An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter
NASA Astrophysics Data System (ADS)
Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu
2017-05-01
Harmonic detection technology plays an important role in applications of the active power filter. The accuracy and real-time performance of harmonic detection are preconditions for the compensation performance of the Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm: an improved ip-iq algorithm combined with a moving average filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
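The role of the moving average filter here can be illustrated with a minimal numeric sketch (the sampling rate and harmonic content below are illustrative, not the paper's values): in the rotating frame the fundamental component maps to DC, and a boxcar average over exactly one fundamental period cancels all integer harmonics.

```python
import math

def moving_average(x, n):
    """Causal moving average with window n (simple boxcar FIR)."""
    out, acc = [], 0.0
    for i, v in enumerate(x):
        acc += v
        if i >= n:
            acc -= x[i - n]
        out.append(acc / min(i + 1, n))
    return out

# DC (the fundamental seen in the rotating frame) plus a 5th harmonic:
fs, f0 = 10000, 50
n = fs // f0  # samples in one fundamental period
sig = [1.0 + 0.3 * math.sin(2 * math.pi * 5 * f0 * i / fs)
       for i in range(4 * n)]
dc = moving_average(sig, n)[-1]  # harmonic averages out, DC survives
```

Because the window spans an integer number of harmonic cycles, `dc` recovers the DC value 1.0 to machine precision, which is why the moving average can replace a low-pass filter without the latter's settling delay.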
Analysis Monthly Import of Palm Oil Products Using Box-Jenkins Model
NASA Astrophysics Data System (ADS)
Ahmad, Nurul F. Y.; Khalid, Kamil; Saifullah Rusiman, Mohd; Ghazali Kamardan, M.; Roslan, Rozaini; Che-Him, Norziha
2018-04-01
The palm oil industry has been an important component of the national economy, especially the agriculture sector. The aim of this study is to identify the pattern of imports of palm oil products, to model the time series using Box-Jenkins models and to forecast the monthly import of palm oil products. The approach includes statistical tests for verifying model adequacy and statistical measurement of three models, namely the Autoregressive (AR) model, the Moving Average (MA) model and the Autoregressive Moving Average (ARMA) model. The identified models differ across products: AR(1) was found to be the best model for imports of palm oil, while MA(3) was found to be the best model for imports of palm kernel oil. For palm kernel, MA(4) was found to be the best model. The forecasts for the next four months for imports of palm oil, palm kernel oil and palm kernel showed a marked decrease compared to the actual data.
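As an illustration of the simplest member of the Box-Jenkins family, an AR(1) model can be fitted by least squares and iterated forward to forecast. This is a generic sketch, not the study's estimation code:

```python
def fit_ar1(x):
    """Least-squares estimates of c and phi in the AR(1) model
    x[t] = c + phi * x[t-1] + e[t]."""
    n = len(x) - 1
    xp, xc = x[:-1], x[1:]               # lagged and current values
    mxp, mxc = sum(xp) / n, sum(xc) / n
    cov = sum((a - mxp) * (b - mxc) for a, b in zip(xp, xc))
    var = sum((a - mxp) ** 2 for a in xp)
    phi = cov / var
    return mxc - phi * mxp, phi          # (c, phi)

def forecast_ar1(x_last, c, phi, steps):
    """Iterate the fitted recursion forward for multi-step forecasts."""
    out, x = [], x_last
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# noiseless AR(1) series x[t] = 1 + 0.5*x[t-1]; the fit recovers c and phi
series = [1.0]
for _ in range(20):
    series.append(1.0 + 0.5 * series[-1])
c, phi = fit_ar1(series)
```

On the noiseless series the estimates reproduce c = 1 and phi = 0.5 essentially exactly; with real import data the same recursion gives the point forecasts whose accuracy the study compares across AR, MA and ARMA candidates.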
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average or inverse distance weighting (IDW), and nearest point, without the necessary recourse to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and at a palm oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram variables (nugget, sill and range), using the PAleontological STatistics (PAST3) software, before the mean values were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. Further, the determined variogram variables were substituted with the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary between 120.1 and 219.5 µS cm-1 with kriging, it varied between 105.6 and 220.0 µS cm-1 and between 135.0 and 173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that interpolation procedures may affect decisions and conclusions based on modelling inferences.
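The contrast between interpolators reported above can be made concrete with minimal versions of two of them. These are generic sketches, unrelated to the ILWIS/ArcGIS implementations:

```python
def idw(points, values, q, power=2):
    """Inverse distance weighting at query point q:
    a weighted mean with weights 1 / distance**power."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - q[0]) ** 2 + (y - q[1]) ** 2
        if d2 == 0.0:
            return v  # exact hit: the interpolator honours the sample
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

def nearest(points, values, q):
    """Nearest-point interpolation for comparison."""
    d2 = [(x - q[0]) ** 2 + (y - q[1]) ** 2 for x, y in points]
    return values[d2.index(min(d2))]
```

Midway between two stations, IDW returns the mean of the two measurements while nearest-point snaps to one of them, which is exactly the kind of divergence between interpolated surfaces the study documents.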
Nonlinear ARMA models for the Dst index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model, to the nonlinear damped oscillator physical model. The oscillator parameters (growth and decay rates, oscillation frequencies, and the coupling strength to the input) are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of position in phase space.
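The filter-to-oscillator correspondence described above can be sketched for the autoregressive AR(2) case, where the characteristic roots of the filter give the decay rate and oscillation frequency directly. This is a minimal illustration of the idea, not the paper's full nonlinear ARMA procedure:

```python
import cmath
import math

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Read a damped oscillator out of an AR(2) filter
    x[t] = a1*x[t-1] + a2*x[t-2]: the characteristic roots
    z = r*exp(+/- i*omega*dt) of z**2 - a1*z - a2 = 0 give the
    decay rate gamma = ln(r)/dt and angular frequency omega."""
    z = (a1 + cmath.sqrt(a1 * a1 + 4 * a2)) / 2
    return math.log(abs(z)) / dt, abs(cmath.phase(z)) / dt

# a sampled damped oscillator with gamma = -0.1, omega = 0.5 has
# filter coefficients a1 = 2*r*cos(omega), a2 = -r**2, r = exp(gamma):
r, w = math.exp(-0.1), 0.5
gamma, omega = ar2_to_oscillator(2 * r * math.cos(w), -r * r)
```

Running the mapping on those coefficients recovers gamma = -0.1 and omega = 0.5, showing how fitted filter coefficients translate into physically interpretable growth/decay and frequency parameters.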
The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process
ERIC Educational Resources Information Center
Murphy, Daniel L.; Pituch, Keenan A.
2009-01-01
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
Using Baidu Search Index to Predict Dengue Outbreak in China
NASA Astrophysics Data System (ADS)
Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai
2016-12-01
This study identified the possible threshold to predict dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on the BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that low-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
Driving-forces model on individual behavior in scenarios considering moving threat agents
NASA Astrophysics Data System (ADS)
Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia
2017-09-01
An individual behavior model is a contributory factor in improving the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks carried out by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation against cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model to scenarios that include moving threat agents. An experiment was conducted to validate the key components of the model. The model was then compared with an advanced Elliptical Specification II social force model by calculating the fitting errors between the simulated and experimental trajectories, and was applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method...
Abbreviations: ...Manager; JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE...; ...Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number
Relative distance between tracers as a measure of diffusivity within moving aggregates
NASA Astrophysics Data System (ADS)
Pönisch, Wolfram; Zaburdaev, Vasily
2018-02-01
Tracking of particles, be it a passive tracer or an actively moving bacterium in the growing bacterial colony, is a powerful technique to probe the physical properties of the environment of the particles. One of the most common measures of particle motion driven by fluctuations and random forces is its diffusivity, which is routinely obtained by measuring the mean squared displacement of the particles. However, often the tracer particles may be moving in a domain or an aggregate which itself experiences some regular or random motion and thus masks the diffusivity of tracers. Here we provide a method for assessing the diffusivity of tracer particles within mobile aggregates by measuring the so-called mean squared relative distance (MSRD) between two tracers. We provide analytical expressions for both the ensemble and time averaged MSRD allowing for direct identification of diffusivities from experimental data.
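The MSRD idea, that the relative coordinate of two tracers is blind to any motion shared with the aggregate, can be sketched in its time-averaged form. This is a generic illustration rather than the paper's analytical expressions:

```python
import math

def msrd(traj1, traj2, lag):
    """Time-averaged mean squared change of the relative distance
    vector r1 - r2 at a given lag; motion common to both tracers
    (the moving aggregate) cancels exactly."""
    rel = [(x1 - x2, y1 - y2)
           for (x1, y1), (x2, y2) in zip(traj1, traj2)]
    n = len(rel) - lag
    return sum((rel[t + lag][0] - rel[t][0]) ** 2 +
               (rel[t + lag][1] - rel[t][1]) ** 2
               for t in range(n)) / n

# the aggregate wobbles arbitrarily; tracer 1 additionally steps +1
# in x each frame, so the relative coordinate moves +1 per frame
agg = [(math.sin(t), 0.1 * t * t) for t in range(50)]
traj1 = [(x + t, y) for t, (x, y) in enumerate(agg)]
traj2 = agg
```

The arbitrary aggregate motion drops out: the MSRD grows as lag squared here because the relative motion is ballistic, and for two diffusing tracers it would instead grow linearly with slope set by twice the single-particle diffusivity.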
Anderson, Kimberly R.; Anthony, T. Renée
2014-01-01
An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow moving air, nose breathing still required exploration. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form in low velocity wind speeds (0.1–0.4 m s−1). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities of the model simplification and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometry, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Similar trends to mouth-breathing simulations were observed including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate nasal aspiration in slow moving air occurs only for particles <100 µm. PMID:24665111
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to criteria of residual un-correlation, and the goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806) and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all being statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively, and the prediction error was white noise. The model function was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^(2x12))mu_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a high-precision model for short-term forecasting.
Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L
NASA Astrophysics Data System (ADS)
Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.
2018-03-01
This study discusses forecasting of sales data for carbon electrodes at PT. INDAC INT'L using the Winters and double moving average methods, while the amount of inventory and the cost required for ordering raw material for carbon electrodes in the next period are predicted using the Economic Order Quantity (EOQ) model. Error analysis based on MAE, MSE, and MAPE shows that the Winters method is the better forecasting method for sales of carbon electrode products. PT. INDAC INT'L is therefore advised to provide products for sale following the sales amounts forecast by the Winters method.
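A minimal sketch of the double moving average forecast used in comparisons like this one (Brown's version; the window and horizon below are illustrative choices):

```python
def dma_forecast(x, n, m=1):
    """Double moving average forecast: two passes of an n-point
    moving average yield level and trend estimates, so a series
    with a perfectly linear trend is forecast exactly."""
    def ma(s):
        return [sum(s[i - n + 1:i + 1]) / n
                for i in range(n - 1, len(s))]
    m1 = ma(x)        # first smoothing of the data
    m2 = ma(m1)       # second smoothing of the first
    level = 2 * m1[-1] - m2[-1]
    trend = 2.0 / (n - 1) * (m1[-1] - m2[-1])
    return level + trend * m

# on the linear series 3 + 2t the one-step forecast is exact
f = dma_forecast([3 + 2 * t for t in range(10)], n=3, m=1)
```

On the linear test series the forecast reproduces the next value (23) exactly; on real sales data the method's lag behind curvature and seasonality is what makes the Winters method the stronger competitor.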
ECG artifact cancellation in surface EMG signals by fractional order calculus application.
Miljković, Nadica; Popović, Nenad; Djordjević, Olivera; Konstantinović, Ljubica; Šekara, Tomislav B
2017-03-01
New aspects of automatic electrocardiography artifact removal from surface electromyography signals by application of fractional order calculus in combination with linear and nonlinear moving window filters are explored. Surface electromyography recordings of skeletal trunk muscles are commonly contaminated with spike-shaped artifacts. This artifact originates from electrical heart activity, recorded by electrocardiography, commonly present in surface electromyography signals recorded in the heart's proximity. For appropriate assessment of neuromuscular changes by means of surface electromyography, application of a proper filtering technique for the electrocardiography artifact is crucial. A novel method for automatic artifact cancellation in surface electromyography signals by applying fractional order calculus and a nonlinear median filter is introduced. The proposed method is compared with the linear moving average filter, with and without prior application of fractional order calculus. 3D graphs for assessment of window lengths of the filters, crest factors, root mean square differences, and fractional calculus orders (called WFC and WRC graphs) are introduced. For appropriate quantitative filtering evaluation, a synthetic electrocardiography signal and an analogous semi-synthetic dataset were generated. Examples of noise removal in 10 able-bodied subjects and in one patient with muscular dystrophy are presented for qualitative analysis. The crest factors, correlation coefficients, and root mean square differences of the recorded and semi-synthetic electromyography datasets showed that the most successful method was the median filter in combination with fractional order calculus of order 0.9. Statistically more significant (p < 0.001) ECG peak reduction was obtained with the median filter compared to the moving average filter in cases of low-amplitude muscle contraction relative to the ECG spikes.
The presented results suggest that the novel method combining a median filter and fractional order calculus can be used for automatic filtering of electrocardiography artifacts in surface electromyography signal envelopes recorded in trunk muscles.
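The two ingredients named above, a moving median and a fractional order difference, can each be sketched generically. The window length and the Grünwald-Letnikov construction below are standard textbook forms, not the authors' implementations:

```python
def moving_median(x, window):
    """Sliding median with an odd window, edges padded by repeating
    the end samples. Unlike a moving average, it removes isolated
    spikes (e.g. ECG artifacts) without smearing them."""
    assert window % 2 == 1
    h = window // 2
    padded = [x[0]] * h + list(x) + [x[-1]] * h
    return [sorted(padded[i:i + window])[h] for i in range(len(x))]

def gl_fractional_diff(x, alpha, h=1.0):
    """Grunwald-Letnikov fractional derivative of order alpha;
    alpha = 1 reduces to the ordinary first difference."""
    w = [1.0]
    for j in range(1, len(x)):
        w.append(w[-1] * (1 - (alpha + 1) / j))
    scale = h ** (-alpha)
    return [scale * sum(w[j] * x[k - j] for j in range(k + 1))
            for k in range(len(x))]
```

A unit spike in a flat signal vanishes entirely under the 3-point median, while a moving average of the same width would leave a smeared residue one third of the spike height.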
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance of this modified strategy differ from the standard approach, with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting a smaller maximum drawdown and shorter drawdown duration than the standard strategy.
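A 'long only' cross-over rule with a trailing stop can be sketched as follows. The fixed 5% stop is an illustrative stand-in for the paper's dynamic threshold, and the window length is likewise an assumption:

```python
def crossover_with_trailing_stop(prices, n=5, stop_frac=0.05):
    """Buy when price crosses above its n-period moving average;
    exit when price falls stop_frac below the running maximum since
    entry. Returns the gross cumulative return (1.0 = break even)."""
    ma = [sum(prices[max(0, i - n + 1):i + 1]) /
          (i - max(0, i - n + 1) + 1) for i in range(len(prices))]
    in_pos, entry, peak, ret = False, 0.0, 0.0, 1.0
    for p, m in zip(prices, ma):
        if not in_pos and p > m:
            in_pos, entry, peak = True, p, p    # cross-over buy
        elif in_pos:
            peak = max(peak, p)
            if p < peak * (1 - stop_frac):      # trailing stop hit
                ret *= p / entry
                in_pos = False
    if in_pos:                                  # mark to market at the end
        ret *= prices[-1] / entry
    return ret
```

The trailing stop is what caps the drawdown: once the position's running peak is set, the worst realized exit is stop_frac below it (plus any gap through the stop), which is the mechanism behind the smaller maximum drawdown the paper reports.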
Park, Yoonah; Yong, Yuen Geng; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-01-01
Purpose This study aimed to compare the learning curves and early postoperative outcomes of conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). Methods This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Results Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase of SIL and CL RHC was completed between 26 and 30 cases, and between 16 and 20 cases, respectively. Moving average analysis revealed that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance. The postoperative complication rate was higher in the SIL group than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). Conclusion The learning phase of SIL RHC is longer than that of CL RHC. Early oncological outcomes of the two techniques were comparable. However, SIL RHC had a higher, though statistically insignificant, complication rate than CL RHC during the learning phase. PMID:25960990
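The CUSUM learning-curve analysis mentioned above accumulates each case's deviation from a reference time; the curve rises while the surgeon is slower than the reference and turns downward once performance improves, so the peak marks the end of the learning phase. A generic sketch with made-up operative times:

```python
def cusum(times, target=None):
    """CUSUM chart values: running sum of (time - target). With the
    overall mean as the target, the curve rises while cases run
    slower than average and falls once performance beats it."""
    if target is None:
        target = sum(times) / len(times)
    s, out = 0.0, []
    for t in times:
        s += t - target
        out.append(s)
    return out

# first 5 cases slow (120 min), next 5 fast (80 min):
# the CUSUM peaks at case 5, where the learning phase ends
curve = cusum([120] * 5 + [80] * 5)
```

With the mean as target the curve always returns to zero at the last case, so it is the location of the peak, not the final value, that carries the learning-curve information.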
Finding the average speed of a light-emitting toy car with a smartphone light sensor
NASA Astrophysics Data System (ADS)
Kapucu, Serkan
2017-07-01
This study aims to demonstrate how the average speed of a light-emitting toy car may be determined using a smartphone’s light sensor. The freely available Android smartphone application, ‘AndroSensor’, was used for the experiment. The classroom experiment combines complementary physics knowledge of optics and kinematics to find the average speed of a moving object. The speed of the toy car is found by determining the distance between the light-emitting toy car and the smartphone, and the time taken to travel these distances. To ensure that the average speed of the toy car calculated with the help of the AndroSensor was correct, the average speed was also calculated by analyzing video-recordings of the toy car. The resulting speeds found with these different methods were in good agreement with each other. Hence, it can be concluded that reliable measurements of the average speed of light-emitting objects can be determined with the help of the light sensor of an Android smartphone.
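The optics-plus-kinematics calculation described can be sketched under a point-source, inverse-square assumption. The calibration pair (lux_ref at d_ref) and the readings below are hypothetical values, not the study's data:

```python
import math

def distance_from_lux(lux, lux_ref, d_ref):
    """Inverse-square estimate: a point source giving lux_ref at
    distance d_ref gives lux at d = d_ref * sqrt(lux_ref / lux)."""
    return d_ref * math.sqrt(lux_ref / lux)

def average_speed(lux_readings, times, lux_ref, d_ref):
    """Average speed from the first and last sensor readings:
    distance travelled between them divided by elapsed time."""
    d0 = distance_from_lux(lux_readings[0], lux_ref, d_ref)
    d1 = distance_from_lux(lux_readings[-1], lux_ref, d_ref)
    return abs(d1 - d0) / (times[-1] - times[0])
```

For instance, a lamp calibrated to 100 lux at 1 m that reads 25 lux is 2 m away; if it reads 100 lux two seconds later, it has approached by 1 m, an average speed of 0.5 m/s toward the sensor.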
Motion tracing system for ultrasound guided HIFU
NASA Astrophysics Data System (ADS)
Xiao, Xu; Jiang, Tingyi; Corner, George; Huang, Zhihong
2017-03-01
One main limitation of HIFU treatment is the abdominal movement of the liver and kidney caused by respiration. This study set up a tracking model which mainly comprises a target-carrying box and a motion-driving balloon. A real-time B-mode ultrasound guidance method suitable for tracking abdominal organ motion in 2D was established and tested. For the setup, phantoms mimicking moving organs were carefully prepared with agar surrounding round-shaped egg white as the target of focused ultrasound ablation. Physiological phantoms and animal tissues were driven to move reciprocally along the main axial direction of the ultrasound imaging probe, with slight motion perpendicular to the axial direction. The moving speed and range could be adjusted by controlling the inflation and deflation speed and volume of the balloon, driven by a medical ventilator. A 6-DOF robotic arm was used to position the focused ultrasound transducer. The overall system was designed to simulate the actual movement caused by human respiration. HIFU ablation experiments using phantoms and animal organs were conducted to test the tracking effect. Ultrasound strain elastography was used afterwards to estimate the efficiency of the tracking algorithms and system. In the moving state, the axial size of the lesion (perpendicular to the movement direction) averaged 4 mm, one third larger than the lesion obtained when the target was not moving. This demonstrates the possibility of developing a low-cost real-time method of tracking organ motion during HIFU treatment of the liver or kidney.
Ambient temperature and biomarkers of heart failure: a repeated measures analysis.
Wilker, Elissa H; Yeh, Gloria; Wellenius, Gregory A; Davis, Roger B; Phillips, Russell S; Mittleman, Murray A
2012-08-01
Extreme temperatures have been associated with hospitalization and death among individuals with heart failure, but few studies have explored the underlying mechanisms. We hypothesized that outdoor temperature in the Boston, Massachusetts, area (1- to 4-day moving averages) would be associated with higher levels of biomarkers of inflammation and myocyte injury in a repeated-measures study of individuals with stable heart failure. We analyzed data from a completed clinical trial that randomized 100 patients to 12 weeks of tai chi classes or to time-matched education control. B-type natriuretic peptide (BNP), C-reactive protein (CRP), and tumor necrosis factor (TNF) were measured at baseline, 6 weeks, and 12 weeks. Endothelin-1 was measured at baseline and 12 weeks. We used fixed effects models to evaluate associations with measures of temperature that were adjusted for time-varying covariates. Higher apparent temperature was associated with higher levels of BNP beginning with 2-day moving averages and reached statistical significance for 3- and 4-day moving averages. CRP results followed a similar pattern but were delayed by 1 day. A 5°C change in 3- and 4-day moving averages of apparent temperature was associated with 11.3% [95% confidence interval (CI): 1.1, 22.5; p = 0.03) and 11.4% (95% CI: 1.2, 22.5; p = 0.03) higher BNP. A 5°C change in the 4-day moving average of apparent temperature was associated with 21.6% (95% CI: 2.5, 44.2; p = 0.03) higher CRP. No clear associations with TNF or endothelin-1 were observed. Among patients undergoing treatment for heart failure, we observed positive associations between temperature and both BNP and CRP-predictors of heart failure prognosis and severity.
NASA Technical Reports Server (NTRS)
Pongratz, M.
1972-01-01
Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic analyzer electron spectrometer which made 29 measurements in the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped-electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring-electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. An average was also computed over the entire downward hemisphere (the precipitated-electron average). The observations were obtained over an altitude range of 10 km at 230 km altitude.
Verrier, Richard L.; Klingenheben, Thomas; Malik, Marek; El-Sherif, Nabil; Exner, Derek V.; Hohnloser, Stefan H.; Ikeda, Takanori; Martínez, Juan Pablo; Narayan, Sanjiv M.; Nieminen, Tuomo; Rosenbaum, David S.
2014-01-01
This consensus guideline was prepared on behalf of the International Society for Holter and Noninvasive Electrocardiology and is cosponsored by the Japanese Circulation Society, the Computers in Cardiology Working Group on e-Cardiology of the European Society of Cardiology, and the European Cardiac Arrhythmia Society. It discusses the electrocardiographic phenomenon of T-wave alternans (TWA) (i.e., a beat-to-beat alternation in the morphology and amplitude of the ST-segment or T-wave). This statement focuses on its physiological basis and measurement technologies and its clinical utility in stratifying risk for life-threatening ventricular arrhythmias. Signal processing techniques including the frequency-domain Spectral Method and the time-domain Modified Moving Average method have demonstrated the utility of TWA in arrhythmia risk stratification in prospective studies in >12,000 patients. The majority of exercise-based studies using both methods have reported high relative risks for cardiovascular mortality and for sudden cardiac death in patients with preserved as well as depressed left ventricular ejection fraction. Studies of ambulatory electrocardiogram-based TWA analysis with the Modified Moving Average method have yielded significant predictive capacity. However, negative studies with the Spectral Method have also appeared, including 2 interventional studies in patients with implantable defibrillators. Meta-analyses have been performed to gain insights into this issue. Frontiers of TWA research include use in arrhythmia risk stratification of individuals with preserved ejection fraction, improvements in predictivity with quantitative analysis, and utility in guiding medical as well as device-based therapy. Overall, although TWA appears to be a useful marker of risk for arrhythmic and cardiovascular death, there is as yet no definitive evidence that it can guide therapy. PMID:21920259
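The time-domain Modified Moving Average method updates a running beat template by a bounded step toward each new beat, so isolated artifacts cannot dominate the average; even and odd beats are averaged in separate streams and their difference gives the alternans estimate. The update fraction and saturation limit below are illustrative values, not the published nonlinearity:

```python
def mma_update(avg, beat, frac=0.125, limit=32.0):
    """One Modified Moving Average update: each sample of the
    running beat template moves a fraction frac of the way toward
    the new beat, with the step clipped to +/- limit so a single
    artifactual beat cannot drag the template far."""
    out = []
    for a, b in zip(avg, beat):
        step = frac * (b - a)
        step = max(-limit, min(limit, step))  # bounded (nonlinear) update
        out.append(a + step)
    return out
```

Applied to alternating beat streams, the maximum absolute difference between the even-beat and odd-beat templates over the ST-T segment is then read out as the TWA magnitude.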
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
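The Kumaraswamy distribution underlying KARMA has a closed-form CDF, F(x) = 1 − (1 − x^a)^b on (0, 1), which makes inverse-transform sampling straightforward. The following is a minimal illustrative sketch (function names and parameter values are our own, not the authors' estimation code):

```python
import random

def kumaraswamy_inverse_cdf(u, a, b):
    # Invert F(x) = 1 - (1 - x**a)**b on (0, 1):
    # x = (1 - (1 - u)**(1/b))**(1/a)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def sample_kumaraswamy(n, a, b, lo=0.0, hi=1.0, seed=0):
    # Inverse-transform sampling; lo/hi rescale each draw to a general
    # double bounded interval, as for time series of rates and proportions.
    rng = random.Random(seed)
    return [lo + (hi - lo) * kumaraswamy_inverse_cdf(rng.random(), a, b)
            for _ in range(n)]
```

Rescaling by lo + (hi − lo)·x maps the unit-interval draw to a general double bounded interval (a, b) as in the paper.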
Self-adjusting grid methods for one-dimensional hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Harten, A.; Hyman, J. M.
1983-01-01
A method is presented for the automatic adjustment of a grid that follows the dynamics of the numerical solution of hyperbolic conservation laws. The grid motion is determined by averaging the local characteristic velocities of the equations with respect to the amplitudes of the signals. The resulting algorithm is a simple extension of many currently popular Godunov-type methods. Computer codes using one of these methods can be easily modified to add the moving mesh as an option. Numerical examples are given that illustrate the improved accuracy of Godunov's and Roe's methods on a self-adjusting mesh. Previously announced in STAR as N83-15008.
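The grid-motion rule described above amounts to an amplitude-weighted average of the local characteristic speeds. A minimal sketch of that weighting step (an illustration of the idea only, not the authors' scheme):

```python
def grid_velocity(char_speeds, amplitudes):
    # Amplitude-weighted average of the local characteristic velocities:
    # the grid point moves with the stronger signals; with zero total
    # amplitude the grid point is simply left at rest.
    total = sum(amplitudes)
    if total == 0.0:
        return 0.0
    return sum(s * a for s, a in zip(char_speeds, amplitudes)) / total
```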
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
Time Series Modelling of Syphilis Incidence in China from 2005 to 2012
Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau
2016-01-01
Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) modelling was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
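As a generic illustration of the decomposition step mentioned above (not the study's code), the long-term trend of a monthly series is classically estimated with a centered moving average; for an even window such as 12 months, two adjacent windows are averaged (the "2x12" moving average):

```python
def centered_moving_average(x, window=12):
    # Classical trend estimate for a seasonal series: a centered moving
    # average; for an even window (e.g. 12 months) two adjacent windows
    # are averaged (the "2x12" moving average), keeping the result centered.
    n = len(x)
    half = window // 2
    trend = [None] * n            # ends stay undefined (no full window)
    for i in range(half, n - half):
        if window % 2 == 0:
            s = sum(x[i - half:i + half]) + sum(x[i - half + 1:i + half + 1])
            trend[i] = s / (2.0 * window)
        else:
            trend[i] = sum(x[i - half:i + half + 1]) / float(window)
    return trend
```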
Neural net forecasting for geomagnetic activity
NASA Technical Reports Server (NTRS)
Hernandez, J. V.; Tajima, T.; Horton, W.
1993-01-01
We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).
Queues with Choice via Delay Differential Equations
NASA Astrophysics Data System (ADS)
Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth
Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model; however, the queue length information the customers receive is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model depending on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we show analytically for the first time that the moving average fluid model can exhibit oscillations, and we determine their dependence on the moving average window. Thus, our analysis provides new insight into how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
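A minimal sketch of the first (constant-delay) fluid model can be obtained by Euler integration; the arrival rate, service rate, and delay below are illustrative assumptions, not values from the paper:

```python
import math

def simulate_delayed_queues(lam=10.0, mu=1.0, delay=2.0, t_end=50.0, dt=0.01):
    # Two-queue deterministic fluid model with constant-delay information:
    # arrivals split by a Multinomial Logit choice applied to queue lengths
    # observed `delay` time units ago. Rates and delay are illustrative
    # assumptions, not values from the paper.
    steps = round(t_end / dt)
    lag = round(delay / dt)
    q1, q2 = [1.0], [1.1]              # slightly asymmetric initial state
    for k in range(steps):
        j = max(0, k - lag)            # delayed index (history frozen before t = 0)
        e1, e2 = math.exp(-q1[j]), math.exp(-q2[j])
        p1 = e1 / (e1 + e2)            # MNL probability of joining queue 1
        q1.append(q1[-1] + dt * (lam * p1 - mu * q1[-1]))
        q2.append(q2[-1] + dt * (lam * (1.0 - p1) - mu * q2[-1]))
    return q1, q2
```

Because d(q1+q2)/dt = λ − μ(q1+q2), the total queue content relaxes to λ/μ even when the individual queues oscillate for large Δ.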
Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.
Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat
2014-01-01
The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its worldwide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrences of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporation of any meteorological variable(s) as inputs did not improve the performance of any multivariable models, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. The variable cloud cover was also a significant covariate in two SARIMA models, but air temperature along with RH might be a predictor when a moving average (MA) order at a lag of 1 month is considered.
Regional Variations in Diagnostic Practices
Song, Yunjie; Skinner, Jonathan; Bynum, Julie; Sutherland, Jason; Wennberg, John E.; Fisher, Elliott S.
2010-01-01
BACKGROUND Current methods of risk adjustment rely on diagnoses recorded in clinical and administrative records. Differences among providers in diagnostic practices could lead to bias. METHODS We used Medicare claims data from 1999 through 2006 to measure trends in diagnostic practices for Medicare beneficiaries. Regions were grouped into five quintiles according to the intensity of hospital and physician services that beneficiaries in the region received. We compared trends with respect to diagnoses, laboratory testing, imaging, and the assignment of Hierarchical Condition Categories (HCCs) among beneficiaries who moved to regions with a higher or lower intensity of practice. RESULTS Beneficiaries within each quintile who moved during the study period to regions with a higher or lower intensity of practice had similar numbers of diagnoses and similar HCC risk scores (as derived from HCC coding algorithms) before their move. The number of diagnoses and the HCC measures increased as the cohort aged, but they increased to a greater extent among beneficiaries who moved to regions with a higher intensity of practice than among those who moved to regions with the same or lower intensity of practice. For example, among beneficiaries who lived initially in regions in the lowest quintile, there was a greater increase in the average number of diagnoses among those who moved to regions in a higher quintile than among those who moved to regions within the lowest quintile (increase of 100.8%; 95% confidence interval [CI], 89.6 to 112.1; vs. increase of 61.7%; 95% CI, 55.8 to 67.4). Moving to each higher quintile of intensity was associated with an additional 5.9% increase (95% CI, 5.2 to 6.7) in HCC scores, and results were similar with respect to laboratory testing and imaging. CONCLUSIONS Substantial differences in diagnostic practices that are unlikely to be related to patient characteristics are observed across U.S. regions. 
The use of clinical or claims-based diagnoses in risk adjustment may introduce important biases in comparative-effectiveness studies, public reporting, and payment reforms. PMID:20463332
Weather explains high annual variation in butterfly dispersal.
Kuussaari, Mikko; Rytteri, Susu; Heikkinen, Risto K; Heliölä, Janne; von Bagh, Peter
2016-07-27
Weather conditions fundamentally affect the activity of short-lived insects. Annual variation in weather is therefore likely to be an important determinant of their between-year variation in dispersal, but conclusive empirical studies are lacking. We studied whether the annual variation in dispersal can be explained by the flight season's weather conditions in a Clouded Apollo (Parnassius mnemosyne) metapopulation. This metapopulation was monitored using the mark-release-recapture method for 12 years. Dispersal was quantified for each monitoring year using three complementary measures: emigration rate (fraction of individuals moving between habitat patches), average residence time in the natal patch, and average distance moved. There was much variation both in dispersal and in average weather conditions among the years. Weather variables significantly affected the three measures of dispersal and, together with adjusting variables, explained 79-91% of the variation observed in dispersal. Different weather variables were selected in the models explaining the three dispersal measures, apparently because of notable intercorrelations among them. In general, dispersal rate increased with increasing temperature, solar radiation, proportion of especially warm days, and butterfly density, and decreased with increasing cloudiness, rainfall, and wind speed. These results help to understand and model the annually varying dispersal dynamics of species affected by global warming. © 2016 The Author(s).
Earthquake Magnitude Prediction Using Artificial Neural Network in Northern Red Sea Area
NASA Astrophysics Data System (ADS)
Alarifi, A. S.; Alarifi, N. S.
2009-12-01
Earthquakes are natural hazards that do not happen very often; however, they may cause huge losses in life and property. Early preparation for these hazards is a key factor in reducing their damage and consequences. Since ancient times, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasts of future earthquakes. The 16,000 events cover a time span from 1970 to 2009, with magnitudes ranging from greater than 0 to less than 7.2 and depths from greater than 0 to less than 100 km. We propose a new artificial-intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for other areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as the moving average over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods: the neural network achieves an average absolute error of 2.6%, compared with average absolute errors of 3.8%, 7.3% and 6.17% for the moving average, linear regression and cubic regression, respectively.
In this work, we also show an analysis of earthquake data in the northern Red Sea area for different statistical parameters such as correlation, mean, and standard deviation. This analysis provides a deeper understanding of the seismicity of the area and its existing patterns.
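The moving-average baseline mentioned above can be sketched as follows (a generic baseline predictor with our own naming, not the authors' system):

```python
def moving_average_forecast(series, window):
    # Baseline predictor: forecast each value as the mean of the previous
    # `window` observations; returns forecasts for series[window:].
    return [sum(series[i - window:i]) / float(window)
            for i in range(window, len(series))]

def mean_absolute_error(actual, predicted):
    # Average absolute forecast error over the evaluated points.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)
```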
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
MARD—A moving average rose diagram application for the geosciences
NASA Astrophysics Data System (ADS)
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
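The core smoothing operation of a moving average rose diagram can be sketched as a count of directional data falling inside a moving angular window; this is a simplified unweighted version with our own parameter names, not MARD itself:

```python
def rose_moving_average(angles_deg, aperture=30.0, step=10.0, bidirectional=False):
    # Smooth circular data by counting measurements within a moving angular
    # window (the aperture), centred every `step` degrees. Bi-directional
    # data (e.g. strike readings) repeat with period 180 degrees.
    period = 180.0 if bidirectional else 360.0
    data = [a % period for a in angles_deg]
    centres = [i * step for i in range(int(period / step))]
    counts = []
    for c in centres:
        n = 0
        for a in data:
            # shortest circular distance between angle a and centre c
            d = abs((a - c + period / 2.0) % period - period / 2.0)
            if d <= aperture / 2.0:
                n += 1
        counts.append(n)
    return centres, counts
```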
Moving force identification based on modified preconditioned conjugate gradient method
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy
2018-06-01
This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart SVD embedded in the time domain method (TDM) and the standard form of CG, the M-PCG with a proper regularization matrix has many advantages such as better adaptability and greater robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, and this apparently makes M-PCG a preferred choice for field MFI applications.
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we have performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we construct a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane and the scanline directional velocity fields to obtain the synthetic scanline directional projected velocity at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
Kakran, M; Bala, M; Singh, V
2015-01-01
A statistical assessment of a disease is often necessary before resources can be allocated to any control programme. No literature on seasonal trends of gonorrhoea is available from India. The objectives were (1) to determine whether any seasonal trends were present in India, (2) to describe factors contributing to the seasonality of gonorrhoea, and (3) to formulate approaches for gonorrhoea control at the national level. Quarterly seasonal indices for gonorrhoea were calculated for the period 2005 to 2010. The ratio-to-moving-average method was used to determine the seasonal variation: the original data values in the time series were expressed as percentages of moving averages. Results were also analyzed by a second statistical method, the seasonal subseries plot. The seasonally adjusted average for culture-positive gonorrhoea cases was highest in the second quarter (128.61%) followed by the third quarter (108.48%), while a trough was observed in the first (96.05%) and last quarters (64.85%). The second-quarter peak was representative of summer vacations in schools and colleges. Moreover, April is the harvesting month, followed by celebrations and social gatherings. Both these factors are associated with increased sexual activity and partner change. A trough in the first and last quarters was indicative of the festival season and winter, leading to fewer patients reporting to the hospital. The findings highlight the immediate need to strengthen sexual health education among young people in schools and colleges and education on risk-reduction practices, especially at crucial points in the calendar year, for effective gonorrhoea control.
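The ratio-to-moving-average computation described in the Methods can be sketched for quarterly data as follows (a generic textbook version, not the study's code):

```python
def seasonal_indices(values, period=4):
    # Ratio-to-moving-average method (even period assumed): express each
    # observation as a percentage of a centred moving average, average the
    # percentages season by season, then normalise so the indices mean 100.
    n = len(values)
    half = period // 2
    ratios = [[] for _ in range(period)]
    for i in range(half, n - half):
        # centred moving average (two adjacent windows for an even period)
        ma = (sum(values[i - half:i + half])
              + sum(values[i - half + 1:i + half + 1])) / (2.0 * period)
        ratios[i % period].append(100.0 * values[i] / ma)
    raw = [sum(r) / len(r) for r in ratios]
    scale = 100.0 * period / sum(raw)   # force the indices to average 100
    return [x * scale for x in raw]
```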
NASA Astrophysics Data System (ADS)
Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You
2017-02-01
Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
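A plain (non-fuzzy) running Gaussian average for moving-object extraction can be sketched as below; the update rate and threshold are illustrative assumptions, and the paper's fuzzy variant and particle-filter track-before-detect stage are not shown:

```python
def running_gaussian_background(frames, rho=0.05, k=2.5):
    # Running Gaussian average background model: per-pixel mean and variance
    # are updated recursively; a pixel is flagged as foreground when it
    # deviates from the background mean by more than k standard deviations.
    mean = list(frames[0])          # initialise background from first frame
    var = [1.0] * len(mean)
    masks = []
    for frame in frames[1:]:
        mask = []
        for i, p in enumerate(frame):
            d = p - mean[i]
            mask.append(abs(d) > k * var[i] ** 0.5)
            mean[i] += rho * d                          # recursive mean update
            var[i] = (1.0 - rho) * var[i] + rho * d * d # recursive variance update
        masks.append(mask)
    return masks
```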
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method for object tracking in video. In this paper, it is used for face landmark point tracking in video. The landmark points considered are the outer corner of the left eye, inner corner of the left eye, inner corner of the right eye, outer corner of the right eye, tip of the nose, left corner of the mouth, and right corner of the mouth. The landmark points are marked by hand in the first frame, and tracking performance is analyzed for subsequent frames. Two kinds of conditions are considered: single factors, such as the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face with rapid movement, and posed face with rapid movement; and combinations of factors, such as pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, and expression variation and occlusion. Global and local measures are introduced to evaluate tracking performance under the different factors or combinations of factors. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error for each component. To verify the performance of face landmark point tracking under the different cases, tests are carried out on image sequences that we gathered. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors or combinations of factors have different effects on the alignment performance of different landmark points.
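At each pyramid level, LK optical flow solves a small least-squares system for the displacement of a window around the tracked point. A single-level sketch on synthetic images (our own simplification; the pyramid variant applies this coarse-to-fine to handle larger motions):

```python
import math

def make_image(w, h, dx=0.0, dy=0.0):
    # Smooth synthetic test pattern, translated by (dx, dy).
    return [[math.sin(0.3 * (x - dx)) + math.cos(0.2 * (y - dy))
             for x in range(w)] for y in range(h)]

def lk_flow(img1, img2, cx, cy, win=7):
    # Single-level Lucas-Kanade: accumulate gradient products over a window
    # centred at (cx, cy) and solve the 2x2 normal equations
    # [sxx sxy; sxy syy] [u v]' = [-sxt -syt]' for the displacement (u, v).
    h = win // 2
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(cy - h, cy + h + 1):
        for x in range(cx - h, cx + h + 1):
            ix = (img1[y][x + 1] - img1[y][x - 1]) / 2.0  # spatial gradient x
            iy = (img1[y + 1][x] - img1[y - 1][x]) / 2.0  # spatial gradient y
            it = img2[y][x] - img1[y][x]                  # temporal difference
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
            sxt += ix * it
            syt += iy * it
    det = sxx * syy - sxy * sxy
    u = (sxy * syt - syy * sxt) / det   # Cramer's rule
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```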
Models for short term malaria prediction in Sri Lanka
Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H
2008-01-01
Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
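Of the model families compared above, the exponentially weighted moving average is the simplest; a one-step-ahead sketch (generic simple exponential smoothing, not the study's implementation):

```python
def ewma_forecast(series, alpha=0.3):
    # Simple exponential smoothing: the one-step-ahead forecast is an
    # exponentially weighted moving average of all past observations.
    # forecasts[i] is the forecast made after observing series[i].
    level = series[0]
    forecasts = [level]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
        forecasts.append(level)
    return forecasts
```

Larger alpha weights recent observations more heavily, which trades smoothness for responsiveness to outbreaks.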
Modeling the effects of high-G stress on pilots in a tracking task
NASA Technical Reports Server (NTRS)
Korn, J.; Kleinman, D. L.
1978-01-01
Air-to-air tracking experiments were conducted at the Aerospace Medical Research Laboratories using both fixed- and moving-base dynamic environment simulators. The data obtained, which include the longitudinal error of a simulated air-to-air tracking task as well as other auxiliary variables, were analyzed using an ensemble-averaging method. In conjunction with these experiments, the optimal control model is applied to model a human operator under high-G stress.
A spline-based non-linear diffeomorphism for multimodal prostate registration.
Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice
2012-08-01
This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
Dotsinsky, Ivan
2005-01-01
Background Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are normally used by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high-level, frequency-varying interference that may compromise fibrillation detection. Method Moving signal averaging is often used for 50/60 Hz interference suppression when its effect on the ECG spectrum is of little importance (no morphological analysis is performed). This approach may also be applied to the railway situation if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) by introducing variable inter-sample intervals. A better solution consists of fixed-rate ADC, software frequency measurement, internal irregular re-sampling according to the interference frequency, and a moving average over a constant number of samples, followed by regular back re-sampling. Results The proposed method leads to total cancellation of the railway interference, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has a negligible effect on accurate fibrillation detection. Conclusion The method is developed in the MATLAB environment and represents a useful tool for real-time railway interference suppression. PMID:16309558
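The cancellation property exploited above is that a moving average spanning exactly one interference period nulls the interference and its harmonics. A fixed-rate sketch (ignoring the paper's re-sampling machinery, which enforces this condition when the railway frequency drifts):

```python
import math

def make_interference(n_samples, fs, f):
    # Pure sinusoidal interference at frequency f, sampled at rate fs.
    return [math.sin(2.0 * math.pi * f * k / fs) for k in range(n_samples)]

def suppress_powerline(samples, fs, f_interf):
    # Moving average over exactly one interference period: uniformly spaced
    # samples of a sinusoid taken over one full period sum to zero, so this
    # comb filter nulls the interference and its harmonics. Assumes fs is an
    # integer multiple of f_interf.
    n = round(fs / f_interf)           # samples per interference period
    return [sum(samples[i:i + n]) / n for i in range(len(samples) - n + 1)]
```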
1990-11-01
(Q + aa')^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
Forecasting Instability Indicators in the Horn of Africa
2008-03-01
further than 2 (Makridakis et al., 1983, 359). Autoregressive Integrated Moving Average (ARIMA) Model. Similar to the ARMA model except for... stationary process. ARIMA models are described as ARIMA(p,d,q), where p is the order of the autoregressive process, d is the degree of the... differencing process, and q is the order of the moving average process. The ARMA(1,1) model shown above is equivalent to an ARIMA(1,0,1) model. An ARIMA
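The d in ARIMA(p,d,q) is plain repeated differencing of the series before an ARMA(p,q) model is fitted; a minimal sketch:

```python
def difference(x, d=1):
    """Apply d-th order differencing, the 'I' in ARIMA(p, d, q)."""
    for _ in range(d):
        x = [b - a for a, b in zip(x, x[1:])]
    return x

series = [1.0, 3.0, 6.0, 10.0, 15.0]
print(difference(series, 1))  # [2.0, 3.0, 4.0, 5.0], the first differences
print(difference(series, 0))  # unchanged: ARIMA(p, 0, q) is just ARMA(p, q)
```

With d=0 no differencing is applied, which is why the ARMA(1,1) model in the text is the same thing as ARIMA(1,0,1).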
Decadal Trends of Atlantic Basin Tropical Cyclones (1950-1999)
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Ten-year moving averages of the seasonal rates for 'named storms,' tropical storms, hurricanes, and major (or intense) hurricanes in the Atlantic basin suggest that the present epoch is one of enhanced activity, marked by seasonal rates typically equal to or above respective long-term median rates. As an example, the 10-year moving average of the seasonal rates for named storms is now higher than for any previous year over the past 50 years, measuring 10.65 in 1994, or 2.65 units higher than its median rate of 8. Also, the 10-year moving average for tropical storms has more than doubled, from 2.15 in 1955 to 4.60 in 1992, with 16 of the past 20 years having a seasonal rate of three or more (the median rate). For hurricanes and major hurricanes, their respective 10-year moving averages turned upward, rising above long-term median rates (5.5 and 2, respectively) in 1992, a response to the abrupt increase in seasonal rates that occurred in 1995. Taken together, the outlook for future hurricane seasons is for all categories of Atlantic basin tropical cyclones to have seasonal rates at levels equal to or above long-term median rates, especially during non-El Nino-related seasons. Only during El Nino-related seasons does it appear likely that seasonal rates might be slightly diminished.
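The 10-year moving averages of seasonal rates discussed above are straightforward to compute; the storm counts below are hypothetical, for illustration only:

```python
def moving_average(rates, window=10):
    """Trailing moving average of seasonal storm rates over `window` years."""
    return [sum(rates[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(rates))]

# Hypothetical seasonal named-storm counts, for illustration only.
counts = [8, 7, 9, 12, 6, 10, 11, 8, 13, 9, 14, 12]
print(moving_average(counts))  # [9.3, 9.9, 10.4]
```

One averaged value is produced per season once ten seasons of data are available, which is why the series in the abstract starts a decade into the record.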
The application of time series models to cloud field morphology analysis
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
Kocur, Dušan; Švecová, Mária; Rovňáková, Jana
2013-01-01
In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improve through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of the cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method is considered. PMID:24021968
Mueller, David S.; Rehmel, Mike S.; Wagner, Chad R.
2007-01-01
In 2003, Teledyne RD Instruments introduced the StreamPro acoustic Doppler current profiler which does not include an internal compass. During stationary moving-bed tests the StreamPro often tends to swim or kite from the end of the tether (the instrument rotates then moves laterally in the direction of the rotation). Because the StreamPro does not have an internal compass, it cannot account for the rotation. This rotation and lateral movement of the StreamPro on the end of the tether generates a false upstream velocity, which cannot be easily distinguished from a moving-bed bias velocity. A field test was completed to demonstrate that this rotation and lateral movement causes a false upstream boat velocity. The vector dot product of the boat velocity and the unit vector of the depth-averaged water velocity is shown to be an effective method to account for the effect of the rotation and lateral movement.
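The vector dot product described above is a projection of the boat velocity onto the flow direction; a minimal sketch with made-up velocities:

```python
import math

def along_stream_component(boat_vel, water_vel):
    """Project the boat velocity onto the unit vector of the depth-averaged
    water velocity. During a stationary test, a persistent upstream component
    indicates apparent motion (rotation/kiting) or a true moving bed."""
    speed = math.hypot(*water_vel)
    unit = (water_vel[0] / speed, water_vel[1] / speed)
    return boat_vel[0] * unit[0] + boat_vel[1] * unit[1]

# Water flowing in +x at 1 m/s; boat apparently drifting upstream at 0.05 m/s.
print(along_stream_component((-0.05, 0.0), (1.0, 0.0)))  # -0.05
```

The projection isolates the upstream/downstream part of the boat velocity, which is the quantity the field test uses to account for rotation and lateral movement of the tethered instrument.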
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of the two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setup, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
On-line algorithms for forecasting hourly loads of an electric utility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vemuri, S.; Huang, W.L.; Nelson, D.J.
A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive moving-average model. The method presented has several advantages over the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust, in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
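The finite-order autoregressive identification step can be illustrated with a least-squares fit. This is a batch sketch only; the paper's estimator is sequential, and the data below are simulated:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of an AR(order) model
    x[t] = a_1*x[t-1] + ... + a_order*x[t-order] + e[t]."""
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

# Recover known AR(2) coefficients from simulated data.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.1)
est = fit_ar(x, 2)
print(np.round(est, 2))  # close to [0.6, -0.3]
```

A sequential (recursive) least-squares estimator updates the same coefficients one sample at a time, which is what makes the method suitable for on-line use.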
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.
2010-03-01
We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the Last Glacial Maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
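The zero-crossing idea can be sketched as counting positive-going passages of the gray-scale curve through its moving average. The synthetic curve and window length below are assumptions, not values from the paper:

```python
import math

def count_couplets(gray, window=11):
    """Count positive-going passages of a gray-scale curve through its
    moving average; each passage marks one bright/dark couplet."""
    half = window // 2
    count = 0
    for i in range(half, len(gray) - half - 1):
        avg_here = sum(gray[i - half:i + half + 1]) / window
        avg_next = sum(gray[i + 1 - half:i + half + 2]) / window
        if gray[i] - avg_here <= 0 and gray[i + 1] - avg_next > 0:
            count += 1
    return count

# Synthetic curve: 5 bright/dark couplets superimposed on a slow linear drift.
gray = [-math.cos(2 * math.pi * 5 * i / 200) + 0.002 * i for i in range(200)]
print(count_couplets(gray))  # 5
```

Because a symmetric moving average removes the slow drift exactly while only attenuating the lamination signal, each couplet produces exactly one positive-going crossing.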
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
Transport of the moving barrier driven by chiral active particles
NASA Astrophysics Data System (ADS)
Liao, Jing-jing; Huang, Xiao-qun; Ai, Bao-quan
2018-03-01
Transport of a moving V-shaped barrier exposed to a bath of chiral active particles is investigated in a two-dimensional channel. Due to the chirality of active particles and the transversal asymmetry of the barrier position, active particles can power and steer the directed transport of the barrier in the longitudinal direction. The transport of the barrier is determined by the chirality of active particles. The moving barrier and active particles move in the opposite directions. The average velocity of the barrier is much larger than that of active particles. There exist optimal parameters (the chirality, the self-propulsion speed, the packing fraction, and the channel width) at which the average velocity of the barrier takes its maximal value. In particular, tailoring the geometry of the barrier and the active concentration provides novel strategies to control the transport properties of micro-objects or cargoes in an active medium.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coruh, M; Ewell, L; Demez, N
Purpose: To estimate the dose delivered to a moving lung tumor by proton therapy beams of different modulation types, and compare with Monte Carlo predictions. Methods: A Radiology Support Devices (RSD) phantom was irradiated with therapeutic proton radiation beams using two different types of modulation: uniform scanning (US) and double scattering (DS). The Eclipse© dose plan was designed to deliver 1.00 Gy to the isocenter of a static ∼3×3×3 cm (27 cc) tumor in the phantom with 100% coverage. The peak-to-peak amplitude of tumor motion varied from 0.0 to 2.5 cm. The radiation dose was measured with an ion chamber (CC-13) located within the tumor. The time required to deliver the radiation dose varied from an average of 65 s for the DS beams to an average of 95 s for the US beams. Results: The radiation dose varied from 100% (both US and DS) for the static tumor down to approximately 92% for the moving tumor. The ratio of US dose to DS dose ranged from approximately 1.01 for the static tumor down to 0.99 for the 2.5 cm moving tumor. A Monte Carlo simulation using TOPAS included a lung tumor with 4.0 cm of peak-to-peak motion. In this simulation, the dose received by the tumor varied by ∼40% as the period of this motion varied from 1 s to 4 s. Conclusion: The radiation dose deposited to a moving tumor was less than for a static tumor, as expected. At large (2.5 cm) amplitudes, the DS proton beams gave a dose closer to the desired dose than the US beams, but equal within experimental uncertainty. TOPAS Monte Carlo simulation can give insight into the relationship between tumor motion and dose. This work was supported in part by the Philips corporation.
Zhang, Chu; Liu, Fei; He, Yong
2018-02-01
Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by a median filter (MF). Support vector machine (SVM) models using full sample average spectra, pixel-wise spectra, and the optimal wavelengths selected by second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models trained on pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models trained on sample average spectra were used to predict pixel-wise spectra, but achieved less than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and gave good prediction results for both pixel-wise and sample average spectra. The overall results demonstrated the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
Out-of-plane ultrasonic velocity measurement
Hall, M.S.; Brodeur, P.H.; Jackson, T.G.
1998-07-14
A method for improving the accuracy of measuring the velocity and time of flight of ultrasonic signals through moving web-like materials such as paper, paperboard and the like, includes a pair of ultrasonic transducers disposed on opposing sides of a moving web-like material. In order to provide acoustical coupling between the transducers and the web-like material, the transducers are disposed in fluid-filled wheels. Errors due to variances in the wheel thicknesses about their circumference which can affect time of flight measurements and ultimately the mechanical property being tested are compensated by averaging the ultrasonic signals for a predetermined number of revolutions. The invention further includes a method for compensating for errors resulting from the digitization of the ultrasonic signals. More particularly, the invention includes a method for eliminating errors known as trigger jitter inherent with digitizing oscilloscopes used to digitize the signals for manipulation by a digital computer. In particular, rather than cross-correlate ultrasonic signals taken during different sample periods as is known in the art in order to determine the time of flight of the ultrasonic signal through the moving web, a pulse echo box is provided to enable cross-correlation of predetermined transmitted ultrasonic signals with predetermined reflected ultrasonic or echo signals during the sample period. By cross-correlating ultrasonic signals in the same sample period, the error associated with trigger jitter is eliminated. 20 figs.
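The cross-correlation step for time of flight can be sketched as follows; the pulse shape, delay, and sampling rate are made-up values, and the echo here is an idealized noiseless copy of the transmitted pulse:

```python
import numpy as np

def time_of_flight(tx, rx, fs):
    """Estimate time of flight by cross-correlating the transmitted pulse
    with the received signal from the same sample period."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(corr.argmax()) - (len(tx) - 1)
    return lag / fs

fs = 1_000_000                        # 1 MHz sampling rate (assumed)
n = np.arange(200)
pulse = np.sin(2 * np.pi * 0.1 * n) * np.exp(-0.002 * (n - 100) ** 2)
rx = np.zeros(1000)
rx[300:500] = pulse                   # echo arrives 300 samples later
print(time_of_flight(pulse, rx, fs))  # 0.0003
```

Because both signals come from the same sample period, any constant trigger offset cancels in the correlation, which is the point of the pulse echo box in the patent.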
Multiview point clouds denoising based on interference elimination
NASA Astrophysics Data System (ADS)
Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu
2018-03-01
Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents them from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise. The proposed method aims to fully use redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function (TSDF) and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.
The sensitivity in the IR spectrum of the intact and pathological tissues by laser biophotometry.
Ravariu, Cristian; Bondarciuc, Ala
2014-03-01
In this paper, we use laser biophotometry for in vivo investigations, searching for the most sensitive interactions of the near-infrared spectrum with different tissues. The experimental methods are based on average reflection coefficient (ARC) measurements. For healthy persons, ARC is the average of five values provided by the biophotometer. The probe is applied on dry skin with minimum pilosity, in five regions: left-right shank, left-right forearm, and epigastrium. For pathological tissues, the emitting terminal is moved over the suspected area, monitoring the reflection coefficient level, until a minimum value occurs, recorded as ARC-Pathological. Then, the probe is moved to the symmetrical healthy region of the body to read the complementary coefficient from intact tissue, ARC-Intact, from the same patient. The experimental results show an ARC range between 59 and 67 mW for intact tissues and a lower range, 42-58 mW, for pathological tissues. The method is efficient only in those pathological processes accompanied by variable skin depigmentation, water retention, inflammation, thrombosis, or swelling. Frequently, the ARC ranges overlap for some diseases, which leads to uncertain diagnoses. Therefore, a statistical algorithm is adopted for a differential diagnosis. Laser biophotometry provides a quantitative biometric parameter, ARC, suitable for fast diagnosis in internal and emergency medicine. These laser biophotometry measurements are representative of the Romanian clinical trials.
Granger causality for state-space models
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Seth, Anil K.
2015-04-01
Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
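The conventional AR route to Granger causality that the paper improves upon compares residual variances of restricted and full regressions. A minimal pairwise sketch with simulated data and an assumed AR order (not the paper's state-space estimator):

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Granger causality from y to x: log ratio of the residual variance of
    the restricted model (x's own lags) to that of the full model (lags of
    both x and y)."""
    def resid_var(target, predictors):
        T = len(target)
        X = np.column_stack([pred[p - k - 1: T - k - 1]
                             for pred in predictors for k in range(p)])
        b, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
        return (target[p:] - X @ b).var()
    return np.log(resid_var(x, [x]) / resid_var(x, [x, y]))

# Simulated pair: y drives x with one lag; x does not drive y.
rng = np.random.default_rng(1)
y = rng.normal(size=4000)
x = np.zeros(4000)
for t in range(1, 4000):
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + 0.2 * rng.normal()
print(granger_causality(x, y) > granger_causality(y, x))  # True
```

If x or y were filtered or downsampled, the induced MA component would bias this AR estimator; the paper's state-space route avoids that degradation.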
Payami, Haydeh; Kay, Denise M; Zabetian, Cyrus P; Schellenberg, Gerard D; Factor, Stewart A; McCulloch, Colin C
2010-01-01
Age-related variation in marker frequency can be a confounder in association studies, leading to both false-positive and false-negative findings and subsequently to inconsistent reproducibility. We have developed a simple method, based on a novel extension of moving average plots (MAP), which allows investigators to inspect the frequency data for hidden age-related variations. MAP uses the standard case-control association data and generates a birds-eye view of the frequency distributions across the age spectrum; a picture in which one can see if, how, and when the marker frequencies in cases differ from that in controls. The marker can be specified as an allele, genotype, haplotype, or environmental factor; and age can be age-at-onset, age when subject was last known to be unaffected, or duration of exposure. Signature patterns that emerge can help distinguish true disease associations from spurious associations due to age effects, age-varying associations from associations that are uniform across all ages, and associations with risk from associations with age-at-onset. Utility of MAP is illustrated by application to genetic and epidemiological association data for Alzheimer's and Parkinson's disease. MAP is intended as a descriptive method, to complement standard statistical techniques. Although originally developed for age patterns, MAP is equally useful for visualizing any quantitative trait.
Online tracking of instantaneous frequency and amplitude of dynamical system response
NASA Astrophysics Data System (ADS)
Frank Pai, P.
2010-05-01
This paper presents a sliding-window tracking (SWT) method for accurate tracking of the instantaneous frequency and amplitude of arbitrary dynamic response by processing only three (or more) most recent data points. Teager-Kaiser algorithm (TKA) is a well-known four-point method for online tracking of frequency and amplitude. Because finite difference is used in TKA, its accuracy is easily destroyed by measurement and/or signal-processing noise. Moreover, because TKA assumes the processed signal to be a pure harmonic, any moving average in the signal can destroy the accuracy of TKA. On the other hand, because SWT uses a constant and a pair of windowed regular harmonics to fit the data and estimate the instantaneous frequency and amplitude, the influence of any moving average is eliminated. Moreover, noise filtering is an implicit capability of SWT when more than three data points are used, and this capability increases with the number of processed data points. To compare the accuracy of SWT and TKA, Hilbert-Huang transform is used to extract accurate time-varying frequencies and amplitudes by processing the whole data set without assuming the signal to be harmonic. Frequency and amplitude trackings of different amplitude- and frequency-modulated signals, vibrato in music, and nonlinear stationary and non-stationary dynamic signals are studied. Results show that SWT is more accurate, robust, and versatile than TKA for online tracking of frequency and amplitude.
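For a pure harmonic, the Teager-Kaiser energy operator yields closed-form frequency and amplitude estimates. A sketch of the DESA-2 variant, one common formulation of TKA-style tracking (the test values are made up, and this may not be the exact variant compared in the paper):

```python
import math

def teager(x, n):
    """Discrete Teager-Kaiser energy operator: x[n]^2 - x[n-1]*x[n+1]."""
    return x[n] ** 2 - x[n - 1] * x[n + 1]

def desa2(x, n):
    """DESA-2 estimate of instantaneous frequency (rad/sample) and amplitude
    at sample n, valid for a pure harmonic with no offset."""
    y = lambda m: x[m + 1] - x[m - 1]
    psi_x = teager(x, n)
    psi_y = y(n) ** 2 - y(n - 1) * y(n + 1)
    omega = 0.5 * math.acos(1.0 - psi_y / (2.0 * psi_x))
    amp = 2.0 * psi_x / math.sqrt(psi_y)
    return omega, amp

# Pure harmonic: the estimates are exact (up to rounding).
A, w = 1.5, 0.3
x = [A * math.cos(w * n) for n in range(100)]
omega, amp = desa2(x, 50)
print(round(omega, 6), round(amp, 6))  # 0.3 1.5
```

Adding a constant offset (a moving average) to x breaks the pure-harmonic assumption and degrades these estimates, which is exactly the weakness of TKA that the sliding-window method addresses.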
ERIC Educational Resources Information Center
Gaines, Gale F.
Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…
Joint level-set and spatio-temporal motion detection for cell segmentation.
Boukari, Fatima; Makrogiannis, Sokratis
2016-08-10
Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. 
It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
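The Dice similarity coefficient used for validation is simple to compute on binary masks; the masks below are toy examples, not data from the challenge:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# Two flattened binary masks overlapping on 3 of 4 foreground pixels each.
a = [1, 1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 1, 0]
print(dice(a, b))  # 0.75
```

A coefficient of 1 means perfect overlap with the reference mask; the 89% reported above is the average of this score over all evaluated sequences.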
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
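The Floor magnitude coarse-graining and the non-overlapping time averaging described above can be sketched as follows (toy signal values; the Symmetry and Centro-Symmetry variants differ only in how the bins are placed about zero):

```python
import math

def floor_coarse_grain(x, delta):
    """Floor method: quantize the signal magnitude onto bins of width delta."""
    return [delta * math.floor(v / delta) for v in x]

def time_coarse_grain(x, w):
    """Coarse-graining in time: average non-overlapping windows of w points."""
    return [sum(x[i:i + w]) / w for i in range(0, len(x) - w + 1, w)]

x = [0.26, 1.73, -0.44, 2.10]
print(floor_coarse_grain(x, 1.0))  # [0.0, 1.0, -1.0, 2.0]
print(time_coarse_grain(x, 2))     # pairwise averages: ~0.995 and ~0.83
```

In the study, signals quantized this way are then passed to detrended fluctuation analysis to see how the quantization width affects the measured scaling.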
Spray-coating process in preparing PTFE-PPS composite super-hydrophobic coating
NASA Astrophysics Data System (ADS)
Weng, Rui; Zhang, Haifeng; Liu, Xiaowei
2014-03-01
In order to improve the performance of a liquid-floated rotor micro-gyroscope, the resistance of the moving interface between the rotor and the floating liquid must be reduced. Hydrophobic treatment can reduce the frictional resistance of such interfaces, so we propose a method to prepare a poly-tetrafluoroethylene (PTFE)-poly-phenylene sulphide (PPS) composite super-hydrophobic coating based on a spraying process. This method can quickly prepare a continuous, uniform PTFE-PPS composite super-hydrophobic surface on a 2J85 material. The method can be divided into three steps: pre-treatment, chemical etching, and spraying. The total time required is around three hours. When the PTFE concentration is 4%, the average contact angle of the hydrophobic coating surface is 158°. Adding silicon dioxide nanoparticles can further improve the adhesion and mechanical strength of the super-hydrophobic composite coating. The maximum average contact angle can reach as high as 164° when the mass ratio of PTFE, PPS and silicon dioxide is 1:1:1.
NASA Astrophysics Data System (ADS)
Warren, Aaron R.
2009-11-01
Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
Numerical Investigation of a Model Scramjet Combustor Using DDES
NASA Astrophysics Data System (ADS)
Shin, Junsu; Sung, Hong-Gye
2017-04-01
Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme and a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. The mixing efficiency and total pressure recovery factor were also provided in order to assess the performance of the combustor.
Minor loop dependence of the magnetic forces and stiffness in a PM-HTS levitation system
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Chengshan
2017-12-01
Based upon the current vector potential method and Bean's critical state model, the vertical and lateral forces for different minor-loop sizes are simulated in two typical cooling conditions as a rectangular permanent magnet (PM) above a cylindrical high-temperature superconductor (HTS) moves vertically and horizontally. The average magnetic stiffness is calculated for minor-loop sizes ranging from 0.1 to 2 mm. The magnetic stiffness at zero traverse is obtained by linear extrapolation. The simulation results show that the extreme values of the forces decrease with increasing minor-loop size. The magnetic hysteresis of the force curves also becomes small as the minor-loop size increases. This means that the vertical and lateral forces are significantly influenced by the size of the minor loop, because the forces depend strongly on the moving history of the PM. The vertical stiffness at every vertical position as the PM descends to 1 mm is larger than that as the PM ascends to 30 mm. When the PM moves laterally, the lateral stiffness as the PM passes through any horizontal position for the first time is almost equal to its value on the second pass in zero-field cooling (ZFC); however, the lateral stiffness in field cooling (FC), and the cross stiffness in both ZFC and FC, are significantly affected by the moving history of the PM.
Zenk, Shannon N; Tarlov, Elizabeth; Powell, Lisa M; Wing, Coady; Matthews, Stephen A; Slater, Sandy; Gordon, Howard S; Berbaum, Michael; Fitzgibbon, Marian L
2018-03-01
To present the rationale, methods, and cohort characteristics for 2 complementary "big data" studies of residential environment contributions to body weight, metabolic risk, and weight management program participation and effectiveness. Retrospective cohort. Continental United States. A total of 3 261 115 veterans who received Department of Veterans Affairs (VA) health care in 2009 to 2014, including 169 910 weight management program participants and a propensity score-derived comparison group. The VA MOVE! weight management program, an evidence-based lifestyle intervention. Body mass index, metabolic risk measures, and MOVE! participation; residential environmental attributes (eg, food outlet availability and walkability); and MOVE! program characteristics. Descriptive statistics presented on cohort characteristics and environments where they live. Forty-four percent of men and 42.8% of women were obese, whereas 4.9% of men and 9.9% of women engaged in MOVE!. About half of the cohort had at least 1 supermarket within 1 mile of their home, whereas they averaged close to 4 convenience stores (3.6 for men, 3.9 for women) and 8 fast-food restaurants (7.9 for men, 8.2 for women). Forty-one percent of men and 38.6% of women did not have a park, and 35.5% of men and 31.3% of women did not have a commercial fitness facility within 1 mile. Drawing on a large nationwide cohort residing in diverse environments, these studies are poised to significantly inform policy and weight management program design.
Measuring multiple spike train synchrony.
Kreuz, Thomas; Chicharro, Daniel; Andrzejak, Ralph G; Haas, Julie S; Abarbanel, Henry D I
2009-10-15
Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided into multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals (ISIs). In this study we propose two extensions of the ISI-distance: the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure, these extensions combine many properties desirable in applications to real data. In particular, they are parameter-free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled configuration, we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods.
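The instantaneous-ISI construction can be sketched as follows. This is a simplified reading of the measures (the sampling grid and the clamped edge handling are my own choices, not the authors'):

```python
import numpy as np

def isi_profile(spikes, t_grid):
    """Instantaneous ISI x_isi(t): length of the inter-spike interval
    containing each time in t_grid (edges are clamped, a simplification)."""
    spikes = np.asarray(spikes)
    idx = np.clip(np.searchsorted(spikes, t_grid, side='right'),
                  1, len(spikes) - 1)
    return spikes[idx] - spikes[idx - 1]

def isi_distance(s1, s2, t_grid):
    """Bivariate ISI-distance: time average of the normalized ISI ratio."""
    x1, x2 = isi_profile(s1, t_grid), isi_profile(s2, t_grid)
    ratio = np.where(x1 <= x2, x1 / x2 - 1.0, -(x2 / x1 - 1.0))
    return float(np.mean(np.abs(ratio)))

def averaged_bivariate(trains, t_grid):
    """Averaged bivariate extension: mean ISI-distance over all pairs."""
    vals = [isi_distance(a, b, t_grid)
            for i, a in enumerate(trains) for b in trains[i + 1:]]
    return float(np.mean(vals))

def isi_diversity(trains, t_grid):
    """Multivariate ISI-diversity: time-averaged coefficient of variation
    of the instantaneous ISIs across all trains."""
    X = np.array([isi_profile(s, t_grid) for s in trains])
    return float(np.mean(X.std(axis=0) / X.mean(axis=0)))

trains = [np.arange(0.0, 10.01, 0.5), np.arange(0.25, 10.01, 0.5)]
t = np.linspace(1.0, 9.0, 400)
print(isi_distance(trains[0], trains[1], t))  # identical 0.5 s ISIs -> 0.0
```

Two trains with the same firing rate but shifted spikes give distance zero, reflecting that the measure compares interval structure rather than spike coincidence.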
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods, and present a new one. This new method estimates individual global monopole velocities in a network, by means of detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
NASA Astrophysics Data System (ADS)
Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.
2017-03-01
Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.
O'Leary, D D; Lin, D C; Hughson, R L
1999-09-01
The heart rate component of the arterial baroreflex gain (BRG) was determined with autoregressive moving-average (ARMA) analysis during both spontaneous breathing (SB) and random breathing (RB) protocols. Ten healthy subjects completed each breathing pattern on two different days in each of two body positions, supine (SUP) and head-up tilt (HUT). The R-R interval, systolic arterial pressure (SAP) and instantaneous lung volume were recorded continuously. BRG was estimated from the ARMA impulse response relationship of R-R interval to SAP and from the spontaneous sequence method. The results indicated that both the ARMA and spontaneous sequence methods were reproducible (r = 0.76 and r = 0.85, respectively). As expected, BRG was significantly less in the HUT compared to SUP position for both ARMA (mean +/- SEM; 3.5 +/- 0.3 versus 11.2 +/- 1.4 ms mmHg-1; P < 0.01) and spontaneous sequence analysis (10.3 +/- 0.8 versus 31.5 +/- 2.3 ms mmHg-1; P < 0.001). However, no significant difference was found between BRG during RB and SB protocols for either ARMA (7.9 +/- 1.4 versus 6.7 +/- 0.8 ms mmHg-1; P = 0.27) or spontaneous sequence methods (21.8 +/- 2.7 versus 20.0 +/- 2.1 ms mmHg-1; P = 0.24). BRG was correlated during RB and SB protocols (r = 0.80; P < 0.0001). ARMA and spontaneous BRG estimates were correlated (r = 0.79; P < 0.0001), with spontaneous sequence values being consistently larger (P < 0.0001). In conclusion, we have shown that ARMA-derived BRG values are reproducible and that they can be determined during SB conditions, making the ARMA method appropriate for use in a wider range of patients.
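The spontaneous sequence method used for comparison can be sketched like this (the minimum run length of three beats is the conventional choice; thresholds on minimum beat-to-beat changes, often applied in practice, are omitted here):

```python
import numpy as np

def sequence_brg(sap, rri, min_len=3):
    """Spontaneous sequence baroreflex gain: mean slope (ms/mmHg) of
    R-R interval vs systolic pressure over runs of >= min_len beats in
    which both signals rise together or fall together."""
    same = np.sign(np.diff(sap)) * np.sign(np.diff(rri))  # +1: concordant beats
    slopes, start = [], None
    for i, s in enumerate(same):
        if s > 0 and start is None:
            start = i
        if (s <= 0 or i == len(same) - 1) and start is not None:
            stop = i + 1 if s > 0 else i          # last beat of the run
            if stop - start + 1 >= min_len:
                seg = slice(start, stop + 1)
                slopes.append(np.polyfit(sap[seg], rri[seg], 1)[0])
            start = None
    return float(np.mean(slopes)) if slopes else float('nan')

sap = np.array([100.0, 101.0, 102.0, 103.0, 102.0, 101.0])
rri = np.array([800.0, 810.0, 820.0, 830.0, 835.0, 840.0])
print(round(sequence_brg(sap, rri), 3))  # one 4-beat up-sequence -> 10.0
```

The ARMA estimate of the paper instead fits the impulse response of R-R interval to SAP; the sequence method above only uses the concordant runs, which is one reason its gain values run consistently larger.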
Dynamics of actin-based movement by Rickettsia rickettsii in Vero cells.
Heinzen, R A; Grieshaber, S S; Van Kirk, L S; Devin, C J
1999-08-01
Actin-based motility (ABM) is a virulence mechanism exploited by invasive bacterial pathogens in the genera Listeria, Shigella, and Rickettsia. Due to experimental constraints imposed by the lack of genetic tools and their obligate intracellular nature, little is known about rickettsial ABM relative to the Listeria and Shigella ABM systems. In this study, we directly compared the dynamics and behavior of ABM of Rickettsia rickettsii and Listeria monocytogenes. Time-lapse video of moving intracellular bacteria was obtained by laser-scanning confocal microscopy of infected Vero cells synthesizing beta-actin coupled to green fluorescent protein (GFP). Analysis of time-lapse images demonstrated that R. rickettsii organisms move through the cell cytoplasm at an average rate of 4.8 +/- 0.6 micrometers/min (mean +/- standard deviation). This speed was 2.5 times slower than that of L. monocytogenes, which moved at an average rate of 12.0 +/- 3.1 micrometers/min. Although rickettsiae moved more slowly, the actin filaments comprising the actin comet tail were significantly more stable, with an average half-life approximately three times that of L. monocytogenes (100.6 +/- 19.2 s versus 33.0 +/- 7.6 s, respectively). The actin tail associated with intracytoplasmic rickettsiae remained stationary in the cytoplasm as the organism moved forward. In contrast, actin tails of rickettsiae trapped within the nucleus displayed dramatic movements. The observed phenotypic differences between the ABM of Listeria and Rickettsia may indicate fundamental differences in the mechanisms of actin recruitment and polymerization.
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
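The two ingredients, the misery index and its trailing decade average, are straightforward to compute; a sketch with placeholder numbers (the 11-year window matches the reported best-fit lag):

```python
import numpy as np

def misery_index(inflation, unemployment):
    """Economic misery index: inflation rate plus unemployment rate."""
    return np.asarray(inflation, dtype=float) + np.asarray(unemployment, dtype=float)

def trailing_average(x, window=11):
    """Trailing moving average: entry t averages years t-window+1 .. t
    (11 years gave the peak goodness of fit in the study)."""
    x = np.asarray(x, dtype=float)
    return np.convolve(x, np.ones(window) / window, mode='valid')

m = misery_index([2, 3, 4, 5], [5, 5, 5, 5])
print(trailing_average(m, window=3))  # averages of [7,8,9] and [8,9,10] -> [8. 9.]
```

Correlating the literary misery series for year t against `trailing_average` ending at year t reproduces the lagged comparison the abstract describes.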
Steady motion of a rowing boat
NASA Astrophysics Data System (ADS)
Dosaev, Marat Z.; Klimina, Liubov A.
2018-05-01
A boat with a crank rowing mechanism is considered. The boat is equipped with two rowing oars positioned symmetrically about the symmetry axis of the shell. Oars move synchronously. With the use of numerical-analytical methods it is shown that the dependence of the average speed of the boat on the magnitude of the engine torque at stationary modes of motion is close to the square root function with a certain factor depending on the model parameters. This result agrees with the results of the experiments.
Up-down Asymmetries in Speed Perception
NASA Technical Reports Server (NTRS)
Thompson, Peter; Stone, Leland S.
1997-01-01
We compared speed matches for pairs of stimuli that moved in opposite directions (upward and downward). Stimuli were elliptical patches (2 deg horizontally by 1 deg vertically) of horizontal sinusoidal gratings of spatial frequency 2 cycles/deg. Two sequential 380 msec presentations were compared. One of each pair of gratings (the standard) moved at 4 Hz (2 deg/sec); the other (the test) moved at a rate determined by a simple up-down staircase. The point of subjectively equal speed was calculated from the average of the last eight reversals. The task was to fixate a central point and to determine which one of the pair appeared to move faster. Eight of 10 observers perceived the upward drifting grating as moving faster than a grating moving downward but otherwise identical. On average (N = 10), when the standard moved downward, it was matched by a test moving upward at 94.7+/-1.7(SE)% of the standard speed, and when the standard moved upward it was matched by a test moving downward at 105.1+/-2.3(SE)% of the standard speed. Extending this paradigm over a range of spatial (1.5 to 13.5 c/d) and temporal (1.5 to 13.5 Hz) frequencies, preliminary results (N = 4) suggest that, under the conditions of our experiment, upward motion is seen as faster than downward for speeds greater than approx. 1 deg/sec, but the effect appears to reverse at speeds below approx. 1 deg/sec, with downward motion perceived as faster. Given that an up-down asymmetry has been observed for the optokinetic response, both perceptual and oculomotor contributions to this phenomenon deserve exploration.
Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2016-01-01
This Technical Publication (TP) represents an extension of previous work concerning tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular, that of an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, the conditions of the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft possibly could be missed altogether, thereby leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use which sometimes have led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. In this TP, examined are the yearly (or seasonal) and 10-year moving average (10-yma) values of the (1) first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) frequencies of tropical cyclones (by class); (3) average peak 1-minute sustained wind speed (
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
Purpose: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image, and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration.
For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
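The motion-model construction (registration DVFs, PCA, weighted reconstruction) can be sketched with a plain SVD. The projection-matching optimization that fits the weights is omitted, and the array shapes are assumptions:

```python
import numpy as np

def pca_motion_model(dvfs, n_modes=3):
    """Derive a respiratory motion model from 4DCBCT registration results.
    dvfs: (n_phases, n_dofs) array, one flattened displacement vector
    field per phase, each registered to the reference phase."""
    mean = dvfs.mean(axis=0)
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_modes]          # mean DVF and leading eigenvectors

def synthesize_dvf(mean, modes, weights):
    """Candidate DVF for a given weight vector; in the method the weights
    are optimized iteratively until projections simulated from the deformed
    reference image match the measured cone-beam projection."""
    return mean + np.asarray(weights) @ modes
```

A deformed reference image produced from `synthesize_dvf` plays the role of one 3D fluoroscopic frame; repeating the fit per projection yields the time-varying sequence.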
Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis
NASA Astrophysics Data System (ADS)
Awais, M.; Saleem, S.; Hayat, T.; Irum, S.
2016-12-01
This communication presents the magnetohydrodynamics (MHD) flow of a couple-stress nanofluid over a moving convective wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon, combined with thermophoresis and Brownian motion effects, is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled non-linear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling parameter and the energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted, showing the difference between the Newtonian fluid model and the couple-stress fluid model.
Huang, Lei
2015-01-01
To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain its estimated mean and variance. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to gyro random noise modeling applications in which a fast and accurate ARMA modeling method is required. PMID:26437409
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
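The two model families can be simulated directly; a sketch with arbitrary illustrative coefficients (orders and values are mine, not from the paper):

```python
import numpy as np

def simulate_ma(theta, n, rng):
    """Moving average process:
    x_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q}."""
    coeffs = np.concatenate(([1.0], theta))
    e = rng.standard_normal(n + len(theta))
    return np.convolve(e, coeffs, mode='valid')

def simulate_ar(phi, n, rng, burn=500):
    """Autoregressive process:
    x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + e_t,
    with an initial burn-in discarded so the output is stationary."""
    p = len(phi)
    x = np.zeros(n + burn)
    e = rng.standard_normal(n + burn)
    for t in range(p, n + burn):
        x[t] = np.dot(phi, x[t - p:t][::-1]) + e[t]
    return x[burn:]
```

An MA(1) with theta = 0.5 has lag-1 autocorrelation 0.5/(1 + 0.25) = 0.4, and an AR(1) with phi = 0.7 has lag-1 autocorrelation 0.7, which the sample estimates recover.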
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many methods of moving object detection have been proposed, moving object extraction is still the core of video surveillance. However, in the complex scenes of the real world, false detections, missed detections and deficiencies resulting from cavities inside the body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
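The combination of the two cues can be sketched as follows. Note that a simple running-average background stands in here for the paper's Gaussian mixture model, and the image-repair and morphological steps are omitted:

```python
import numpy as np

def detect_moving(frames, alpha=0.05, diff_thr=15, bg_thr=25):
    """Combine frame differencing with background subtraction.
    frames: iterable of 2-D grayscale arrays; yields one binary motion
    mask per frame after the first. A running-average background is a
    simplification of the Gaussian mixture model."""
    it = iter(frames)
    prev = np.asarray(next(it), dtype=float)
    background = prev.copy()
    for raw in it:
        f = np.asarray(raw, dtype=float)
        fd = np.abs(f - prev) > diff_thr        # frame-difference cue
        bs = np.abs(f - background) > bg_thr    # background-subtraction cue
        yield fd | bs                           # union reduces cavities
        background = (1 - alpha) * background + alpha * f
        prev = f

frames = np.zeros((3, 8, 8))
frames[1, 2:4, 2:4] = 100.0       # a bright object appears in frame 1
masks = list(detect_moving(frames))
print(masks[0][2, 2], masks[0][0, 0])  # True False
```

Taking the union of the two masks is what fills cavities: frame differencing fires on object edges while background subtraction covers the slowly moving interior.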
Time series modelling of increased soil temperature anomalies during long period
NASA Astrophysics Data System (ADS)
Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar
2015-10-01
Soil temperature just beneath the soil surface is highly dynamic and has a direct impact on plant seed germination and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average model, a stochastic model, was developed to predict the weekly anomalies of soil temperature at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods January 1986-December 2011 and January 2012-December 2013 were used to construct and test the autoregressive integrated moving average models. The proposed autoregressive integrated moving average (2,1,1) model had the minimum value of the Akaike information criterion, and its estimated coefficients were different from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data, 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.
Monthly streamflow forecasting with auto-regressive integrated moving average
NASA Astrophysics Data System (ADS)
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
Lv, Houning; Zhao, Ningning; Zheng, Zhongqing; Wang, Tao; Yang, Fang; Jiang, Xihui; Lin, Lin; Sun, Chao; Wang, Bangmao
2017-05-01
Peroral endoscopic myotomy (POEM) has emerged as an advanced technique for the treatment of achalasia, and defining the learning curve is mandatory. From August 2011 to June 2014, two operators in our institution (A&B) carried out POEM on 35 and 33 consecutive patients, respectively. Moving average and cumulative sum (CUSUM) methods were used to analyze the POEM learning curve for corrected operative time (cOT), referring to duration of per centimeter myotomy. Additionally, perioperative outcomes were compared among distinct learning curve phases. Using the moving average method, cOT reached a plateau at the 29th case and at the 24th case for operators A and B, respectively. CUSUM analysis identified three phases: initial learning period (Phase 1), efficiency period (Phase 2) and mastery period (Phase 3). The relatively smooth state in the CUSUM graph occurred at the 26th case and at the 24th case for operators A and B, respectively. Mean cOT of distinct phases for operator A were 8.32, 5.20 and 3.97 min, whereas they were 5.99, 3.06 and 3.75 min for operator B, respectively. Eckardt score and lower esophageal sphincter pressure significantly decreased during the 1-year follow-up period. Data were comparable regarding patient characteristics and perioperative outcomes. This single-center study demonstrated that expert endoscopists with experience in esophageal endoscopic submucosal dissection reached a plateau in learning of POEM after approximately 25 cases. © 2016 Japan Gastroenterological Endoscopy Society.
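Both learning-curve statistics are easy to compute from the per-case corrected operative times; a sketch (the target value and the window length are the analyst's choices, not fixed by the method):

```python
def cusum(times, target):
    """CUSUM learning curve: running sum of deviations of each case's
    corrected operative time from a target (often the overall mean); a
    sustained downward slope marks the efficiency and mastery phases."""
    total, curve = 0.0, []
    for t in times:
        total += t - target
        curve.append(total)
    return curve

def moving_average(times, window=5):
    """Moving average of cOT over consecutive cases; the plateau locates
    the end of the learning period."""
    return [sum(times[i:i + window]) / window
            for i in range(len(times) - window + 1)]

print(cusum([8, 6, 4, 3], target=5))    # [3.0, 4.0, 3.0, 1.0]
print(moving_average([8, 6, 4, 3], 2))  # [7.0, 5.0, 3.5]
```

In the study, the case number at which the moving average of cOT reached a plateau, and the case at which the CUSUM curve flattened, both landed near the 25th case.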
Dotsinsky, Ivan
2005-11-26
Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are used normally by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high-level frequency-varying interference that may compromise fibrillation detection. Moving signal averaging is often used for 50/60 Hz interference suppression when its effect on the ECG spectrum is of little importance (no morphological analysis is performed). This approach may also be applied to the railway situation, if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) for introducing variable inter-sample intervals. A better solution consists of a fixed-rate ADC, software frequency measuring, internal irregular re-sampling according to the interference frequency, and a moving average over a constant sample number, followed by regular back re-sampling. The proposed method leads to total railway interference cancellation, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has negligible effect on accurate fibrillation detection. The method is developed in the MATLAB environment and represents a useful tool for real time railway interference suppression.
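The core of the proposed suppression scheme, re-sampling so that an integer number of samples spans one interference period and then averaging over that constant sample number, can be illustrated numerically: M equally spaced samples of a sinusoid over exactly one period sum to zero. The sketch below uses a synthetic 16.7 Hz sinusoid added to a stand-in low-frequency "ECG" component; the sampling rate, amplitudes, and M = 60 samples per period are illustrative assumptions, and the final back re-sampling step is omitted.

```python
import numpy as np

fs = 1000.0          # fixed-rate ADC sampling frequency, Hz
f_int = 16.7         # measured railway interference frequency, Hz
t = np.arange(0, 1.0, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 1.2 * t)                  # stand-in "ECG" component
noisy = ecg + 1.5 * np.sin(2 * np.pi * f_int * t + 0.4)  # strong 16.7 Hz interference

# Irregular re-sampling: exactly M samples span one interference period,
# so a moving average over M samples cancels the sinusoid.
M = 60
t_new = np.arange(0, t[-1], 1 / (f_int * M))  # irregular w.r.t. the original grid
x = np.interp(t_new, t, noisy)
filt = np.convolve(x, np.ones(M) / M, mode='valid')
```

Only linear-interpolation error survives the averaging; the 16.7 Hz component itself is removed exactly.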
Predicting the Size and Timing of Sunspot Maximum for Cycle 24
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2010-01-01
For cycle 24, the minimum value of the 12-month moving average (12-mma) of the AA-geomagnetic index in the vicinity of sunspot minimum (AAm) appears to have occurred in September 2009, measuring about 8.4 nT and following sunspot minimum by 9 months. This is the lowest value of AAm ever recorded, falling below that of 8.9 nT previously attributed to cycle 14, which also is the smallest maximum-amplitude (RM) cycle of the modern era (RM = 64.2). Based on the method of Ohl (the preferential association between RM and AAm for an ongoing cycle), one expects cycle 24 to have RM = 55 ± 17 (the ±1-sigma prediction interval). Alternatively, using a variation of Ohl's method based on 2-cycle moving averages (2-cma), one expects cycle 23's 2-cma of RM to be about 115.5 ± 8.7 (the ±1-sigma prediction interval), implying an RM of about 62 ± 35 for cycle 24. Hence, it seems clear that cycle 24 will be smaller in size than cycle 23 (RM = 120.8) and likely will be comparable in size to cycle 14. From the Waldmeier effect (the preferential association between the ascent duration (ASC) and RM for an ongoing cycle), one expects cycle 24 to be a slow-rising cycle (ASC ≥ 48 months), having RM occurrence after December 2012, unless it turns out to be a statistical outlier.
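The 12-mma applied to sunspot and geomagnetic indices is conventionally computed as a 13-month running mean with half weight on the two end months. A brief sketch of that convention (assumed here; the abstract does not spell out its smoothing formula):

```python
import numpy as np

def smooth_12mma(monthly):
    """Classic 13-point running mean with half-weight end points,
    conventionally called the 12-month moving average in solar-cycle work."""
    w = np.ones(13)
    w[0] = w[-1] = 0.5
    w /= 12.0  # total weight: 11 full months + 2 half months = 12
    return np.convolve(monthly, w, mode='valid')

# Sanity check: a constant series (e.g. a steady 8.4 nT) is left unchanged
vals = smooth_12mma(np.full(36, 8.4))
```

The smoothed series is 12 points shorter than the input, which is why smoothed extrema are only identified about six months after the fact.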
A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection. Most of the methods that run in real time are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on evaluation of these four methods using two different cameras and two different scenes. The methods have been implemented in MATLAB and the results are compared based on completeness of detected objects, noise, light-change sensitivity, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
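As a point of reference for the simplest of these families, background subtraction reduces in its most basic form to differencing frames and thresholding the absolute change. This toy NumPy sketch (synthetic frames, arbitrary threshold) is illustrative only and is not the MATLAB implementation evaluated in the paper:

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25):
    """Flag pixels whose grey-level change between frames exceeds a threshold.
    Frame differencing is the simplest relative of background subtraction."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

# Synthetic scene: static background with a bright 8x8 block that moves right
bg = np.zeros((64, 64), dtype=np.uint8)
f1 = bg.copy(); f1[10:18, 10:18] = 200
f2 = bg.copy(); f2[10:18, 14:22] = 200
mask = detect_motion(f1, f2)
```

The mask flags the pixels the block vacated and the pixels it newly covers, while the overlap and the static background stay quiet, which is exactly why pure differencing struggles with slow or overlapping objects and motivates the richer Gaussian-mixture and optical-flow methods compared in the paper.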
Dexter, F
2000-10-01
We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty, to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first-shift nurses and anesthesia providers. The strategy was based on historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then, depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than 2 to 4 min per case that can be moved.
A 12-Year Analysis of Nonbattle Injury Among US Service Members Deployed to Iraq and Afghanistan.
Le, Tuan D; Gurney, Jennifer M; Nnamani, Nina S; Gross, Kirby R; Chung, Kevin K; Stockinger, Zsolt T; Nessen, Shawn C; Pusateri, Anthony E; Akers, Kevin S
2018-05-30
Nonbattle injury (NBI) among deployed US service members increases the burden on medical systems and results in high rates of attrition, affecting the available force. The possible causes and trends of NBI in the Iraq and Afghanistan wars have, to date, not been comprehensively described. To describe NBI among service members deployed to Iraq and Afghanistan, quantify absolute numbers of NBIs and proportion of NBIs within the Department of Defense Trauma Registry, and document the characteristics of this injury category. In this retrospective cohort study, data from the Department of Defense Trauma Registry on 29 958 service members injured in Iraq and Afghanistan from January 1, 2003, through December 31, 2014, were obtained. Injury incidence, patterns, and severity were characterized by battle injury and NBI. Trends in NBI were modeled using time series analysis with autoregressive integrated moving average and the weighted moving average method. Statistical analysis was performed from January 1, 2003, to December 31, 2014. Primary outcomes were proportion of NBIs and the changes in NBI over time. Among 29 958 casualties (battle injury and NBI) analyzed, 29 003 were in men and 955 were in women; the median age at injury was 24 years (interquartile range, 21-29 years). Nonbattle injury caused 34.1% of total casualties (n = 10 203) and 11.5% of all deaths (206 of 1788). Rates of NBI were higher among women than among men (63.2% [604 of 955] vs 33.1% [9599 of 29 003]; P < .001) and in Operation New Dawn (71.0% [298 of 420]) and Operation Iraqi Freedom (36.3% [6655 of 18 334]) compared with Operation Enduring Freedom (29.0% [3250 of 11 204]) (P < .001). A higher proportion of NBIs occurred in members of the Air Force (66.3% [539 of 810]) and Navy (48.3% [394 of 815]) than in members of the Army (34.7% [7680 of 22 154]) and Marine Corps (25.7% [1584 of 6169]) (P < .001). 
Leading mechanisms of NBI included falls (2178 [21.3%]), motor vehicle crashes (1921 [18.8%]), machinery or equipment accidents (1283 [12.6%]), blunt objects (1107 [10.8%]), gunshot wounds (728 [7.1%]), and sports (697 [6.8%]), causing predominantly blunt trauma (7080 [69.4%]). The trend in proportion of NBIs did not decrease over time, remaining at approximately 35% (by weighted moving average) after 2006 and approximately 39% by autoregressive integrated moving average. Assuming stable battlefield conditions, the autoregressive integrated moving average model estimated that the proportion of NBIs from 2015 to 2022 would be approximately 41.0% (95% CI, 37.8%-44.3%). In this study, approximately one-third of injuries during the Iraq and Afghanistan wars resulted from NBI, and the proportion of NBIs was steady for 12 years. Understanding the possible causes of NBI during military operations may be useful to target protective measures and safety interventions, thereby conserving fighting strength on the battlefield.
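Of the two trend models used above, the weighted moving average is the simpler: the next value is forecast as a weighted mean of the most recent observations, with larger weights on newer points. A minimal sketch on hypothetical NBI proportions (the weights and values are illustrative assumptions, not the study's data):

```python
import numpy as np

def wma_forecast(series, weights):
    """One-step-ahead weighted moving average forecast; the last element of
    `weights` applies to the most recent observation."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return float(np.dot(w, series[-len(w):]))

# Hypothetical yearly NBI proportions (%): forecast the next year
nbi_pct = np.array([33.0, 36.5, 35.2, 34.8, 35.5, 39.1])
next_year = wma_forecast(nbi_pct, weights=[1, 2, 3])  # newest value weighted most
```

The forecast is necessarily a convex combination of the last three observations, so it stays within their range, consistent with the study's finding of a roughly stable proportion.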
Validation of a pretreatment delivery quality assurance method for the CyberKnife Synchrony system.
Mastella, E; Vigorito, S; Rondi, E; Piperno, G; Ferrari, A; Strata, E; Rozza, D; Jereczek-Fossa, B A; Cattani, F
2016-08-01
To evaluate the geometric and dosimetric accuracies of the CyberKnife Synchrony respiratory tracking system (RTS) and to validate a method for pretreatment patient-specific delivery quality assurance (DQA). An EasyCube phantom was mounted on the ExacTrac gating phantom, which can move along the superior-inferior (SI) axis of a patient to simulate a moving target. The authors compared dynamic and static measurements. For each case, a Gafchromic EBT3 film was positioned between two slabs of the EasyCube, while a PinPoint ionization chamber was placed in the appropriate space. There were three steps to their evaluation: (1) the field size, the penumbra, and the symmetry of six secondary collimators were measured along the two main orthogonal axes. Dynamic measurements with deliberately simulated errors were also taken. (2) The delivered dose distributions (from step 1) were compared with the planned ones, using the gamma analysis method. The local gamma passing rates were evaluated using three acceptance criteria: 3% local dose difference (LDD)/3 mm, 2%LDD/2 mm, and 3%LDD/1 mm. (3) The DQA plans for six clinical patients were irradiated in different dynamic conditions, to give a total of 19 cases. The measured and planned dose distributions were evaluated with the same gamma-index criteria used in step 2 and the measured chamber doses were compared with the planned mean doses in the sensitive volume of the chamber. (1) A very slight enlargement of the field size and of the penumbra was observed in the SI direction (on average <1 mm), in line with the overall average CyberKnife system error for tracking treatments. (2) Comparison between the planned and the correctly delivered dose distributions confirmed the dosimetric accuracy of the RTS for simple plans. The multicriteria gamma analysis was able to detect the simulated errors, proving the robustness of their method of analysis. 
(3) All of the DQA clinical plans passed the tests, both in static and dynamic conditions. No statistically significant differences were found between static and dynamic cases, confirming the high degree of accuracy of the Synchrony RTS. The presented methods and measurements verified the mechanical and dosimetric accuracy of the Synchrony RTS. Their method confirms the fact that the RTS, if used properly, is able to treat a moving target with great precision. By combining PinPoint ion chamber, EBT3 films, and gamma evaluation of dose distributions, their DQA method robustly validated the effectiveness of CyberKnife and Synchrony system.
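The gamma analysis used in steps 2 and 3 combines a local dose-difference criterion with a distance-to-agreement (DTA) criterion: for each measured point, the minimum combined metric over all reference points is taken, and points with gamma ≤ 1 pass. A simplified 1-D sketch (the clinical tool works on 2-D film planes; the profile and criteria here are illustrative):

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dd=0.03, dta=1.0):
    """Local-dose-difference gamma analysis in 1-D.
    dd: fractional local dose criterion (0.03 for 3%LDD)
    dta: distance-to-agreement criterion in mm."""
    gammas = []
    for x_m, d_m in zip(positions, meas):
        # Capital Gamma against every reference point; keep the minimum
        dose_term = (ref - d_m) / (dd * np.where(ref != 0, ref, 1.0))
        dist_term = (positions - x_m) / dta
        gammas.append(np.min(np.sqrt(dose_term**2 + dist_term**2)))
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0), gammas

x = np.linspace(-10, 10, 201)    # positions in mm
planned = np.exp(-x**2 / 20.0)   # idealized planned dose profile
rate_same, _ = gamma_pass_rate(planned, planned, x)
```

An identical measured distribution passes at 100% by construction; simulated shifts or dose-scaling errors lower the rate, which is how the multicriteria analysis in step 2 detects deliberately introduced errors.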
Peak Running Intensity of International Rugby: Implications for Training Prescription.
Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M
2017-09-01
To quantify the duration and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s²), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
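The moving-average approach described above slides a window of fixed length over the per-second trace and records the highest window mean; by construction, the peak intensity can only decrease as the window lengthens, matching the finding that intensity rises as the averaging length shrinks. A minimal sketch on random stand-in GPS speed data (not match data):

```python
import numpy as np

def peak_moving_average(per_second, window_s):
    """Highest mean value over any contiguous window of the given length."""
    kernel = np.ones(window_s) / window_s
    return float(np.max(np.convolve(per_second, kernel, mode='valid')))

# Hypothetical per-second speed trace (m/s) for an 80-min match
rng = np.random.default_rng(1)
speed = rng.uniform(0, 6, size=80 * 60)

# Peak relative distance (m/min) for 1-, 5- and 10-min moving averages
peaks = {n: peak_moving_average(speed, n * 60) * 60 for n in (1, 5, 10)}
```

Any long-window mean is itself an average of shorter-window means, so the 1-min peak bounds the 5-min peak, which bounds the 10-min peak.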
Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan
2018-05-02
The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China, and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal auto-regressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 of the population. The monthly incidence of TB showed a clear 12-month cycle, and a seasonality with two peaks occurring in January and March and a trough in December. The best-fit model was SARIMA(1,0,0)(0,1,1)₁₂, which demonstrated adequate information extraction (white noise test, p > 0.05). Based on the analysis, the predicted incidences of TB from January to June 2016 were 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 of the population, respectively. According to the seasonal pattern of TB incidence in China, the SARIMA model was proposed as a useful tool for monitoring epidemics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
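The seasonal part of the selected SARIMA(1,0,0)(0,1,1)₁₂ model applies the operator (1 − B¹²), i.e., each month is differenced against the same month one year earlier, which removes a stable 12-month cycle before the ARMA terms are fitted. A small sketch on a synthetic series with the same period (values are illustrative, not the TB data):

```python
import numpy as np

def seasonal_difference(x, s=12):
    """Apply the (1 - B^s) operator used by the seasonal part of SARIMA."""
    return x[s:] - x[:-s]

t = np.arange(144)
# Synthetic monthly incidence: stable 12-month cycle plus a mild linear trend
series = 6.0 + 1.5 * np.sin(2 * np.pi * t / 12) + 0.005 * t
diffed = seasonal_difference(series, 12)
```

The exact 12-month cycle cancels completely, leaving only the year-over-year trend increment, which is what allows the remaining low-order AR and seasonal MA terms to model the residual structure.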
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
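The model-order selection problem discussed above is commonly handled by fitting candidate orders and comparing an information criterion such as the AIC. The least-squares AR sketch below, on synthetic data, is a generic illustration of that idea, not the parameter-constraining technique of the paper:

```python
import numpy as np

def fit_ar_aic(x, max_order=6):
    """Least-squares AR(p) fits for p = 1..max_order; pick the lowest-AIC order."""
    best = None
    for p in range(1, max_order + 1):
        Y = x[p:]
        # Column k holds the lag-k values aligned with Y
        X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
        coef, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
        rss = np.sum((Y - X @ coef) ** 2)
        n = len(Y)
        aic = n * np.log(rss / n) + 2 * p  # Gaussian AIC up to a constant
        if best is None or aic < best[0]:
            best = (aic, p, coef)
    return best[1], best[2]

# Synthetic AR(2) data: order selection should recover the lag-1/lag-2 structure
rng = np.random.default_rng(7)
x = np.zeros(2000)
for i in range(2, 2000):
    x[i] = 0.6 * x[i - 1] - 0.3 * x[i - 2] + rng.standard_normal()
order, coef = fit_ar_aic(x)
```

With 2000 samples the leading coefficient estimates land close to the true values (0.6, −0.3); the same trade-off between fit and parsimony underlies order selection for full ARMA models.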
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
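The fractionally integrated (long-memory) part of an ARFIMA model expands (1 − B)^d into slowly decaying binomial weights; the recursion below is the standard one, w₀ = 1 and w_k = w_{k−1}(k − 1 − d)/k. A minimal sketch (d = 0.4 is an arbitrary illustrative value, not an HRV estimate):

```python
import numpy as np

def frac_diff_weights(d, n_terms):
    """Binomial-expansion weights of the (1 - B)^d operator used by ARFIMA:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(d=0.4, n_terms=500)
```

For 0 < d < 1 the weights after the first are negative and decay hyperbolically rather than geometrically, which is precisely the long-memory behavior these models capture in 24 h HRV series; the GARCH part then models the remaining time-varying conditional variance.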
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2008-01-01
Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to aeroelastic dynamics and its properties demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
Yeo, Boon Y.; McLaughlin, Robert A.; Kirk, Rodney W.; Sampson, David D.
2012-01-01
We present a high-resolution three-dimensional position tracking method that allows an optical coherence tomography (OCT) needle probe to be scanned laterally by hand, providing the high degree of flexibility and freedom required in clinical usage. The method is based on a magnetic tracking system, which is augmented by cross-correlation-based resampling and a two-stage moving window average algorithm to improve upon the tracker's limited intrinsic spatial resolution, achieving 18 µm RMS position accuracy. A proof-of-principle system was developed, with successful image reconstruction demonstrated on phantoms and on ex vivo human breast tissue validated against histology. This freehand scanning method could contribute toward clinical implementation of OCT needle imaging. PMID:22808429
Parameter estimation of an ARMA model for river flow forecasting using goal programming
NASA Astrophysics Data System (ADS)
Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene
2006-11-01
River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the most famous techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. The results, when compared with the usual method of maximum likelihood estimation, were favorable with respect to the proposed algorithm.
Estimating hydraulic properties using a moving-model approach and multiple aquifer tests
Halford, K.J.; Yobbi, D.
2006-01-01
A new method was developed for characterizing geohydrologic columns that extended >600 m deep at sites with as many as six discrete aquifers. This method was applied at 12 sites within the Southwest Florida Water Management District. Sites typically were equipped with multiple production wells, one for each aquifer and one or more observation wells per aquifer. The average hydraulic properties of the aquifers and confining units within radii of 30 to >300 m were characterized at each site. Aquifers were pumped individually and water levels were monitored in stressed and adjacent aquifers during each pumping event. Drawdowns at a site were interpreted using a radial numerical model that extended from land surface to the base of the geohydrologic column and simulated all pumping events. Conceptually, the radial model moves between stress periods and recenters on the production well during each test. Hydraulic conductivity was assumed homogeneous and isotropic within each aquifer and confining unit. Hydraulic property estimates for all of the aquifers and confining units were consistent and reasonable because results from multiple aquifers and pumping events were analyzed simultaneously. Copyright ?? 2005 National Ground Water Association.
Tbahriti, Imad; Chichester, Christine; Lisacek, Frédérique; Ruch, Patrick
2006-06-01
The aim of this study is to investigate the relationships between citations and the scientific argumentation found in abstracts. We design a related-article search task and observe how the argumentation can affect the search results. We extracted citation lists from a set of 3200 full-text papers originating from a narrow domain. In parallel, we recovered the corresponding MEDLINE records for analysis of the argumentative moves. Our argumentative model is founded on four classes: PURPOSE, METHODS, RESULTS and CONCLUSION. A Bayesian classifier trained on explicitly structured MEDLINE abstracts generates these argumentative categories. The categories are used to generate four different argumentative indexes. A fifth index contains the complete abstract, together with the title and the list of Medical Subject Headings (MeSH) terms. To appraise the relationship of the moves to the citations, the citation lists were used as the criteria for determining relatedness of articles, establishing a benchmark; that is, two articles are considered "related" if they share a significant set of co-citations. Our results show that the average precision of queries with the PURPOSE and CONCLUSION features is the highest, while the precision of the RESULTS and METHODS features was relatively low. A linear weighting combination of the moves is proposed, which significantly improves retrieval of related articles.
Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.
Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R
2018-06-01
The purpose of this study was to determine the accuracy of 14 step counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step counting methods estimated less than 100% of actual steps; Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05). Five methods estimated more than 100% of actual steps; AG (no-LFE) on both wrists recorded 109% to 122% of steps (P > 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.
Limited transfer of long-term motion perceptual learning with double training.
Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili
2015-01-01
A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) have found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure of moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.
Tube Visualization and Properties from Isoconfigurational Averaging
NASA Astrophysics Data System (ADS)
Qin, Jian; Bisbee, Windsor; Milner, Scott
2012-02-01
We introduce a simulation method to visualize the confining tube in polymer melts and measure its properties. We studied bead-spring ring polymers, which conveniently suppress constraint release and contour length fluctuations. We allow molecules to cross and reach topologically equilibrated states by invoking various molecular rebridging moves in Monte Carlo simulations. To reveal the confining tube, we start with a well equilibrated configuration, turn off rebridging moves, and run molecular dynamics simulations multiple times, each with different initial velocities. The resulting set of ``movies'' of molecular trajectories defines an isoconfigurational ensemble, with the bead positions at different times and in different ``movies'' giving rise to a cloud. The cloud shows the shape, range and strength of the tube confinement, which enables us to study the statistical properties of the tube. Using this approach, we studied the effects of a free surface, and found that the tube diameter near the surface is greater than the bulk value by about 25%.
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the accurate order of the model at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic to solve the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecast results generated from the proposed model outperformed the single traditional Box-Jenkins model.
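A genetic algorithm for the identification step can be sketched as a population of candidate (p, q) orders evolved by selection, crossover, and mutation against a fitness score. In the sketch below the fitness is a stand-in quadratic surface rather than a real AIC from fitted ARMA models, and all GA settings (population size, rates, bounds) are illustrative assumptions:

```python
import random

# Stand-in fitness with its minimum at order (p, q) = (2, 1); in the paper's
# setting this would be the AIC of an ARIMA(p, d, q) model fitted to the data.
def fitness(order):
    p, q = order
    return (p - 2) ** 2 + (q - 1) ** 2

def ga_order_search(pop_size=12, generations=60, p_max=5, q_max=5, seed=3):
    rng = random.Random(seed)

    def mutate(v, vmax):
        # With probability 0.5 nudge a gene by +/-1, clamped to [0, vmax]
        if rng.random() < 0.5:
            v = min(vmax, max(0, v + rng.choice((-1, 1))))
        return v

    pop = [(rng.randint(0, p_max), rng.randint(0, q_max)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]  # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            # One-point crossover (p from one parent, q from the other) + mutation
            children.append((mutate(a[0], p_max), mutate(b[1], q_max)))
        pop = survivors + children
    return min(pop, key=fitness)

best_order = ga_order_search()
```

Because the best individuals always survive, the search never regresses, and over a small discrete order space it settles near the fitness minimum without exhaustively fitting every candidate model.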
Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.
La, Hung Manh; Sheng, Weihua
2013-04-01
In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
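The average consensus filter in the second phase can be illustrated by repeated neighbor averaging with a doubly stochastic weight matrix: every node's state converges to the network-wide mean of the initial readings while the sum is conserved at each step. A minimal sketch for a six-node ring (the topology, readings, and weights are illustrative assumptions, not the paper's protocol):

```python
import numpy as np

# Ring network of 6 sensor nodes; each starts from its own noisy field reading
n = 6
readings = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 3.3])

# Symmetric, doubly stochastic weight matrix: each node averages with its
# two ring neighbours and keeps the remaining weight on itself
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = readings.copy()
for _ in range(200):  # consensus iterations: x <- W x
    x = W @ x
```

Each iteration uses only local neighbor exchanges, yet all states converge to the global mean, which is what lets every mobile node estimate the field value without a central fusion node.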
Distractor Interference during Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Spering, Miriam; Gegenfurtner, Karl R.; Kerzel, Dirk
2006-01-01
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show…
Speckle techniques for determining stresses in moving objects
NASA Technical Reports Server (NTRS)
Murphree, E. A.; Wilson, T. F.; Ranson, W. F.; Swinson, W. F.
1978-01-01
Laser speckle interferometry is a relatively new experimental technique which shows promise of alleviating many difficult problems in experimental mechanics. The method utilizes simple high-resolution photographs of the surface which is illuminated by coherent light. The result is a real-time or permanently stored whole-field record of interference fringes which yields a map of displacements in the object. In this thesis, the time-average theory using the Fourier transform is developed to present the application of this technique to measurement of in-plane displacement induced by the vibration of an object.
Unsteady Aerodynamic Force Sensing from Strain Data
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2017-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
Neural network-based run-to-run controller using exposure and resist thickness adjustment
NASA Astrophysics Data System (ADS)
Geary, Shane; Barry, Ronan
2003-06-01
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
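The EWMA baseline mentioned here admits a very small sketch; the smoothing constant and the critical-dimension values below are illustrative.

```python
def ewma_predict(observations, lam=0.3, init=None):
    """One-step-ahead EWMA prediction: y_hat[k+1] = lam*y[k] + (1-lam)*y_hat[k]."""
    y_hat = observations[0] if init is None else init
    preds = []
    for y in observations:
        preds.append(y_hat)               # prediction made before seeing y
        y_hat = lam * y + (1 - lam) * y_hat
    return preds

cds = [101.0, 103.0, 99.0, 100.0]         # lot-to-lot critical dimensions (nm)
preds = ewma_predict(cds)
```

Run-to-run controllers use the prediction for the next lot to adjust exposure; the neural network in this paper replaces this linear update with a learned nonlinear map.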
In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.
Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth
2015-03-01
The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. 
In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.
[Spectral investigation of atmospheric pressure plasma column].
Li, Xue-Chen; Chang, Yuan-Yuan; Xu, Long-Fei
2012-07-01
Atmospheric pressure plasma column has many important applications in plasma stealth for aircraft. In the present paper, a plasma column with a length of 65 cm was generated in argon at atmospheric pressure by using a dielectric barrier discharge device with water electrodes in a coaxial configuration. The discharge mechanism of the plasma column was studied by an optical method, and the result indicates that a moving layer of light emission propagates in the upstream region. The propagation velocity of the plasma bullet is about 0.6 × 10^5 m·s^-1 by optical measurement. Spectral intensity ratios as functions of the applied voltage and driving frequency were also investigated by a spectroscopic method. The variation in spectral intensity ratio implies a change in the averaged electron energy. Results show that the averaged electron energy increases with the increase in the applied voltage and the driving frequency. These results have significant value for industrial applications of atmospheric pressure discharge and extensive application potential in stealth for military aircraft.
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
A charged particle in a homogeneous magnetic field accelerated by a time-periodic Aharonov-Bohm flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalvoda, T.; Stovicek, P., E-mail: stovicek@kmlinux.fjfi.cvut.cz
2011-10-15
We consider a nonrelativistic quantum charged particle moving on a plane under the influence of a uniform magnetic field and driven by a periodically time-dependent Aharonov-Bohm flux. We observe an acceleration effect in the case when the Aharonov-Bohm flux depends on time as a sinusoidal function whose frequency is in resonance with the cyclotron frequency. In particular, the energy of the particle increases linearly for large times. An explicit formula for the acceleration rate is derived with the aid of the quantum averaging method, and then it is checked against a numerical solution and a very good agreement is found.
Long-Term PM2.5 Exposure and Respiratory, Cancer, and Cardiovascular Mortality in Older US Adults.
Pun, Vivian C; Kazemiparkouhi, Fatemeh; Manjourides, Justin; Suh, Helen H
2017-10-15
The impact of chronic exposure to fine particulate matter (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5)) on respiratory disease and lung cancer mortality is poorly understood. In a cohort of 18.9 million Medicare beneficiaries (4.2 million deaths) living across the conterminous United States between 2000 and 2008, we examined the association between chronic PM2.5 exposure and cause-specific mortality. We evaluated confounding through adjustment for neighborhood behavioral covariates and decomposition of PM2.5 into 2 spatiotemporal scales. We found significantly positive associations of 12-month moving average PM2.5 exposures (per 10-μg/m3 increase) with respiratory, chronic obstructive pulmonary disease, and pneumonia mortality, with risk ratios ranging from 1.10 to 1.24. We also found significant PM2.5-associated elevated risks for cardiovascular and lung cancer mortality. Risk ratios generally increased with longer moving averages; for example, an elevation in 60-month moving average PM2.5 exposures was linked to 1.33 times the lung cancer mortality risk (95% confidence interval: 1.24, 1.40), as compared with 1.13 (95% confidence interval: 1.11, 1.15) for 12-month moving average exposures. Observed associations were robust in multivariable models, although evidence of unmeasured confounding remained. In this large cohort of US elderly, we provide important new evidence that long-term PM2.5 exposure is significantly related to increased mortality from respiratory disease, lung cancer, and cardiovascular disease.
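The moving-average exposure metric used in such analyses is a trailing mean over the preceding months. A minimal sketch with synthetic monthly values and a 12-month window (a 60-month window works the same way):

```python
import numpy as np

def trailing_mean(x, window):
    """Trailing moving average; the first window-1 entries are undefined (NaN)."""
    out = np.full(len(x), np.nan)
    c = np.cumsum(np.insert(x, 0, 0.0))
    out[window - 1:] = (c[window:] - c[:-window]) / window
    return out

pm25 = np.arange(1.0, 25.0)        # 24 months of synthetic monthly PM2.5 levels
ma12 = trailing_mean(pm25, 12)     # 12-month moving average exposure
```

The cumulative-sum trick makes each window average an O(1) operation, which matters when the exposure series spans millions of person-months.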
Effect of air pollution on pediatric respiratory emergency room visits and hospital admissions.
Farhat, S C L; Paulo, R L P; Shimoda, T M; Conceição, G M S; Lin, C A; Braga, A L F; Warth, M P N; Saldiva, P H N
2005-02-01
In order to assess the effect of air pollution on pediatric respiratory morbidity, we carried out a time series study using daily levels of PM10, SO2, NO2, ozone, and CO and daily numbers of pediatric respiratory emergency room visits and hospital admissions at the Children's Institute of the University of Sao Paulo Medical School, from August 1996 to August 1997. In this period there were 43,635 hospital emergency room visits, 4534 of which were due to lower respiratory tract disease. The total number of hospital admissions was 6785, 1021 of which were due to lower respiratory tract infectious and/or obstructive diseases. The three health end-points under investigation were the daily number of emergency room visits due to lower respiratory tract diseases, hospital admissions due to pneumonia, and hospital admissions due to asthma or bronchiolitis. Generalized additive Poisson regression models were fitted, controlling for smooth functions of time, temperature and humidity, and an indicator of weekdays. NO2 was positively associated with all outcomes. Interquartile range increases (65.04 μg/m3) in NO2 moving averages were associated with an 18.4% increase (95% confidence interval, 95% CI = 12.5-24.3) in emergency room visits due to lower respiratory tract diseases (4-day moving average), a 17.6% increase (95% CI = 3.3-32.7) in hospital admissions due to pneumonia or bronchopneumonia (3-day moving average), and a 31.4% increase (95% CI = 7.2-55.7) in hospital admissions due to asthma or bronchiolitis (2-day moving average). The study showed that air pollution considerably affects children's respiratory morbidity, deserving attention from the health authorities.
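In a log-linear Poisson model, the percent increase per interquartile-range increment follows from the fitted coefficient as 100·(exp(β·IQR) − 1). The sketch below simply inverts the reported 18.4% figure to recover the implied coefficient; this is illustrative arithmetic, not the study's code.

```python
import math

def pct_increase(beta, delta):
    """Percent change in expected counts for a `delta` increase in exposure
    under a log-linear (Poisson) regression model."""
    return 100.0 * math.expm1(beta * delta)

# Coefficient implied by an 18.4% rise per 65.04 ug/m3 IQR of NO2:
beta = math.log1p(0.184) / 65.04
```

`expm1`/`log1p` keep the round trip numerically exact for small relative risks.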
Dog days of summer: Influences on decision of wolves to move pups
Ausband, David E.; Mitchell, Michael S.; Bassing, Sarah B.; Nordhagen, Matthew; Smith, Douglas W.; Stahler, Daniel R.
2016-01-01
For animals that forage widely, protecting young from predation can span relatively long time periods due to the inability of young to travel with and be protected by their parents. Moving relatively immobile young to improve access to important resources, limit detection of concentrated scent by predators, and decrease infestations by ectoparasites can be advantageous. Moving young, however, can also expose them to increased mortality risks (e.g., accidents, getting lost, predation). For group-living animals that live in variable environments and care for young over extended time periods, the influence of biotic factors (e.g., group size, predation risk) and abiotic factors (e.g., temperature and precipitation) on the decision to move young is unknown. We used data from 25 satellite-collared wolves (Canis lupus) in Idaho, Montana, and Yellowstone National Park to evaluate how these factors could influence the decision to move pups during the pup-rearing season. We hypothesized that litter size, the number of adults in a group, and perceived predation risk would positively affect the number of times gray wolves moved pups. We further hypothesized that wolves would move their pups more often when it was hot and dry to ensure sufficient access to water. Contrary to our hypothesis, monthly temperature above the 30-year average was negatively related to the number of times wolves moved their pups. Monthly precipitation above the 30-year average, however, was positively related to the amount of time wolves spent at pup-rearing sites after leaving the natal den. We found little relationship between risk of predation (by grizzly bears, humans, or conspecifics) or group and litter sizes and number of times wolves moved their pups. Our findings suggest that abiotic factors most strongly influence the decision of wolves to move pups, although responses to unpredictable biotic events (e.g., a predator encountering pups) cannot be ruled out.
Scaling range of power laws that originate from fluctuation analysis
NASA Astrophysics Data System (ADS)
Grech, Dariusz; Mazur, Zygmunt
2013-05-01
We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013)] to other techniques of fluctuation analysis (FA). The new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that contrary to DFA, DMA and MDMA techniques exhibit power law dependence of the scaling range with respect to the length of the searched signal and with respect to the accuracy R2 of the fit to the considered scaling law imposed by DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We find also a simple generalization of this power law relation for series with a different level of autocorrelations measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finances, or physiology, where the huge number of short time series has to be examined at once and wherever the preliminary check of the scaling range regime for each of the series separately is neither effective nor possible.
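A minimal backward detrended moving average (DMA) fluctuation function, shown here only to illustrate the scaling-law fit: for white noise the fitted slope should sit near H = 0.5. The window choices and series length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dma_fluctuation(x, s):
    """Backward DMA fluctuation F(s): RMS deviation of the integrated profile
    from its own moving average of window s."""
    y = np.cumsum(x - x.mean())                  # integrated profile
    kernel = np.ones(s) / s
    ma = np.convolve(y, kernel, mode="valid")    # moving average, length N-s+1
    resid = y[s - 1:] - ma                       # align backward-looking window
    return np.sqrt(np.mean(resid ** 2))

x = rng.normal(size=20000)                       # uncorrelated (white) noise
scales = np.array([8, 16, 32, 64, 128])
F = np.array([dma_fluctuation(x, s) for s in scales])
H = np.polyfit(np.log(scales), np.log(F), 1)[0]  # log-log slope ~ Hurst exponent
```

The scaling-range question studied in the paper is exactly over which span of `scales` this log-log fit remains a good power law at a prescribed R2.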
Time Series Modelling of Syphilis Incidence in China from 2005 to 2012.
Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau
2016-01-01
The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. An autoregressive integrated moving average (ARIMA) model was used to fit a univariate time series of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance compared with the ARIMA model for modelling syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis.
Simultaneous Estimation of Electromechanical Modes and Forced Oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, Jim; Pierre, John W.; Martin, Russell
Over the past several years, great strides have been made in the effort to monitor the small-signal stability of power systems. These efforts focus on estimating electromechanical modes, a property of the system that dictates how generators in different parts of the system exchange energy. Though the algorithms designed for this task are powerful and important for reliable operation of the power system, they are susceptible to severe bias when forced oscillations are present in the system. Forced oscillations are fundamentally different from electromechanical oscillations in that they are the result of a rogue input to the system, rather than a property of the system itself. To address the presence of forced oscillations, the frequently used AutoRegressive Moving Average (ARMA) model is adapted to include sinusoidal inputs, resulting in the AutoRegressive Moving Average plus Sinusoid (ARMA+S) model. From this model, a new Two-Stage Least Squares algorithm is derived to incorporate the forced oscillations, thereby enabling the simultaneous estimation of the electromechanical modes and the amplitude and phase of the forced oscillations. The method is validated using simulated power system data as well as data obtained from the western North American power system (wNAPS) and Eastern Interconnection (EI).
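The core of estimating a forced sinusoid's amplitude and phase at a known frequency is an ordinary linear least-squares problem in cosine and sine regressors. The sketch below shows that kernel only; the signal and frequency are illustrative, and this is not the full ARMA+S two-stage estimator.

```python
import numpy as np

def fit_sinusoid(t, y, freq):
    """Linear least squares for a*cos + b*sin at a known forcing frequency;
    returns the amplitude and phase of the forced oscillation."""
    X = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)

t = np.arange(0, 10, 0.01)
# 0.5 Hz forced oscillation (amplitude 2, phase 0.7) plus an unrelated component
y = 2.0 * np.cos(2 * np.pi * 0.5 * t + 0.7) + 0.1 * np.sin(2 * np.pi * 3.0 * t)
amp, phase = fit_sinusoid(t, y, 0.5)
```

Because the problem is linear in (a, b), it can be appended to an ARMA regression matrix, which is what allows modes and forced-oscillation parameters to be estimated simultaneously.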
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
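A bare-bones LSMW first derivative: fit a line by least squares in each moving window, which works even when the perturbation axis is nonuniformly spaced (unlike classical Savitzky-Golay). Window size and the test data are illustrative.

```python
import numpy as np

def lsmw_slope(x, y, window):
    """First derivative dy/dx estimated by least-squares line fits in a
    moving window; x may be nonuniformly spaced."""
    half = window // 2
    slopes = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        xs = x[i - half:i + half + 1]
        ys = y[i - half:i + half + 1]
        slopes[i] = np.polyfit(xs, ys, 1)[0]   # slope of the local line fit
    return slopes

# Nonuniform perturbation axis: two segments with different spacing
x = np.sort(np.concatenate([np.linspace(0, 5, 40), np.linspace(5.1, 10, 25)]))
y = 3.0 * x + 1.0                              # exact line: derivative is 3
d = lsmw_slope(x, y, 7)
```

With noisy spectra, widening the window trades resolution of the perturbation axis for noise suppression, which is the window-size characterization discussed in the paper.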
Graph-based structural change detection for rotating machinery monitoring
NASA Astrophysics Data System (ADS)
Lu, Guoliang; Liu, Jie; Yan, Peng
2018-01-01
Detection of structural changes is critically important in the operational monitoring of a rotating machine. This paper presents a novel framework for this purpose, in which a graph model is adopted to represent/capture the statistical dynamics of machine operations. We also develop a numerical method for computing temporal anomalies in the constructed graphs. The martingale-test method is employed to decide whether a structural change has occurred, with excellent performance demonstrated against existing methods such as the autoregressive integrated moving average (ARIMA) model. Comprehensive experimental results indicate the good potential of the proposed algorithm in various engineering applications. This work is an extension of a recent result (Lu et al., 2017).
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function of a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the variance of the autocorrelation function have been derived. It was found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform across all time lags. Based on these analytical results, a statistically robust method is proposed to test for the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
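The two estimators compared here, direct ("moving-average") summation and the FFT route, can be sketched as follows; with zero padding the FFT version computes the same linear autocovariance and the two agree to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(2)

def acf_direct(x, max_lag):
    """Direct-sum ('moving-average') autocovariance estimate, divisor n."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.sum(x[:n - k] * x[k:]) / n for k in range(max_lag)])

def acf_fft(x, max_lag):
    """Autocovariance via FFT; zero padding to 2n makes it linear, not circular."""
    x = x - x.mean()
    n = len(x)
    f = np.fft.rfft(x, 2 * n)
    return np.fft.irfft(f * np.conj(f))[:max_lag] / n

x = rng.normal(size=4096)          # i.i.d. series: true correlation is 0 at lag>0
a1 = acf_direct(x, 20)
a2 = acf_fft(x, 20)
```

The uncertainty differences discussed in the paper arise when the divisor and windowing conventions differ between the two routes, not from the transform itself.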
Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David
2013-07-01
The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, accurately quantifying transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling.
Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
Work-related accidents among the Iranian population: a time series analysis, 2000-2011.
Karimlou, Masoud; Salehi, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood
2015-01-01
Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box-Jenkins modeling to develop a time series model of the total number of accidents. There was an average of 1476 accidents per month (1476.05 ± 458.77, mean ± SD). The final ARIMA (p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)12, consisting of first-order autoregressive, moving average, and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecast number of work-related accidents for 2011 reflected the stability of occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection.
A Pulse Rate Detection Method for Mouse Application Based on Multi-PPG Sensors
Chen, Wei-Hao
2017-01-01
Heart rate is an important physiological parameter for healthcare. Among measurement methods, photoplethysmography (PPG) is an easy and convenient method for pulse rate detection. However, because the PPG signal is vulnerable to motion artifacts and constrained by the measurement position chosen, the purpose of this paper is to implement a comfortable and easy-to-use multi-PPG sensor module, combined with a stable and accurate real-time pulse rate detection method, on a computer mouse. A weighted average method for the multi-PPG sensors is used to adjust the weight of each signal channel in order to raise the accuracy and stability of the detected signal, thereby effectively and efficiently reducing noise disturbance during movement. According to the experimental results, the proposed method can increase the usability and probability of successful PPG signal detection on palms.
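One simple weighting rule for multi-channel PPG fusion is to weight each channel by an inverse noise estimate; the sketch below is illustrative and not the paper's exact weighting.

```python
import numpy as np

def fuse_channels(channels):
    """Weighted average of multi-channel PPG: channels with less high-frequency
    noise get larger weights (illustrative quality metric)."""
    channels = np.asarray(channels, dtype=float)
    # crude per-channel quality score: inverse power of first differences
    noise = np.var(np.diff(channels, axis=1), axis=1) + 1e-12
    w = (1.0 / noise) / np.sum(1.0 / noise)
    return w @ channels, w

t = np.linspace(0, 5, 500)
clean = np.sin(2 * np.pi * 1.2 * t)        # ~72 bpm synthetic pulse wave
rng = np.random.default_rng(3)
ch = [clean + rng.normal(0, s, t.size) for s in (0.05, 0.2, 0.5)]
fused, w = fuse_channels(ch)
```

Because the weights adapt per window, a channel corrupted by motion artifact is automatically down-weighted rather than discarded outright.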
Delbridge, Brent G.; Burgmann, Roland; Fielding, Eric; Hensley, Scott; Schulz, William
2016-01-01
In order to provide surface geodetic measurements with “landslide-wide” spatial coverage, we develop and validate a method for the characterization of 3-D surface deformation using the unique capabilities of the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) airborne repeat-pass radar interferometry system. We apply our method at the well-studied Slumgullion Landslide, which is 3.9 km long and moves persistently at rates up to ∼2 cm/day. A comparison with concurrent GPS measurements validates this method and shows that it provides reliable and accurate 3-D surface deformation measurements. The UAVSAR-derived vector velocity field measurements accurately capture the sharp boundaries defining previously identified kinematic units and geomorphic domains within the landslide. We acquired data across the landslide during spring and summer and identify that the landslide moves more slowly during summer except at its head, presumably in response to spatiotemporal variations in snowmelt infiltration. In order to constrain the mechanics controlling landslide motion from surface velocity measurements, we present an inversion framework for the extraction of slide thickness and basal geometry from dense 3-D surface velocity fields. We find that the average depth of the Slumgullion Landslide is 7.5 m, several meters less than previous depth estimates. We show that by considering a viscoplastic rheology, we can derive tighter theoretical bounds on the rheological parameter relating mean horizontal flow rate to surface velocity. Using inclinometer data for slow-moving, clay-rich landslides across the globe, we find a consistent value for the rheological parameter of 0.85 ± 0.08.
Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony
2017-12-01
When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer ® . The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.
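The correction factor in question is the ratio of the cross-sectional average velocity to the velocity at the sensor location (e.g., the centerline). A sketch on a synthetic separable profile; the grid and profile shape are illustrative, not the measured mine data.

```python
import numpy as np

def correction_factor(velocity_grid, sensor_value):
    """Factor converting a single-point sensor reading to the cross-sectional
    average: v_avg = factor * v_sensor."""
    return velocity_grid.mean() / sensor_value

# Synthetic fully developed profile on a rectangular entry: centerline fastest.
ny, nx = 11, 21
yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
v = 2.0 * (1 - yy**2) * (1 - xx**2)            # parabolic-like profile, peak 2.0
f = correction_factor(v, v[ny // 2, nx // 2])  # sensor mounted at the centerline
```

Since the centerline is the fastest point of the profile, the factor is below 1; multiplying the sensor reading by it, and then by the cross-sectional area, yields the volume flow rate.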
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Chengshan
2017-10-01
The effect of minor loop size on magnetic stiffness has received little attention in experimental and theoretical studies of high temperature superconductor (HTS) magnetic levitation systems. In this work, we numerically investigate the average magnetic stiffness obtained with minor loop traverses Δz (or Δx) varying from 0.1 mm to 2 mm in zero field cooling and field cooling regimes, respectively. Approximate values of the magnetic stiffness at zero traverse are obtained by linear extrapolation. These approximate values are not always close to the average magnetic stiffness produced by the smallest minor loops. The relative deviation ranges of the average magnetic stiffness obtained with the commonly used minor loop traverses (1 or 2 mm) are presented as ratios of the approximate values to the average stiffness for different moving processes and two typical cooling conditions. The results show that most of the average magnetic stiffness values are markedly influenced by the size of the minor loop, which indicates that the magnetic stiffness obtained with a single minor loop traverse Δz or Δx of, for example, 1 or 2 mm can deviate substantially.
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Balas, M. J.
1980-01-01
A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
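A minimal way to see what "periodic parameters" means is to simulate a periodic AR(1), i.e. an autoregression whose coefficient and noise scale depend on the season. This is a hedged sketch under stated assumptions: the function and parameter values are illustrative, and the paper's PARMA models also include moving-average terms and a full likelihood-based estimator, which are omitted here.

```python
import random

def simulate_par1(phis, sigmas, n_years, seed=0):
    """Simulate a periodic AR(1): x_t = phi_s * x_{t-1} + sigma_s * e_t,
    where s = t mod period indexes the season and e_t ~ N(0, 1).
    """
    rng = random.Random(seed)
    period = len(phis)
    x, series = 0.0, []
    for t in range(n_years * period):
        s = t % period  # season of observation t
        x = phis[s] * x + sigmas[s] * rng.gauss(0.0, 1.0)
        series.append(x)
    return series

# Four seasons per year, each with its own AR coefficient and noise scale.
series = simulate_par1(phis=[0.9, 0.2, 0.5, 0.7],
                       sigmas=[1.0, 2.0, 1.5, 0.8],
                       n_years=25)
```

Such a series is not second-order stationary, but it is periodically stationary: its variance and autocorrelation repeat with the seasonal period, which is exactly the structure PARMA estimation exploits.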
NASA Astrophysics Data System (ADS)
Levine, Zachary H.; Pintar, Adam L.
2015-11-01
A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
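The effect the abstract describes, retaining a constant fraction of the most recent samples as the sequence grows, can be sketched naively as follows. This is an assumption-laden illustration: the function name is hypothetical, and the published algorithm achieves the same result efficiently with exponentially sized bins rather than re-averaging the tail each time.

```python
import math

def tail_average(arrays, fraction=0.5):
    """Average the most recent `fraction` of a growing sequence of 1-D
    arrays (plain lists here). Naive O(N) per call; the paper's binned
    scheme avoids rescanning by dropping whole bins as the window moves.
    """
    k = max(1, math.ceil(len(arrays) * fraction))
    tail = arrays[-k:]
    n = len(tail[0])
    return [sum(a[i] for a in tail) / k for i in range(n)]

# Three 1-D samples; with fraction=0.5 the most recent two are averaged.
samples = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
avg = tail_average(samples)  # -> [4.0, 5.0]
```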
Zhu, Yu; Xia, Jie-lai; Wang, Jing
2009-09-01
To apply the single autoregressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model to research on the incidence of scarlet fever. An ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as the input of the GRNN, and the actual values were used as its output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R(2)) of the two models were 0.801 and 0.872, respectively. The fitting performance of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which has practical value in research on time series data such as the incidence of scarlet fever.
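The two accuracy measures quoted above can be computed as follows. This is a hedged sketch: the abstract does not give its exact MER formula, so the common definition in this literature (mean absolute error divided by the mean of the actual values) is assumed, and the example numbers are illustrative.

```python
def mean_error_rate(actual, fitted):
    """MER, assumed here as mean absolute error / mean of actual values."""
    n = len(actual)
    mae = sum(abs(a - f) for a, f in zip(actual, fitted)) / n
    return mae / (sum(actual) / n)

def r_squared(actual, fitted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - f) ** 2 for a, f in zip(actual, fitted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Illustrative monthly incidence values and model fits.
actual = [10.0, 20.0, 30.0]
fitted = [12.0, 18.0, 30.0]
mer = mean_error_rate(actual, fitted)  # -> 1/15, about 6.7%
r2 = r_squared(actual, fitted)         # -> 0.96
```

A lower MER and a higher R² both indicate a better fit, which is how the combination model (28.7%, 0.872) was judged superior to the single ARIMA model (31.6%, 0.801).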
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays as it concerns people's health. Accordingly, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. A case study was conducted in a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used in the case study to forecast future data as a function of past data. The data pattern, which must be identified before applying the forecasting techniques, was found to be a trend. Ten forecasting techniques were then applied using Risk Simulator software: single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters' additive, seasonal additive, Holt-Winters' multiplicative, seasonal multiplicative, and autoregressive integrated moving average (ARIMA). The best technique is the one with the least forecasting error; according to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
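The simplest of the ten techniques, the single moving average, forecasts the next period as the mean of the last few observations. A minimal sketch, with hypothetical demand values rather than the study's data:

```python
def single_moving_average_forecast(history, window):
    """Forecast the next value as the mean of the last `window`
    observations (the single moving average technique)."""
    return sum(history[-window:]) / window

# Illustrative monthly demand; a 3-month moving average forecast.
demand = [40, 44, 43, 47, 49, 50]
forecast = single_moving_average_forecast(demand, window=3)  # -> (47+49+50)/3
```

Comparing such forecasts against held-out actuals, technique by technique, is what the forecasting-error measurement in the study does.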
Industrial Based Migration in India. A Case Study of Dumdum "Dunlop Industrial Zone"
NASA Astrophysics Data System (ADS)
Das, Biplab; Bandyopadhyay, Aditya; Sen, Jayashree
2012-10-01
Migration is a very important part of our present society. Millions of people moved during the industrial revolution. Some simply moved from a village to a town in the hope of finding work, whilst others moved from one country to another in search of a better way of life. The main reason for moving home during the 19th century was to find work. On one hand this involved migration from the countryside to the growing industrial cities; on the other it involved emigration abroad. Migration was not just people moving out of the country; it also involved a lot of people moving into Britain. In the 1840s Ireland suffered a terrible famine. Faced with the massive cost of feeding the starving population, many local landowners paid for labourers to emigrate. There was a shift away from agriculturally based rural dwelling towards urban habitation to meet the mass demand for labour that new industry required. Great regional differences emerged in population levels and in the structure of their demography. This was due to rates of migration, emigration, and the social changes that were drastically affecting factors such as marriage, birth and death rates. These social changes taking place as a result of capitalism had far-ranging effects, such as lowering the average age of marriage and increasing the size of the average family. There is no serious disagreement as to the extent of the population changes that occurred, but one key question that always arouses debate is whether an expanding population resulted in economic growth or vice versa, i.e. was industrialization a catalyst for population growth?
A clear answer is difficult to decipher as the two variables are so closely and fundamentally interlinked, but it seems that both factors provided impetus for each other's take-off. If anything, population and economic growth were complementary to one another rather than simply being causative factors.
Naqvi, Shahid A; D'Souza, Warren D
2005-04-01
Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality, and speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting at a random phase. For a 2 cm motion amplitude, we found that a ten-fraction average of the film measurements agreed with the calculated infinite-fraction average to within 2 mm in the isodose curves.
The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulation with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
The Micromechanics of the Moving Contact Line
NASA Technical Reports Server (NTRS)
Han, Minsub; Lichter, Seth; Lin, Chih-Yu; Perng, Yeong-Yan
1996-01-01
The proposed research is divided into three components concerned with molecular structure, molecular orientation, and continuum averages of discrete systems. In the experimental program, we propose exploring how changes in interfacial molecular structure generate contact line motion. Rather than rely on the electrostatic and electrokinetic fields arising from the molecules themselves, we augment their interactions by an imposed field at the solid/liquid interface. By controlling the field, we can manipulate the molecular structure at the solid/liquid interface. In response to controlled changes in molecular structure, we observe the resultant contact line motion. In the analytical portion of the proposed research we seek to formulate a system of equations governing fluid motion which accounts for the orientation of fluid molecules. In preliminary work, we have focused on describing how molecular orientation affects the forces generated at the moving contact line. Ideally, as assumed above, the discrete behavior of molecules can be averaged into a continuum theory. In the numerical portion of the proposed research, we inquire whether the contact line region is, in fact, large enough to possess a well-defined average. Additionally, we ask what types of behavior distinguish discrete systems from continuum systems. Might the smallness of the contact line region, in itself, lead to behavior different from that in the bulk? Taken together, our proposed research seeks to identify and accurately account for some of the molecular dynamics of the moving contact line, and attempts to formulate a description from which one can compute the forces at the moving contact line.
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning.
Rossum, Huub H van; Kemperman, Hans
2017-07-26
General application of a moving average (MA) as continuous analytical quality control (QC) for routine chemistry assays has failed due to the lack of a simple method that allows optimization of MAs. A new method was applied to optimize MAs for routine chemistry and was evaluated in daily practice as a continuous analytical QC instrument. MA procedures were optimized using an MA bias detection simulation procedure. Optimization was graphically supported by bias detection curves. Next, all optimal MA procedures that contributed to the quality assurance were run for 100 consecutive days, and MA alarms generated during working hours were investigated. Optimized MA procedures were applied for 24 chemistry assays. During this evaluation, 303,871 MA values and 76 MA alarms were generated. Of all alarms, 54 (71%) were generated during office hours. Of these, 41 were further investigated; they were caused by ion selective electrode (ISE) failure (1), calibration failure not detected by QC due to improper QC settings (1), possible bias (significant difference with the other analyzer) (10), non-human materials analyzed (2), extreme result(s) of a single patient (2), pre-analytical error (1), no cause identified (20), and no conclusion possible (4). MA was implemented in daily practice as a continuous QC instrument for 24 routine chemistry assays. In our setup, in which MA alarms required follow-up, a manageable number of alarms was generated, and these alarms proved valuable. For the management of MA alarms, several applications/requirements in the MA management software will simplify the use of MA procedures.
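The core of such a moving-average QC procedure is simple: average the last few patient results per analyzer and raise an alarm when the average leaves its control limits. A hedged sketch follows; the window length and control limits here are illustrative, whereas the paper optimizes them per assay with a bias detection simulation.

```python
def ma_alarms(results, window, lower, upper):
    """Return indices of results whose trailing moving average of
    length `window` falls outside [lower, upper] (an MA QC alarm)."""
    alarms = []
    for i in range(window, len(results) + 1):
        ma = sum(results[i - window:i]) / window
        if not (lower <= ma <= upper):
            alarms.append(i - 1)  # index of the result that triggered it
    return alarms

# A drifting series trips the alarm once the MA exceeds the upper limit.
values = [100, 101, 99, 100, 108, 110, 112]
idx = ma_alarms(values, window=3, lower=95.0, upper=105.0)  # -> [5, 6]
```

Because patient results, not QC material, feed the average, an assay bias appearing between scheduled QC runs can be flagged continuously, which is the appeal of MA as a QC instrument.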
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding-window operations are applied to the images to divide them into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring.
An Open-Source Standard T-Wave Alternans Detector for Benchmarking.
Khaustov, A; Nemati, S; Clifford, Gd
2008-09-14
We describe an open-source algorithm suite for T-Wave Alternans (TWA) detection and quantification. The software consists of Matlab implementations of the widely used Spectral Method and Modified Moving Average (MMA) method, with libraries to read both WFDB and ASCII data under Windows and Linux. The software suite can run in batch mode or with a provided graphical user interface to aid waveform exploration. Our software suite was calibrated using an open-source TWA model, described in a partner paper [1] by Clifford and Sameni. For the PhysioNet/CinC Challenge 2008 we obtained a score of 0.881 for the Spectral Method and 0.400 for the MMA method. However, our objective was not to provide the best TWA detector, but rather a basis for detailed discussion of algorithms.
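The MMA idea can be sketched compactly: a running average beat is nudged toward each new beat by a fixed fraction of the difference, so isolated ectopy or noise cannot swing it abruptly. This is a simplified, hedged illustration only; the published MMA algorithm separates even and odd beats and applies nonlinear update limits, neither of which is shown here.

```python
def modified_moving_average(beats, fraction=8):
    """Update a running average beat toward each new beat by 1/fraction
    of the sample-wise difference (simplified MMA-style smoothing)."""
    avg = list(beats[0])
    for beat in beats[1:]:
        avg = [a + (b - a) / fraction for a, b in zip(avg, beat)]
    return avg

# Two 2-sample "beats": the average moves only 1/8 of the way toward
# the second beat, illustrating the bounded update.
avg = modified_moving_average([[0.0, 0.0], [8.0, 8.0]])  # -> [1.0, 1.0]
```

In TWA analysis, such averages are maintained separately for even and odd beats, and the alternans magnitude is taken from the difference between the two averaged beats.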
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X; Xiong, W; Gewanter, R
Purpose: Average or maximum intensity projection (AIP or MIP) images derived from 4DCT images are often used as reference images for target alignment when free-breathing cone-beam CT (FBCBCT) is used for positioning a moving target at treatment. This method can be highly accurate if the patient has stable respiratory motion; however, a patient's breathing pattern often varies irregularly. The purpose of this study is to investigate the effect of irregular respiration on the positioning accuracy of a moving target with FBCBCT. Methods: Eight patients' respiratory motion curves were selected to drive a Quasar phantom with embedded cubic and spherical targets. A 4DCT of the moving phantom was acquired on a CT scanner (Philips Brilliance 16) equipped with a Varian RPM system. The phase-binned 4DCT images and the corresponding MIP and AIP images were transferred into Eclipse for analysis. CBCTs of the phantom driven by the same breathing curves were acquired on a Varian TrueBeam and fused such that the zero positions of the moving targets are the same on both CBCT and AIP images. The sphere and cube volumes and centroid differences (alignment errors) determined by MIP, AIP and FBCBCT images were compared. Results: Compared to the volumes determined by FBCBCT, the volumes of the cube and sphere in MIP images were 22.4%±8.8% and 34.2%±6.2% larger, while the volumes in AIP images were 7.1%±6.2% and 2.7%±15.3% larger, respectively. The alignment errors for the cube and sphere with center-center matches between MIP and FBCBCT were 3.5±3.1mm and 3.2±2.3mm, and the alignment errors between AIP and FBCBCT were 2.1±2.6mm and 2.1±1.7mm, respectively. Conclusion: AIP images appear to be superior to MIP images as reference images. However, irregular respiratory motion could compromise the positioning accuracy of a moving target if a target center-center match is used to align FBCBCT and AIP images.
Mechanical break junctions: enormous information in a nanoscale package.
Natelson, Douglas
2012-04-24
Mechanical break junctions, particularly those in which a metal tip is repeatedly moved in and out of contact with a metal film, have provided many insights into electronic conduction at the atomic and molecular scale, most often by averaging over many possible junction configurations. This averaging throws away a great deal of information, and Makk et al. in this issue of ACS Nano demonstrate that, with both simulated and real experimental data, more sophisticated two-dimensional analysis methods can reveal information otherwise obscured in simple histograms. As additional measured quantities come into play in break junction experiments, including thermopower, noise, and optical response, these more sophisticated analytic approaches are likely to become even more powerful. While break junctions are not directly practical for useful electronic devices, they are incredibly valuable tools for unraveling the electronic transport physics relevant for ultrascaled nanoelectronics.
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.
2009-12-01
We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian-smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway passage of the gray-scale curve through a wide moving average; hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before the positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adapted the tools to the rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition: a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis.
Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
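The zero-crossing idea above can be sketched in a few lines: subtract a wide moving average from the gray-scale curve and count sign changes of the residual. This is a hedged, simplified version; the PEAK tool adds Gaussian smoothing, edge handling, and depth bookkeeping that are omitted here.

```python
def count_layers_zero_crossing(gray, window):
    """Count halfway passages of a gray-scale curve through its own
    wide moving average (each sign change marks a bright/dark boundary)."""
    n = len(gray)
    half = window // 2
    crossings = 0
    prev_sign = 0
    for i in range(n):
        # Centered moving average, truncated at the record's edges.
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline = sum(gray[lo:hi]) / (hi - lo)
        diff = gray[i] - baseline
        sign = (diff > 0) - (diff < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            crossings += 1
        if sign != 0:
            prev_sign = sign
    return crossings

# Two bright/dark cycles produce four crossings of the moving average.
count = count_layers_zero_crossing([0, 5, 0, 5, 0], window=5)  # -> 4
```

Pairing consecutive crossings then yields the bright and dark intervals, i.e. the seasonal layers of each varve couplet.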
Damage evaluation by a guided wave-hidden Markov model based method
NASA Astrophysics Data System (ADS)
Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin
2016-02-01
Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the autoregressive integrated moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the root mean squared error (RMSE) shows that the Loess decomposition method gives the best time series model, because it has the least RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
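The RMSE-based model selection described above amounts to computing one error score per candidate and keeping the smallest. A minimal sketch with hypothetical values (the model names and numbers below are illustrative, not the study's data):

```python
import math

def rmse(actual, forecast):
    """Root mean squared error between observed and forecast values."""
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))

# Pick the candidate model with the least RMSE.
actual = [12, 15, 11, 14]
candidates = {"holt": [13, 14, 12, 15],    # RMSE = 1.0
              "loess": [12, 15, 12, 14]}   # RMSE = 0.5
best = min(candidates, key=lambda name: rmse(actual, candidates[name]))
```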
NASA Astrophysics Data System (ADS)
Watanabe, Koji; Matsuno, Kenichi
This paper presents a new method for simulating flows driven by a body traveling with neither a restriction on its motion nor a limit on the region size. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves in physical space without a limit on region size. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver of the method has to be constructed on a moving grid system, and it is important for the flow solver to satisfy the physical and geometric conservation laws simultaneously on a moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flow driven by any kind of motion of the body in any size of region while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field driven by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.
2003-01-01
The proposed paper presents a variety of novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.
Robust skin color-based moving object detection for video surveillance
NASA Astrophysics Data System (ADS)
Kaliraj, Kalirajan; Manimaran, Sudha
2016-07-01
Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed and the features are classified into foregrounds and backgrounds using a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. The localized foreground skin regions are then confirmed as targets by verifying the region properties, and nontarget regions are rejected using the Euler method. Finally, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well against slowly varying illumination, target rotations, scaling, and fast, abrupt motion changes.
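Otsu's global thresholding, used above for skin color detection, picks the gray level that maximizes the between-class variance of a histogram. A self-contained sketch on a toy histogram follows (illustrative only; the paper applies the method to skin-color channels of real frames):

```python
def otsu_threshold(histogram):
    """Return the threshold level maximizing between-class variance
    w0*w1*(mean0-mean1)^2 over a gray-level histogram (Otsu's method)."""
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    w0 = 0.0       # weight (pixel count) of the lower class
    sum0 = 0.0     # weighted sum of gray levels in the lower class
    best_t, best_var = 0, -1.0
    for t, h in enumerate(histogram):
        w0 += h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * h
        mean0 = sum0 / w0
        mean1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal histogram with clusters at levels 0-1 and 4-5 splits at 1.
t = otsu_threshold([10, 10, 0, 0, 10, 10])  # -> 1
```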
Weston, Victoria C.; Meurer, William J.; Frederiksen, Shirley M.; Fox, Allison K.; Scott, Phillip A.
2016-01-01
Objectives Cluster randomized trials (CRTs) are increasingly utilized to evaluate quality improvement interventions aimed at healthcare providers. In trials testing emergency department interventions, migration of emergency physicians (EPs) between hospitals is an important concern, as contamination may affect both internal and external validity. We hypothesized that geographically isolating emergency departments would prevent migratory contamination in a CRT designed to increase ED delivery of tPA in stroke (The INSTINCT Trial). Methods INSTINCT was a prospective, cluster randomized, controlled trial. Twenty-four Michigan community hospitals were randomly selected in matched pairs for study. Contamination was defined at the cluster level, with substantial contamination defined a priori as >10% of EPs affected. Non-adherence, total crossover (contamination + non-adherence), migration distance and characteristics were determined. Results A total of 307 emergency physicians were identified at all sites. Overall, 7 (2.3%) changed study sites. One moved between control sites, leaving 6 (2.0%) total crossovers. Of these, 2 (0.7%) moved from intervention to control (contamination) and 4 (1.3%) moved from control to intervention (non-adherence). Contamination was observed in 2 of 12 control sites, with 17% and 9% contamination of the total site EP workforce at follow-up, respectively. Average migration distance was 42 miles for all EPs moving in the study and 35 miles for EPs moving from intervention to control sites. Conclusion The mobile nature of emergency physicians should be considered in the design of quality improvement CRTs. Increased reporting of contamination in CRTs is encouraged to clarify thresholds and facilitate CRT design.
ERIC Educational Resources Information Center
Adams, Gerald J.; Dial, Micah
1998-01-01
The cyclical nature of mathematics grades was studied for a cohort of elementary school students from a large metropolitan school district in Texas over six years (average cohort size of 8495). The study used an autoregressive integrated moving average (ARIMA) model. Results indicate that grades do exhibit a significant cyclical pattern. (SLD)
Evidence of redshifts in the average solar line profiles of C IV and Si IV from OSO-8 observations
NASA Technical Reports Server (NTRS)
Roussel-Dupre, D.; Shine, R. A.
1982-01-01
Line profiles of C IV and Si IV obtained by the Colorado spectrometer on OSO-8 are presented. It is shown that the mean profiles are redshifted, with magnitudes varying from 6-20 km/s and a mean of 12 km/s. An apparent average downflow of material in the 50,000-100,000 K temperature range is measured. The redshifts are observed in the line center positions of spatially and temporally averaged profiles and are measured either relative to chromospheric Si I lines or from a comparison of sun-center and limb profiles. The observations of 6-20 km/s redshifts place constraints on the mechanisms that dominate EUV line emission, since they require a strong weighting of the emission toward regions of downward-moving material, and since there is little evidence for corresponding upward-moving material in these lines.
Automatic load forecasting. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.J.; Vemuri, S.
A method which lends itself to on-line forecasting of hourly electric loads is presented and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model, which in turn is used to obtain a parsimonious autoregressive-moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages in comparison to the Box-Jenkins method, including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base, the method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.
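The autoregressive identification step described above can be illustrated in its simplest one-parameter form: a least-squares fit of an AR(1) coefficient. This is a hedged sketch under stated assumptions; the paper's sequential estimator identifies a higher-order AR model and then reduces it to an ARMA form, neither of which is shown here.

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t,
    i.e. phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2)."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Noise-free AR(1) data with phi = 0.5 is recovered exactly.
x = [8.0]
for _ in range(10):
    x.append(0.5 * x[-1])
phi = fit_ar1(x)  # -> 0.5
```

Extending this to order p replaces the scalar ratio with a p-variable least-squares solve, which a sequential (recursive) estimator can update cheaply as each new hourly load arrives.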
Accounting for seasonal patterns in syndromic surveillance data for outbreak detection.
Burr, Tom; Graves, Todd; Klamann, Richard; Michalak, Sarah; Picard, Richard; Hengartner, Nicolas
2006-12-04
Syndromic surveillance (SS) can potentially contribute to outbreak detection capability by providing timely, novel data sources. One SS challenge is that some syndrome counts vary with season in a manner that is not identical from year to year. Our goal is to evaluate the impact of inconsistent seasonal effects on performance assessments (false and true positive rates) in the context of detecting anomalous counts in data that exhibit seasonal variation. To evaluate this impact, we injected synthetic outbreaks into real data and into data simulated from each of two models fit to the same real data. Using real respiratory syndrome counts collected in an emergency department from 2/1/94 to 5/31/03, we varied the length of training data from one to eight years, applied a sequential test to the forecast errors arising from each of eight forecasting methods, and evaluated their detection probabilities (DP) on the basis of 1000 injected synthetic outbreaks. We did the same for each of two corresponding simulated data sets. The less realistic, nonhierarchical model's simulated data set assumed that "one season fits all," meaning that each year's seasonal peak has the same onset, duration, and magnitude. The more realistic simulated data set used a hierarchical model to capture violation of the "one season fits all" assumption. This experiment demonstrated optimistic bias in DP estimates for some of the methods when data simulated from the nonhierarchical model were used for DP estimation, suggesting that, at least for some real data sets and methods, it is not adequate to assume that "one season fits all." For the data we analyze, the "one season fits all" assumption is violated, and DP performance claims based on simulated data that make this assumption tend to be optimistic for all of the forecasting methods considered except the moving average methods.
Moving average methods based on relatively short amounts of training data are competitive on all three data sets, but are particularly competitive on the real data and on data from the hierarchical model, which are the two data sets that violate the "one season fits all" assumption.
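The moving-average-forecast-plus-sequential-test pipeline this record describes can be illustrated with a minimal sketch: forecast daily counts with a short moving average, form forecast errors, and feed them to a one-sided CUSUM. The window length, CUSUM parameters, and injected-outbreak series are illustrative choices, not the values used in the study.

```python
import numpy as np

def ma_forecast_errors(counts, window=7):
    """One-step-ahead moving-average forecast errors for a count series."""
    counts = np.asarray(counts, dtype=float)
    return np.array([counts[t] - counts[t - window:t].mean()
                     for t in range(window, len(counts))])

def cusum_alarms(errors, k=0.5, h=4.0):
    """One-sided CUSUM on standardized forecast errors; returns the
    indices (into `errors`) where the statistic crosses the threshold."""
    sigma = errors.std() or 1.0
    s, alarms = 0.0, []
    for i, e in enumerate(errors):
        s = max(0.0, s + e / sigma - k)
        if s > h:
            alarms.append(i)
            s = 0.0  # reset after signaling
    return alarms

# Baseline counts with a synthetic outbreak injected on days 60-64.
rng = np.random.default_rng(3)
counts = rng.poisson(20, 100)
counts[60:65] += 50
alarms = cusum_alarms(ma_forecast_errors(counts))
```

Because the moving average adapts to whatever the recent baseline is, this detector needs only a short training window — consistent with the finding that moving average methods remain competitive when "one season fits all" is violated.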
Lian, Yanyun; Song, Zhijian
2014-01-01
Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods is sufficiently robust, reliable, and efficient for clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Exploiting the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor. First, the image to be segmented was normalized, rotated, denoised, and bisected. Next, vertical and then horizontal sliding windows were applied: two windows, one in the left and one in the right half of the brain image, moved simultaneously pixel by pixel while the correlation coefficient between them was calculated. The window pair with the minimal correlation coefficient was selected; of the two, the window with the larger average gray value marks the tumor location, and the pixel with the largest gray value within it serves as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square of side length 10 pixels centered at the locating point, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. The average rate of correct location was 93.4% over 575 tumor-containing slices, the average Dice similarity coefficient was 0.77 per scan, and the average processing time was 40 seconds per scan. A fully automated, simple, and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient proves to be a new and effective feature for tumor location.
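The left-right window comparison can be sketched as a toy function: slide mirrored windows over the two halves of an (assumed already normalized and midline-aligned) 2-D image, and return the window offset where the halves are least correlated. The window size, the simple brightness test, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def locate_asymmetry(image, win=8):
    """Slide mirrored windows over the left half and the horizontally
    flipped right half; return the offset (row, col) where the two
    sides are least correlated, plus which side is brighter there."""
    h, w = image.shape
    left = image[:, :w // 2]
    right = np.fliplr(image[:, w - w // 2:])  # mirror so columns align
    best = (None, 2.0)
    for r in range(0, h - win + 1):
        for c in range(0, left.shape[1] - win + 1):
            a = left[r:r + win, c:c + win].ravel()
            b = right[r:r + win, c:c + win].ravel()
            if a.std() == 0 or b.std() == 0:
                continue  # correlation undefined on flat patches
            corr = np.corrcoef(a, b)[0, 1]
            if corr < best[1]:
                best = ((r, c), corr)
    (r, c), _ = best
    a = left[r:r + win, c:c + win]
    b = right[r:r + win, c:c + win]
    side = 'left' if a.mean() > b.mean() else 'right'
    return (r, c), side
```

On a symmetric image with a bright lesion inserted on one side, the minimal-correlation window lands on the lesion, and the brightness comparison identifies the affected side.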
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA), and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM), and Neural Networks (NN). The comparison concerns the multi-step-ahead forecasting properties of the methods. A total of 20 methods are used, 9 of which are ML methods. Twelve simulation experiments are performed, each using 2000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation, and the correlation between the testing and forecasted values. The most important outcome of this study is that no method is uniformly better or worse than the rest. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance across the various metrics is possible to some extent.
Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
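A minimal version of such an experiment — simulate an ARMA series, hold out the last 10 observations, and score a few simple forecasting methods with one error metric — can be sketched as follows. The three methods and the single RMSE metric are drastic simplifications of the study's 20 methods and 18 metrics; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_arma11(n, phi=0.6, theta=0.3, sigma=1.0):
    """Simulate an ARMA(1,1) process (parameter values are arbitrary)."""
    e = rng.normal(0, sigma, n + 1)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t + 1] + theta * e[t]
    return y

def naive_forecast(fit, horizon):
    """Repeat the last observed value."""
    return np.full(horizon, fit[-1])

def mean_forecast(fit, horizon):
    """Repeat the fitting-set mean."""
    return np.full(horizon, fit.mean())

def ar1_forecast(fit, horizon):
    """AR(1) forecast with a moment estimate of the lag-1 coefficient,
    iterated to produce multi-step-ahead predictions."""
    phi = np.corrcoef(fit[:-1], fit[1:])[0, 1]
    mu = fit.mean()
    preds, last = [], fit[-1] - mu
    for _ in range(horizon):
        last = phi * last
        preds.append(mu + last)
    return np.array(preds)

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

series = simulate_arma11(310)          # 310 observations, as in the study
fit, testset = series[:300], series[300:]
results = {name: rmse(f(fit, 10), testset)
           for name, f in [('naive', naive_forecast),
                           ('mean', mean_forecast),
                           ('AR(1)', ar1_forecast)]}
```

Repeating this over thousands of simulated series and several metrics is what turns a case study into the kind of generalized comparison the abstract describes.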
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1995-10-17
A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating first and second signals characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an autoregressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function, and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
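The composite/residual/sequential-test chain in this patent abstract can be illustrated with a minimal sketch: subtract the dominant Fourier modes of a difference signal (the "composite function"), then run Wald's sequential probability ratio test on the residual. The Gaussian mean-shift hypotheses, thresholds, and mode count are illustrative simplifications of the patented method, not its actual parameters.

```python
import numpy as np

def remove_dominant_modes(diff, n_modes=3):
    """Build the 'composite function' from the largest-magnitude Fourier
    modes of the difference signal and subtract it, leaving a residual
    that should be close to white noise."""
    spec = np.fft.rfft(diff)
    composite_spec = np.zeros_like(spec)
    top = np.argsort(np.abs(spec))[-n_modes:]   # indices of dominant modes
    composite_spec[top] = spec[top]
    composite = np.fft.irfft(composite_spec, n=len(diff))
    return diff - composite

def sprt(residual, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test for H0: mean 0 versus
    H1: mean mu1, assuming Gaussian noise with known sigma."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for x in residual:
        llr += (mu1 * x - mu1 ** 2 / 2) / sigma ** 2
        if llr >= upper:
            return 'H1'   # shift detected (e.g., degraded sensor)
        if llr <= lower:
            return 'H0'   # consistent with normal operation
    return 'undecided'
```

With a periodic disturbance plus noise, the residual after mode removal is near-white and the SPRT accepts H0; injecting a drift drives it to H1.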
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1997-05-13
A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating first and second signals characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an autoregressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function, and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
System for monitoring an industrial process and determining sensor status
Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.
1995-01-01
A method and system for monitoring an industrial process and a sensor. The method and system include generating first and second signals characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an autoregressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function, and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
System for monitoring an industrial process and determining sensor status
Gross, Kenneth C.; Hoyer, Kristin K.; Humenik, Keith E.
1997-01-01
A method and system for monitoring an industrial process and a sensor. The method and system include generating first and second signals characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an autoregressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function, and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
Performance of univariate forecasting on seasonal diseases: the case of tuberculosis.
Permanasari, Adhistya Erna; Rambli, Dayang Rohaya Awang; Dominic, P Dhanapal Durai
2011-01-01
Predicting the annual worldwide incidence of disease is desirable so that appropriate policies can be adopted to prevent outbreaks. This chapter considers the performance of different forecasting methods for predicting the future number of disease incidents, especially for seasonal diseases. Six forecasting methods, namely linear regression, moving average, decomposition, Holt-Winters, ARIMA, and artificial neural network (ANN), were applied to monthly tuberculosis data. The derived models met the requirements of a time series with a seasonal pattern and a downward trend. Forecasting performance was compared using the same error measures over the last 5 years of forecast results. The findings indicate that the ARIMA model was the most appropriate, since it obtained a relatively lower error than the other models.
Distractor interference during smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R; Kerzel, Dirk
2006-10-01
When 2 targets for pursuit eye movements move in different directions, the eye velocity follows the vector average (S. G. Lisberger & V. P. Ferrera, 1997). The present study investigates the mechanisms of target selection when observers are instructed to follow a predefined horizontal target and to ignore a moving distractor stimulus. Results show that at 140 ms after distractor onset, horizontal eye velocity is decreased by about 25%. Vertical eye velocity increases or decreases by about 1°/s in the direction opposite the distractor. This deviation varies in size with distractor direction, velocity, and contrast. The effect was present during the initiation and steady-state tracking phases of pursuit, but only when the observer had prior information about target motion. Neither vector-averaging nor winner-take-all models could predict the response to a moving, to-be-ignored distractor during steady-state tracking of a predefined target. The contributions of perceptual mislocalization and spatial attention to the vertical deviation in pursuit are discussed. Copyright 2006 APA.
An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Han, Jianqiang; Tang, Huazhong
2007-01-01
This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve the divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.
A recursive technique for adaptive vector quantization
NASA Technical Reports Server (NTRS)
Lindsay, Robert A.
1989-01-01
Vector Quantization (VQ) is fast becoming an accepted, if not preferred, method for image compression. VQ performs well when compressing all types of imagery, including video, Electro-Optical (EO), Infrared (IR), Synthetic Aperture Radar (SAR), Multi-Spectral (MS), and digital map data. The only requirement is to change the codebook to switch the compressor from one image sensor to another. There are several approaches to designing codebooks for a vector quantizer. Adaptive Vector Quantization is a procedure that designs codebooks as the data are being encoded or quantized. This is done by computing the centroid as a recursive moving average, where the centroids move after every vector is encoded. When the centroid of a fixed set of vectors is computed this way, the result is identical to the batch centroid calculation. This method of centroid calculation can easily be combined with VQ encoding techniques. The quantizer changes after every encoded vector by recursively updating the centroid of minimum distance, which is the one selected by the encoder. Since the quantizer changes definition or state after every encoded vector, the decoder must receive updates to the codebook. This is done as side information by multiplexing bits into the compressed source data.
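The recursive moving-average centroid update can be sketched directly. The class and variable names are my own; the update rule c ← c + (x − c)/(n + 1) is just the standard recursive mean, applied to whichever codeword the encoder selects.

```python
import numpy as np

class AdaptiveVQ:
    """Adaptive vector quantizer: each codeword is the recursive moving
    average of the vectors assigned to it, updated after every encode."""

    def __init__(self, codebook):
        self.codebook = np.asarray(codebook, dtype=float)
        self.counts = np.ones(len(self.codebook))  # one pseudo-count per codeword

    def encode(self, vec):
        vec = np.asarray(vec, dtype=float)
        # Nearest codeword by squared Euclidean distance.
        i = int(np.argmin(np.sum((self.codebook - vec) ** 2, axis=1)))
        # Recursive centroid update: c <- c + (x - c) / (n + 1).
        self.counts[i] += 1
        self.codebook[i] += (vec - self.codebook[i]) / self.counts[i]
        return i
```

Feeding repeated vectors near one point drags only the nearest codeword toward that point, which is exactly why the decoder must receive codebook updates as side information.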
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and a consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents of short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series of ten volunteers walking along an approximately oval path of specified length, we observe that although the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records.
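The "standard central moving average de-trending procedure" mentioned here amounts to subtracting a centered running mean from the series; a minimal sketch follows (the window length is an arbitrary choice for the demo, and edges use a shrinking window).

```python
import numpy as np

def central_ma_detrend(x, window=11):
    """Remove the slow trend estimated by a centered moving average.
    Near the edges the window shrinks to the available samples."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    trend = np.array([x[max(0, i - half):i + half + 1].mean()
                      for i in range(len(x))])
    return x - trend
```

For a purely linear trend the interior residuals vanish exactly, since the centered mean of a line equals its midpoint value; only the shrinking-window edges carry bias.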
Evaluation of Scaling Invariance Embedded in Short Time Series
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and a consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that, using the standard central moving average de-trending procedure, this method can evaluate the scaling exponents of short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series of ten volunteers walking along an approximately oval path of specified length, we observe that although the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that, by means of the proposed method, one can precisely estimate the Shannon entropy from limited records. PMID:25549356
Hinds, Aynslie M; Bechtel, Brian; Distasio, Jino; Roos, Leslie L; Lix, Lisa M
2018-06-05
Residence in public housing, a subsidized and managed government program, may affect health and healthcare utilization. We compared healthcare use in the year before individuals moved into public housing with usage during their first year of tenancy. We also described trends in use. We used linked population-based administrative data housed in the Population Research Data Repository at the Manitoba Centre for Health Policy. The cohort consisted of individuals who moved into public housing in 2009 and 2010. We counted the number of hospitalizations, general practitioner (GP) visits, specialist visits, emergency department visits, and prescriptions drugs dispensed in the twelve 30-day intervals (i.e., months) immediately preceding and following the public housing move-in date. Generalized linear models with generalized estimating equations tested for a period (pre/post-move-in) by month interaction. Odds ratios (ORs), incident rate ratios (IRRs), and means are reported along with 95% confidence intervals (95% CIs). The cohort included 1942 individuals; the majority were female (73.4%) who lived in low income areas and received government assistance (68.1%). On average, the cohort had more than four health conditions. Over the 24 30-day intervals, the percentage of the cohort that visited a GP, specialist, and an emergency department ranged between 37.0% and 43.0%, 10.0% and 14.0%, and 6.0% and 10.0%, respectively, while the percentage of the cohort hospitalized ranged from 1.0% to 5.0%. Generally, these percentages were highest in the few months before the move-in date and lowest in the few months after the move-in date. The period by month interaction was statistically significant for hospitalizations, GP visits, and prescription drug use. The average change in the odds, rate, or mean was smaller in the post-move-in period than in the pre-move-in period. 
Use of some healthcare services declined after people moved into public housing; however, the decrease was observed only in the first few months, and utilization then rebounded. Knowledge of healthcare trends before individuals move in is informative for ensuring that appropriate supports are available to new public housing residents. Further study is needed to determine whether the decreased healthcare utilization following a move is attributable to decreased access.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Yulong; Shu, Chi-wang; Noelle, Sebastian
This note aims at demonstrating the advantage of moving-water well-balanced schemes over still-water well-balanced schemes for the shallow water equations. We concentrate on numerical examples with solutions near a moving-water equilibrium. For such examples, still-water well-balanced methods are not capable of capturing the small perturbations of the moving-water equilibrium and may generate significant spurious oscillations, unless an extremely refined mesh is used. On the other hand, moving-water well-balanced methods perform well in these tests. The numerical examples in this note clearly demonstrate the importance of utilizing moving-water well-balanced methods for solutions near a moving-water equilibrium.
The change of sleeping and lying posture of Japanese black cows after moving into new environment.
Fukasawa, Michiru; Komatsu, Tokushi; Higashiyama, Yumi
2018-04-25
Environmental change is one of the stressful events in livestock production. A change in environment disturbs cow behavior, and cows need several days to reach a stable behavioral pattern; sleeping posture (SP) and lying posture (LP) in particular have been used as indicators of relaxation and acclimatization to the environment. The aim of this study was to examine how long Japanese black cows require to stabilize their SP and LP after moving into a new environment. Seven pregnant Japanese black cows were used. The cows were moved into a new tie-stall shed, and sleeping and lying postures were measured 17 times over 35 experimental days. SP and LP were detected by accelerometers fixed on the middle occipital region and the hip-cross, respectively. Daily total time, frequency, and average bout length of both SP and LP were calculated. Daily SP time was shortest on day 1 and increased to its highest on day 3. It then decreased until day 9, after which it stabilized at about 65 min/day until the end of the experiment. The average SP bout was longest on day 1 and decreased to a stable level by day 7. Daily LP time changed in the same manner as daily SP time. The average LP bout, in contrast, was shortest on day 1 and increased to a stable level by day 7. These results show that pregnant Japanese black cows needed one week to stabilize their SP. However, the average SP and LP bouts followed different patterns of change, even though the changes in daily SP and LP time were similar.
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-model regression to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Ribeiro, Haroldo V; Mendes, Renio S; Lenzi, Ervin K; del Castillo-Mussot, Marcelo; Amaral, Luís A N
2013-01-01
The complexity of chess matches has attracted broad interest since the game's invention. This complexity, and the availability of a large number of recorded matches, makes chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage in over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations, and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent [Formula: see text] characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments.
Ribeiro, Haroldo V.; Mendes, Renio S.; Lenzi, Ervin K.; del Castillo-Mussot, Marcelo; Amaral, Luís A. N.
2013-01-01
The complexity of chess matches has attracted broad interest since the game's invention. This complexity, and the availability of a large number of recorded matches, makes chess an ideal model system for the study of population-level learning of a complex system. We systematically investigate the move-by-move dynamics of the white player's advantage in over seventy thousand high-level chess matches spanning over 150 years. We find that the average advantage of the white player is positive and that it has been increasing over time. Currently, the average advantage of the white player is 0.17 pawns, but it is exponentially approaching a value of 0.23 pawns with a characteristic time scale of 67 years. We also study the diffusion of the move dependence of the white player's advantage and find that it is non-Gaussian, has long-ranged anti-correlations, and that after an initial period with no diffusion it becomes super-diffusive. We find that the duration of the non-diffusive period, corresponding to the opening stage of a match, is increasing in length and exponentially approaching a value of 15.6 moves with a characteristic time scale of 130 years. We interpret these two trends as resulting from learning of the features of the game. Additionally, we find that the exponent characterizing the super-diffusive regime is increasing toward a value of 1.9, close to the ballistic regime. We suggest that this trend is due to the increased broadening of the range of abilities of chess players participating in major tournaments. PMID:23382876
NASA Astrophysics Data System (ADS)
Jerome, N. P.; Orton, M. R.; d'Arcy, J. A.; Feiweier, T.; Tunariu, N.; Koh, D.-M.; Leach, M. O.; Collins, D. J.
2015-01-01
Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power level and data transmission distance are not maintained. In applications like home monitoring or personalized care, a novel filtering strategy is required to overcome the combined effect of these wireless-transmission issues and other ECG measurement noises. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes errors in the bit pattern of received data, if any occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototyped Bluetooth-based ECG module is used to capture ECG at different sampling rates and with the patient in different positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and S-Golay algorithms both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signals where peaks are important for diagnostic purposes.
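The core idea — smooth everywhere except around peaks that carry diagnostic information — can be sketched as follows. This is an illustrative toy, not the published PRASMMA implementation: the threshold-based peak detector, guard width, and window length are all made-up choices.

```python
import numpy as np

def smooth_excluding_peaks(sig, window=5, peak_thresh=None, guard=3):
    """Modified moving average: smooth everywhere except within `guard`
    samples of detected peaks, which are passed through unchanged so
    QRS-like features are not flattened."""
    sig = np.asarray(sig, dtype=float)
    if peak_thresh is None:
        peak_thresh = sig.mean() + 2 * sig.std()  # crude peak criterion
    peaks = np.where(sig > peak_thresh)[0]
    protect = np.zeros(len(sig), dtype=bool)
    for p in peaks:
        protect[max(0, p - guard):p + guard + 1] = True
    half = window // 2
    out = sig.copy()
    for i in range(len(sig)):
        if not protect[i]:
            out[i] = sig[max(0, i - half):i + half + 1].mean()
    return out
```

A plain moving average would shrink the R-wave amplitude along with the noise; excluding the protected regions keeps the peaks intact while the baseline is smoothed.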
Jerome, N P; Orton, M R; d'Arcy, J A; Feiweier, T; Tunariu, N; Koh, D-M; Leach, M O; Collins, D J
2015-01-21
Respiratory motion commonly confounds abdominal diffusion-weighted magnetic resonance imaging, where averaging of successive samples at different parts of the respiratory cycle, performed in the scanner, manifests the motion as blurring of tissue boundaries and structural features and can introduce bias into calculated diffusion metrics. Storing multiple averages separately allows processing using metrics other than the mean; in this prospective volunteer study, median and trimmed mean values of signal intensity for each voxel over repeated averages and diffusion-weighting directions are shown to give images with sharper tissue boundaries and structural features for moving tissues, while not compromising non-moving structures. Expert visual scoring of derived diffusion maps is significantly higher for the median than for the mean, with modest improvement from the trimmed mean. Diffusion metrics derived from mono- and bi-exponential diffusion models are comparable for non-moving structures, demonstrating a lack of introduced bias from using the median. The use of the median is a simple and computationally inexpensive alternative to complex and expensive registration algorithms, requiring only additional data storage (and no additional scanning time) while returning visually superior images that will facilitate the appropriate placement of regions-of-interest when analysing abdominal diffusion-weighted magnetic resonance images, for assessment of disease characteristics and treatment response.
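The voxel-wise combination the study describes is simple to state in code; a sketch follows, with images flattened to lists and a symmetric one-sample trim standing in for the trimmed mean (an illustrative choice, not necessarily the study's exact trim fraction):

```python
from statistics import mean, median

def combine_averages(repeats, method="median"):
    """Combine separately stored excitations voxel-wise.
    repeats: list of images, each a flat list of voxel intensities.
    'trimmed' drops the single lowest and highest value per voxel."""
    combine = {"mean": mean,
               "median": median,
               "trimmed": lambda v: mean(sorted(v)[1:-1])}[method]
    return [combine(vox) for vox in zip(*repeats)]
```

A single motion-corrupted repeat shifts the mean of a voxel but leaves its median unchanged, which is the mechanism behind the sharper tissue boundaries reported above.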
Watson, J T; Ritzmann, R E
1998-01-01
We have combined high-speed video motion analysis of leg movements with electromyogram (EMG) recordings from leg muscles in cockroaches running on a treadmill. The mesothoracic (T2) and metathoracic (T3) legs have different kinematics. While in each leg the coxa-femur (CF) joint moves in unison with the femur-tibia (FT) joint, the relative joint excursions differ between T2 and T3 legs. In T3 legs, the two joints move through approximately the same excursion. In T2 legs, the FT joint moves through a narrower range of angles than the CF joint. In spite of these differences in motion, no differences between the T2 and T3 legs were seen in timing or qualitative patterns of depressor coxa and extensor tibia activity. The average firing frequencies of slow depressor coxa (Ds) and slow extensor tibia (SETi) motor neurons are directly proportional to the average angular velocity of their joints during stance. The average Ds and SETi firing frequency appears to be modulated on a cycle-by-cycle basis to control running speed and orientation. In contrast, while the frequency variations within Ds and SETi bursts were consistent across cycles, the variations within each burst did not parallel variations in the velocity of the relevant joints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batin, E; Depauw, N; MacDonald, S
Purpose: Historically, the set-up for proton post-mastectomy chestwall irradiation at our institution started with positioning the patient using tattoos and lasers. One or more rounds of orthogonal X-rays at gantry 0° and a beamline X-ray at the treatment gantry angle were then taken to finalize the set-up position. As chestwall targets are shallow and superficial, surface imaging is a promising tool for set-up and needs to be investigated. Methods: The orthogonal imaging was entirely replaced by AlignRT™ (ART) images. The beamline X-ray image is kept as a confirmation, based primarily on three opaque markers placed on the skin surface instead of bony anatomy. In the first phase of the process, ART gated images were used to set up the patient, and the same specific point of the breathing curve was used every day. The moves (translations and rotations) computed for each point of the breathing curve during the first five fractions were analyzed for ten patients. During a second phase of the study, ART gated images were replaced by ART non-gated images combined with real-time monitoring. In both cases, ART images were acquired just before treatment to assess the patient position relative to the non-gated CT. Results: The average difference between the maximum and minimum moves, depending on the chosen breathing-curve point, was less than 1.7 mm for all translations and less than 0.7° for all rotations. The average position discrepancy over the course of treatment obtained with ART non-gated images combined with real-time monitoring, taken before treatment relative to the planning CT, was smaller than the average position discrepancy obtained using ART gated images. The X-ray validation images show similar results with both ART imaging processes. Conclusion: The use of ART non-gated images combined with real-time imaging allows positioning post-mastectomy chestwall patients to within 3 mm / 1°.
Monitoring Poisson observations using combined applications of Shewhart and EWMA charts
NASA Astrophysics Data System (ADS)
Abujiya, Mu'azu Ramat
2017-11-01
The Shewhart and exponentially weighted moving average (EWMA) charts for nonconformities are the most widely used procedures of choice for monitoring Poisson observations in modern industries. Individually, the Shewhart and EWMA charts are sensitive only to large and small shifts, respectively. To enhance the detection abilities of the two schemes in monitoring all kinds of shifts in Poisson count data, this study examines the performance of combined applications of the Shewhart and EWMA Poisson control charts. Furthermore, the study proposes modifications based on a well-structured statistical data collection technique, ranked set sampling (RSS), to detect shifts in the mean of a Poisson process more quickly. The relative performance of the proposed Shewhart-EWMA Poisson location charts is evaluated in terms of the average run length (ARL), standard deviation of the run length (SDRL), median run length (MRL), average ratio ARL (ARARL), average extra quadratic loss (AEQL) and performance comparison index (PCI). Consequently, all the new Poisson control charts based on the RSS method are generally superior to most of the existing schemes for monitoring Poisson processes. The use of these combined Shewhart-EWMA Poisson charts is illustrated with an example to demonstrate the practical implementation of the design procedure.
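The combined scheme can be sketched as two charts run side by side, signalling when either exceeds its limits. The control-limit multipliers and smoothing constant below are typical textbook choices, not the paper's RSS-optimised constants:

```python
import math

def shewhart_ewma_poisson(counts, c0, lam=0.2, L_shewhart=3.0, L_ewma=2.7):
    """Monitor Poisson counts with a Shewhart c-chart and an EWMA chart
    simultaneously; return the indices at which either chart signals.
    c0 is the in-control mean count."""
    z = c0  # EWMA statistic starts at the in-control mean
    signals = []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z
        # asymptotic EWMA standard deviation: sqrt(c0 * lam / (2 - lam))
        ewma_sd = math.sqrt(c0 * lam / (2 - lam))
        shewhart_ooc = abs(c - c0) > L_shewhart * math.sqrt(c0)
        ewma_ooc = abs(z - c0) > L_ewma * ewma_sd
        if shewhart_ooc or ewma_ooc:
            signals.append(t)
    return signals
```

The Shewhart limit catches the large jump immediately, while the EWMA statistic would accumulate evidence of a small sustained shift that the Shewhart chart alone would miss.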
NASA Astrophysics Data System (ADS)
Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng
2018-02-01
The vibration signals collected from rolling bearings are usually complex and non-stationary, with heavy background noise. Therefore, it is a great challenge to efficiently learn representative fault features from the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature-learning ability for the compressed data. Finally, an exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling-bearing vibration signals. The results confirm that the developed method is more effective than traditional methods.
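The EMA step itself is a few lines: a shadow copy of the model parameters tracks a decayed average of the training trajectory, and the shadow weights are used at test time. The decay value below is illustrative, not the paper's setting:

```python
def ema_update(shadow, params, decay=0.99):
    """One EMA step over model parameters (flat lists here):
    shadow <- decay * shadow + (1 - decay) * params.
    Evaluating with the shadow weights is the usual way EMA
    improves the generalisation of a deep model."""
    return [decay * s + (1 - decay) * p for s, p in zip(shadow, params)]
```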
FARMWORKERS, A REPRINT FROM THE 1966 MANPOWER REPORT.
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC.
Although the average standard of living of farm people has been rising steadily, they continue to face severe problems of underemployment and poverty. The average per capita income of farm residents is less than two-thirds that of the nonfarm population. Millions have moved to cities, leaving stagnating rural communities, and increasing the city…
Severe Weather Guide - Mediterranean Ports. 7. Marseille
1988-03-01
the afternoon. Upper-level westerlies and the associated storm track move northward during summer, so extratropical cyclones and associated…autumn as the extratropical storm track moves southward. Precipitation amount is the highest of the year, with an average of 3 inches (76 mm) for the…Subject terms: storm haven, Mediterranean meteorology, Marseille port
Polymer Coatings Degradation Properties
1985-02-01
undertaken (24). The Box-Jenkins approach first evaluates the partial autocorrelation function and determines the order of the moving-average memory function… Tables 15 and 16 show the results of the partial autocorrelation plots. Second-order moving averages with the appropriate lags were… coated films. Kaempf, Guenter; Papenroth, Wolfgang; Kunststoffe, 1982, Vol. 72, No. 7, pp. 424-429. Parameters influencing the accelerated
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.
2005-01-01
We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.
Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.
Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A
2016-01-26
Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At one or more visits among 675 older men from the Normative Aging Study (1252 observations), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze the mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in the 28-day BC moving average was associated with a 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of the 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001) than at the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short- to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response that attenuates the cardiac effects of BC.
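The exposure metric here is a trailing moving average of daily concentrations; a sketch of the construction (the data shape is hypothetical, not the study's code):

```python
def trailing_moving_average(daily, window=28):
    """Trailing moving average: the value for day t averages days
    t-window+1 .. t, and is defined only once a full window of data
    exists (as for a 28-day black-carbon average)."""
    return {t: sum(daily[t - window + 1:t + 1]) / window
            for t in range(window - 1, len(daily))}
```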
NASA Astrophysics Data System (ADS)
Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing
2017-02-01
UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1, 1)) and the Autoregressive Integrated Moving Average (ARIMA) model is developed. The main idea is as follows. Firstly, the UT1-UTC data are preprocessed by removing the leap seconds and the Earth's zonal harmonic tides to get UT1R-TAI data. Periodic terms are estimated and removed by least squares to get UT2R-TAI. Then the linear terms of the UT2R-TAI data are modelled by GM(1, 1), and the residual terms are modelled by ARIMA. Finally, the UT2R-TAI prediction is performed based on the combined GM(1, 1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap-second correction and the Earth's zonal harmonic tidal correction. The results show that the proposed model can predict UT1-UTC effectively, with higher medium- and long-term (from 32 to 360 days) accuracy than LS + AR, LS + MAR and WLS + MAR.
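A bare-bones GM(1,1) forecaster illustrates the grey-model half of the combination; the ARIMA residual step is omitted, and this follows the standard GM(1,1) construction rather than the authors' implementation:

```python
import math

def gm11_forecast(x0, steps=1):
    """Standard GM(1,1): accumulate the series (AGO), fit the grey
    differential equation dx1/dt + a*x1 = b by least squares on the
    background values, and extrapolate the accumulated series."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # accumulation (AGO)
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]   # background values
    # least squares for x0[k+1] = -a*z[k] + b
    m = n - 1
    szz, sz = sum(v * v for v in z), sum(z)
    sy = sum(x0[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x0[1:]))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):  # continuous-time solution at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]
```

Note that GM(1,1) targets series with a roughly exponential or linear trend; a constant series makes the fitted `a` zero and the closed form degenerate, which is one reason residual modelling (here, ARIMA) is paired with it.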
NASA Technical Reports Server (NTRS)
Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.
1996-01-01
Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye-velocity records suffer from broadband noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye-velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did the normal subjects.
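The combined filter is straightforward to sketch: a median filter first removes impulsive spikes that a moving average would smear, then the moving average attacks the remaining broadband noise. The window lengths below are illustrative, not the study's settings:

```python
from statistics import mean, median

def median_then_moving_average(velocity, med_win=5, avg_win=5):
    """Two-stage filter for noisy eye-velocity records: sliding median
    (spike rejection) followed by sliding mean (broadband smoothing)."""
    def sliding(seq, win, fn):
        half = win // 2
        return [fn(seq[max(0, i - half): i + half + 1])
                for i in range(len(seq))]
    return sliding(sliding(velocity, med_win, median), avg_win, mean)
```

Applying the moving average alone to a record with a single tracker dropout spike would spread the spike over the whole window; the median stage removes it entirely first.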
Sequential motion of the ossicular chain measured by laser Doppler vibrometry.
Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Fujiwara, Kazunori; Takeuchi, Hiromi
2017-12-01
In order to help a surgeon make the best decision, a more objective method of measuring ossicular motion is required. A laser Doppler vibrometer was mounted on a surgical microscope. To measure ossicular-chain vibrations, eight patients with cochlear implants were investigated. To assess the motions of the ossicular chain, velocities at five points were measured with tonal stimuli of 1 and 3 kHz, which yielded reproducible results. The sequential amplitude change at each point was calculated with phase shifting from the tonal stimulus. Motion of the ossicular chain was visualized from the averaged results using a graphics application. The head of the malleus and the body of the incus showed synchronized movement as one unit. In contrast, the stapes (incudostapedial joint and posterior crus) moved synchronously in opposite phase to the malleus and incus. The amplitudes at 1 kHz were almost twice those at 3 kHz. Our results show that the malleus-incus unit and the stapes move with a phase difference.
DNA conformation on surfaces measured by fluorescence self-interference.
Moiseev, Lev; Unlü, M Selim; Swan, Anna K; Goldberg, Bennett B; Cantor, Charles R
2006-02-21
The conformation of DNA molecules tethered to the surface of a microarray may significantly affect the efficiency of hybridization. Although a number of methods have been applied to determine the structure of the DNA layer, they are not very sensitive to variations in the shape of DNA molecules. Here we describe the application of an interferometric technique called spectral self-interference fluorescence microscopy to the precise measurement of the average location of a fluorescent label in a DNA layer relative to the surface and thus determine specific information on the conformation of the surface-bound DNA molecules. Using spectral self-interference fluorescence microscopy, we have estimated the shape of coiled single-stranded DNA, the average tilt of double-stranded DNA of different lengths, and the amount of hybridization. The data provide important proofs of concept for the capabilities of novel optical surface analytical methods of the molecular disposition of DNA on surfaces. The determination of DNA conformations on surfaces and hybridization behavior provide information required to move DNA interfacial applications forward and thus impact emerging clinical and biotechnological fields.
Food price seasonality in Africa: Measurement and extent.
Gilbert, Christopher L; Christiaensen, Luc; Kaminski, Jonathan
2017-02-01
Everyone knows about seasonality. But what exactly do we know? This study systematically measures seasonal price gaps at 193 markets for 13 food commodities in seven African countries. It shows that the commonly used dummy variable or moving average deviation methods to estimate the seasonal gap can yield substantial upward bias. This can be partially circumvented using trigonometric and sawtooth models, which are more parsimonious. Among staple crops, seasonality is highest for maize (33 percent on average) and lowest for rice (16½ percent). This is two and a half to three times larger than in the international reference markets. Seasonality varies substantially across market places but maize is the only crop in which there are important systematic country effects. Malawi, where maize is the main staple, emerges as exhibiting the most acute seasonal differences. Reaching the Sustainable Development Goal of Zero Hunger requires renewed policy attention to seasonality in food prices and consumption.
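The trigonometric model's parsimony is that two Fourier coefficients at the annual frequency replace eleven monthly dummies; a sketch of estimating the seasonal gap from monthly prices follows (this illustrates the general idea, not the paper's exact estimator):

```python
import math
from statistics import mean

def trig_seasonal_gap(monthly_prices):
    """Fit log price against sin/cos at the annual frequency (the
    least-squares coefficients for full years are the sample Fourier
    coefficients); the peak-to-trough seasonal gap on the log scale
    is twice the fitted amplitude."""
    n = len(monthly_prices)
    y = [math.log(p) for p in monthly_prices]
    ybar = mean(y)
    a = 2 / n * sum((yi - ybar) * math.cos(2 * math.pi * t / 12)
                    for t, yi in enumerate(y))
    b = 2 / n * sum((yi - ybar) * math.sin(2 * math.pi * t / 12)
                    for t, yi in enumerate(y))
    return 2 * math.hypot(a, b)
```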
Improved modified energy ratio method using a multi-window approach for accurate arrival picking
NASA Astrophysics Data System (ADS)
Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun
2017-04-01
To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
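The MER statistic and the moving-average smoothing step of the IMER variant can be sketched as follows; the window lengths are illustrative, and the paper's additional third-window refinement is omitted:

```python
def mer_pick(trace, win=10, smooth=5):
    """Modified-energy-ratio first-break picker: the ratio of trailing
    to leading window energy, scaled by local amplitude and cubed;
    a short moving average then suppresses high-frequency fluctuations
    (as in the IMER variant) before picking the maximum."""
    n = len(trace)
    eps = 1e-12
    mer = [0.0] * n
    for i in range(win, n - win):
        e_post = sum(v * v for v in trace[i:i + win])
        e_pre = sum(v * v for v in trace[i - win:i]) + eps
        mer[i] = (e_post / e_pre * abs(trace[i])) ** 3
    half = smooth // 2
    smoothed = [sum(mer[max(0, i - half): i + half + 1]) /
                len(mer[max(0, i - half): i + half + 1]) for i in range(n)]
    return max(range(n), key=smoothed.__getitem__)
```

On a synthetic trace with weak noise followed by a strong arrival, the smoothed MER curve peaks at the onset, which is the sample reported as the pick.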
Monthly reservoir inflow forecasting using a new hybrid SARIMA genetic programming approach
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Ebtehaj, Isa
2017-03-01
Forecasting reservoir inflow is one of the most important components of water resources and hydroelectric system operation management. Seasonal autoregressive integrated moving average (SARIMA) models have been frequently used for predicting river flow. SARIMA models are linear and do not consider the random component of statistical data. To overcome this shortcoming, monthly inflow is predicted in this study based on a combination of seasonal autoregressive integrated moving average (SARIMA) and gene expression programming (GEP) models, which is a new hybrid method (SARIMA-GEP). To this end, a four-step process is employed. First, the monthly inflow datasets are pre-processed. Second, the datasets are modelled linearly with SARIMA and, in the third stage, the non-linearity of the residual series caused by linear modelling is evaluated. After confirming the non-linearity, the residuals are modelled in the fourth step using a gene expression programming (GEP) method. The proposed hybrid model is employed to predict the monthly inflow to the Jamishan Dam in west Iran. Thirty years' worth of site measurements of monthly reservoir dam inflow with extreme seasonal variations are used. The results of this hybrid model (SARIMA-GEP) are compared with SARIMA, GEP, artificial neural network (ANN) and SARIMA-ANN models. The results indicate that the SARIMA-GEP model (R2=78.8, VAF=78.8, RMSE=0.89, MAPE=43.4, CRM=0.053) outperforms SARIMA and GEP, while SARIMA-ANN (R2=68.3, VAF=66.4, RMSE=1.12, MAPE=56.6, CRM=0.032) displays better performance than the SARIMA and ANN models. A comparison of the two hybrid models indicates the superiority of SARIMA-GEP over the SARIMA-ANN model.
Ward, S T; Hancox, A; Mohammed, M A; Ismail, T; Griffiths, E A; Valori, R; Dunckley, P
2017-06-01
The aim of this study was to determine the number of OGDs (oesophago-gastro-duodenoscopies) trainees need to perform to acquire competency in terms of successful unassisted completion to the second part of the duodenum 95% of the time. OGD data were retrieved from the trainee e-portfolio developed by the Joint Advisory Group on GI Endoscopy (JAG) in the UK. All trainees were included unless they were known to have a baseline experience of >20 procedures or had submitted data for <20 procedures. The primary outcome measure was OGD completion, defined as passage of the endoscope to the second part of the duodenum without physical assistance. The number of OGDs required to achieve a 95% completion rate was calculated by the moving average method and learning curve cumulative summation (LC-Cusum) analysis. To determine which factors were independently associated with OGD completion, a mixed effects logistic regression model was constructed with OGD completion as the outcome variable. Data were analysed for 1255 trainees over 288 centres, representing 243 555 OGDs. By the moving average method, trainees attained a 95% completion rate at 187 procedures. By LC-Cusum analysis, after 200 procedures, >90% of trainees had attained a 95% completion rate. Total number of OGDs performed, trainee age and experience in lower GI endoscopy were factors independently associated with OGD completion. There are limited published data on the OGD learning curve. This is the largest study to date analysing the learning curve for competency acquisition. The JAG competency requirement of 200 procedures appears appropriate.
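The moving-average criterion — the completion rate over the most recent procedures crossing 95% — can be sketched as below; the window size is illustrative, and the study's exact windowing may differ:

```python
def procedures_to_competency(outcomes, window=20, target=0.95):
    """outcomes: 1 = unassisted completion to D2, 0 = failure, in the
    order performed. Returns the procedure count at which the
    moving-average completion rate over the last `window` procedures
    first reaches `target`, or None if it never does."""
    for i in range(window, len(outcomes) + 1):
        if sum(outcomes[i - window:i]) / window >= target:
            return i
    return None
```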
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
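With only day-of-week dummy variables, the benchmark multiple linear regression reduces to per-weekday means of past volumes; a sketch of that reduction follows (taking history[0] as weekday 0 is a simplifying assumption, and the study's benchmark also includes other calendar effects):

```python
from statistics import mean

def calendar_forecast(history, horizon=7):
    """Calendar-variable benchmark: the least-squares fit of volume on
    day-of-week dummies is the per-weekday mean, extrapolated forward."""
    by_dow = {d: [v for i, v in enumerate(history) if i % 7 == d]
              for d in range(7)}
    dow_mean = {d: mean(vs) for d, vs in by_dow.items()}
    start = len(history)
    return [dow_mean[(start + h) % 7] for h in range(horizon)]
```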
Armentrout, G.W.; Larson, L.R.
1984-01-01
Time-of-travel and dispersion measurements made during a dye study November 7-8, 1978, are presented for a reach of the North Platte River from Casper, Wyo., to a bridge 2 miles downstream from below the Dave Johnston Power Plant. Rhodamine WT dye was injected into the river at Casper, and the resultant dye cloud was traced by sampling as it moved downstream. Samples were taken in three equal-flow sections of the river 's lateral transect at three sites, then analyzed in a fluorometer. The flow in the river was 940 cubic feet per second. The data consist of measured stream mileages and time, distance, and concentration graphs of the dye cloud. The peak concentration traveled through the reach in 24 hours, averaging 1.5 miles per hour; the leading edge took about 22 hours, averaging 1.7 miles per hour; and the trailing edge took 35 hours, averaging 1.0 mile per hour. Data from this study were compared with methods for estimating time of travel for a range of stream discharges.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the start of the zero accident campaign through quantitative time-series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time, and the achievement probability for an efficient industrial environment. In this paper, the MFC (Microsoft Foundation Classes) framework in Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
David W. Williams; Guohong Li; Ruitong Gao
2004-01-01
Movements of 55 Anoplophora glabripennis (Motschulsky) adults were monitored on 200 willow trees, Salix babylonica L., at a site approximately 80 km southeast of Beijing, China, for 9-14 d in an individual mark-recapture study using harmonic radar. The average movement distance was approximately 14 m, with many beetles not moving at all and others moving >90 m. The rate of movement...
ERIC Educational Resources Information Center
Huang, Min-Hsiung
2009-01-01
Reports of international studies of student achievement often receive public attention worldwide. However, this attention overly focuses on the national rankings of average student performance. To move beyond the simplistic comparison of national mean scores, this study investigates (a) country differences in the measures of variability as well as…
Comparing methods for modelling spreading cell fronts.
Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E
2014-07-21
Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. 
Simulations of moving effect of coastal vegetation on tsunami damping
NASA Astrophysics Data System (ADS)
Tsai, Ching-Piao; Chen, Ying-Chi; Octaviani Sihombing, Tri; Lin, Chang
2017-05-01
A coupled wave-vegetation simulation is presented to study the effect of moving coastal vegetation on tsunami wave height damping. The problem is idealized as solitary wave propagation through a group of emergent cylinders. The numerical model is based on the Reynolds-averaged Navier-Stokes equations with a renormalization group turbulence closure model, using the volume-of-fluid technique. The general moving object (GMO) model of the computational fluid dynamics (CFD) code Flow-3D is applied to simulate the coupled motion of the vegetation with the waves dynamically. The damping of wave height and the turbulent kinetic energy along moving and stationary cylinders are discussed. The simulated results show that the damping of wave height and of turbulent kinetic energy by moving cylinders is clearly less than that by stationary cylinders, implying that wave decay by coastal vegetation may be overestimated if the vegetation is represented as stationary.
Applications of Space-Filling-Curves to Cartesian Methods for CFD
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Murman, S. M.; Berger, M. J.
2003-01-01
This paper presents a variety of novel uses of space-filling curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
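The SFC reordering idea can be illustrated with a minimal sketch (not the authors' implementation, and the helper names are hypothetical): a Morton (Z-order) key is computed for each Cartesian cell by interleaving the bits of its integer coordinates; sorting by key costs O(N log N), after which a single pass over contiguous slices of the sorted list yields a partitioning.

```python
def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of integer cell coordinates to get a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def partition(cells, nparts):
    """Sort cells along the SFC, then cut the curve into contiguous slices."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    size = -(-len(ordered) // nparts)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
parts = partition(cells, 4)  # four spatially coherent sub-domains
```

Because cells adjacent along the curve tend to be adjacent in space, each contiguous slice is a spatially compact sub-domain, which is what makes the single-pass partitioner effective.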
An Extension of the Time-Spectral Method to Overset Solvers
NASA Technical Reports Server (NTRS)
Leffell, Joshua Isaac; Murman, Scott M.; Pulliam, Thomas
2013-01-01
Relative motion in the Cartesian or overset framework causes certain spatial nodes to move in and out of the physical domain as they are dynamically blanked by moving solid bodies. This poses a problem for the conventional Time-Spectral approach, which expands the solution at every spatial node into a Fourier series spanning the period of motion. The proposed extension to the Time-Spectral method treats unblanked nodes in the conventional manner but expands the solution at dynamically blanked nodes in a basis of barycentric rational polynomials spanning partitions of contiguously defined temporal intervals. Rational polynomials avoid Runge's phenomenon on the equidistant time samples of these sub-periodic intervals. Fourier- and rational polynomial-based differentiation operators are used in tandem to provide a consistent hybrid Time-Spectral overset scheme capable of handling relative motion. The hybrid scheme is tested with a linear model problem and implemented within NASA's OVERFLOW Reynolds-averaged Navier-Stokes (RANS) solver. The hybrid Time-Spectral solver is then applied to inviscid and turbulent RANS cases of plunging and pitching airfoils and compared to time-accurate and experimental data. A limiter was applied in the turbulent case to avoid undershoots in the undamped turbulent eddy viscosity while maintaining accuracy. The hybrid scheme matches the performance of the conventional Time-Spectral method and converges to the time-accurate results with increased temporal resolution.
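The Fourier ingredient of the conventional Time-Spectral expansion can be sketched generically (an FFT-based derivative of a periodic signal sampled at N equispaced points; this is an illustration of spectral time differentiation, not the OVERFLOW implementation):

```python
import numpy as np

def spectral_derivative(u, period=2 * np.pi):
    """Differentiate a periodic signal sampled at N equispaced points:
    multiply each Fourier coefficient by i*k and transform back."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=period / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

t = np.linspace(0.0, 2 * np.pi, 32, endpoint=False)
du = spectral_derivative(np.sin(t))  # should match cos(t) closely
```

For smooth periodic solutions this operator is spectrally accurate, which is why the Time-Spectral method needs so few time instances per period; it is exactly this global-in-time coupling that breaks down at dynamically blanked nodes and motivates the rational-polynomial extension.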
Computational aeroelasticity using a pressure-based solver
NASA Astrophysics Data System (ADS)
Kamakoti, Ramji
A computational methodology for performing fluid-structure interaction computations for three-dimensional elastic wing geometries is presented. The flow solver used is based on an unsteady Reynolds-Averaged Navier-Stokes (RANS) model. A well-validated k-ε turbulence model with wall function treatment for the near-wall region was used to perform turbulent flow calculations. Relative merits of alternative flow solvers were investigated. The predictor-corrector-based Pressure-Implicit with Splitting of Operators (PISO) algorithm was found to be computationally economical for unsteady flow computations. The wing structure was modeled using Bernoulli-Euler beam theory. A fully implicit time-marching scheme (using the Newmark integration method) was used to integrate the equations of motion for the structure. Bilinear interpolation and linear extrapolation techniques were used to transfer the necessary information between the fluid and structure solvers. Geometry deformation was accounted for by using a moving boundary module. The moving grid capability was based on a master/slave concept and transfinite interpolation techniques. Since computations were performed on a moving mesh system, the geometric conservation law must be preserved; this is achieved by appropriately evaluating the Jacobian values associated with each cell. Accurate computation of contravariant velocities for unsteady flows using the momentum interpolation method on collocated, curvilinear grids was also addressed. Flutter computations were performed for the AGARD 445.6 wing at subsonic, transonic, and supersonic Mach numbers. Unsteady computations were performed at various dynamic pressures to predict the flutter boundary. Results showed favorable agreement with experimental and previous numerical results. The computational methodology exhibited the capability to predict both qualitative and quantitative features of aeroelasticity.
NASA Astrophysics Data System (ADS)
Zou, Tianhao; Zuo, Zhengrong
2018-02-01
Target detection is a fundamental problem in computer vision and image processing. A common real-world case is the detection of a small moving target from a moving platform, for which commonly used methods, such as registration-based background suppression, rarely achieve satisfactory results. To address this problem, we introduce a global-local registration based suppression method. Unlike traditional approaches, the proposed global-local registration strategy considers both the global consistency and the local diversity of the background, and obtains better performance than standard background suppression methods. In this paper, we first discuss the characteristics of small moving target detection on an unstable platform. We then introduce the new strategy and conduct an experiment to confirm its robustness to noise. Finally, we confirm that the background suppression method based on the global-local registration strategy performs better for moving target detection on a moving platform.
NASA Astrophysics Data System (ADS)
Mittelstaedt, Eric; Davaille, Anne; van Keken, Peter E.; Gracias, Nuno; Escartin, Javier
2010-10-01
Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV). The deformation vectors are determined by a cross correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ˜5%-7% and are most accurate in regions of pervasive apparent background deformation which is commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck'09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ˜10°C-15°C effluent reach ˜5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.
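The cross-correlation step underlying PIV can be sketched as follows (a minimal FFT-based version under periodic assumptions, not the authors' code): the integer displacement of one image window relative to another is read off the peak of their circular cross-correlation.

```python
import numpy as np

def xcorr_shift(ref, img):
    """Integer (dy, dx) displacement of img relative to ref, taken from
    the peak of their circular cross-correlation (computed with FFTs)."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the half-way point correspond to negative shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                    # random background texture
frame2 = np.roll(frame1, (3, -5), axis=(0, 1))   # apparent deformation
shift = xcorr_shift(frame1, frame2)              # recovers (3, -5)
```

Real PIV codes refine this with windowing and sub-pixel peak interpolation, but the principle, correlation peak equals apparent displacement, is the same one DFV applies twice: once to images, then again to the deformation fields.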
Experiences of patients with multiple sclerosis from group counseling.
Mazaheri, Mina; Fanian, Nasrin; Zargham-Boroujeni, Ali
2011-01-01
Group counseling is one of the most important methods in the somatic and psychological rehabilitation of multiple sclerosis (M.S.) patients. Knowing these patients' experiences, feelings, beliefs, and emotions arising from group-based learning is necessary to show the importance of group discussion for their quality of life. This study was done to explore the experiences of M.S. patients with group training. This was a qualitative study using a phenomenological method. The samples were selected using purposeful sampling. Ten patients from the M.S. society who had completed group training were included in the study. The group training consisted of seven weekly, voluntary sessions. The participants were interviewed using in-depth interviews. Each interview lasted 30-50 minutes and was recorded digitally and transferred to a compact disc for transcription and analysis. The data were analyzed using the 7-step Colaizzi method. The data were transformed into 158 codes, 12 sub-concepts, and 4 main concepts: emotional consequences, communication, quality of life, and needs. M.S. can lead to multiple problems in patients, such as somatic, behavioral, emotional, and social disorders. Group psychotherapy is one of the methods that can decrease these problems and improve the rehabilitation of the patients. Group discussion helps patients to overcome adverse feelings, behaviors, and thoughts and guides them toward a meaningful life. It can also improve the quality of life and mental health of the patients.
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms, and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts, the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
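The bias detection curve construction can be sketched as follows (the sodium-like parameters, mean 140, SD 3, a 10-point moving average, and control limits 136-144, are illustrative assumptions, not the authors' assay settings): introduce a fixed bias into simulated results, run the MA, and record how many results are needed before an alarm.

```python
import random

def results_to_detection(bias, n=10, lower=136.0, upper=144.0, seed=1):
    """Feed simulated results (mean 140, SD 3) plus a fixed bias into an
    n-point moving average; return the number of results processed before
    the MA crosses a control limit (None if never within 10000 results)."""
    rng = random.Random(seed)
    window = []
    for i in range(1, 10001):
        window.append(rng.gauss(140.0, 3.0) + bias)
        if len(window) > n:
            window.pop(0)
        if len(window) == n:
            ma = sum(window) / n
            if not (lower <= ma <= upper):
                return i
    return None

def detection_curve(biases, trials=25):
    """Median number of results needed for detection, per bias level."""
    curve = {}
    for b in biases:
        runs = sorted(results_to_detection(b, seed=s) or 10001
                      for s in range(trials))
        curve[b] = runs[trials // 2]
    return curve

curve = detection_curve([2.0, 4.0, 6.0])  # larger bias -> faster detection
```

Plotting `curve` values against bias reproduces the shape of a bias detection curve; repeating the simulation for different MA settings and comparing curves is the optimization step the abstract describes.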
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
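The quoted probabilities follow directly from the Poisson distribution. A minimal check, with decadal rates of roughly 6.5, 0.67, and 0.2 eruptions assumed for VEI>=4, >=5, and >=6 respectively (values inferred here to match the stated percentages, not taken from the paper), is:

```python
import math

def prob_at_least_one(rate):
    """P(N >= 1) for a Poisson count with the given expected number of events."""
    return 1.0 - math.exp(-rate)

# assumed expected eruptions per decade (illustrative, chosen to match the text)
rates = {"VEI>=4": 6.5, "VEI>=5": 0.67, "VEI>=6": 0.20}
probs = {k: prob_at_least_one(r) for k, r in rates.items()}
# yields roughly 0.998, 0.49, and 0.18, matching the >99%, ~49%, ~18% quoted
```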
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation process parameters suffer from large prediction errors that are continuous and cumulative. To address this, an adaptive particle swarm neural network forecasting method is proposed, in which an autoregressive moving average (ARMA) error correction procedure compensates the neural network predictions to improve accuracy. The model was validated against production data from an alumina plant evaporation process; compared with the traditional model, the prediction accuracy of the new model is greatly improved, and it can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
NASA Astrophysics Data System (ADS)
Russell, John L.; Campbell, John L.; Boyd, Nicholas I.; Dias, Johnny F.
2018-02-01
The newly developed GUMAP software creates element maps from OMDAQ list mode files, displays these maps individually or collectively, and facilitates on-screen definitions of specified regions from which a PIXE spectrum can be built. These include a free-hand region defined by moving the cursor. The regional charge is entered automatically into the spectrum file in a new GUPIXWIN-compatible format, enabling a GUPIXWIN analysis of the spectrum. The code defaults to the OMDAQ dead time treatment but also facilitates two other methods for dead time correction in sample regions with count rates different from the average.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ge, Y; Keall, P; Poulsen, P
Purpose: Multiple targets with large intrafraction independent motion are often involved in advanced prostate, lung, abdominal, and head and neck cancer radiotherapy. The current standard of care treats these with the originally planned fields, jeopardizing the treatment outcomes. A real-time multi-leaf collimator (MLC) tracking method has been developed to address this problem for the first time. This study evaluates the geometric uncertainty of the multi-target tracking method. Methods: Four treatment scenarios are simulated based on a prostate IMAT plan to treat a moving prostate target and a static pelvic node target: 1) real-time multi-target MLC tracking; 2) real-time prostate-only MLC tracking; 3) correcting for prostate interfraction motion at setup only; and 4) no motion correction. The geometric uncertainty of the treatment is assessed by the sum of the erroneously underexposed target area and overexposed healthy tissue areas for each individual target. Two patient-measured prostate trajectories of average 2 and 5 mm motion magnitude are used for simulations. Results: Real-time multi-target tracking accumulates the least uncertainty overall. As expected, it covers the static nodes similarly well as the no-motion-correction treatment and covers the moving prostate similarly well as real-time prostate-only tracking. Multi-target tracking reduces >90% of the uncertainty for the static nodal target compared to real-time prostate-only tracking or interfraction motion correction. For the prostate target, depending on the motion trajectory, which affects the uncertainty due to leaf fitting, multi-target tracking may or may not perform better than correcting for interfraction prostate motion by shifting the patient at setup, but it reduces ∼50% of the uncertainty compared to no motion correction. Conclusion: The developed real-time multi-target MLC tracking can adapt to independently moving targets better than other available treatment adaptations.
This will enable PTV margin reduction to minimize healthy tissue toxicity while maintaining tumor coverage when treating advanced disease with independently moving targets. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.
NASA Astrophysics Data System (ADS)
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-11-01
Nowadays, the study of volatility, especially in the stock market, has gained considerable attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics include option pricing, estimation of financial derivatives, hedging of investment risk, and so on. There are various ways to measure volatility; in this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different business sectors in Malaysia, namely primary, secondary, and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated. Results show that different patterns in the closing stock prices and returns give different volatility values under the simple method and the EWMA method.
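The two estimators can be sketched in a few lines (the returns are illustrative, and the smoothing constant lambda = 0.94 is the common RiskMetrics choice, an assumption rather than necessarily the study's value):

```python
import math

def simple_volatility(returns):
    """Sample standard deviation of the returns."""
    m = sum(returns) / len(returns)
    return math.sqrt(sum((r - m) ** 2 for r in returns) / (len(returns) - 1))

def ewma_volatility(returns, lam=0.94):
    """EWMA variance recursion: var_t = lam*var_{t-1} + (1-lam)*r_{t-1}^2,
    seeded here with the first squared return."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var)

daily_returns = [0.01, -0.004, 0.002, -0.012, 0.007, 0.003, -0.005]
daily_vol = ewma_volatility(daily_returns)
annual_vol = daily_vol * math.sqrt(252)  # annualize assuming 252 trading days
```

The EWMA weights recent observations more heavily, so it reacts faster to volatility clustering than the equally weighted standard deviation, which is the behavioral difference the study's comparison highlights.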
Deep learning architecture for air quality predictions.
Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe
2016-11-01
With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows the temporal stability in all seasons. Moreover, a comparison with the spatiotemporal artificial neural network (STANN), auto regression moving average (ARMA), and support vector regression (SVR) models demonstrates that the proposed method of performing air quality predictions has a superior performance.
The Hurst exponent in energy futures prices
NASA Astrophysics Data System (ADS)
Serletis, Apostolos; Rosenberg, Aryeh Adam
2007-07-01
This paper extends the work in Elder and Serletis [Long memory in energy futures prices, Rev. Financial Econ., forthcoming, 2007] and Serletis et al. [Detrended fluctuation analysis of the US stock market, Int. J. Bifurcation Chaos, forthcoming, 2007] by re-examining the empirical evidence for random walk type behavior in energy futures prices. In doing so, it uses daily data on energy futures traded on the New York Mercantile Exchange over the period from July 2, 1990 to November 1, 2006, and a statistical physics approach, the 'detrending moving average' technique, which provides a reliable framework for testing informational efficiency in financial markets, as shown by Alessio et al. [Second-order moving average and scaling of stochastic time series, Eur. Phys. J. B 27 (2002) 197-200] and Carbone et al. [Time-dependent Hurst exponent in financial time series, Physica A 344 (2004) 267-271; Analysis of clusters formed by the moving average of a long-range correlated time series, Phys. Rev. E 69 (2004) 026105]. The results show that energy futures returns display long memory and that the particular form of long memory is anti-persistence.
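The detrending moving average idea can be sketched as follows (a minimal backward-moving-average variant, not the exact estimator of Alessio et al.): detrend the series with an n-point moving average and estimate the Hurst exponent H from the power-law growth of the residual fluctuation F(n) ~ n^H, where H < 0.5 indicates anti-persistence.

```python
import numpy as np

def dma_fluctuation(x, n):
    """RMS deviation of the series from its n-point (backward) moving average."""
    ma = np.convolve(x, np.ones(n) / n, mode="valid")
    resid = x[n - 1:] - ma
    return np.sqrt(np.mean(resid ** 2))

def hurst_dma(x, windows=(4, 8, 16, 32, 64)):
    """Slope of log F(n) versus log n estimates the Hurst exponent."""
    ln = [np.log(n) for n in windows]
    lf = [np.log(dma_fluctuation(x, n)) for n in windows]
    return np.polyfit(ln, lf, 1)[0]

rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(20000))  # integrated returns, H ~ 0.5
h = hurst_dma(walk)
```

Applied to cumulated futures returns, an estimate significantly below 0.5 would correspond to the anti-persistent long memory the paper reports; the synthetic random walk here should give a value near 0.5.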
Environmental Assessment: Installation Development at Sheppard Air Force Base, Texas
2007-05-01
column, or in topographic depressions. Water is then utilized by plants and is respired, or it moves slowly into groundwater and/or eventually to surface...water bodies where it slowly moves through the hydrologic cycle. Removal of vegetation decreases infiltration into the soil column and thereby...
Solute transport and storage mechanisms in wetlands of the Everglades, south Florida
Harvey, Judson W.; Saiers, James E.; Newlin, Jessica T.
2005-01-01
Solute transport and storage processes in wetlands play an important role in biogeochemical cycling and in wetland water quality functions. In the wetlands of the Everglades, there are few data or guidelines to characterize transport through the heterogeneous flow environment. Our goal was to conduct a tracer study to help quantify solute exchange between the relatively fast flowing water in the open part of the water column and much more slowly moving water in thick floating vegetation and in the pore water of the underlying peat. We performed a tracer experiment that consisted of a constant‐rate injection of a sodium bromide (NaBr) solution for 22 hours into a 3 m wide, open‐ended flume channel in Everglades National Park. Arrival of the bromide tracer was monitored at an array of surface water and subsurface samplers for 48 hours at a distance of 6.8 m downstream of the injection. A one‐dimensional transport model was used in combination with an optimization code to identify the values of transport parameters that best explained the tracer observations. Parameters included dimensions and mass transfer coefficients describing exchange with both short (hours) and longer (tens of hours) storage zones as well as the average rates of advection and longitudinal dispersion in the open part of the water column (referred to as the “main flow zone”). Comparison with a more detailed set of tracer measurements tested how well the model's storage zones approximated the average characteristics of tracer movement into and out of the layer of thick floating vegetation and the pore water in the underlying peat. The rate at which the relatively fast moving water in the open water column was exchanged with slowly moving water in the layer of floating vegetation and in sediment pore water amounted to 50 and 3% h−1, respectively. Storage processes decreased the depth‐averaged velocity of surface water by 50% relative to the water velocity in the open part of the water column. 
As a result, flow measurements made with other methods that only work in the open part of the water column (e.g., acoustic Doppler) would have overestimated the true depth‐averaged velocity by a factor of 2. We hypothesize that solute exchange and storage in zones of floating vegetation and peat pore water increase contact time of solutes with biogeochemically active surfaces in this heterogeneous wetland environment.
NASA Technical Reports Server (NTRS)
Pina, J. F.; House, F. B.
1976-01-01
A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.
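The contrast between the two techniques can be sketched with a toy system (5 regions instead of 2060, and entirely synthetic geometry matrices): the instantaneous technique inverts a square system with as many observations as regions, while the best-fit technique solves an overdetermined system by least squares.

```python
import numpy as np

rng = np.random.default_rng(7)
n_regions = 5
w_true = rng.uniform(100.0, 400.0, n_regions)  # "true" regional emittances

# Instantaneous technique: one observation per region -> unique solution
# obtained by matrix inversion (here via a direct solve).
A_square = rng.uniform(0.0, 1.0, (n_regions, n_regions))
obs = A_square @ w_true
w_inst = np.linalg.solve(A_square, obs)

# Best-fit technique: many noisy observations over a long interval ->
# approximate solution by the method of least squares.
A_tall = rng.uniform(0.0, 1.0, (50, n_regions))
obs_noisy = A_tall @ w_true + rng.normal(0.0, 1.0, 50)
w_fit, *_ = np.linalg.lstsq(A_tall, obs_noisy, rcond=None)
```

The square solve reproduces the instantaneous values exactly, while the least-squares fit averages out observation noise, mirroring how the best-fit technique yields time averages of We and Wr over months or seasons.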
Pauler, Denise K; Kendrick, Brian K
2004-01-08
The de Broglie-Bohm hydrodynamic equations of motion are solved using a meshless method based on a moving least squares approach and an arbitrary Lagrangian-Eulerian frame of reference. A regridding algorithm adds and deletes computational points as needed in order to maintain a uniform interparticle spacing, and unitary time evolution is obtained by propagating the wave packet using averaged fields. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. The methodology is applied to a two-dimensional model collinear reaction with an activation barrier. Reaction probabilities are computed as a function of both time and energy, and are in excellent agreement with those based on the quantum trajectory method. (c) 2004 American Institute of Physics
NASA Astrophysics Data System (ADS)
Ni, X. Y.; Huang, H.; Du, W. P.
2017-02-01
The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted the relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, and maximum wind speed, and other pollutant concentrations, including CO, NO2, SO2, and PM10) and social media data (microblog data) was proposed, based on the multivariate statistical analysis method. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2, and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show high correlation with PM2.5 concentrations. The correlation analysis was further studied with a machine learning model, the Back Propagation Neural Network (BPNN), which was found to perform better in correlation mining. Finally, an Autoregressive Integrated Moving Average (ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for helping realize real-time monitoring, analysis, and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
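The time series step can be sketched with a simplified autoregressive model (an AR(2) fitted by least squares, standing in for the full ARIMA procedure; the series below is synthetic, not the Beijing PM2.5 record):

```python
import numpy as np

def fit_ar(x, p=2):
    """Fit AR(p) coefficients by least squares on lagged values.
    Returns [intercept, phi_1, ..., phi_p]."""
    rows = [x[i - p:i][::-1] for i in range(p, len(x))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def forecast_one(x, coef):
    """One-step-ahead forecast from the last p observations."""
    p = len(coef) - 1
    return coef[0] + coef[1:] @ x[-1:-p - 1:-1]

# synthetic "daily concentration" series with known AR(2) dynamics
rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 5.0 + 0.6 * x[t - 1] + 0.3 * x[t - 2] + rng.normal(0.0, 1.0)

coef = fit_ar(x)            # should recover roughly [5, 0.6, 0.3]
pred = forecast_one(x, coef)
```

A full ARIMA model adds differencing and a moving-average error term on top of this autoregression, but the forecasting logic, regressing the next value on recent lags, is the same.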
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao
2013-03-01
We apply multifractal detrending moving average (MFDMA) analysis to investigate and compare the efficiency and multifractality of the 5-min high-frequency China Securities Index 300 (CSI 300). The results show that the CSI 300 market became closer to weak-form efficiency after the introduction of the CSI 300 index future. We find that the CSI 300 is characterized by multifractality, and that there is less complexity and risk after the CSI 300 index future was introduced. Using shuffling, surrogating, and extreme-value-removal procedures, we show that extreme events and the fat-tailed distribution are the main origins of the multifractality. In addition, we discuss the knotting phenomenon in multifractality and find that the scaling range and the irregular fluctuations at large scales in the Fq(s) versus s plot can cause a knot.
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
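Two of the indicators can be sketched directly (standard EMA and MACD definitions with the usual 12/26/9 spans; the price series is illustrative, not the EUR/USD data):

```python
def ema(prices, span):
    """Exponential moving average with smoothing alpha = 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1.0 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line (fast EMA minus slow EMA) and its signal-line EMA."""
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return macd_line, ema(macd_line, signal)

prices = [1.10 + 0.001 * i for i in range(60)]  # a steady uptrend
macd_line, signal_line = macd(prices)
# in an uptrend the MACD line sits above zero and above its signal line,
# which a rule-based system would read as a buy condition
```

In the genetic algorithm setting, the spans (12, 26, 9) and the buy/sell thresholds are exactly the kinds of parameters the optimizer searches over during the training period.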
Air quality at night markets in Taiwan.
Zhao, Ping; Lin, Chi-Chi
2010-03-01
In Taiwan, there are more than 300 night markets, and they have attracted more and more visitors in recent years. Air quality in night markets has become a public concern. To characterize the current air quality in night markets, four major night markets in Kaohsiung were selected for this study. The results of this study showed that the mean carbon dioxide (CO2) concentrations at fixed and moving sites in night markets ranged from 326 to 427 parts per million (ppm) during non-open hours and from 433 to 916 ppm during open hours. The average carbon monoxide (CO) concentrations at fixed and moving sites ranged from 0.2 to 2.8 ppm during non-open hours and from 2.1 to 14.1 ppm during open hours. The average 1-hr levels of particulate matter with aerodynamic diameters less than 10 µm (PM10) and less than 2.5 µm (PM2.5) at fixed and moving sites were high, ranging from 186 to 451 µg/m3 and from 175 to 418 µg/m3, respectively. The levels of PM2.5 accounted for 80-97% of the respective PM10 concentrations. The average formaldehyde (HCHO) concentrations at fixed and moving sites ranged from 0 to 0.05 ppm during non-open hours and from 0.02 to 0.27 ppm during open hours. The average concentration of individual polycyclic aromatic hydrocarbons (PAHs) was found in the range of 0.09 × 10^4 to 1.8 × 10^4 ng/m3. The total identified PAHs (TIPs) ranged from 7.8 × 10^1 to 20 × 10^1 ng/m3 during non-open hours and from 1.5 × 10^4 to 4.0 × 10^4 ng/m3 during open hours. Of the total analyzed PAHs, the low-molecular-weight PAHs (two to three rings) were the dominant species, corresponding to an average of 97% during non-open hours and 88% during open hours, whereas the high-molecular-weight PAHs (four to six rings) represented 3% and 12% of the total detected PAHs in the gas phase during non-open and open hours, respectively.
Feng, Xue; Cai, Yan-Cong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Wu, Jia-Bing; Yuan, Feng-Hui
2014-10-01
Based on meteorological and hydrological data from 1970 to 2006, the advection-aridity (AA) model with calibrated parameters was used to calculate evapotranspiration in the Hun-Taizi River Basin in Northeast China. The original parameter of the AA model was tuned using the water balance method, and four subbasins were selected for validation. Spatiotemporal variation of evapotranspiration and the related affecting factors were analyzed using linear trend analysis, moving averages, kriging interpolation, and sensitivity analysis. The results showed that the empirical parameter value of 0.75 in the AA model was suitable for the Hun-Taizi River Basin, with an error of 11.4%. In the basin, the average annual actual evapotranspiration was 347.4 mm, with a slight upward trend of 1.58 mm per decade that was not statistically significant. The annual actual evapotranspiration showed a single-peaked pattern with its peak in July; evapotranspiration in summer was higher than in spring and autumn, and was smallest in winter. The annual average evapotranspiration decreased from the northwest to the southeast of the basin from 1970 to 2006, with minor differences. Net radiation was largely responsible for the change of actual evapotranspiration in the Hun-Taizi River Basin.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. A new Interpolating Moving Least Squares (IMLS) method, adapted from meshless methods, is also presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
Kwon, Sangil; Park, Yonghee; Park, Junhong; Kim, Jeongsoo; Choi, Kwang-Ho; Cha, Jun-Seok
2017-01-15
This paper presents on-road nitrogen oxides (NOx) emissions measurements from Euro 6 light-duty diesel vehicles using a portable emissions measurement system on predesigned test routes in the metropolitan area of Seoul, Korea. Six diesel vehicles were tested, and the NOx emissions were analyzed according to driving routes, driving conditions, data analysis methods, and ambient temperatures. Total NOx emissions for route 1, which has higher driving severity than route 2, differed by -4% to 60% from those for route 2. NOx emissions with the air conditioner (AC) in use were higher by 68% and 85% on average, for routes 1 and 2 respectively, than with the AC off. NOx emissions computed with the moving averaging window method were higher by 2-31% than with the power binning method. NOx emissions at lower ambient temperatures (0-5 °C) were higher by 82-192% than at higher ambient temperatures (15-20 °C). These results show that performance improvements in exhaust gas recirculation and the NOx after-treatment system will be needed at lower ambient temperatures. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.
2018-03-01
Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive; under similar meteorological conditions and consumer profiles, this allows the development of a heuristic forecast model that, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are water consumption and the national calendar. The strategy combines the Autoregressive Integrated Moving Average (ARIMA) method with a short-term heuristic forecast algorithm. Simulations showed that, with the parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important for the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when multiple forecast models are available.
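The parallel adaptive weighting idea can be illustrated with a simple inverse-error blend of two forecasters. This is only a sketch under an assumed weighting rule, not the authors' exact scheme.

```python
# Illustrative blend of two forecasters (e.g., an ARIMA model and a
# calendar-based heuristic): each model is weighted by the inverse of its
# recent absolute error, so the recently better model dominates. This
# weighting rule is an assumption for illustration, not the paper's strategy.

def adaptive_weights(errors_a, errors_b, eps=1e-9):
    """Weights proportional to inverse recent absolute error; sum to 1."""
    inv_a = 1.0 / (sum(abs(e) for e in errors_a) + eps)
    inv_b = 1.0 / (sum(abs(e) for e in errors_b) + eps)
    w_a = inv_a / (inv_a + inv_b)
    return w_a, 1.0 - w_a

def combine(forecast_a, forecast_b, w_a, w_b):
    """Pointwise weighted combination of two forecast sequences."""
    return [w_a * a + w_b * b for a, b in zip(forecast_a, forecast_b)]

w_a, w_b = adaptive_weights([0.5, 0.4], [1.0, 1.2])  # model A was recently better
blend = combine([100.0, 102.0], [98.0, 101.0], w_a, w_b)
```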
Cavity Resonator Wireless Power Transfer System for Freely Moving Animal Experiments.
Mei, Henry; Thackston, Kyle A; Bercich, Rebecca A; Jefferys, John G R; Irazoqui, Pedro P
2017-04-01
The goal of this paper is to create a large wireless powering arena for small devices implanted in freely behaving rodents. We design a cavity-resonator-based wireless power transfer (WPT) system and utilize our previously developed optimal impedance matching methodology to achieve effective WPT performance for operating sophisticated implantable devices, made with miniature receive coils (<8 mm in diameter), within a large volume (dimensions: 60.96 cm × 60.96 cm × 30 cm). We provide unique cavity design and construction methods that maintain the electromagnetic performance of the cavity while promoting its utility as a large animal husbandry environment. In addition, we develop a biaxial receive resonator system to address device orientation insensitivity within the cavity environment. Functionality is demonstrated in chronic experiments with rats implanted with our custom-designed bioelectric recording device. We demonstrate an average powering fidelity of 93.53% over nine recording sessions across nine weeks, indicating nearly continuous device operation for a freely behaving rat within the large cavity resonator space. This cavity-resonator-based WPT system offers an effective and simple method for wirelessly powering miniaturized devices implanted in freely moving small animals within a large space.
NASA Astrophysics Data System (ADS)
Waghorn, Ben J.; Shah, Amish P.; Ngwa, Wilfred; Meeks, Sanford L.; Moore, Joseph A.; Siebers, Jeffrey V.; Langen, Katja M.
2010-07-01
Intra-fraction organ motion during intensity-modulated radiation therapy (IMRT) treatment can cause differences between the planned and the delivered dose distributions. To investigate the extent of these dosimetric changes, a computational model was developed and validated. The computational method allows calculation of the rigid-motion-perturbed three-dimensional dose distribution in the CT volume, and therefore a dose-volume-histogram-based assessment of the dosimetric impact of intra-fraction motion on a rigidly moving body. The method was developed and validated for both step-and-shoot IMRT and solid compensator IMRT treatment plans. For each segment (or beam), fluence maps were exported from the treatment planning system and shifted according to the target position deduced from a motion track. These shifted, motion-encoded fluence maps were then re-imported into the treatment planning system and used to calculate the motion-encoded dose distribution. To validate its accuracy, the treatment plan was delivered to a moving cylindrical phantom using a programmed four-dimensional motion phantom. Extended dose response (EDR-2) film was used to measure a planar dose distribution for comparison with the calculated motion-encoded distribution using a gamma index analysis (3% dose difference, 3 mm distance-to-agreement). A series of motion tracks incorporating both inter-beam step-function shifts and continuous sinusoidal motion were tested. The method was shown to accurately predict the film's dose distribution for all of the tested motion tracks, both for the step-and-shoot IMRT and compensator plans. The average gamma analysis pass rate for the measured dose distribution with respect to the calculated motion-encoded distribution was 98.3 ± 0.7%. For static delivery the average film-to-calculation pass rate was 98.7 ± 0.2%.
In summary, a computational technique has been developed to calculate the dosimetric effect of intra-fraction motion. This technique has the potential to evaluate a given plan's sensitivity to anticipated organ motion. With knowledge of the organ's motion it can also be used as a tool to assess the impact of measured intra-fraction motion after dose delivery.
Khani, Mohammadreza; Xing, Tao; Gibbs, Christina; Oshinski, John N; Stewart, Gregory R; Zeller, Jillynne R; Martin, Bryn A
2017-08-01
A detailed quantification and understanding of cerebrospinal fluid (CSF) dynamics may improve detection and treatment of central nervous system (CNS) diseases and help optimize CSF-system-based delivery of CNS therapeutics. This study presents a computational fluid dynamics (CFD) model that utilizes a nonuniform moving boundary approach to accurately reproduce the nonuniform distribution of CSF flow along the spinal subarachnoid space (SAS) of a single cynomolgus monkey. A magnetic resonance imaging (MRI) protocol was developed and applied to quantify subject-specific CSF space geometry and flow and to define the CFD domain and boundary conditions. An algorithm was implemented to reproduce the axial distribution of unsteady CSF flow by nonuniform deformation of the dura surface. Results showed that the maximum difference between the MRI measurements and the CFD simulation of CSF flow rates was <3.6%. CSF flow along the entire spine was laminar, with a peak Reynolds number of ∼150 and an average Womersley number of ∼5.4. Maximum CSF flow rate was present at the C4-C5 vertebral level. Deformation of the dura ranged up to a maximum of 134 μm. Geometric analysis indicated that the total spinal CSF space volume was ∼8.7 ml. Average hydraulic diameter, wetted perimeter, and SAS area were 2.9 mm, 37.3 mm, and 27.24 mm², respectively. CSF pulse wave velocity (PWV) along the spine was quantified to be 1.2 m/s.
NASA Astrophysics Data System (ADS)
Meng, Haoran; Ben-Zion, Yehuda
2018-01-01
We present a technique to detect small earthquakes not included in standard catalogues using data from a dense seismic array. The technique is illustrated with continuous waveforms recorded on a test day by 1108 vertical geophones in a tight array on the San Jacinto fault zone. Waveforms are first stacked without time shifts in nine non-overlapping subarrays to increase the signal-to-noise ratio. The nine envelope functions of the stacked records are then multiplied with each other to suppress signals from sources affecting only some of the nine subarrays. Running a short-term moving average/long-term moving average (STA/LTA) detection algorithm on the product leads to 723 triggers in the test day. Using a local P-wave velocity model derived for the surface layer from Betsy gunshot data, 5-s-long waveforms of all sensors around each STA/LTA trigger are beamformed for various incident directions. Of the 723 triggers, 220 are found to have localized energy sources, and 103 of these are confirmed as earthquakes by verifying their observation at 4 or more stations of the regional seismic network. This demonstrates the general validity of the method and allows the validated events to be processed further using standard techniques. The number of validated events in the test day is >5 times larger than that in the standard catalogue. Using these events as templates can lead to additional detections of many more earthquakes.
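A minimal version of the STA/LTA trigger described above, run on a single envelope trace; the window lengths and trigger threshold here are illustrative, not the study's settings.

```python
# Short-term average / long-term average (STA/LTA) detector: the ratio of a
# short trailing mean to a long trailing mean spikes when impulsive energy
# arrives. Windows and threshold are illustrative values.

def sta_lta(envelope, n_sta, n_lta):
    """Return the STA/LTA ratio for each sample where both windows fit."""
    ratios = []
    for i in range(n_lta, len(envelope)):
        sta = sum(envelope[i - n_sta:i]) / n_sta
        lta = sum(envelope[i - n_lta:i]) / n_lta
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

env = [1.0] * 50 + [8.0] * 10 + [1.0] * 40      # a step mimics an event onset
ratios = sta_lta(env, n_sta=5, n_lta=25)
triggers = [i for i, v in enumerate(ratios) if v > 3.0]
```

In practice the array study runs this on the product of the nine subarray envelopes, which suppresses sources seen by only a few subarrays before the trigger is applied.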
Detection of Moving Targets Using Soliton Resonance Effect
NASA Technical Reports Server (NTRS)
Kulikov, Igor K.; Zak, Michail
2013-01-01
The objective of this research was to develop a fundamentally new method for detecting hidden moving targets within noisy and cluttered data streams using a novel "soliton resonance" effect in nonlinear dynamical systems. The technique uses an inhomogeneous Korteweg-de Vries (KdV) equation containing moving-target information. The solution of the KdV equation describes a soliton propagating with the same kinematic characteristics as the target. The approach uses the time-dependent data stream obtained with a sensor in the form of a "forcing function," which is incorporated into an inhomogeneous KdV equation. When a hidden moving target (which in many ways resembles a soliton) encounters the natural "probe" soliton solution of the KdV equation, a strong resonance phenomenon results that makes the location and motion of the target apparent. The soliton resonance method amplifies the moving-target signal while suppressing the noise, making it a very effective tool for locating and identifying diverse, highly dynamic targets with ill-defined characteristics in a noisy environment. The method was developed in one and two dimensions. Computer simulations proved that it can be used to detect single point-like targets moving with constant velocities and accelerations in 1D, and along straight lines or curved trajectories in 2D. The method also allows estimation of the kinematic characteristics of moving targets and reconstruction of target trajectories in 2D, and could be very effective for target detection in the presence of clutter and target obscuration.
A Moving Discontinuous Galerkin Finite Element Method for Flows with Interfaces
2017-12-07
Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/6040--17-9765: A Moving Discontinuous Galerkin Finite Element Method for Flows with Interfaces, Andrew Corrigan et al. This work was sponsored by the Office of Naval Research through the Naval Research Laboratory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yuting; Liu, Tian; Yang, Xiaofeng
2013-10-01
Purpose: The objective of this work is to characterize and quantify the impact of respiratory-induced prostate motion. Methods and Materials: Real-time intrafraction motion is observed with the Calypso 4-dimensional nonradioactive electromagnetic tracking system (Calypso Medical Technologies, Inc., Seattle, Washington). We report the results from a total of 1024 fractions from 31 prostate cancer patients. A wavelet transform was used to decompose the signal to extract and isolate the respiratory-induced prostate motion from the total prostate displacement. Results: Our results show that average respiratory motion larger than 0.5 mm can be observed in 68% of the fractions. Fewer than 1% of the patients showed average respiratory motion of less than 0.2 mm, whereas 99% of the patients showed average respiratory-induced motion ranging between 0.2 and 2 mm. A maximum respiratory range of motion of 3 mm or greater was seen in only 25% of the fractions. In addition, about 2% of patients showed anxiety, indicated by a breathing frequency above 24 times per minute. Conclusions: Prostate motion is influenced by respiration in most fractions. Real-time intrafraction data are sensitive enough to measure the impact of respiration by use of wavelet decomposition methods. Although the average respiratory amplitude observed in this study is small, this technique provides a tool that can be useful if one moves to smaller treatment margins (≤5 mm). It also opens up the possibility of developing patient-specific margins, given that prostate motion is not unpredictable.
Rate of Oviposition by Culex Quinquefasciatus in San Antonio, Texas, During Three Years
1988-09-01
autoregression and zero orders of integration and moving average (ARIMA(1,0,0)). This model was chosen initially because rainfall appeared to...have no trend requiring integration and no obvious requirement for a moving-average component (i.e., no regular periodicity). This ARIMA model was...Say in both the northern and southern hemispheres exposes this species to a variety of climatic challenges to its survival. It is able to adjust
1983-11-01
Approximate household inventory item average chance of being moved (%): High: electric toaster, vacuum cleaner (80), colour television; Medium: record...most readily moved are small items of electrical equipment and valuable items such as colour televisions. However, many respondents reported that...WESSEX WATER AUTHORITY, "Somerset Land Drainage District, land drainage survey report", Wessex Water Authority, Bridgwater, England, 1979.
ERIC Educational Resources Information Center
Garnett, Bernice R.; Becker, Kelly; Vierling, Danielle; Gleason, Cara; DiCenzo, Danielle; Mongeon, Louise
2017-01-01
Objective: Less than half of young people in the USA are meeting the daily physical activity requirements of at least 60 minutes of moderate or vigorous physical activity. A mixed-methods pilot feasibility assessment of "Move it Move it!" was conducted in the Spring of 2014 to assess the impact of a before-school physical activity…
Plans, Patterns, and Move Categories Guiding a Highly Selective Search
NASA Astrophysics Data System (ADS)
Trippen, Gerhard
In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel method for detecting small moving infrared targets against complicated backgrounds, based on tracking interest points. First, Difference of Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points over several frames, yielding the correlations between interest points in the first and last frames. Finally, a new clustering method, called R-means, is proposed to divide these interest points into two groups according to the correlations: target points and background points. The target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed experimentally to compare the performance of the proposed method with five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false alarm rate than existing moving target detection methods.
A Generation at Risk: When the Baby Boomers Reach Golden Pond.
ERIC Educational Resources Information Center
Butler, Robert N.
The 20th century has seen average life expectancy in the United States move from under 50 years to over 70 years. Coupled with this increase in average life expectancy is the aging of the 76.4 million persons born between 1946 and 1964. As they approach retirement, these baby-boomers will have to balance their own needs with those of living…
Comparison of modal superposition methods for the analytical solution to moving load problems.
DOT National Transportation Integrated Search
1994-01-01
The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...
Prediction Methods in Solar Sunspots Cycles
Ng, Kim Kwee
2016-01-01
An understanding of Ohl's Precursor Method, which is used to predict upcoming sunspot activity, is presented by employing a simplified movable divided-blocks diagram. Using a new approach, the total number of sunspots in a solar cycle and the maximum averaged monthly sunspot number Rz(max) are both shown to be statistically related to the geomagnetic activity index in the prior solar cycle. The correlation factors are significant, found to be 0.91 ± 0.13 and 0.85 ± 0.17, respectively. The projected result is consistent with current observation of solar cycle 24, which appears to have attained at least an Rz(max) of 78.7 ± 11.7 in March 2014. Moreover, in a statistical study of time-delayed solar events, the average time between the peak in the monthly geomagnetic index and the peak in the monthly sunspot numbers in the succeeding ascending phase of sunspot activity is found to be 57.6 ± 3.1 months. The statistically determined time-delayed interval confirms earlier observational results by others that the Sun's electromagnetic dipole moves toward the Sun's equator during a solar cycle. PMID:26868269
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performance of four time series methods: two decomposition methods (regression and exponential smoothing), the autoregressive integrated moving average (ARIMA), and the support vector machine (SVM). Data from 2005 to 2011 and from 2012 were used as modeling and forecasting samples, respectively. Performance was evaluated with three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although no single method was completely superior to the others, the study highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
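The three evaluation metrics named above have standard definitions, sketched here on made-up numbers (not the study's surveillance data):

```python
# Standard definitions of the three forecasting metrics: mean absolute
# error (MAE), mean absolute percentage error (MAPE), and mean square
# error (MSE), each averaged over observed/forecast pairs.

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

y, yhat = [100.0, 120.0, 90.0], [110.0, 115.0, 95.0]  # observed vs. forecast
scores = (mae(y, yhat), mape(y, yhat), mse(y, yhat))
```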
Characteristic correlation study of UV disinfection performance for ballast water treatment
NASA Astrophysics Data System (ADS)
Ba, Te; Li, Hongying; Osman, Hafiiz; Kang, Chang-Wei
2016-11-01
Characteristic correlations between ultraviolet disinfection performance and operating parameters, including ultraviolet transmittance (UVT), lamp power, and water flow rate, were studied by numerical and experimental methods. A three-stage model was developed to simulate the fluid flow, UV radiation, and the trajectories of microorganisms. The Navier-Stokes equations with k-epsilon turbulence were solved to model the fluid flow, while the discrete ordinates (DO) radiation model and the discrete phase model (DPM) were used to introduce UV radiation and microorganism trajectories into the model, respectively. The statistical distribution of UV dose over the microorganisms was found to move to higher values as UVT and lamp power increase, but to lower values as the water flow rate increases. Further investigation shows that the fluence rate increases exponentially with UVT but linearly with lamp power. The average and minimum residence times decrease linearly with the water flow rate, while the maximum residence time decreases rapidly over a certain range. The current study can be used as a digital design and performance evaluation tool for UV reactors for ballast water treatment.
NASA Astrophysics Data System (ADS)
Alizadeh, As'ad; Dadvand, Abdolrahman
2018-02-01
In this paper, the motion of highly deformable (healthy) and less deformable (sick) red blood cells (RBCs) in a microvessel with and without stenosis is simulated using a combined lattice Boltzmann-immersed boundary method. The RBC is modeled as a neo-Hookean elastic membrane with bending resistance. The motion and deformation of the RBC under different Reynolds numbers are evaluated. In addition, the variations of blood flow resistance and time-averaged pressure due to the motion and deformation of the RBC are assessed. It was found that a healthy RBC moves faster than a sick one. The apparent viscosity and blood flow resistance are greater for the sick RBC. Blood pressure increases in the presence of stenosis and a less deformable RBC, which is thought to underlie many serious conditions, including cardiovascular diseases. As the Reynolds number increases, the RBC deforms further and moves more easily and faster through the stenosis. The results of this study were compared with available experimental and numerical results, and good agreement was observed.
Discrete approach to stochastic parametrization and dimension reduction in nonlinear dynamics.
Chorin, Alexandre J; Lu, Fei
2015-08-11
Many physical systems are described by nonlinear differential equations that are too complicated to solve in full. A natural way to proceed is to divide the variables into those that are of direct interest and those that are not, formulate solvable approximate equations for the variables of greater interest, and use data and statistical methods to account for the impact of the other variables. In the present paper we consider time-dependent problems and introduce a fully discrete solution method, which simplifies both the analysis of the data and the numerical algorithms. The resulting time series are identified by a NARMAX (nonlinear autoregressive moving average with exogenous input) representation familiar from engineering practice. The connections with the Mori-Zwanzig formalism of statistical physics are discussed, as well as an application to the Lorenz 96 system.
Statistical physics in foreign exchange currency and stock markets
NASA Astrophysics Data System (ADS)
Ausloos, M.
2000-09-01
Problems in economy and finance have attracted the interest of statistical physicists all over the world. Fundamental problems pertain to the existence or not of long-, medium- and/or short-range power-law correlations in various economic systems, to the presence of financial cycles, and to economic considerations, including economic policy. A method like the detrended fluctuation analysis is recalled, emphasizing its value in sorting out correlation ranges and thereby leading to predictability at short horizons. The (m, k)-Zipf method is presented for sorting out short-range correlations in the sign and amplitude of the fluctuations. A well-known financial analysis technique, the so-called moving average, is shown to raise questions for physicists about fractional Brownian motion properties. Among spectacular results, the possibility of crash prediction has been demonstrated through the log-periodicity of financial index oscillations.
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method for capturing the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share market. We design two strategies based on the DMD algorithm. The strategy that considers only market timing makes reliable profits in a choppy market with no prominent trend, but fails to beat the benchmark moving-average strategy in a bull market. After incorporating the spatial information from the spatial-temporal coherent structure of the DMD modes, the trading strategy improves remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
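For reference, the decomposition itself can be sketched with the standard "exact DMD" algorithm via the SVD; this illustrates DMD only, not the trading strategies built on top of it.

```python
# Exact DMD: fit a best-fit linear operator A with Y ≈ A X between successive
# snapshot matrices, then take the eigendecomposition of its rank-r projection.
# Standard algorithm, independent of the trading application.
import numpy as np

def dmd(snapshots, r):
    """snapshots: (n, m) array of m state vectors; r: truncation rank."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)             # dynamic-mode eigenvalues
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W  # exact DMD modes
    return eigvals, modes

# A pure oscillation is exactly linear, so its eigenvalues lie on the unit circle.
t = np.linspace(0, 8 * np.pi, 200)
data = np.vstack([np.cos(t), np.sin(t)])
eigvals, modes = dmd(data, r=2)
```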
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Fan, Li-Juan; Yang, Xiao-Feng; Chen, Yan-Yan
2008-09-01
The rolling massage manipulation is a classic Chinese massage that is expected to relieve many diseases. Here, the effect of rolling massage on the motion of particles in blood vessels is studied by lattice Boltzmann simulation. The simulation results show that the particle motion depends on the rolling velocity and on the distance between the particle position and the rolling position. The average values, including the particle translational velocity and angular velocity, increase almost linearly as the rolling velocity increases. The result is helpful for understanding the mechanism of the massage and for developing rolling techniques.
NASA Astrophysics Data System (ADS)
Salari, Mahmoud; Rava, Amin
2017-09-01
Nowadays, Autonomous Underwater Vehicles (AUVs) are frequently used for exploring the oceans. The hydrodynamics of AUVs moving in the vicinity of the water surface differ significantly from those at greater depths. In this paper, the hydrodynamic coefficients of an AUV moving close to the free surface are obtained for non-dimensional depths of 0.75, 1, 1.5, 2, and 4D. The Reynolds-averaged Navier-Stokes (RANS) equations are discretized using the finite volume approach, and the free-surface effects are modeled using the Volume of Fluid (VOF) method. As the operating speeds of AUVs are usually low, the boundary layer over them is neither fully laminar nor fully turbulent, so the effect of boundary layer transition from laminar to turbulent flow was considered in the simulations. Two different turbulence/transition models were used: 1) a full-turbulence model, the k-ɛ model, and 2) a turbulence/transition model, Menter's Transition-SST model. The results show that Menter's Transition-SST model is more consistent with experimental results. In addition, the wave-making effects of these bodies are studied at different immersion depths near the sea surface and at finite depths. It is observed that the pitch moments and lift coefficients are non-zero for these axisymmetric bodies when they move close to the sea surface, which is not expected at greater depths.
Neonatal heart rate prediction.
Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth
2009-01-01
Technological advances have reduced the number of infant deaths, and pre-term infants now have a substantially better chance of survival. One mechanism vital to saving the lives of these infants is continuous monitoring with early diagnosis. Continuous monitoring collects huge amounts of data with much information embedded in it; statistical analysis can extract this information to aid diagnosis and to understand development. In this study we use a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average (ARMA), and then attempt to predict future values. The ARMA model showed better results but required more computation.
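As a hedged illustration of the autoregressive idea behind such prediction (a deliberately simplified AR(1) fit by ordinary least squares, not the paper's ARMA model or its data):

```python
def fit_ar1(series):
    """Fit x[t] = c + phi * x[t-1] by ordinary least squares.

    A simplified stand-in for the ARMA models compared in the study; real
    heart-rate modeling would use higher orders and a moving-average part.
    """
    xs = series[:-1]                       # predictors: previous sample
    ys = series[1:]                        # targets: current sample
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    phi = cov / var
    return my - phi * mx, phi              # (intercept c, coefficient phi)

def forecast_next(series, c, phi):
    """One-step-ahead prediction from the most recent observation."""
    return c + phi * series[-1]
```

On a series that exactly follows x[t] = 10 + 0.5 x[t-1], the fit recovers c = 10 and phi = 0.5.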
Structural equation modeling of the inflammatory response to traffic air pollution
Baja, Emmanuel S.; Schwartz, Joel D.; Coull, Brent A.; Wellenius, Gregory A.; Vokonas, Pantel S.; Suh, Helen H.
2015-01-01
Several epidemiological studies have reported conflicting results on the effect of traffic-related pollutants on markers of inflammation. In a Bayesian framework, we examined the effect of traffic pollution on inflammation using structural equation models (SEMs). We studied measurements of C-reactive protein (CRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble intracellular adhesion molecule-1 (sICAM-1) for 749 elderly men from the Normative Aging Study. Using repeated measures SEMs, we fit a latent variable for traffic pollution that is reflected by levels of black carbon, carbon monoxide, nitrogen monoxide and nitrogen dioxide to estimate its effect on a latent variable for inflammation that included sICAM-1, sVCAM-1 and CRP. Exposure periods were assessed using 1-, 2-, 3-, 7-, 14- and 30-day moving averages before each visit. We compared our findings using SEMs with those obtained using linear mixed models. Traffic pollution was related to increased inflammation for the 3-, 7-, 14- and 30-day exposure periods. An inter-quartile range increase in traffic pollution was associated with a 2.3% (95% posterior interval (PI): 0.0–4.7%) increase in inflammation for the 3-day moving average, with the most significant association observed for the 30-day moving average (23.9%; 95% PI: 13.9–36.7%). Traffic pollution adversely impacts inflammation in the elderly. SEMs in a Bayesian framework can comprehensively incorporate multiple pollutants and health outcomes simultaneously in air pollution–cardiovascular epidemiological studies. PMID:23232970
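The trailing moving averages of daily exposure used above can be computed with a simple running sum; a minimal sketch, where the daily black-carbon readings are made-up illustrative numbers, not the study's data:

```python
def moving_average(values, window):
    """Trailing moving average ending at each day (None until the window fills)."""
    out, s = [], 0.0
    for i, v in enumerate(values):
        s += v
        if i >= window:
            s -= values[i - window]          # drop the value that left the window
        out.append(s / window if i >= window - 1 else None)
    return out

# Hypothetical daily black-carbon readings; the 3-day pre-visit exposure is
# the moving-average entry on the visit day (here, the last day).
black_carbon = [1.2, 0.8, 1.0, 1.5, 0.9]
three_day = moving_average(black_carbon, 3)
```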
Park, Yoonah; Yong, Yuen Geng; Yun, Seong Hyeon; Jung, Kyung Uk; Huh, Jung Wook; Cho, Yong Beom; Kim, Hee Cheol; Lee, Woo Yong; Chun, Ho-Kyung
2015-05-01
This study aimed to compare the learning curves and early postoperative outcomes of conventional laparoscopic (CL) and single incision laparoscopic (SIL) right hemicolectomy (RHC). This retrospective study included the initial 35 cases in each group. Learning curves were evaluated by the moving average of operative time, the mean operative time of every five consecutive cases, and cumulative sum (CUSUM) analysis. The learning phase was considered overcome when the moving average of operative times reached a plateau, and when the mean operative time of every five consecutive cases reached a low point and subsequently did not vary by more than 30 minutes. Six patients with missing data in the CL RHC group were excluded from the analyses. According to the mean operative time of every five consecutive cases, the learning phase was completed between cases 26 and 30 for SIL RHC and between cases 16 and 20 for CL RHC. Moving average analysis indicated that approximately 31 (SIL) and 25 (CL) cases were needed to complete the learning phase. CUSUM analysis demonstrated that 10 (SIL) and two (CL) cases were required to reach a steady state of complication-free performance. The postoperative complication rate was higher in the SIL group than in the CL group, but the difference was not statistically significant (17.1% vs. 3.4%). In summary, the learning phase of SIL RHC is longer than that of CL RHC; early oncological outcomes of both techniques were comparable, although SIL RHC had a statistically insignificant higher complication rate during the learning phase.
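The CUSUM analysis named above accumulates deviations of each case's operative time from the overall mean; a minimal sketch (the plateau and steady-state detection rules used in the study are not reproduced here):

```python
def cusum(values):
    """Cumulative sum of deviations from the overall mean.

    A downward turn after case k suggests operative times have dropped
    below average, a common marker for the end of a learning phase.
    The curve always returns to zero at the last case by construction.
    """
    mean = sum(values) / len(values)
    out, s = [], 0.0
    for v in values:
        s += v - mean
        out.append(s)
    return out
```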
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)
2000-01-01
Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to range from 2 to 11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has had higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 ± 1.00, implying that about 7 ± 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI ≥ 4 events occurring within the next ten years is >99%, while it is about 49% for VEI ≥ 5 events and 18% for VEI ≥ 6 events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
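The Poisson reasoning above can be sketched numerically. For a Poisson count with mean λ, P(N ≥ 1) = 1 - e^{-λ}. The decade rates below (about 7 expected VEI ≥ 4 events, 0.67 for VEI ≥ 5, 0.2 for VEI ≥ 6) are my illustrative assumptions, back-solved to roughly match the quoted probabilities, not values stated in the abstract:

```python
import math

def prob_at_least_one(expected_count):
    """P(N >= 1) for a Poisson-distributed count with the given mean."""
    return 1.0 - math.exp(-expected_count)

# Illustrative per-decade rates (assumed, chosen to match the quoted figures).
for label, rate in [("VEI >= 4", 7.0), ("VEI >= 5", 0.67), ("VEI >= 6", 0.2)]:
    print(label, round(prob_at_least_one(rate), 3))
```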
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in the existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals due to bridge vibration and bumps on a bridge deck, respectively. Therefore, the interaction forces are usually hard to be expressed completely and sparsely by using a single basis function set. Based on the redundant concatenated dictionary and weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions used for matching the harmonic and impact signal features of unknown moving forces. The weighted l1-norm regularization method is introduced for formulation of MFI equation, so that the signal features of moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used for solving the MFI problem. The optimal regularization parameter is appropriately chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and the feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with a strong robustness, and it has a better performance than the Tikhonov regularization method. Some related issues are discussed as well.
Direct measurement of Lorentz transformation with Doppler effects
NASA Astrophysics Data System (ADS)
Chen, Shao-Guang
For space science and astronomy, the fundamentality of the one-way velocity of light (OWVL) is self-evident. The measurement of OWVL (distance/interval) and clock synchronization by light-signal transfer form a logical circle. This means that OWVL cannot be measured directly but only inferred from astronomical methods (Romer's Io eclipses and Bradley's stellar aberration); consequently, the light-year based on a definitional OWVL and trigonometric distances based on the AU are likewise not directly measurable. To resolve this problem, two methods of clock synchronization are proposed. The direct method measures, at one end of a dual-speed transmission line with a single clock, the arrival-time difference between longitudinal and transverse waves (or between ordinary and extraordinary light), and then computes the common emission time of the two waves from the Young's/shear elastic-modulus ratio (E/k) or the extraordinary/ordinary refractive-index ratio (ne/no), much as a single earthquake station with one clock infers the distance to the epicenter from first-arrival times. The indirect method measures the one-way wavelength l with two counters, Ca and Cb, and a computer's real-time subtraction of their readings (Nb - Na); the frequency f is measured simultaneously, and l·f is then the OWVL. Therefore, with classical Newtonian mechanics and ether-wave optics, OWVL can be measured in a Galilean coordinate system with an isotropic length unit (the 1889 international meter definition). Special relativity can then be established entirely on these metrical results, without additional hypotheses. When a certain wavelength l is defined as the length unit, the foregoing measurement of the one-way wavelength becomes a measurement of a rod's length.
Let a rigid rod connecting Ca and Cb move relative to the light source with velocity v. The rod's length L = (Nb - Na) l then varies with v according to the known Doppler effect: L(θ) = L0 (1 + (v/c) cos θ), where L0 is the proper length at v = 0 and θ is the angle between v and the unit vector r from the source to the counters (v·r = v cos θ). Hence L(0) L(π) = L0² (1 + v/c)(1 - v/c) = L0² y² = L², so that L ≡ [L(0) L(π)]^(1/2) = L0 y, where y ≡ (1 - (v/c)²)^(1/2) is precisely the Fitzgerald-Lorentz contraction factor. Likewise, when a light-wave period p is defined as the time unit, the Doppler frequency shift gives the count N(θ) for one period T of a moving clock: T(θ) = N(θ) p = T0 / (1 + (v/c) cos θ), and T ≡ [T(0) T(π)]^(1/2) = T0 / y, where T0 is the proper period at v = 0; this is just the moving-clock-slower effect. With r taken from the clock to the source (reversing the sign of v/c), the Doppler formula takes its usual form: f(θ) = 1/T(θ) = f0 (1 - (v/c) cos θ). Therefore the Lorentz transformation is the square-root (geometric) average of two Doppler-shift measurements taken in opposite directions: the first-order terms (±v/c) cancel, and only the second-order term (v/c)² (the relativity factor) remains. The Lorentz transformation can thus be measured directly by the Doppler frequency-shift method. The half-life of a moving mu-meson is a statistical average over many particles, so the usual explanation using the relativity factor y is correct. An airship cannot move in two opposite directions simultaneously, which makes the relativity factor y and the twin paradox inapplicable to macroscopic motion; hence the navigation of airships and satellites needs only Doppler frequency-shift measurements, not the Lorentz transformation.
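The geometric-mean cancellation claimed in this abstract can be checked numerically under its first-order Doppler length formula; the function name and the test value of beta = v/c are mine, for illustration only:

```python
import math

def doppler_length(L0, beta, theta):
    """Abstract's first-order formula: L(theta) = L0 * (1 + beta * cos(theta))."""
    return L0 * (1.0 + beta * math.cos(theta))

beta, L0 = 0.3, 1.0
# Geometric mean of the forward (theta = 0) and backward (theta = pi) readings:
geo_mean = math.sqrt(doppler_length(L0, beta, 0.0) * doppler_length(L0, beta, math.pi))
# sqrt((1 + beta)(1 - beta)) = sqrt(1 - beta^2): the first-order terms cancel,
# leaving only the second-order Lorentz factor.
assert abs(geo_mean - L0 * math.sqrt(1.0 - beta ** 2)) < 1e-12
```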
Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System
García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel
2012-01-01
This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle; it detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform applied to information extracted from contour images is used for detection, while the proposed recognition stage is based on Support Vector Machines (SVM). A novel solution is proposed to the problem of discarding detected signs that do not pertain to the host road; for that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs of the vision sensor are combined with data supplied by the CAN bus and a GPS sensor to obtain the global positions of the detected traffic signs, which are used to identify each sign in the I2V communication. The paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained, with an average runtime of 35 ms that allows real-time performance. PMID:22438704
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r (0 < r ≤ 1). We study the average receiving time (ART) of a weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of the edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) of all nodes to an absorbing hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than extended Koch networks in receiving information. Finally, compared with the results of previous studies (i.e., Koch networks, weighted Koch networks), we find that our models are more general.
NASA Astrophysics Data System (ADS)
Jaman, Md. Shah; Islam, Showmic; Saha, Sumon; Hasan, Mohammad Nasim; Islam, Md. Quamrul
2016-07-01
A numerical analysis is carried out to study the performance of steady laminar mixed convection flow inside a square lid-driven cavity filled with a water-Al2O3 nanofluid. The top wall of the cavity moves at a constant velocity and is heated by an isothermal heat source. The two-dimensional Navier-Stokes equations along with the energy equation are solved using the Galerkin finite element method. Results are obtained for a range of Reynolds and Grashof numbers, both with and without nanoparticles. Parametric studies over a wide range of governing parameters for pure mixed convective flow show significant features of the problem in terms of streamline and isotherm contours, average Nusselt number, and average temperature profiles. The computational results indicate that the heat transfer coefficient is strongly influenced by these governing parameters in the pure mixed convection regime.
NASA Technical Reports Server (NTRS)
Houdeville, R.; Cousteix, J.
1979-01-01
The development of a turbulent unsteady boundary layer with a mean pressure gradient strong enough to induce separation is presented, extending the results previously obtained for the flat-plate configuration. The longitudinal velocity component is measured with a constant-temperature hot-wire anemometer. The region where negative velocities exist is investigated with a laser Doppler velocimeter equipped with Bragg cells. The boundary layer responds by forced pulsation to the perturbation of the potential flow, and the observed unsteady effects are very important. The average location of the zero-skin-friction point moves periodically at the perturbation frequency. Average velocity profiles at different instants in the cycle are compared. The existence of a logarithmic region enables a simple calculation of the maximum phase shift of the velocity in the boundary layer. A calculation of the boundary-layer development by an integral method is attempted, up to the point where reverse flow begins to appear.
NASA Technical Reports Server (NTRS)
Forbes, T. G.; Hones, E. W., Jr.; Bame, S. J.; Asbridge, J. R.; Paschmann, G.; Sckopke, N.; Russell, C. T.
1981-01-01
From an ISEE survey of substorm dropouts and recoveries during the period February 5 to May 25, 1978, 66 timing events observed by the Los Alamos Scientific Laboratory/Max-Planck-Institut Fast Plasma Experiments were studied in detail. Near substorm onset, both the average timing velocity and the bulk flow velocity at the edge of the plasma sheet are inward, toward the center. Measured normal to the surface of the plasma sheet, the timing velocity is 23 ± 18 km/s and the proton flow velocity is 20 ± 8 km/s. During substorm recovery, the plasma sheet reappears moving outward with an average timing velocity of 133 ± 31 km/s; however, the corresponding proton flow velocity is only 3 ± 7 km/s in the same direction. It is suggested that the difference between the average timing velocity for the expansion of the plasma sheet and the plasma bulk flow perpendicular to the surface of the sheet during substorm recovery is most likely the result of surface waves moving past the position of the satellites.
The vacuum friction paradox and related puzzles
NASA Astrophysics Data System (ADS)
Barnett, Stephen M.; Sonnleitner, Matthias
2018-04-01
The frequency of light emitted by a moving source is shifted by a factor proportional to its velocity. We find that this Doppler shift requires the existence of a paradoxical effect: a moving atom radiating in otherwise empty space feels a net, or average, force acting against its direction of motion and proportional in magnitude to its speed. Yet there is no preferred rest frame, either in relativity or in Newtonian mechanics, so how can there be a vacuum friction force?
Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement.
Wang, Junjie; He, Xiufeng; Ferreira, Vagner G
2015-08-07
Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. The Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration compared with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on separating the wave signals from the vertical displacements of the GPS-mounted carrier is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving-average method, a high-pass filter, and a wave gauge, the results show that the accuracy of the wave parameters from the proposed method was improved, with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method.
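Wavelet threshold denoising, the second stage of the CEEMD-Wavelet method above, is commonly built on a soft-threshold rule applied to detail coefficients; a minimal sketch of that rule only (not CEEMD itself), with the threshold value left as the caller's choice:

```python
def soft_threshold(coeffs, thresh):
    """Soft-threshold rule used in wavelet denoising.

    Coefficients with magnitude <= thresh (treated as noise) are zeroed;
    larger coefficients are shrunk toward zero by thresh.
    """
    return [0.0 if abs(c) <= thresh else (c - thresh if c > 0 else c + thresh)
            for c in coeffs]
```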
Intake flow modeling in a four stroke diesel using KIVA3
NASA Technical Reports Server (NTRS)
Hessel, R. P.; Rutland, C. J.
1993-01-01
Intake flow for a dual-intake-valve diesel engine is modeled using moving valves and realistic geometries. The objectives are to obtain accurate initial conditions for combustion calculations and to provide a tool for studying intake processes. Global simulation parameters are compared with experimental results and show good agreement. The intake process shows a 30 percent difference in mass flows and average swirl in opposite directions across the two intake valves. The effect of the intake process on the flow field at the end of compression is examined. Modeling the intake flow results in swirl and turbulence characteristics that are quite different from those obtained by conventional methods in which compression-stroke initial conditions are assumed.
Photonic single nonlinear-delay dynamical node for information processing
NASA Astrophysics Data System (ADS)
Ortín, Silvia; San-Martín, Daniel; Pesquera, Luis; Gutiérrez, José Manuel
2012-06-01
An electro-optical system with a delay loop based on semiconductor lasers is investigated for information processing by performing numerical simulations. This system can replace a complex network of many nonlinear elements for the implementation of Reservoir Computing. We show that a single nonlinear-delay dynamical system has the basic properties to perform as reservoir: short-term memory and separation property. The computing performance of this system is evaluated for two prediction tasks: Lorenz chaotic time series and nonlinear auto-regressive moving average (NARMA) model. We sweep the parameters of the system to find the best performance. The results achieved for the Lorenz and the NARMA-10 tasks are comparable to those obtained by other machine learning methods.
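The NARMA-10 benchmark used above is commonly defined by a tenth-order nonlinear recurrence driven by uniform random inputs on [0, 0.5]; the sketch below uses that common definition, which may differ in detail from the paper's exact setup:

```python
import random

def narma10(n, seed=0):
    """Generate n samples of the NARMA-10 benchmark (common definition).

    y[t+1] = 0.3 y[t] + 0.05 y[t] * sum(y[t-9..t]) + 1.5 u[t-9] u[t] + 0.1,
    with inputs u drawn uniformly from [0, 0.5]. Returns (inputs, targets).
    """
    rng = random.Random(seed)
    u = [rng.uniform(0.0, 0.5) for _ in range(n)]
    y = [0.0] * n
    for t in range(9, n - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y
```

A reservoir is then trained to predict y[t] from the input history u[0..t].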
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2004-06-01
The classical technical analysis methods of financial time series based on the moving average and momentum are recalled. Illustrations use the IBM share price and Latin American (Argentinian MerVal, Brazilian Bovespa and Mexican IPC) market indices. We have also searched for scaling ranges and exponents in exchange rates between Latin American currencies (ARS, CLP, MXP) and other major currencies: DEM, GBP, JPY, USD, and SDRs. We have sorted out correlations and anticorrelations of such exchange rates with respect to DEM, GBP, JPY and USD. They indicate a very complex or speculative behavior.
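The two indicators recalled above are standard; a minimal sketch, with window sizes left as illustrative parameters:

```python
def sma(prices, window):
    """Simple moving average over a trailing window (None until it fills)."""
    return [sum(prices[i - window + 1:i + 1]) / window if i >= window - 1 else None
            for i in range(len(prices))]

def momentum(prices, k):
    """k-period momentum: price change over the last k observations."""
    return [prices[i] - prices[i - k] if i >= k else None
            for i in range(len(prices))]
```

A common trading rule compares the price (or a short moving average) against a longer moving average, and reads the sign of the momentum as trend direction.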
Hybrid empirical mode decomposition- ARIMA for forecasting exchange rates
NASA Astrophysics Data System (ADS)
Abadan, Siti Sarah; Shabri, Ani; Ismail, Shuhaida
2015-02-01
This paper studied the forecasting of monthly Malaysian Ringgit (MYR)/United States Dollar (USD) exchange rates using a hybrid of two methods: empirical mode decomposition (EMD) and the autoregressive integrated moving average (ARIMA). The MYR was pegged to the USD during the Asian financial crisis, fixing the exchange rate at 3.800 from 2 September 1998 until 21 July 2005. Thus, the data chosen for this paper are post-July 2005, spanning August 2005 to July 2010. A comparative study using root mean square error (RMSE) and mean absolute error (MAE) showed that the EMD-ARIMA outperformed the single ARIMA and the random-walk benchmark model.
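The RMSE and MAE criteria used in the comparison are standard; for reference:

```python
import math

def rmse(actual, forecast):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
```

RMSE penalizes large individual errors more heavily than MAE, so reporting both gives a fuller picture of forecast quality.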
Precision measurement of electric organ discharge timing from freely moving weakly electric fish.
Jun, James J; Longtin, André; Maler, Leonard
2012-04-01
Physiological measurements from an unrestrained, untethered, and freely moving animal permit analyses of neural states correlated to naturalistic behaviors of interest. Precise and reliable remote measurements remain technically challenging due to animal movement, which perturbs the relative geometries between the animal and the sensors. Pulse-type electric fish generate a train of discrete and stereotyped electric organ discharges (EODs) to sense their surroundings actively, and rapid modulation of the discharge rate occurs while Gymnotus sp. swims freely. The modulation of EOD rates is a useful indicator of the fish's central state, such as resting, alertness, and learning associated with exploration. However, the EOD pulse waveforms remotely observed at a pair of dipole electrodes vary continuously as the fish swims relative to the electrodes, which biases estimates of the actual pulse timing. To measure the EOD pulse timing more accurately, reliably, and noninvasively from a free-swimming fish, we propose a novel method based on the principles of waveform reshaping and spatial averaging. Our method is implemented using envelope extraction and multichannel summation, and is more precise and reliable than other widely used threshold- or peak-based methods according to tests performed under various source-detector geometries. Using the same method, we constructed a real-time electronic pulse detector that performs an additional online pulse-discrimination routine to further enhance detection reliability. Our stand-alone pulse detector performed with high temporal precision (<10 μs) and reliability (error < 1 per 10^6 pulses) and permits longer recording durations by storing only event time stamps (4 bytes/pulse).
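The envelope-extraction and multichannel-summation idea can be sketched as follows; the rectify-and-smooth envelope, the window width, and the fixed threshold are simplifying assumptions for illustration, not the authors' exact implementation:

```python
def envelope(signal, width=3):
    """Crude envelope: rectify, then smooth with a short centered window."""
    rect = [abs(v) for v in signal]
    half = width // 2
    out = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def detect_pulse(channels, threshold):
    """Sum the per-channel envelopes and return the index of the first
    sample whose summed envelope crosses the threshold (None if never).

    Summing across channels reshapes the waveform so the crossing time
    is less sensitive to the fish's position relative to any one electrode.
    """
    envs = [envelope(ch) for ch in channels]
    summed = [sum(vals) for vals in zip(*envs)]
    for i, v in enumerate(summed):
        if v >= threshold:
            return i
    return None
```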
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
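Summing the two independent error sources in quadrature, as described above, is a one-line computation; the error magnitudes below are illustrative, not values from the study:

```python
import math

def total_uncertainty(spatial_error, temporal_error):
    """Combine two independent error sources in quadrature."""
    return math.sqrt(spatial_error ** 2 + temporal_error ** 2)

# Illustrative: if the cross-stream spatial error is 3% and the
# time-averaging error is 4%, the combined uncertainty is 5%.
combined = total_uncertainty(3.0, 4.0)
```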
Hydrogeology and leachate movement near two chemical-waste sites in Oswego County, New York
Anderson, H.R.; Miller, Todd S.
1986-01-01
Forty-five observation wells and test holes were installed at two chemical-waste disposal sites in Oswego County, New York, to evaluate the hydrogeologic conditions and the rate and direction of leachate migration. At the site near Oswego, groundwater moves northward at an average velocity of 0.4 ft/day through unconsolidated glacial deposits and discharges into White Creek and Wine Creek, which border the site and discharge to Lake Ontario. Leaking barrels of chemical wastes have contaminated the groundwater within the site, as evidenced by the detection of 10 'priority pollutant' organic compounds and elevated values of specific conductance, chloride, arsenic, lead, and mercury. At the site near Fulton, where 8,000 barrels of chemical wastes are buried, groundwater in the sandy surficial aquifer bordering the landfill on the south and east moves southward and eastward at an average velocity of 2.8 ft/day and discharges to Bell Creek, which discharges to the Oswego River, or moves beneath the landfill. Leachate is migrating eastward, southeastward, and southwestward, as evidenced by elevated values of specific conductance, temperature, and concentrations of several trace metals at wells east, southeast, and southwest of the site. (USGS)
The Accuracy of Talking Pedometers when Used during Free-Living: A Comparison of Four Devices
ERIC Educational Resources Information Center
Albright, Carolyn; Jerome, Gerald J.
2011-01-01
The purpose of this study was to determine the accuracy of four commercially available talking pedometers in measuring accumulated daily steps of adult participants while they moved independently. Ten young sighted adults (with an average age of 24.1 [plus or minus] 4.6 years), 10 older sighted adults (with an average age of 73 [plus or minus] 5.5…
Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
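The equal-weight simple moving average mentioned above, the simplest member of the ARIMA(0,1,1)-based estimator class, can be sketched in a few lines (a generic illustration, not the estimator code used in the study):

```python
def simple_moving_average(series, window):
    """Equal-weight moving average over a sliding window: each of the
    last `window` observations receives weight 1/window."""
    if window < 1 or window > len(series):
        raise ValueError("window must be in [1, len(series)]")
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]
```

For annual inventory panels, each window would hold one panel's worth of plot measurements; unequal-weight members of the class simply replace the 1/window weights.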
NASA Astrophysics Data System (ADS)
Wu, Yu-Jie; Lin, Guan-Wei
2017-04-01
Since 1999, Taiwan has experienced a rapid rise in the number of landslides, and the number even reached a peak after the 2009 Typhoon Morakot. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish in continuous seismic records due to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach and two diagnostic functions, a moving average and a scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further test the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are located. The detection results are consistent with those of visual inspection, so the algorithm can be used to automatically monitor landslide-quakes.
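A minimal STA/LTA detector can be sketched as follows; the window lengths and threshold are illustrative placeholders, not the values calibrated in the study:

```python
def sta_lta(signal, sta_len, lta_len):
    """Short-term average / long-term average ratio over signal energy.
    The ratio spikes when energy rises faster than the long-term
    background -- the classic trigger for emergent events."""
    energy = [x * x for x in signal]
    ratios = []
    for i in range(lta_len, len(signal)):
        sta = sum(energy[i - sta_len:i]) / sta_len
        lta = sum(energy[i - lta_len:i]) / lta_len
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def detect(signal, sta_len=5, lta_len=50, threshold=4.0):
    """Sample indices where STA/LTA crosses the detection threshold --
    candidate event onsets."""
    ratios = sta_lta(signal, sta_len, lta_len)
    return [i + lta_len for i, r in enumerate(ratios) if r >= threshold]
```

For landslide-quakes, which lack impulsive P/S onsets, this trigger would be combined with the moving-average and scintillation-index diagnostics described above.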
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Salloum, Maher; Lee, Jina
2017-07-10
KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.
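As a toy illustration of the autoregressive half of ARMA fitting (KARMA4 is a C++ library; this Python sketch does not reproduce its API, and it ignores moving-average terms and observation error):

```python
def fit_ar1(series):
    """Least-squares fit of an AR(1) model x[t] = c + phi * x[t-1] + e[t].
    Returns (intercept c, coefficient phi)."""
    x_prev, x_next = series[:-1], series[1:]
    n = len(x_prev)
    mx = sum(x_prev) / n
    my = sum(x_next) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x_prev, x_next))
    var = sum((a - mx) ** 2 for a in x_prev)
    phi = cov / var
    return my - phi * mx, phi

def forecast_ar1(series, steps, c, phi):
    """Iterated point forecasts from the fitted AR(1)."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out
```

A full ARMA fit additionally models the error terms; libraries such as KARMA4 handle that, plus process and observation noise, internally.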
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burlaga, L. F.; Ness, N. F., E-mail: lburlagahsp@verizon.net, E-mail: nfnudel@yahoo.com
2012-04-10
We examine the relationships between the magnetic field and the radial velocity component V_R observed in the heliosheath by instruments on Voyager 1 (V1). No increase in the magnetic field strength B was observed in a region where V_R decreased linearly from 70 km s⁻¹ to 0 km s⁻¹ as plasma moved outward past V1. An unusually broad transition from positive to negative polarity was observed during a ≈26 day interval when the heliospheric current sheet (HCS) moved below the latitude of V1 and the speed of V1 was comparable to the radial speed of the heliosheath flow. When V1 moved through a region where V_R ≈ 0 (the 'stagnation region'), B increased linearly with time by a factor of two, and the average of B was 0.14 nT. Nothing comparable to this was observed previously. The magnetic polarity was negative throughout the stagnation region for ≈580 days until 2011 DOY 235, indicating that the HCS was below the latitude of V1. The average passage times of the magnetic holes and proton boundary layers were the same during 2009 and 2011, because the plasma moved past V1 during 2009 at the same speed that V1 moved through the stagnation region during 2011. The microscale fluctuations of B in the stagnation region during 2011 are qualitatively the same as those observed in the heliosheath during 2009. These results suggest that the stagnation region is a part of the heliosheath, rather than a 'transition region' associated with the heliopause.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, W; Yang, J; Beadle, B
Purpose: Endoscopic examinations are routine procedures for head-and-neck cancer patients. Our goal is to develop a method to map the recorded video to CT, providing valuable information for radiotherapy treatment planning and toxicity analysis. Methods: We map video frames to CT via virtual endoscopic images rendered at the real endoscope's CT-space coordinates. We developed two complementary methods to find these coordinates by maximizing real-to-virtual image similarity: (1) Endoscope Tracking: moves the virtual endoscope frame-by-frame until the desired frame is reached. Utilizes prior knowledge of endoscope coordinates, but sensitive to local optima. (2) Location Search: moves the virtual endoscope along possible paths through the volume to find the desired frame. More robust, but more computationally expensive. We tested these methods on clay phantoms with embedded markers for point mapping and protruding bolus material for contour mapping, and we assessed them qualitatively on three patient exams. For mapped points we calculated 3D-distance errors, and for mapped contours we calculated mean absolute distances (MAD) from CT contours. Results: In phantoms, Endoscope Tracking had average point error = 0.66±0.50 cm and average bolus MAD = 0.74±0.37 cm for the first 80% of each video. After that the virtual endoscope got lost, increasing these values to 4.73±1.69 cm and 4.06±0.30 cm. Location Search had point error = 0.49±0.44 cm and MAD = 0.53±0.28 cm. Point errors were larger where the endoscope viewed the surface at shallow angles (<10 degrees): 1.38±0.62 cm and 1.22±0.69 cm for Endoscope Tracking and Location Search, respectively. In patients, Endoscope Tracking did not make it past the nasal cavity. However, Location Search found coordinates near the correct location for 70% of test frames. Its performance was best near the epiglottis and in the nasal cavity. Conclusion: Location Search is a robust and accurate technique to map endoscopic video to CT. Endoscope Tracking is sensitive to erratic camera motion and local optima, but could be used in conjunction with anchor points found using Location Search.
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest values of Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed using the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami. This affected the total inventory cost: the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
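The Silver-Meal heuristic itself can be sketched as follows; the demand figures and costs below are illustrative, not the study's data:

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Silver-Meal lot-sizing heuristic: at each order point, extend
    the lot to cover future periods while the average cost per period
    (setup plus carrying charges) keeps falling, then order that lot.
    Returns the order quantity placed in each period."""
    n = len(demand)
    orders = [0] * n
    t = 0
    while t < n:
        T = 1
        carry = 0.0
        avg = setup_cost  # average cost per period when the lot covers 1 period
        while t + T < n:
            # carrying demand[t+T] for T periods costs h * T * d
            carry += holding_cost * T * demand[t + T]
            new_avg = (setup_cost + carry) / (T + 1)
            if new_avg > avg:
                break  # average cost started rising: stop extending the lot
            avg = new_avg
            T += 1
        orders[t] = sum(demand[t:t + T])
        t += T
    return orders
```

Fewer, larger orders trade extra holding cost against fewer setups, which is why the heuristic reduced order frequency relative to the company's existing policy.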
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates of the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap ≥ 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8±2.9 and Rmax = 153.8±24.7, equivalent to Rm = 8.8±2.8 and RM = 159±5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, about 25 of which will have Ap ≥ 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
2013-01-01
Background Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. 
Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
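A generic (non-risk-adjusted) EWMA control chart can be sketched as below; the paper's RA-EWMA additionally plugs in expected mortalities from a random-coefficient logistic regression model, which is not reproduced here:

```python
def ewma_chart(series, lam=0.2, sigma=None, nsigma=3.0):
    """Exponentially weighted moving average control chart.
    Returns (z, upper, lower): the EWMA statistic and nsigma control
    limits around the series mean, using the standard time-varying
    limit half-width sigma * sqrt(lam/(2-lam) * (1-(1-lam)^(2t)))."""
    n = len(series)
    mu = sum(series) / n
    if sigma is None:
        sigma = (sum((x - mu) ** 2 for x in series) / (n - 1)) ** 0.5
    z, upper, lower = [], [], []
    prev = mu  # the EWMA statistic starts at the centre line
    for t in range(1, n + 1):
        prev = lam * series[t - 1] + (1 - lam) * prev
        half = nsigma * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        z.append(prev)
        upper.append(mu + half)
        lower.append(mu - half)
    return z, upper, lower
```

Because the chart assumes independent observations, autocorrelated or seasonal mortality series such as those above will trip the limits spuriously, which is the false-positive signalling the paper documents.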
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ono, Tomohiro; Miyabe, Yuki, E-mail: miyabe@kuhp.kyoto-u.ac.jp; Yamada, Masahiro
Purpose: The Vero4DRT system has the capability for dynamic tumor-tracking (DTT) stereotactic irradiation using a unique gimbaled x-ray head. The purposes of this study were to develop DTT conformal arc irradiation and to estimate its geometric and dosimetric accuracy. Methods: The gimbaled x-ray head, supported on an O-ring gantry, was moved in the pan and tilt directions during O-ring gantry rotation. To evaluate the mechanical accuracy, the gimbaled x-ray head was moved during gantry rotation according to input command signals without target tracking, and a machine log analysis was performed. The difference between a command and a measured position was calculated as the mechanical error. To evaluate beam-positioning accuracy, a moving phantom, which had a steel ball fixed at the center, was driven based on a sinusoidal wave (amplitude [A]: 20 mm, time period [T]: 4 s), a patient breathing motion with a regular pattern (A: 16 mm, average T: 4.5 s), and an irregular pattern (A: 7.2–23.0 mm, T: 2.3–10.0 s), and irradiated with DTT during gantry rotation. The beam-positioning error was evaluated as the difference between the centroid position of the irradiated field and the steel ball on images from an electronic portal imaging device. For dosimetric accuracy, dose distributions in static and moving targets were evaluated with DTT conformal arc irradiation. Results: The root mean squares (RMSs) of the mechanical error were up to 0.11 mm for pan motion and up to 0.14 mm for tilt motion. The RMSs of the beam-positioning error were within 0.23 mm for each pattern. The dose distribution in a moving phantom with tracking arc irradiation was in good agreement with that in static conditions. Conclusions: The gimbal positional accuracy was not degraded by gantry motion. As in the case of a fixed port, the Vero4DRT system showed adequate accuracy of DTT conformal arc irradiation.
The Influence of Sleep Disordered Breathing on Weight Loss in a National Weight Management Program
Janney, Carol A.; Kilbourne, Amy M.; Germain, Anne; Lai, Zongshan; Hoerster, Katherine D.; Goodrich, David E.; Klingaman, Elizabeth A.; Verchinina, Lilia; Richardson, Caroline R.
2016-01-01
Study Objective: To investigate the influence of sleep disordered breathing (SDB) on weight loss in overweight/obese veterans enrolled in MOVE!, a nationally implemented behavioral weight management program delivered by the National Veterans Health Administration health system. Methods: This observational study evaluated weight loss by SDB status in overweight/obese veterans enrolled in MOVE! from May 2008–February 2012 who had at least two MOVE! visits, baseline weight, and at least one follow-up weight (n = 84,770). SDB was defined by International Classification of Diseases, Ninth Revision, Clinical Modification codes. Primary outcome was weight change (lb) from MOVE! enrollment to 6- and 12-mo assessments. Weight change over time was modeled with repeated-measures analyses. Results: SDB was diagnosed in one-third of the cohort (n = 28,269). At baseline, veterans with SDB weighed 29 [48] lb more than those without SDB (P < 0.001). On average, veterans attended eight MOVE! visits. Weight loss patterns over time were statistically different between veterans with and without SDB (P < 0.001); veterans with SDB lost less weight (−2.5 [0.1] lb) compared to those without SDB (−3.3 [0.1] lb; P = 0.001) at 6 months. At 12 mo, veterans with SDB continued to lose weight whereas veterans without SDB started to re-gain weight. Conclusions: Veterans with sleep disordered breathing (SDB) had significantly less weight loss over time than veterans without SDB. SDB should be considered in the development and implementation of weight loss programs due to its high prevalence and negative effect on health. Citation: Janney CA, Kilbourne AM, Germain A, Lai Z, Hoerster KD, Goodrich DE, Klingaman EA, Verchinina L, Richardson CR. The influence of sleep disordered breathing on weight loss in a national weight management program. SLEEP 2016;39(1):59–65. PMID:26350475
Guevara, M; Tena, C; Soret, A; Serradell, K; Guzmán, D; Retama, A; Camacho, P; Jaimes-Palomera, M; Mediavilla, A
2017-04-15
This article describes the High-Elective Resolution Modelling Emission System for Mexico (HERMES-Mex) model, an emission processing tool developed to transform the official Mexico City Metropolitan Area (MCMA) emission inventory into hourly, gridded (up to 1 km²) and speciated emissions used to drive mesoscale air quality simulations with the Community Multi-scale Air Quality (CMAQ) model. The methods and ancillary information used for the spatial and temporal disaggregation and speciation of the emissions are presented and discussed. The resulting emission system is evaluated, and a case study on CO, NO₂, O₃, VOC and PM₂.₅ concentrations is conducted to demonstrate its applicability. Moreover, resulting traffic emissions from the Mobile Source Emission Factor Model for Mexico (MOBILE6.2-Mexico) and the MOtor Vehicle Emission Simulator for Mexico (MOVES-Mexico) models are integrated in the tool to assess and compare their performance. Modelled NOx and VOC total emissions are reduced by 37% and 26% in the MCMA when replacing MOBILE6.2-Mexico with MOVES-Mexico traffic emissions. In terms of air quality, the system composed of the Weather Research and Forecasting model (WRF) coupled with the HERMES-Mex and CMAQ models properly reproduces the pollutant levels and patterns measured in the MCMA. The system's performance clearly improves in urban stations with a strong influence of traffic sources when applying MOVES-Mexico emissions. Despite the reduced estimates of modelled precursor emissions, O₃ peak averages increase in the MCMA core urban area (by up to 30 ppb) when using MOVES-Mexico mobile emissions due to its VOC-limited regime, while concentrations in the surrounding suburban/rural areas decrease or increase depending on the meteorological conditions of the day. The results obtained suggest that the HERMES-Mex model can be used to provide model-ready emissions for air quality modelling in the MCMA. Copyright © 2017 Elsevier B.V. All rights reserved.
Canadian Rural-urban Differences in End-of-life Care Setting Transitions
Wilson, Donna M.; Thomas, Roger; Burns, Katharina Kovacs; Hewitt, Jessica A.; Jane, Osei-Waree; Sandra, Robertson
2012-01-01
Few studies have focused on the care setting transitions that occur in the last year of life. A three part mixed-methods study was conducted to gain an understanding of the number and implications or impact of care setting transitions in the last year of life for rural Canadians. Provincial health services utilization data, national online survey data, and local qualitative interview data were analyzed to gain general and specific information for consideration. Rural Albertans had significantly more healthcare setting transitions than urbanites in the last year of life (M=4.2 vs 3.3). Online family respondents reported 8 moves on average occurred for family members in the last year of life. These moves were most often identified (65%) on a likert-type scale as “very difficult,” with the free text information revealing these trips were often emotionally painful for themselves and physically painful for their ill family member. Eleven informants were then interviewed until data saturation, with constant-comparative data analysis conducted for a more in-depth understanding of rural transitions. Moving from place to place for needed care in the last year of life was identified as common and concerning for rural people and their families, with three data themes developing: (a) needed care in the last year of life is scattered across many places, (b) travelling is very difficult for terminally-ill persons and their caregivers, and (c) local rural services are minimal. These findings indicate planning is needed to avoid unnecessary end-of-life care setting transitions and to make needed moves for essential services in the last year of life less costly, stressful, and socially disruptive for rural people and their families. PMID:22980372
Intelligent transportation systems infrastructure initiative
DOT National Transportation Integrated Search
1997-01-01
The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Yang, Xiao-Feng; Wang, Cai-Feng; Li, Hua-Bing
2009-07-01
The rolling massage is one of the most important manipulations in Chinese massage, which is expected to relieve many diseases. Here, the effect of the rolling massage on a pair of particles moving in blood vessels under rolling massage manipulation is studied by lattice Boltzmann simulation. The simulated results show that the motion of each particle is considerably modified by the rolling massage, and that it depends on the relative rolling velocity, the rolling depth, and the distance between the particle position and the rolling position. Both particles' translational average velocities increase almost linearly as the rolling velocity increases, and obey the same law. The increment of the average relative angular velocity for the leading particle is smaller than that of the trailing one. The result is helpful for understanding the mechanism of the massage and for further developing the rolling techniques.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
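The brute-force enumeration used as a baseline can be illustrated with a pure-AR stand-in: fit AR(p) for each candidate order by least squares and return the order with minimum AIC. This is a deliberate simplification of the article's full ARMA(p,q) search with Kalman-filter likelihoods and stability/invertibility constraints:

```python
import math

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model with intercept via the
    normal equations; returns (coefficients, residual variance)."""
    n = len(series) - p
    X = [[1.0] + [series[t - j] for j in range(1, p + 1)]
         for t in range(p, len(series))]
    y = series[p:]
    k = p + 1
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    # Gaussian elimination with partial pivoting on the normal equations
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    resid = [y[i] - sum(coef[c] * X[i][c] for c in range(k)) for i in range(n)]
    return coef, sum(e * e for e in resid) / n

def select_order(series, max_p):
    """Brute-force order selection: fit AR(1)..AR(max_p) and return the
    order minimizing AIC = n * log(sigma^2) + 2 * (p + 1)."""
    best_aic, best_p = None, None
    for p in range(1, max_p + 1):
        _, s2 = fit_ar(series, p)
        n = len(series) - p
        aic = n * math.log(s2) + 2 * (p + 1)
        if best_aic is None or aic < best_aic:
            best_aic, best_p = aic, p
    return best_p
```

Adding moving-average terms makes the likelihood surface non-convex in both the integer orders and the real parameters, which is what motivates the MINLP solvers compared in the article.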
Habeger, Jr., Charles C.; LaFond, Emmanuel F.; Brodeur, Pierre; Gerhardstein, Joseph P.
2002-01-01
The present invention provides a system and method to reduce motion-induced noise in the detection of ultrasonic signals in a moving sheet or body of material. An ultrasonic signal is generated in a sheet of material and a detection laser beam is moved along the surface of the material. By moving the detection laser in the same direction as the direction of movement of the sheet of material, the amount of noise induced in the detection of the ultrasonic signal is reduced. The scanner is moved at approximately the same speed as the moving material. The system and method may be used for many applications, such as in a papermaking or steelmaking process. The detection laser may be directed by a scanner. The movement of the scanner is synchronized with the anticipated arrival of the ultrasonic signal under the scanner. A photodetector may be used to determine when an ultrasonic pulse has been directed to the moving sheet of material so that the scanner may be synchronized with the anticipated arrival of the ultrasonic signal.
Systematic strategies for the third industrial accident prevention plan in Korea.
Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung
2012-01-01
To minimize industrial accidents, it is critical to evaluate a firm's priorities for prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to ensure the objectivity of the questionnaire results. This paper used the regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model, and a proposed analytical function method (PAFM) to analyze trends in accident data in order to produce accurate predictions. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided to construct safety management for the third industrial accident prevention plan, along with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.
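Holt's double exponential smoothing (DESM), one of the trend methods compared above, can be sketched in its generic textbook form (not necessarily the paper's exact parameterization):

```python
def double_exponential_smoothing(series, alpha, beta):
    """Holt's double exponential smoothing: maintain a level and a
    trend component, and emit one-step-ahead forecasts.  alpha smooths
    the level, beta smooths the trend."""
    level = series[0]
    trend = series[1] - series[0]  # initialize trend from the first step
    forecasts = [series[0]]
    for x in series[1:]:
        forecasts.append(level + trend)          # forecast before seeing x
        new_level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts
```

Single exponential smoothing drops the trend term, so it lags any steadily rising or falling accident-rate series; the double form tracks such trends exactly.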
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model has been chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA(1,0,0)×(0,1,1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, forecasts of the monthly maximum and minimum temperature patterns of India for the next 3 years are produced with the selected model.
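The seasonal part of such a model, the D = 1, s = 12 differencing step, can be illustrated directly (a generic sketch, not the paper's code):

```python
def seasonal_difference(series, period=12):
    """Seasonal differencing y[t] = x[t] - x[t-period]: the D = 1,
    s = 12 operation in a monthly SARIMA model.  It removes a stable
    annual cycle, leaving the remainder for the ARMA terms to model."""
    return [series[t] - series[t - period]
            for t in range(period, len(series))]
```

A purely periodic series differences to zero, which is why a stable annual temperature cycle disappears after this step.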
Huang, Chiung-Shing; Harikrishnan, Pandurangan; Liao, Yu-Fang; Ko, Ellen W C; Liou, Eric J W; Chen, Philip K T
2007-05-01
To evaluate the changes in maxillary position after maxillary distraction osteogenesis in six growing children with cleft lip and palate. Retrospective, longitudinal study on maxillary changes at A point, anterior nasal spine, posterior nasal spine, central incisor, and first molar. The University Hospital Craniofacial Center. Cephalometric radiographs were used to measure the maxillary position immediately after distraction, at 6 months, and more than 1 year after distraction. After maxillary distraction with a rigid external distraction device, the maxilla (A point) on average moved forward 9.7 mm and downward 3.5 mm immediately after distraction, moved backward 0.9 mm and upward 2.0 mm after 6 months postoperatively, and then moved further backward 2.3 mm and downward 6.8 mm after more than 1 year from the predistraction position. In most cases, maxilla moved forward at distraction and started to move backward until 1 year after distraction, but remained forward, as compared with predistraction position. Maxilla also moved downward during distraction and upward in 6 months, but started descending in 1 year. There also was no further forward growth of the maxilla after distraction in growing children with clefts.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-12-11
Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, comprising three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment; the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move.
Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images
NASA Astrophysics Data System (ADS)
Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian
2015-12-01
With the broad attention of countries to sea transportation and trade safety, the requirements for the efficiency and accuracy of moving ship tracking are becoming higher. Therefore, a systematic design for on-board moving ship tracking based on FPGA is proposed, which uses the Adaptive Inter Frame Difference (AIFD) method to track ships of different speeds. Because the Frame Difference (FD) method is simple but computationally heavy, it is well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship looks very small (depicted by only dozens of pixels) and moves slowly; with invariant FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the calculation is highly redundant. We therefore use an adaptation of FD based on adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy of moving ship tracking and can meet the requirement of real-time tracking at high image resolution.
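The core frame-difference step can be sketched in software as below; the paper's exact AIFD key-frame criterion is not given here, so `choose_interval` is a hypothetical illustration of adapting the frame interval to target speed:

```python
def frame_difference(prev, curr, threshold):
    """Per-pixel frame difference: mark pixels whose absolute change
    between two frames exceeds a threshold.  Frames are 2-D lists of
    grey levels; an FPGA pipeline would evaluate all pixels in parallel."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def choose_interval(speed_px_per_frame, min_displacement=1.0):
    """Hypothetical adaptive frame interval: a slow target (e.g. a ship
    covering only dozens of pixels) needs a longer gap between compared
    frames so its displacement exceeds the detection floor."""
    return max(1, round(min_displacement / max(speed_px_per_frame, 1e-6)))
```

Differencing widely spaced key frames for slow targets is what removes the redundant computation that fixed-interval FD wastes on near-identical frames.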
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Monayem, A. K. M.; Mazumder, H.
2015-03-05
A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of computing on domains of complex geometry with moving boundaries or devices that can also have a complex geometry, without danger of the mesh becoming unsuitable due to continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique for performing simulations involving moving boundaries in a three-dimensional domain.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Complex Engineered Systems: A New Paradigm
NASA Astrophysics Data System (ADS)
Minai, Ali A.; Braha, Dan; Bar-Yam, Yaneer
Human history is often seen as an inexorable march towards greater complexity — in ideas, artifacts, social, political and economic systems, technology, and in the structure of life itself. While we do not have detailed knowledge of ancient times, it is reasonable to conclude that the average resident of New York City today faces a world of much greater complexity than the average denizen of Carthage or Tikal. A careful consideration of this change, however, suggests that most of it has occurred recently, and has been driven primarily by the emergence of technology as a force in human life. In the 4000 years separating the Indus Valley Civilization from 18th century Europe, human transportation evolved from the bullock cart to the hansom, and the methods of communication used by George Washington did not differ significantly from those used by Alexander or Rameses. The world has moved radically towards greater complexity in the last two centuries. We have moved from buggies and letter couriers to airplanes and the Internet — an increase in capacity, and through its diversity also in complexity, orders of magnitude greater than that accumulated through the rest of human history. In addition to creating iconic artifacts — the airplane, the car, the computer, the television, etc. — this change has had a profound effect on the scope of experience by creating massive, connected and multi-level systems — traffic networks, power grids, markets, multinational corporations — that defy analytical understanding and seem to have a life of their own. This is where complexity truly enters our lives.
Method and apparatus for filtering gas with a moving granular filter bed
Brown, Robert C.; Wistrom, Corey; Smeenk, Jerod L.
2007-12-18
A method and apparatus for filtering gas (58) with a moving granular filter bed (48) involves moving a mass of particulate filter material (48) downwardly through a filter compartment (35); tangentially introducing gas into the compartment (54) to move in a cyclonic path downwardly around the moving filter material (48); diverting the cyclonic path (58) to a vertical path (62) to cause the gas to directly interface with the particulate filter material (48); thence causing the gas to move upwardly through the filter material (48) through a screened partition (24, 32) into a static upper compartment (22) of a filter compartment for exodus (56) of the gas which has passed through the particulate filter material (48).
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded as time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently using Welch's periodogram method with moving windows or with accumulative windows and a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. The approach has also been tested on data from a few patients, imaged in the prostate region, and the results are encouraging.
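The displacement measure described above — the square root of the average power over the excitation band — can be illustrated with a toy single-bin DFT periodogram rather than Welch's method; the sampling rate, band edges, and normalization below are illustrative assumptions:

```python
import cmath
import math

def band_rms_displacement(x, fs, f_lo, f_hi):
    """Square root of the average one-sided power over [f_lo, f_hi] Hz,
    used here as a displacement-amplitude measure (toy DFT periodogram)."""
    n = len(x)
    powers = []
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            # Single DFT bin of the time sequence, one-sided power normalization.
            X = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            powers.append((2.0 * abs(X) ** 2) / (n * n))
    return math.sqrt(sum(powers) / len(powers))
```

The key property the abstract relies on holds here: doubling the motion amplitude doubles the returned measure, so relative strain contrast is preserved.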
Proceedings of the Annual Conference on Manual Control (18th) Held at Dayton, Ohio on 8-10 June 1982
1983-01-01
frequency of the disturbance the probability to cross the borderline becomes larger, and corrective action (moving average value further away ... from the ... pupillometer. The prototypical data was the average of 10 records from 5 normal subjects who showed similar responses. The different amplitudes of light ... following orders: touch, position, temperature, and pain. Our subjects sometimes reported numbness in the fingertips, dulled pinprick sensations
Eye tracking and gating system for proton therapy of orbital tumors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, Dongho; Yoo, Seung Hoon; Moon, Sung Ho
2012-07-15
Purpose: A new motion-based gated proton therapy for the treatment of orbital tumors using a real-time eye-tracking system was designed and evaluated. Methods: We developed our system by image-pattern matching, using a normalized cross-correlation technique with LabVIEW 8.6 and Vision Assistant 8.6 (National Instruments, Austin, TX). To measure the pixel spacing of an image consistently, four different calibration modes were suggested and used: point detection, edge detection, line measurement, and manual measurement. After these methods were applied to proton therapy, gating was performed and radiation dose distributions were evaluated. Results: Moving phantom verification measurements resulted in errors of less than 0.1 mm for the given ranges of translation. Dosimetric evaluation of the beam-gating system versus nongated treatment delivery with a moving phantom shows that while there was only 0.83 mm growth in lateral penumbra for gated radiotherapy, there was 4.95 mm growth in lateral penumbra for nongated exposure. The analysis of clinical results suggests that average eye movement depends distinctly on the patient, measuring 0.44 mm, 0.45 mm, and 0.86 mm for three patients, respectively. Conclusions: The developed automatic eye-tracking based beam-gating system enabled us to perform high-precision proton radiotherapy of orbital tumors.
Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior
Bridwell, David A.; Cavanagh, James F.; Collins, Anne G. E.; Nunez, Michael D.; Srinivasan, Ramesh; Stober, Sebastian; Calhoun, Vince D.
2018-01-01
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single-trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understanding of the brain dynamics that contribute to moment-to-moment cognitive function. PMID:29632480
Multiview 3D sensing and analysis for high quality point cloud reconstruction
NASA Astrophysics Data System (ADS)
Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard
2018-04-01
Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
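Of the filters named above, Radius Outlier Removal is the simplest to sketch: a point survives only if enough neighbors lie within a given radius. This is a brute-force pure-Python illustration (real point-cloud pipelines use a k-d tree); the radius and neighbor count are illustrative assumptions:

```python
import math

def radius_outlier_removal(points, radius, min_neighbors):
    """Keep a 3-D point only if at least `min_neighbors` other points
    lie within `radius` of it (brute-force ROR sketch)."""
    kept = []
    for i, p in enumerate(points):
        n = sum(
            1
            for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        if n >= min_neighbors:
            kept.append(p)
    return kept
```

Isolated sensor-noise points far from the scanned surface fail the neighbor test and are dropped, which is why ROR is commonly the first stage of a point-cloud cleaning pipeline.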
Long-range correlation and market segmentation in bond market
NASA Astrophysics Data System (ADS)
Wang, Zhongxing; Yan, Yan; Chen, Xiaosong
2017-09-01
This paper investigates long-range auto-correlations and cross-correlations in the bond market. Based on the Detrended Moving Average (DMA) method, the empirical results present clear evidence of long-range persistence at the one-year scale. The degree of long-range correlation varies with maturity, tending upward with a peak at short maturities. These findings confirm the expectations of the fractal market hypothesis (FMH). Furthermore, we have developed a complex-network-based method to study the long-range cross-correlation structure and applied it to our data, finding a clear pattern of market segmentation in the long run. We also examined the nature of long-range correlation in the sub-periods 2007-2012 and 2011-2016. The results show that long-range auto-correlations have been weakening in recent years while long-range cross-correlations have been strengthening.
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
Multifractal Fluctuations of Jiuzhaigou Tourists Before and after Wenchuan Earthquake
NASA Astrophysics Data System (ADS)
Shi, Kai; Li, Wen-Yong; Liu, Chun-Qiong; Huang, Zheng-Wen
2013-03-01
In this work, multifractal methods are used to characterize the temporal fluctuations of daily Jiuzhai Valley domestic and foreign tourist numbers before and after the Wenchuan earthquake in China, using the multifractal detrending moving average method (MF-DMA). The results show that Jiuzhai Valley tourism markets are characterized by long-term memory and a multifractal nature. Moreover, the major sources of multifractality are studied. Based on the concept of a sliding window, the time evolution of the multifractal behavior of domestic and foreign tourist flows is analyzed, and the influence of the Wenchuan earthquake on the dynamics of the Jiuzhai Valley tourism system is evaluated quantitatively. The study indicates that, although the Jiuzhai Valley tourism system was seriously affected by the Wenchuan earthquake, its inherent dynamical mechanism was not fundamentally changed in the long run, and the system was able to return to its previous state in the short term.
Processing data base information having nonwhite noise
Gross, Kenneth C.; Morreale, Patricia
1995-01-01
A method and system for processing a set of data from an industrial process and/or a sensor. The method and system can include processing data from either real or calculated data related to an industrial process variable. One of the data sets can be an artificial signal data set generated by an autoregressive moving average technique. After obtaining two data sets associated with one physical variable, a difference function data set is obtained by determining the arithmetic difference between the two pairs of data sets over time. A frequency domain transformation is made of the difference function data set to obtain Fourier modes describing a composite function data set. A residual function data set is obtained by subtracting the composite function data set from the difference function data set and the residual function data set (free of nonwhite noise) is analyzed by a statistical probability ratio test to provide a validated data base.
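Two steps of the patented procedure are easy to sketch: generating the artificial signal data set with an autoregressive moving average model, and forming the difference function between a pair of data sets. The ARMA(1,1) coefficients and noise level below are illustrative assumptions, not values from the patent:

```python
import random

def arma_signal(n, phi=0.7, theta=0.3, sigma=1.0, seed=42):
    """Artificial ARMA(1,1) data set: x[t] = phi*x[t-1] + e[t] + theta*e[t-1]."""
    rng = random.Random(seed)
    x, e_prev, x_prev = [], 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x_t = phi * x_prev + e + theta * e_prev
        x.append(x_t)
        x_prev, e_prev = x_t, e
    return x

def difference_function(real, artificial):
    """Pointwise arithmetic difference between two paired data sets over time."""
    return [a - b for a, b in zip(real, artificial)]
```

In the patented flow, the difference function would then be Fourier-transformed, the composite (nonwhite) modes subtracted out, and the white residual fed to a sequential probability ratio test.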
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
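The first-order differencing plus ARIMA step can be sketched as a tiny ARIMA(1,1,0): difference the series once, fit an AR(1) coefficient to the differences by least squares, and integrate the forecasts back. This is a minimal stand-in for a full ARIMA fit (no MA term, no change-point segmentation), with the model order an illustrative assumption:

```python
def arima_110_forecast(series, steps=1):
    """One-step-recursive ARIMA(1,1,0) sketch: difference once, fit AR(1)
    on the differences by least squares, then integrate forecasts back."""
    d = [b - a for a, b in zip(series, series[1:])]   # first-order differences
    num = sum(u * v for u, v in zip(d, d[1:]))
    den = sum(u * u for u in d[:-1])
    phi = num / den if den else 0.0                   # AR(1) coefficient
    last, diff = series[-1], d[-1]
    out = []
    for _ in range(steps):
        diff = phi * diff          # forecast the next difference
        last += diff               # integrate back to the level
        out.append(last)
    return out
```

For a series with a steady trend, the fitted coefficient is 1 and the forecast simply extends the trend, which matches the intuition behind differencing.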
Surface Current Density Mapping for Identification of Gastric Slow Wave Propagation
Bradshaw, L. A.; Cheng, L. K.; Richards, W. O.; Pullan, A. J.
2009-01-01
The magnetogastrogram records clinically relevant parameters of the electrical slow wave of the stomach noninvasively. Besides slow wave frequency, gastric slow wave propagation velocity is a potentially useful clinical indicator of the state of health of gastric tissue, but it is a difficult parameter to determine from noninvasive bioelectric or biomagnetic measurements. We present a method for computing the surface current density (SCD) from multichannel magnetogastrogram recordings that allows computation of the propagation velocity of the gastric slow wave. A moving dipole source model with hypothetical as well as realistic biomagnetometer parameters demonstrates that while a relatively sparse array of magnetometer sensors is sufficient to compute a single average propagation velocity, more detailed information about spatial variations in propagation velocity requires higher density magnetometer arrays. Finally, the method is validated with simultaneous MGG and serosal EMG measurements in a porcine subject. PMID:19403355
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing component uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene component tends to identify the underlying nonlinear system with expressions simpler than classical monolithic GP, and finally the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model yields a parsimonious solution of notable practical value. In addition, the approach allows the user to bring human insight into the problem, examine the evolved models, and pick the best-performing programs out for further analysis.
Method and apparatus for a combination moving bed thermal treatment reactor and moving bed filter
Badger, Phillip C.; Dunn, Jr., Kenneth J.
2015-09-01
A moving bed gasification/thermal treatment reactor includes a geometry in which moving bed reactor particles serve as both a moving bed filter and a heat carrier to provide thermal energy for thermal treatment reactions, such that the moving bed filter and the heat carrier are one and the same to remove solid particulates or droplets generated by thermal treatment processes or injected into the moving bed filter from other sources.
Real-time digital signal recovery for a multi-pole low-pass transfer function system.
Lee, Jhinhwan
2017-08-01
In order to solve the problems of waveform distortion and signal delay caused by many physical and electrical systems with multi-pole linear low-pass transfer characteristics, a simple digital-signal-processing (DSP)-based method for real-time recovery of the original source waveform from the distorted output waveform is proposed. A mathematical analysis of the convolution-kernel representation of the single-pole low-pass transfer function shows that the original source waveform can be accurately recovered in real time using a particular moving average algorithm applied to the input stream of the distorted waveform, which can also significantly reduce the overall delay time constant. The method is generalized to multi-pole low-pass systems and has noise characteristics that are the inverse of the low-pass filter characteristics. It can be applied to most sensors and amplifiers operating close to their frequency response limits to improve the overall performance of data acquisition systems and digital feedback control systems.
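For the single-pole case, the real-time inversion amounts to a two-tap moving combination of the current and previous output samples. The discrete filter form and smoothing coefficient below are illustrative assumptions (a standard single-pole IIR), not the paper's exact formulation:

```python
def lowpass(x, a):
    """Single-pole IIR low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1]), 0 < a <= 1."""
    y, prev = [], 0.0
    for v in x:
        prev = prev + a * (v - prev)
        y.append(prev)
    return y

def recover(y, a):
    """Invert the single-pole filter causally, sample by sample:
    x[n] = y[n-1] + (y[n] - y[n-1]) / a."""
    x, prev = [], 0.0
    for v in y:
        x.append(prev + (v - prev) / a)
        prev = v
    return x
```

The inversion is exact for noiseless data, but the `1/a` gain on sample-to-sample changes amplifies high-frequency noise, which is the trade-off the abstract notes about the method's noise characteristics.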
NASA Astrophysics Data System (ADS)
Zhang, B.; Yu, S.
2018-03-01
In this paper, a beam structure of composite material on elastic foundation supports, carrying moving sinusoidal wave loads, is established as the sensor model. The inverse Finite Element Method (iFEM) is applied to reconstruct the moving wave loads, which are then compared with the true wave loads. The results show that iFEM is accurate and robust in determining the wave propagation, suggesting its suitability as a new wave-sensing method.
A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.
Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We revisit the approach of computing hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.
A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies
Nangia, Nishant; Johansen, Hans; Patankar, Neelesh A.; ...
2017-10-01
Here, we present a moving control volume (CV) approach to computing hydrodynamic forces and torques on complex geometries. The method requires surface and volumetric integrals over a simple and regular Cartesian box that moves with an arbitrary velocity to enclose the body at all times. The moving box is aligned with Cartesian grid faces, which makes the integral evaluation straightforward in an immersed boundary (IB) framework. Discontinuous and noisy derivatives of velocity and pressure at the fluid–structure interface are avoided and far-field (smooth) velocity and pressure information is used. We revisit the approach of computing hydrodynamic forces and torques through force/torque balance equations in a Lagrangian frame that some of us took in a prior work (Bhalla et al., 2013 [13]). We prove the equivalence of the two approaches for IB methods, thanks to the use of Peskin's delta functions. Both approaches are able to suppress spurious force oscillations and are in excellent agreement, as expected theoretically. Test cases ranging from Stokes to high Reynolds number regimes are considered. We discuss regridding issues for the moving CV method in an adaptive mesh refinement (AMR) context. The proposed moving CV method is not limited to a specific IB method and can also be used, for example, with embedded boundary methods.
Girls Thrive Emotionally, Boys Falter After Move to Better Neighborhood
... averaging 34 percent, compared to 50 percent for control group families. Mental illness is more prevalent among youth ... compared to 3.5 percent among boys in control group families who did not receive vouchers. Rates of ...
Rippling Dune Front in Herschel Crater on Mars
2011-11-17
A rippled dune front in Herschel Crater on Mars moved an average of about two meters (about two yards) between March 3, 2007, and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.
Rippling Dune Front in Herschel Crater on Mars
2011-11-17
A rippled dune front in Herschel Crater on Mars moved an average of about one meter (about one yard) between March 3, 2007, and December 1, 2010, as seen in one of two images from NASA's Mars Reconnaissance Orbiter.
Shifting Sand in Herschel Crater
2011-11-17
The eastern margin of a rippled dune in Herschel Crater on Mars moved an average distance of three meters (about three yards) between March 3, 2007, and December 1, 2010, in one of two images taken by NASA's Mars Reconnaissance Orbiter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
2015-06-15
Purpose: A method using four-dimensional (4D) PET/CT in the design of radiation treatment planning was proposed, and changes in target volume and radiation dose distribution relative to standard three-dimensional (3D) PET/CT were examined. Methods: A deformable target registration method was used in which the patient's whole respiration process was considered and the effect of respiratory motion was minimized when designing the radiotherapy plan. The gross tumor volume of a non-small-cell lung cancer was contoured on the 4D FDG-PET/CT and 3D PET/CT scans using two different techniques: manual contouring by an experienced radiation oncologist following a predetermined protocol, and an automatic technique using a constant threshold of standardized uptake value (SUV) greater than 2.5. The target volume and radiotherapy dose distribution between VOL3D and VOL4D were analyzed. Results: Over all phases, the average automatically and manually contoured GTV volumes were 18.61 cm3 (range, 16.39-22.03 cm3) and 31.29 cm3 (range, 30.11-35.55 cm3), respectively. The automatically and manually contoured volumes of the merged IGTV were 27.82 cm3 and 49.37 cm3, respectively. For the manual contours, compared to the 3D plan, the mean doses for the left, right, and total lung in the 4D plan decreased by an average of 21.55%, 15.17%, and 15.86%, respectively, and the maximum dose to the spinal cord decreased by an average of 2.35%. For the automatic contours, the mean doses for the left, right, and total lung decreased by an average of 23.48%, 16.84%, and 17.44%, respectively, and the maximum dose to the spinal cord decreased by an average of 1.68%. Conclusion: Compared with 3D PET/CT, 4D PET/CT may better define the extent of moving tumors and reduce the contoured tumor volume, thereby optimizing radiation treatment planning for lung tumors.
Computer simulation of concentrated solid solution strengthening
NASA Technical Reports Server (NTRS)
Kuo, C. T. K.; Arsenault, R. J.
1976-01-01
The interaction forces between a straight edge dislocation and a random array of solute atoms in a three-dimensional block were determined as the dislocation moved through the block. The yield stress at 0 K was obtained by determining the average maximum solute-dislocation interaction force encountered by the edge dislocation, and an expression relating the yield stress to the length of the dislocation and the solute concentration is provided. The magnitude of the solid solution strengthening due to solute atoms can be determined directly from the numerical results, provided the dislocation line length that moves as a unit is specified.
A comparison of four streamflow record extension techniques
Hirsch, Robert M.
1982-01-01
One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., ‘line of organic correlation,’ ‘reduced major axis,’ ‘unique solution,’ and ‘equivalence line.’ The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
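The MOVE.1 (line of organic correlation) estimator is simple enough to sketch: unlike regression, it scales by the full ratio of standard deviations, so the extended record reproduces the short record's variance. This is a minimal pure-Python illustration of the standard formula, not the paper's code:

```python
import math

def move1(x_short, y_short, x_long):
    """MOVE.1 record extension: y_hat = my + sign(r) * (sy/sx) * (x - mx),
    so the estimates preserve the mean and variance of the short record."""
    n = len(x_short)
    mx = sum(x_short) / n
    my = sum(y_short) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x_short) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y_short) / (n - 1))
    r = sum((a - mx) * (b - my) for a, b in zip(x_short, y_short))
    sign = 1.0 if r >= 0 else -1.0
    return [my + sign * (sy / sx) * (v - mx) for v in x_long]
```

Ordinary regression would multiply by `r * sy/sx` instead, shrinking the estimates toward the mean; that variance loss is exactly the deficiency the paper attributes to REG.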
A Comparison of Four Streamflow Record Extension Techniques
NASA Astrophysics Data System (ADS)
Hirsch, Robert M.
1982-08-01
One approach to developing time series of streamflow, which may be used for simulation and optimization studies of water resources development activities, is to extend an existing gage record in time by exploiting the interstation correlation between the station of interest and some nearby (long-term) base station. Four methods of extension are described, and their properties are explored. The methods are regression (REG), regression plus noise (RPN), and two new methods, maintenance of variance extension types 1 and 2 (MOVE.1, MOVE.2). MOVE.1 is equivalent to a method which is widely used in psychology, biometrics, and geomorphology and which has been called by various names, e.g., `line of organic correlation,' `reduced major axis,' `unique solution,' and `equivalence line.' The methods are examined for bias and standard error of estimate of moments and order statistics, and an empirical examination is made of the preservation of historic low-flow characteristics using 50-year-long monthly records from seven streams. The REG and RPN methods are shown to have serious deficiencies as record extension techniques. MOVE.2 is shown to be marginally better than MOVE.1, according to the various comparisons of bias and accuracy.
Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H
2015-01-01
Time to stabilization (TTS) is the time it takes an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists to calculate TTS. These methods can be described in terms of four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) the signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to sample rate, filter settings, and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Different TTS calculation methods are affected differently by sample rate, filter settings and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values. Copyright © 2014 Elsevier B.V. All rights reserved.
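One of the reviewed calculation families — a moving root-mean-square window on the vertical ground reaction force with a body-weight threshold — can be sketched as follows. The window length, tolerance, and stability definition ("stays within the threshold for the remainder of the trial") are illustrative assumptions, not any of the 18 specific published methods:

```python
import math

def time_to_stabilization(grf, fs, body_weight, window=0.25, tol=0.05):
    """TTS sketch: first time (s) at which a moving-RMS of the vertical GRF
    deviation from body weight stays within tol*body_weight for the rest
    of the trial.  Returns None if the signal never stabilizes."""
    w = max(1, int(window * fs))
    dev = [v - body_weight for v in grf]
    rms = [
        math.sqrt(sum(d * d for d in dev[i : i + w]) / w)
        for i in range(len(dev) - w + 1)
    ]
    limit = tol * body_weight
    for i in range(len(rms)):
        if all(r <= limit for r in rms[i:]):
            return i / fs
    return None
```

Note how the trial-length dependence the paper highlights falls out of this definition: truncating `grf` changes which tail of the RMS sequence must satisfy the threshold, and hence the TTS value.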
Monitoring Moving Queries inside a Safe Region
Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan
2014-01-01
With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
A stochastic post-processing method for solar irradiance forecasts derived from NWPs models
NASA Astrophysics Data System (ADS)
Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.
2010-09-01
Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times of up to a few days. Nevertheless, these models show low skill in forecasting the solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loading is usually assumed in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWPs. Particularly, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a set of one-month-long, three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement of the forecasting skill of the WRF model using the proposed post-processing method. Particularly, the relative improvement (in terms of RMSE) for the DNI during summer is about 20%. A similar value is obtained for the GHI during winter.
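The ARMAX correction applied to the NWP residuals can be approximated, in spirit, by a toy ARX(1) fit r_t = a·r_{t-1} + b·x_t + c solved by ordinary least squares, where x_t stands for an external variable such as the previous day's cloud fraction. This is a hedged stand-in (no moving-average term, hypothetical names), not the authors' model:

```python
def fit_arx1(resid, exog):
    """Least-squares fit of r_t = a*r_{t-1} + b*x_t + c to forecast
    residuals; a toy ARX(1) stand-in for a full ARMAX model."""
    rows = [[resid[t - 1], exog[t], 1.0] for t in range(1, len(resid))]
    y = resid[1:]
    # normal equations: (X^T X) beta = X^T y
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * v for r, v in zip(rows, y)) for i in range(3)]
    # solve the 3x3 system by Gauss-Jordan elimination with partial pivoting
    M = [XtX[i] + [Xty[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]  # [a, b, c]
```

The fitted model is then used to predict the residual of the next forecast and subtract it from the raw NWP output.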
NASA Technical Reports Server (NTRS)
Abraham, Arick Reed A.; Johnson, Kenneth L.; Nichols, Charles T.; Saulsberry, Regor L.; Waller, Jess M.
2012-01-01
Broadband modal acoustic emission (AE) data were acquired during intermittent load hold tensile test profiles on Toray T1000G carbon fiber-reinforced epoxy (C/Ep) single tow specimens. A novel trend seeking statistical method to determine the onset of significant AE was developed, resulting in more linear decreases in the Felicity ratio (FR) with load, potentially leading to more accurate failure prediction. The method developed uses an exponentially weighted moving average (EWMA) control chart. Comparison of the EWMA with previously used FR onset methods, namely the discrete (n), mean (n̄), normalized (n%) and normalized mean (n̄%) methods, revealed the EWMA method yields more consistently linear FR versus load relationships between specimens. Other findings include a correlation between AE data richness and FR linearity based on the FR methods discussed in this paper, and evidence of premature failure at lower than expected loads. Application of the EWMA method should be extended to other composite materials and, eventually, composite components such as composite overwrapped pressure vessels. Furthermore, future experiments should attempt to uncover the factors responsible for infant mortality in C/Ep strands.
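The EWMA control chart at the heart of the onset method updates z_i = λx_i + (1−λ)z_{i−1} and flags the first point that leaves time-varying control limits. The sketch below assumes a known in-control mean and standard deviation for the background AE level; λ and L are illustrative chart parameters, not values from the paper:

```python
def ewma_onset(values, mu0, sigma0, lam=0.2, L=3.0):
    """Return the index of the first sample whose EWMA statistic falls
    outside the control limits, or None if the series stays in control.
    mu0/sigma0 describe the in-control (background) level."""
    z = mu0
    for i, v in enumerate(values, start=1):
        z = lam * v + (1.0 - lam) * z                 # EWMA update
        # time-varying control-limit half-width for the EWMA statistic
        width = L * sigma0 * (lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * i))) ** 0.5
        if abs(z - mu0) > width:
            return i - 1
    return None
```

The exponential weighting accumulates evidence over consecutive samples, so a sustained rise in AE activity is flagged earlier and more consistently than by single-sample threshold rules.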
Molecular dynamics simulation on the elastoplastic properties of copper nanowire under torsion
NASA Astrophysics Data System (ADS)
Yang, Yong; Li, Ying; Yang, Zailin; Zhang, Guowei; Wang, Xizhi; Liu, Jin
2018-02-01
Influences of different factors on the torsion properties of a single crystal copper nanowire are studied by the molecular dynamics method. The length, torsional rate, and temperature of the nanowire are discussed at the elastic-plastic critical point. According to the average potential energy curve and the shear stress curve, the elastic-plastic critical angle is determined. The dislocations at the elastoplastic critical points are also analyzed. The simulation results show that the single crystal copper nanowire can be strengthened by lengthening the model, decreasing the torsional rate, and lowering the temperature. Moreover, at higher temperatures atoms move more violently and dislocations are more likely to occur. This work mainly describes the mechanical behavior of the model under different states.
Two-dimensional convolute integers for analytical instrumentation
NASA Technical Reports Server (NTRS)
Edwards, T. R.
1982-01-01
As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, a weighted nearest-neighbor, moving average with zero phase shifting, convoluted integer (universal number) weighting coefficients.
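A two-dimensional convolute-integer filter is applied exactly like its one-dimensional counterpart: a weighted nearest-neighbour moving average with integer coefficients and zero phase shift. The sketch below applies an arbitrary 3×3 integer weight mask at interior grid points; edge handling and the actual Savitzky-Golay-style coefficient tables are omitted for brevity:

```python
def smooth2d(grid, weights):
    """Apply a 2-D weighted nearest-neighbour moving average with
    integer weighting coefficients (zero phase shift).  Edge points
    are left unfiltered in this minimal sketch."""
    rows, cols = len(grid), len(grid[0])
    norm = sum(sum(row) for row in weights)      # normalizing divisor
    out = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            acc = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    acc += weights[di + 1][dj + 1] * grid[i + di][j + dj]
            out[i][j] = acc / norm
    return out
```

With a uniform mask this is a plain low-pass moving average; choosing the integer weights from least-squares polynomial fits gives the high-pass and band-pass variants mentioned above.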
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural network (ANN) has an advantage in time series forecasting, as it has the potential to solve complex forecasting problems. This is because ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that ANN produced the most accurate forecast when Box-Cox transformation was used as data preprocessing.
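Two pieces of the comparison above are easy to make concrete: the Box-Cox preprocessing and the three accuracy measures (mean absolute deviation, root mean square error, mean absolute percentage error). The following is a minimal sketch with hypothetical names, not the study's code:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform used as data preprocessing (lam = 0 gives log).
    Assumes strictly positive data."""
    return [math.log(v) if lam == 0 else (v ** lam - 1.0) / lam for v in y]

def forecast_errors(actual, forecast):
    """The three accuracy measures used in the study: MAD, RMSE, MAPE."""
    errs = [a - f for a, f in zip(actual, forecast)]
    mad = sum(abs(e) for e in errs) / len(errs)
    rmse = (sum(e * e for e in errs) / len(errs)) ** 0.5
    mape = 100.0 * sum(abs(e / a) for a, e in zip(actual, errs)) / len(errs)
    return mad, rmse, mape
```

Forecasts made on Box-Cox-transformed data must be back-transformed before computing the error measures on the original price scale.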
Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2016-01-01
In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.
Dynamo Enhancement and Mode Selection Triggered by High Magnetic Permeability.
Kreuzahler, S; Ponty, Y; Plihon, N; Homann, H; Grauer, R
2017-12-08
We present results from consistent dynamo simulations, where the electrically conducting and incompressible flow inside a cylinder vessel is forced by moving impellers numerically implemented by a penalization method. The numerical scheme models jumps of magnetic permeability for the solid impellers, resembling various configurations tested experimentally in the von Kármán sodium experiment. The most striking experimental observations are reproduced in our set of simulations. In particular, we report on the existence of a time-averaged axisymmetric dynamo mode, self-consistently generated when the magnetic permeability of the impellers exceeds a threshold. We describe a possible scenario involving both the turbulent flow in the vicinity of the impellers and the high magnetic permeability of the impellers.
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
Gaussian Mixture Model is often employed to build the background model in background difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian Mixture Model. According to the gray-level convergence of each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model. A morphological reconstruction method is adopted to eliminate shadows. Experiments prove that the proposed method not only has good robustness and detection performance, but also good adaptability; even in special cases, such as when the gray level changes greatly, it still performs well.
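A full per-pixel Gaussian mixture is lengthy, but the underlying mechanism can be sketched with a single running Gaussian per pixel: a pixel further than k standard deviations from the background mean is flagged as foreground, otherwise the model is updated with a learning rate. This simplification (one mode instead of an adaptively chosen number) only illustrates the idea, and all parameter values are placeholders:

```python
def update_background(mean, var, pixel, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model, a
    simplified stand-in for a full Gaussian mixture.  Returns the
    updated (mean, var) and a foreground flag."""
    if (pixel - mean) ** 2 > (k ** 2) * var:
        return mean, var, True          # foreground: leave the model unchanged
    mean = (1.0 - alpha) * mean + alpha * pixel
    # simple running-variance update using the freshly updated mean
    var = (1.0 - alpha) * var + alpha * (pixel - mean) ** 2
    return mean, var, False
```

In the full mixture version, several (mean, variance, weight) triples compete per pixel, and the improved algorithm above additionally adapts how many such modes each pixel keeps.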
Numerical solution of the time fractional reaction-diffusion equation with a moving boundary
NASA Astrophysics Data System (ADS)
Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.
2017-06-01
A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method is studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.
A study of video frame rate on the perception of moving imagery detail
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps both for the slow and the fast moving animal scenes. The highest single-trial frame rate averaged across all subjects for the slow and the fast scene was 6.2 and 4.8 fps, respectively. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.
Koerbin, Gus; Liu, Jiakai; Eigenstetter, Alex; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-15
A product recall was issued for the Roche/Hitachi Cobas Gentamicin II assays on 25th May 2016 in Australia, after a 15-20% positive analytical shift was discovered. Laboratories were advised to employ the Thermo Fisher Gentamicin assay as an alternative. Following the reintroduction of the revised assay on 12th September 2016, a second reagent recall was made on 20th March 2017 after the discovery of a 20% negative analytical shift due to an erroneous instrument adjustment factor. The practices of an index laboratory were examined to determine how the analytical shifts evaded detection by routine internal quality control (IQC) and external quality assurance (EQA) systems. The ability of patient-result-based approaches, including the moving average (MovAvg) and moving sum of outliers (MovSO) approaches, to detect these shifts was examined. Internal quality control data of the index laboratory were acceptable prior to the product recall. The practice of adjusting the IQC target following a change in assay method resulted in the missed negative shift when the revised Roche assay was reintroduced. While the EQA data of the Roche subgroup showed a clear negative bias relative to other laboratory methods, the results were attributed to a possible 'matrix effect'. The MovAvg method detected the positive shift before the product recall. The MovSO did not detect the negative shift in the index laboratory, but did so in another laboratory 5 days before the second product recall. There are gaps in current laboratory quality practices that leave room for analytical errors to evade detection.
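The MovAvg approach referred to above can be illustrated by averaging consecutive patient results and flagging windows whose mean drifts from the assay target by more than a tolerance; a 15-20% shift in gentamicin results would move such an average well outside a tight tolerance. The window size and tolerance below are illustrative, not the index laboratory's settings:

```python
def moving_average_flags(results, target, tolerance, window=10):
    """Patient-result-based moving average (MovAvg) check: flag each
    sliding window of patient results whose mean drifts more than
    `tolerance` from the assay target."""
    flags = []
    for i in range(window, len(results) + 1):
        block = results[i - window:i]
        avg = sum(block) / window
        flags.append(abs(avg - target) > tolerance)
    return flags
```

Unlike IQC, this check keeps running on every patient result, so it is not blinded by a re-targeted control material after an assay change.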
The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network
NASA Astrophysics Data System (ADS)
Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.
2017-05-01
The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by the IGS (International GNSS Service) to realize the short-term prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction, and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
Alsina, Adolfo; Lai, Wu Ming; Wong, Wai Kin; Qin, Xianan; Zhang, Min; Park, Hyokeun
2017-11-04
Mitochondria are essential for cellular survival and function. In neurons, mitochondria are transported to various subcellular regions as needed. Thus, defects in the axonal transport of mitochondria are related to the pathogenesis of neurodegenerative diseases, and the movement of mitochondria has been the subject of intense research. However, the inability to accurately track mitochondria with subpixel accuracy has hindered this research. Here, we report an automated method for tracking mitochondria based on the center of fluorescence. This tracking method, which is accurate to approximately one-tenth of a pixel, uses the centroid of an individual mitochondrion and provides information regarding the distance traveled between consecutive imaging frames, instantaneous speed, net distance traveled, and average speed. Importantly, this new tracking method enables researchers to observe both directed motion and undirected movement (i.e., in which the mitochondrion moves randomly within a small region, following a sub-diffusive motion). This method significantly improves our ability to analyze the movement of mitochondria and sheds light on the dynamic features of mitochondrial movement. Copyright © 2017 Elsevier Inc. All rights reserved.
Does preprocessing change nonlinear measures of heart rate variability?
Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A
2002-11-01
This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two methods of preprocessing were used: the convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and nonlinear autoregressive moving average modelling and prediction. The scaling exponents alpha, alpha(1) and alpha(2) derived from detrended fluctuation analysis were calculated from the raw HRV time series and the respective preprocessed signals. It was shown that cubic interpolation of HRV time series did not significantly change any nonlinear characteristic studied in this work, while the method of convolution affected only the alpha(1) index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
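The first preprocessing method, convolution of inverse-interval values with a rectangular window, amounts to averaging the instantaneous heart rate of the beats that fall inside a window centred on each uniform sample time. The sketch below is a schematic version with hypothetical names and parameter values, not the authors' implementation:

```python
def uniform_hr_series(rr_intervals, fs=4.0, window=0.5):
    """Resample an RR-interval series (seconds) onto a uniform grid by
    averaging the inverse-interval (instantaneous rate) values of the
    beats inside a rectangular window around each sample time."""
    t, beats = 0.0, []
    for rr in rr_intervals:
        t += rr
        beats.append((t, 1.0 / rr))      # (beat time, instantaneous rate)
    out = []
    n = int(beats[-1][0] * fs)
    for k in range(n):
        tk = k / fs
        inside = [r for (tb, r) in beats if abs(tb - tk) <= window / 2]
        # rectangular window == plain average of the rates it covers;
        # hold the previous value when the window catches no beat
        out.append(sum(inside) / len(inside) if inside else out[-1] if out else 0.0)
    return out
```

The study's finding is that this convolution slightly distorts the short-range scaling exponent alpha(1), whereas cubic interpolation leaves the nonlinear measures unchanged.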
NASA Astrophysics Data System (ADS)
Qiao, Chuan; Ding, Yalin; Xu, Yongsen; Xiu, Jihong
2018-01-01
To obtain the geographical position of a ground target accurately, a geolocation algorithm based on a digital elevation model (DEM) is developed for an airborne wide-area reconnaissance system. According to the platform position and attitude information measured by the airborne position and orientation system and the gimbal angle information from the encoder, the line-of-sight pointing vector in the Earth-centered Earth-fixed coordinate frame is solved by homogeneous coordinate transformation. The target longitude and latitude can be solved with the elliptical Earth model and the global DEM. The influences of systematic error and measurement error on the geolocation accuracy of the ground target are analyzed by the Monte Carlo method. The simulation results show that this algorithm obviously improves the geolocation accuracy of ground targets in rough terrain. The geolocation accuracy of a moving ground target can be further improved by moving average filtering (MAF). The validity of the geolocation algorithm is verified by a flight test in which the plane flies at a geodetic height of 15,000 m and the outer gimbal angle is <47°. The geolocation root mean square error of the target trajectory is <45 m, and <7 m after MAF.
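The MAF step mentioned above is an ordinary moving average applied to the sequence of geolocation fixes. A minimal causal version over (longitude, latitude) pairs, with an illustrative window length, might look like:

```python
def moving_average_filter(track, window=5):
    """Causal moving-average filter over a geolocation track given as
    a list of (lon, lat) fixes; smooths measurement noise at the cost
    of a small lag behind a moving target."""
    out = []
    for i in range(len(track)):
        seg = track[max(0, i - window + 1): i + 1]   # last `window` fixes
        n = len(seg)
        out.append((sum(p[0] for p in seg) / n, sum(p[1] for p in seg) / n))
    return out
```

Averaging N independent fixes reduces the random error roughly by a factor of sqrt(N), which is consistent with the reported drop from <45 m to <7 m after filtering.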
Using optical tweezers to relate the chemical and mechanical cross-bridge cycles.
Steffen, Walter; Sleep, John
2004-12-29
In most current models of muscle contraction there are two translational steps, the working stroke, whereby an attached myosin cross-bridge moves relative to the actin filament, and the repriming step, in which the cross-bridge returns to its original orientation. The development of single molecule methods has allowed a more detailed investigation of the relationship of these mechanical steps to the underlying biochemistry. In the normal adenosine triphosphate cycle, myosin.adenosine diphosphate.phosphate (M.ADP.Pi) binds to actin and moves it by ca. 5 nm on average before the formation of the end product, the rigor actomyosin state. All the other product-like intermediate states tested were found to give no net movement indicating that M.ADP.Pi alone binds in a pre-force state. Myosin states with bound, unhydrolysed nucleoside triphosphates also give no net movement, indicating that these must also bind in a post-force conformation and that the repriming, post- to pre-transition during the forward cycle must take place while the myosin is dissociated from actin. These observations fit in well with the structural model in which the working stroke is aligned to the opening of the switch 2 element of the ATPase site.
Quantum hydrodynamics: capturing a reactive scattering resonance.
Derrickson, Sean W; Bittner, Eric R; Kendrick, Brian K
2005-08-01
The hydrodynamic equations of motion associated with the de Broglie-Bohm formulation of quantum mechanics are solved using a meshless method based upon a moving least-squares approach. An arbitrary Lagrangian-Eulerian frame of reference and a regridding algorithm which adds and deletes computational points are used to maintain a uniform and nearly constant interparticle spacing. The methodology also uses averaged fields to maintain unitary time evolution. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. A new and more robust artificial viscosity algorithm is presented which gives accurate scattering results and is capable of capturing quantum resonances. The methodology is applied to a one-dimensional model chemical reaction that is known to exhibit a quantum resonance. The correlation function approach is used to compute the reactive scattering matrix, reaction probability, and time delay as a function of energy. Excellent agreement is obtained between the scattering results based upon the quantum hydrodynamic approach and those based upon standard quantum mechanics. This is the first clear demonstration of the ability of moving grid approaches to accurately and robustly reproduce resonance structures in a scattering system.
Active and passive transport of cargo in a corrugated channel: A lattice model study
NASA Astrophysics Data System (ADS)
Dey, Supravat; Ching, Kevin; Das, Moumita
2018-04-01
Inside cells, cargos such as vesicles and organelles are transported by molecular motors to their correct locations via active motion on cytoskeletal tracks and passive, Brownian diffusion. During the transportation of cargos, motor-cargo complexes (MCCs) navigate the confining and crowded environment of the cytoskeletal network and other macromolecules. Motivated by this, we study a minimal two-state model of motor-driven cargo transport in confinement and predict transport properties that can be tested in experiments. We assume that the motion of the MCC is directly affected by the entropic barrier due to confinement if it is in the passive, unbound state but not in the active, bound state where it moves with a constant bound velocity. We construct a lattice model based on a Fokker-Planck description of the two-state system, study it using a kinetic Monte Carlo method and compare our numerical results with analytical expressions for a mean field limit. We find that the effect of confinement strongly depends on the bound velocity and the binding kinetics of the MCC. Confinement effectively reduces the effective diffusivity and average velocity, except when it results in an enhanced average binding rate and thereby leads to a larger average velocity than when unconfined.
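The two-state dynamics can be illustrated by a stripped-down 1-D version of such a model: a bound MCC steps ballistically at the bound velocity, an unbound one performs an unbiased random walk, and the complex switches states with fixed per-step probabilities. The entropic barrier of the corrugated channel is omitted, and all rates and the velocity are illustrative values, not the paper's parameters:

```python
import random

def mcc_walk(steps, p_bind=0.1, p_unbind=0.05, v=1, seed=0):
    """Minimal 1-D two-state walk for a motor-cargo complex (MCC):
    bound -> directed step of +v; unbound -> unbiased random walk.
    Returns the final lattice position."""
    rng = random.Random(seed)
    x, bound = 0, False
    for _ in range(steps):
        if bound:
            x += v                        # active, directed motion
            if rng.random() < p_unbind:
                bound = False
        else:
            x += rng.choice((-1, 1))      # passive diffusion
            if rng.random() < p_bind:
                bound = True
    return x
```

With these rates the walker is bound about p_bind/(p_bind + p_unbind) = 2/3 of the time, so the mean displacement grows linearly with the number of steps; adding a position-dependent unbound hopping rate would mimic the entropic barrier studied in the paper.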
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Patankar, Neelesh A.; Bhalla, Amneet P. S.
2017-11-01
Fictitious domain methods for simulating fluid-structure interaction (FSI) have been gaining popularity in the past few decades because of their robustness in handling arbitrarily moving bodies. Often the transient net hydrodynamic forces and torques on the body are desired quantities for these types of simulations. In past studies using immersed boundary (IB) methods, force measurements are contaminated with spurious oscillations due to the evaluation of possibly discontinuous spatial velocity or pressure gradients within or on the surface of the body. Based on an application of the Reynolds transport theorem, we present a moving control volume (CV) approach to computing the net forces and torques on a moving body immersed in a fluid. The approach is shown to be accurate for a wide array of FSI problems, including flow past stationary and moving objects, Stokes flow, and high Reynolds number free-swimming. The approach only requires far-field (smooth) velocity and pressure information, thereby suppressing spurious force oscillations and eliminating the need for any filtering. The proposed moving CV method is not limited to a specific IB method and is straightforward to implement within an existing parallel FSI simulation software. This work is supported by NSF (Award Numbers SI2-SSI-1450374, SI2-SSI-1450327, and DGE-1324585), the US Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231), and NIH (Award Number HL117163).
REVIEW ARTICLE: Hither and yon: a review of bi-directional microtubule-based transport
NASA Astrophysics Data System (ADS)
Gross, Steven P.
2004-06-01
Active transport is critical for cellular organization and function, and impaired transport has been linked to diseases such as neuronal degeneration. Much long distance transport in cells uses opposite polarity molecular motors of the kinesin and dynein families to move cargos along microtubules. It is increasingly clear that many cargos are moved by both sets of motors, and frequently reverse course. This review compares this bi-directional transport to the more well studied uni-directional transport. It discusses some bi-directionally moving cargos, and critically evaluates three different physical models for how such transport might occur. It then considers the evidence for the number of active motors per cargo, and how the net or average direction of transport might be controlled. The likelihood of a complex linking the activities of kinesin and dynein is also discussed. The paper concludes by reviewing elements of apparent universality between different bi-directionally moving cargos and by briefly considering possible reasons for the existence of bi-directional transport.
Random walk of passive tracers among randomly moving obstacles.
Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco
2016-04-14
This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
Modeling the Effect of Summertime Heating on Urban Runoff Temperature
NASA Astrophysics Data System (ADS)
Thompson, A. M.; Gemechu, A. L.; Norman, J. M.; Roa-Espinosa, A.
2007-12-01
Urban impervious surfaces absorb and store thermal energy, particularly during warm summer months. During a rainfall/runoff event, thermal energy is transferred from the impervious surface to the runoff, causing it to become warmer. As this higher temperature runoff enters receiving waters, it can be harmful to coldwater habitat. A simple model has been developed for the net energy flux at the impervious surfaces of urban areas to account for the heat transferred to runoff. Runoff temperature is determined as a function of the physical characteristics of the impervious areas, the weather, and the heat transfer between the moving film of runoff and the heated impervious surfaces that commonly exist in urban areas. Runoff from pervious surfaces was predicted using the Green-Ampt Mein-Larson infiltration excess method. Theoretical results were compared to experimental results obtained from a plot-scale field study conducted at the University of Wisconsin's West Madison Agricultural Research Station. Surface temperatures and runoff temperatures from asphalt and sod plots were measured throughout 15 rainfall simulations under various climatic conditions during the summers of 2004 and 2005. Average asphalt runoff temperatures ranged from 23.2°C to 37.1°C. Predicted asphalt runoff temperatures were in close agreement with measured values for most of the simulations (average RMSE = 4.0°C). Average pervious runoff temperatures ranged from 19.7° to 29.9°C and were closely approximated by the rainfall temperature (RMSE = 2.8°C). Predicted combined asphalt and sod runoff temperatures using a flow-weighted average were in close agreement with observed values (average RMSE = 3.5°C).
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of things, and the detection of a moving target's shadow is an important step within it: the accuracy of shadow detection directly affects the object detection results. From a survey of existing shadow detection methods, we find that using only one feature cannot produce accurate detection results. We therefore present a new method for shadow detection that combines colour information, optical invariance, and texture features. Through a comprehensive analysis of the detection results from these three kinds of information, the shadow is determined effectively. By combining the advantages of the various methods, the proposed approach achieves the desired effect in experiments.
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detecting distant moving obstacles such as cars and bicycles with a monocular camera, to cooperate with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car, in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Meanwhile, each obstacle is separated from the others into an independent region and assigned a confidence level indicating whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
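The frame-differencing step can be sketched as follows, assuming the previous frame has already been warped to compensate the camera's ego-motion (that warping step is omitted here); the toy grayscale frames and the threshold value are assumptions.

```python
# Minimal frame differencing: flag pixels whose intensity change exceeds a threshold.
def frame_diff_mask(prev, curr, thresh=25):
    """Binary motion mask: 1 where |curr - prev| exceeds `thresh`."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10], [10, 10, 10]]
curr = [[10, 80, 10], [10, 90, 12]]   # a small object moved into view
mask = frame_diff_mask(prev, curr)
```

In the full pipeline the connected pixels of the mask would then be grouped into independent obstacle regions and scored.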
Detection of Copy-Rotate-Move Forgery Using Zernike Moments
NASA Astrophysics Data System (ADS)
Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu
As forgeries have become widespread, the importance of forgery detection has greatly increased. Copy-move forgery, one of the most commonly used methods, copies a part of an image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even when it has been rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is well suited to identifying regions forged by copy-rotate-move forgery.
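The matching stage can be illustrated with a toy sketch. The real method uses Zernike moment magnitudes as rotation-invariant block features; here the sorted pixel values of each block stand in as a crude rotation-insensitive surrogate, purely for demonstration.

```python
# Toy duplicate-region search: extract a feature per block, group equal features.
def find_duplicate_blocks(img, bs=2):
    """Group identical feature vectors of bs x bs blocks; return groups with matches."""
    feats = {}
    h, w = len(img), len(img[0])
    for r in range(h - bs + 1):
        for c in range(w - bs + 1):
            block = [img[r + i][c + j] for i in range(bs) for j in range(bs)]
            key = tuple(sorted(block))   # crude stand-in for a Zernike magnitude vector
            feats.setdefault(key, []).append((r, c))
    return {k: v for k, v in feats.items() if len(v) > 1}

img = [[1, 2, 9, 1, 2],
       [3, 4, 9, 3, 4],
       [9, 9, 9, 9, 9]]
dupes = find_duplicate_blocks(img)   # the block at (0, 0) is duplicated at (0, 3)
```

A production detector would additionally require a minimum spatial offset between matched blocks and cluster the matches into regions.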
Survey of Tsunamis Formed by Atmospheric Forcing on the East Coast of the United States
NASA Astrophysics Data System (ADS)
Lodise, J.; Shen, Y.; Wertman, C. A.
2014-12-01
High-frequency sea level oscillations along the United States East Coast have been linked to atmospheric pressure disturbances observed during large storm events. These oscillations have periods similar to tsunami events generated by earthquakes and submarine landslides, but are created by moving surface pressure anomalies within storm systems such as mesoscale convective systems or mid-latitude cyclones. Meteotsunamis form as in-situ waves, directly underneath a moving surface pressure anomaly. As the pressure disturbances move off the east coast of North America and over the continental shelf in the Atlantic Ocean, Proudman resonance, which is known to enhance the amplitude of the meteotsunami, may occur when the propagation speed of the pressure disturbance is equal to that of the shallow water wave speed. At the continental shelf break, some of the meteotsunami waves are reflected back towards the coast. The events we studied date from 2007 to 2014, most of which were identified using an atmospheric pressure anomaly detection method applied to atmospheric data from two National Data Buoy Center stations: Cape May, New Jersey and Newport, Rhode Island. The coastal tidal records used to observe the meteotsunami amplitudes include Montauk, New York; Atlantic City, New Jersey; and Duck, North Carolina. On average, meteotsunamis ranging from 0.1m to 1m in amplitude occurred roughly twice per month, with meteotsunamis larger than 0.4m occurring approximately 4 times per year, a rate much higher than previously reported. For each event, the amplitude of the recorded pressure disturbance was compared to the meteotsunami amplitude, while radar and bathymetry data were analyzed to observe the influence of Proudman resonance on the reflected meteotsunami waves. In-situ meteotsunami amplitudes showed a direct correlation with the amplitude of pressure disturbances. 
Meteotsunamis reflected off the continental shelf break were generally higher in amplitude when the average storm speed was closer to that of the shallow water wave speed, which suggests that Proudman resonance has a significant influence on meteotsunami amplitude over the continental shelf. Through the application of these findings the frequency and severity of future meteotsunamis can be better predicted along the east coast of the United States.
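The Proudman resonance condition discussed above reduces to comparing the storm's translation speed with the long-wave speed c = sqrt(g·h). A back-of-envelope sketch, with depth and storm-speed values that are illustrative assumptions, not figures from the study:

```python
# Shallow-water long-wave speed and the Froude-number resonance check.
import math

def shallow_water_speed(depth_m, g=9.81):
    """Long-wave phase speed (m/s) in water of the given depth."""
    return math.sqrt(g * depth_m)

# Froude number ~ 1 signals resonance between the moving pressure anomaly
# and the free wave it forces over the shelf.
storm_speed = 22.0                       # m/s, assumed translation speed
for depth in (30.0, 50.0, 100.0):
    c = shallow_water_speed(depth)       # 17.2, 22.1, 31.3 m/s roughly
    froude = storm_speed / c
```

For a storm moving at about 22 m/s, resonance is strongest over roughly 50 m of water, consistent with amplification occurring on the continental shelf rather than in the deep ocean.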
DOE Office of Scientific and Technical Information (OSTI.GOV)
Souris, K; Barragan Montero, A; Di Perri, D
Purpose: The shift in mean position of a moving tumor, also known as “baseline shift”, has been modeled in order to automatically generate uncertainty scenarios for the assessment and robust optimization of proton therapy treatments in lung cancer. Methods: An average CT scan and a Mid-Position CT scan (MidPCT) of the patient at planning time are first generated from 4D-CT data. The mean position of the tumor along the breathing cycle is represented by the GTV contour in the MidPCT. Several studies reported both systematic and random variations of the mean tumor position from fraction to fraction. Our model can simulate this baseline shift by generating a local deformation field that moves the tumor on all phases of the 4D-CT without creating any non-physical artifact. The deformation field comprises normal and tangential components with respect to the lung wall in order to allow the tumor to slip within the lung instead of deforming the lung surface. The deformation field is finally smoothed in order to enforce its continuity. Two 4D-CT series acquired one week apart were used to validate the model. Results: Based on the first 4D-CT set, the model was able to generate a third 4D-CT that reproduced the 5.8 mm baseline shift measured in the second 4D-CT. The water equivalent thickness (WET) of the voxels has been computed for the three average CTs. The root mean square deviation of the WET in the GTV is 0.34 mm between week 1 and week 2, and 0.08 mm between the simulated data and week 2. Conclusion: Our model can be used to automatically generate uncertainty scenarios for robustness analysis of a proton therapy plan. The generated scenarios can also feed a TPS equipped with a robust optimizer. Kevin Souris, Ana Barragan, and Dario Di Perri are financially supported by Televie Grants from F.R.S.-FNRS.
Social dynamics in emergency evacuations: Disentangling crowd's attraction and repulsion effects
NASA Astrophysics Data System (ADS)
Haghani, Milad; Sarvi, Majid
2017-06-01
The social dynamics of crowds in emergency escape scenarios have been conventionally modelled as the net effect of virtual forces exerted by the crowd on each individual (as self-driven particles), with the magnitude of the influence formulated as a decreasing function of inter-individual distance and the direction of effect assumed to transition from repulsion to attraction with distance. Here, we revisit this conventional assumption using laboratory experimental data. We show, based on robust econometric hypothesis-testing methods, that individuals' perception of other escapees differs based on whether those individuals are jamming around exit destinations or are on the move towards the destinations. Also, for moving crowds, it differs based on whether the escape destination chosen by the moving flow is visible or invisible to the individual. Both the presence of crowd jams around a destination and the movement of crowd flows towards visible destinations are perceived on average as repulsion (or disutility) effects (with the former showing significantly larger magnitude than the latter). The movement of crowd flows towards an invisible destination, however, is on average perceived as an attraction (or utility) effect. Yet, further hypothesis testing showed that neither of those effects in isolation adequately determines whether an individual would merge with or diverge from the crowd. Rather, the social interaction factors act (at significant levels) in conjunction with the physical factors of the environment (including spatial distances to exit destinations and destinations' visibility). In brief, our finding disentangles the conditions under which individuals are more likely to show mass behaviour from the situations where they are more likely to break from the herd. It identifies two factors that moderate the perception of social interactions: “crowds' jam/movement status” and “environmental setup”.
Our results particularly challenge the taxonomy of attraction-repulsion social interaction forces defined purely on the basis of the individual's distance to the surrounding crowd, by showing that crowds can be far away and yet be perceived as a repulsion effect, or close by and yet act as an attraction effect.
A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques
NASA Astrophysics Data System (ADS)
Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.
Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and for the combined forecast methods, the mean prediction errors for 1 to about 70 days into the future are of the same order as those of the method used by the IERS RS/PC.
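The LS-plus-stochastic combination can be sketched minimally as a least-squares trend line plus an AR(1) model of its residuals, with the AR coefficient estimated from the lag-1 autocovariance. This is a hedged illustration of the general recipe, not the paper's implementation, and the toy series is an assumption, not IERS data.

```python
# Combined forecast sketch: LS linear extrapolation + AR(1) residual prediction.
def ls_ar1_forecast(series, steps):
    n = len(series)
    # Least-squares line y = a + b*t.
    tbar, ybar = (n - 1) / 2.0, sum(series) / n
    b = sum((t - tbar) * (y - ybar) for t, y in enumerate(series)) \
        / sum((t - tbar) ** 2 for t in range(n))
    a = ybar - b * tbar
    resid = [y - (a + b * t) for t, y in enumerate(series)]
    # AR(1) coefficient from the lag-1 autocovariance of the residuals.
    num = sum(resid[t] * resid[t - 1] for t in range(1, n))
    den = sum(r * r for r in resid)
    phi = num / den if den else 0.0
    last = resid[-1]
    out = []
    for k in range(1, steps + 1):
        last *= phi                              # AR(1) residual extrapolation
        out.append(a + b * (n - 1 + k) + last)   # trend + stochastic part
    return out

series = [0.1 * t + 0.05 * (-1) ** t for t in range(20)]   # trend + alternation
pred = ls_ar1_forecast(series, 3)
```

The residual model supplies the short-horizon correction while the LS line carries the long-term behaviour, mirroring the division of labour in the combined methods compared in the paper.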
Liu, Dong-jun; Li, Li
2015-01-01
PM2.5 is the main influencing factor of haze-fog pollution in China. The trend of PM2.5 concentration was first analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was then quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, offering a new prediction approach for the air quality forecasting field. PMID:26110332
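The entropy-weighting step can be sketched with the standard recipe (not necessarily the paper's exact formulation): methods whose historical error shares are closer to uniform carry less distinguishing information and receive lower weight. The error matrix and forecasts below are invented for illustration.

```python
# Entropy-weighted combination of several forecasting methods (toy data).
import math

def entropy_weights(error_matrix):
    """error_matrix[m][t] = |error| of method m at time t; returns weights per method."""
    weights = []
    for errs in error_matrix:
        total = sum(errs)
        shares = [e / total for e in errs]
        k = 1.0 / math.log(len(errs))
        entropy = -k * sum(p * math.log(p) for p in shares if p > 0)
        weights.append(1.0 - entropy)      # lower entropy -> larger raw weight
    s = sum(weights)
    return [w / s for w in weights]

errors = [[0.9, 1.1, 1.0, 1.0],    # ARIMA-like: near-uniform error shares
          [0.1, 2.5, 0.1, 2.3],    # ANN-like: concentrated error shares
          [1.0, 1.0, 1.0, 1.0]]    # ESM-like: exactly uniform shares
w = entropy_weights(errors)
combined = sum(wi * f for wi, f in zip(w, [52.0, 49.0, 51.0]))  # weighted forecast
```

The combined value always lies between the individual forecasts, which is how the CFM "balances the deviations" of its component methods.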
Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement
Wang, Junjie; He, Xiufeng; Ferreira, Vagner G.
2015-01-01
Monitoring ocean waves plays a crucial role in, for example, coastal environmental protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical displacements of the GPS-mounted carrier is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average scheme, a high-pass filter, and a wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved, with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method. PMID:26262620
Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods
NASA Astrophysics Data System (ADS)
Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David
2015-11-01
Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be accounted for easily. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles so as to reduce numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-01-01
Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move that fuses sensors and Wi-Fi on smartphones. The main innovative points include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel “quasi-dynamic” Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the “process-level” fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment, and the average positioning error on the move is 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence caused by unstable Wi-Fi signals, and improve the accuracy and stability of indoor continuous positioning on the move. PMID:26690447
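A much-simplified sketch inspired by the trusted-point idea (the gate distance, blend weight, and data below are assumptions, not the paper's TCPF): PDR propagates the position each epoch, and a Wi-Fi fix is blended in only when it agrees with the dead-reckoned track, so an unstable fix cannot yank the trajectory.

```python
# PDR propagation with gated, weighted blending of Wi-Fi fixes (toy values).
def fuse_track(start, pdr_steps, wifi_fixes, gate=3.0, w_wifi=0.4):
    """pdr_steps: (dx, dy) per epoch; wifi_fixes: (x, y) or None per epoch."""
    x, y, track = start[0], start[1], []
    for (dx, dy), fix in zip(pdr_steps, wifi_fixes):
        x, y = x + dx, y + dy                      # PDR propagation
        if fix is not None:
            dist = ((fix[0] - x) ** 2 + (fix[1] - y) ** 2) ** 0.5
            if dist < gate:                        # accept only plausible fixes
                x = (1 - w_wifi) * x + w_wifi * fix[0]
                y = (1 - w_wifi) * y + w_wifi * fix[1]
        track.append((x, y))
    return track

steps = [(1.0, 0.0)] * 4                           # walking east, 1 m per epoch
fixes = [None, (2.2, 0.1), None, (40.0, 0.0)]      # last fix is an outlier
track = fuse_track((0.0, 0.0), steps, fixes)
```

The outlying final fix is rejected by the gate, so the track stays near the dead-reckoned path, illustrating how fusion suppresses unstable Wi-Fi signals.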
Commercial vehicle fleet management and information systems. Phase 1 : interim report
DOT National Transportation Integrated Search
1998-01-01
The three-quarter moving composite price index is the weighted average of the indices for three consecutive quarters. The Composite Bid Price Index is composed of six indicator items: common excavation, to indicate the price trend for all roadway exc...
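The three-quarter weighted average described above can be sketched as follows; the weights are illustrative assumptions, since the excerpt does not state them.

```python
# Three-quarter moving composite: weighted average over consecutive quarters.
def moving_composite(quarterly, weights=(0.25, 0.35, 0.40)):
    """One composite value per trailing three-quarter window (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return [sum(w * q for w, q in zip(weights, quarterly[i:i + 3]))
            for i in range(len(quarterly) - 2)]

index = [100.0, 102.0, 101.0, 105.0, 108.0]   # toy quarterly price indices
smoothed = moving_composite(index)
```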
Handique, Bijoy K; Khan, Siraj A; Mahanta, J; Sudhakar, S
2014-09-01
Japanese encephalitis (JE) is one of the dreaded mosquito-borne viral diseases, mostly prevalent in South Asian countries including India. Early warning of the disease in terms of disease intensity is crucial for taking adequate and appropriate intervention measures. The present study was carried out in Dibrugarh district in the state of Assam, located in the northeastern region of India, to assess the accuracy of selected forecasting methods based on historical morbidity patterns of JE incidence during the past 22 years (1985-2006). Four selected forecasting methods, viz. seasonal average (SA), seasonal adjustment with the last three observations (SAT), a modified method adjusting long-term and cyclic trend (MSAT), and autoregressive integrated moving average (ARIMA), were employed and the accuracy of each was assessed. The forecasting methods were validated for five consecutive years from 2007-2012. The forecasting method utilising seasonal adjustment with long-term and cyclic trend emerged as the best among the four selected forecasting methods and outperformed even the statistically more advanced ARIMA method. The peak of the disease incidence could effectively be predicted with all the methods, but there are significant variations in the magnitude of forecast errors among the selected methods. As expected, variation in forecasts at the primary health centre (PHC) level is wide compared to that of district-level forecasts. The study showed that the adopted forecasting techniques could reasonably forecast the intensity of JE cases at the PHC level without considering external variables.
The results indicate that understanding the long-term and cyclic trend of the disease intensity will improve the accuracy of the forecasts, but the forecast models need to be made more robust to explain sudden variations in disease intensity, with detailed analysis of parasite and host population dynamics.
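The seasonal-average (SA) baseline among the four compared methods can be sketched simply: the forecast for a given month is the mean of that month's historical case counts. The monthly counts below are invented for illustration, not the Dibrugarh data.

```python
# Seasonal-average forecast: per-month mean of historical monthly counts.
def seasonal_average_forecast(history, period=12):
    """history: list of monthly counts; returns one forecast per month of the period."""
    sums, counts = [0.0] * period, [0] * period
    for t, v in enumerate(history):
        sums[t % period] += v
        counts[t % period] += 1
    return [s / c for s, c in zip(sums, counts)]

# Two years of toy monthly JE counts with a mid-year peak.
history = [0, 0, 1, 2, 5, 12, 30, 28, 10, 3, 1, 0,
           0, 1, 1, 3, 6, 14, 34, 26, 12, 2, 0, 0]
forecast = seasonal_average_forecast(history)
peak_month = forecast.index(max(forecast))   # index of the predicted peak month
```

As the abstract notes, such simple seasonal methods can locate the epidemic peak well even though the magnitude of their forecast errors varies.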
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2009-01-01
Yearly frequencies of North Atlantic basin tropical cyclones, their locations of origin, peak wind speeds, average peak wind speeds, lowest pressures, and average lowest pressures for the interval 1950-2008 are examined. The effects of El Nino and La Nina on the tropical cyclone parametric values are investigated. Yearly and 10-year moving average (10-yma) values of tropical cyclone parameters are compared against those of temperature and decadal-length oscillation, employing both linear and bi-variate analysis, and first differences in the 10-yma are determined. Discussion of the 2009 North Atlantic basin hurricane season, updating earlier results, is given.
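The 10-year moving average (10-yma) smoothing and its first differences can be sketched as follows; the yearly counts are illustrative, not the actual 1950-2008 tropical cyclone record.

```python
# Trailing 10-year moving average of a yearly count series and its first differences.
def moving_average(series, window=10):
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def first_differences(series):
    return [b - a for a, b in zip(series, series[1:])]

counts = [8, 11, 7, 10, 12, 9, 14, 10, 13, 11, 15, 12, 16, 14]   # toy yearly counts
yma10 = moving_average(counts)          # smoothed decadal-scale signal
d1 = first_differences(yma10)           # year-to-year change of the 10-yma
```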
Chess-playing epilepsy: a case report with video-EEG and back averaging.
Mann, M W; Gueguen, B; Guillou, S; Debrand, E; Soufflet, C
2004-12-01
A patient suffering from juvenile myoclonic epilepsy experienced myoclonic jerks, fairly regularly, while playing chess. The myoclonus appeared particularly when he had to plan his strategy, to choose between two solutions or while raising the arm to move a chess figure. Video-EEG-polygraphy was performed, with back averaging of the myoclonus registered during a chess match and during neuropsychological testing with Kohs cubes. The EEG spike wave complexes were localised in the fronto-central region. [Published with video sequences].
Verity Salmon; Colleen Iversen; Peter Thornton; Ma
2017-03-01
Transect data are from point-centred quarter surveys for shrub density performed in July 2016 at the Kougarok hillslope located at Kougarok Road, Mile Marker 64. For each sample point along the transects, moving averages for shrub density and shrub basal area are provided along with GPS coordinates, average shrub height, and active-layer depth. The individual height, basal area, and species of surveyed shrubs are also included. Data upload will be completed January 2017.
2016-11-22
…compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. 5. We developed analytical techniques for pseudo-Lagrangian …condition in a constant density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e…