Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
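Several entries in this listing discuss the equal-weight moving average applied to annual panel estimates. A minimal sketch of that estimator on synthetic data (function name and numbers are illustrative, not from any of the papers):

```python
import numpy as np

def moving_average(panel_means: np.ndarray, k: int = 5) -> np.ndarray:
    """Equal-weight k-year moving average of annual panel estimates.

    panel_means[t] is the design-based estimate from the panel measured
    in year t; the MA estimate for year t averages years t-k+1 .. t.
    """
    out = np.full(panel_means.shape, np.nan)
    for t in range(k - 1, len(panel_means)):
        out[t] = panel_means[t - k + 1 : t + 1].mean()
    return out

# Synthetic example: a slowly increasing resource observed with panel noise.
rng = np.random.default_rng(0)
truth = np.linspace(100.0, 120.0, 10)
panels = truth + rng.normal(0.0, 4.0, 10)
print(moving_average(panels))  # smoothed estimates that lag the true trend
```

The smoothing-versus-lag trade-off visible here is the point several of the papers below address: the MA dampens panel noise but "smoothes out" real temporal change.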
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES executes quickly but underestimates emissions in most cases because it ignores the speed variation in congested networks with signalized intersections. In contrast, the technique based on atomic second-by-second speed profiles (i.e., the trajectory of each vehicle) provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find LDSs for EPA-MOVES that produce emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied HC-DTW to sample data from a signalized corridor and found that it can significantly reduce computational time without compromising accuracy. The technique developed in this research can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing computational time while keeping estimates reasonably accurate. The method is most appropriate for transportation networks with high variation in speed, such as at signalized intersections. Experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
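A minimal sketch of the clustering idea described above, assuming a plain O(n^2) dynamic-time-warping distance and SciPy's average-linkage clustering; this is not the authors' HC-DTW code, and the speed profiles are synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two speed profiles."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical second-by-second speed profiles (m/s) for several vehicles.
rng = np.random.default_rng(1)
profiles = [rng.uniform(0, 15, 60) for _ in range(8)]

# Pairwise DTW distances -> condensed form -> average-linkage clustering.
n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(profiles[i], profiles[j])
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # one representative profile per cluster could serve as an LDS
```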
Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
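For context on the first class: the one-step forecast of an ARIMA(0,1,1) model reduces to simple exponential smoothing, a standard result, so the class spans estimators from heavy smoothing to "use the last observation". A sketch with illustrative data and smoothing constant:

```python
import numpy as np

def ewma_forecast(y: np.ndarray, alpha: float) -> np.ndarray:
    """One-step forecasts implied by an ARIMA(0,1,1) model, computed as
    exponential smoothing: yhat[t] = yhat[t-1] + alpha*(y[t-1] - yhat[t-1])."""
    yhat = np.empty(len(y))
    yhat[0] = y[0]
    for t in range(1, len(y)):
        yhat[t] = yhat[t - 1] + alpha * (y[t - 1] - yhat[t - 1])
    return yhat

# Small alpha: long memory, heavy smoothing; alpha = 1: last observation only.
y = np.array([10.0, 11.0, 9.0, 12.0, 13.0, 12.0, 14.0])
print(ewma_forecast(y, alpha=0.4))
```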
Gerber, Brian D.; Kendall, William L.
2017-01-01
Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
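A minimal sketch of the brute-force enumeration baseline that the paper compares its MINLP solvers against, using statsmodels' Kalman-filter-based ARIMA likelihood and AIC for order selection; the simulated series and order bounds are illustrative:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate a series with ARMA(2,1)-like dependence.
rng = np.random.default_rng(2)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

# Brute-force enumeration over (p, q); each fit evaluates the likelihood
# via Kalman filter recursions, and AIC picks the winner.
best = None
for p in range(4):
    for q in range(4):
        res = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print(best)  # (AIC, p, q) selected by enumeration
```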
NASA Astrophysics Data System (ADS)
Leite, Argentina; Paula Rocha, Ana; Eduarda Silva, Maria
2013-06-01
Heart Rate Variability (HRV) series exhibit long memory and time-varying conditional variance. This work considers the Fractionally Integrated AutoRegressive Moving Average (ARFIMA) models with Generalized AutoRegressive Conditional Heteroscedastic (GARCH) errors. ARFIMA-GARCH models may be used to capture and remove long memory and estimate the conditional volatility in 24 h HRV recordings. The ARFIMA-GARCH approach is applied to fifteen long term HRV series available at Physionet, leading to the discrimination among normal individuals, heart failure patients, and patients with atrial fibrillation.
Quantifying rapid changes in cardiovascular state with a moving ensemble average.
Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T
2018-04-01
MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state.
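A minimal sketch of the moving-ensemble idea (not MEAP's implementation): each beat's waveform is averaged with its neighbors in a sliding window of beats, so indices can be estimated continuously rather than once per fixed window. Data and window size are illustrative:

```python
import numpy as np

def moving_ensemble_average(beats: np.ndarray, half_width: int = 15) -> np.ndarray:
    """Average each beat with its neighbors in a sliding window of beats.

    beats: (n_beats, n_samples) array of per-beat ECG/ICG segments.
    Unlike fixed-window ensemble averaging, every beat gets its own
    estimate, so derived indices can be tracked continuously.
    """
    n = len(beats)
    out = np.empty_like(beats, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out[i] = beats[lo:hi].mean(axis=0)
    return out

# Hypothetical data: 200 beats, 300 samples each, template plus noise.
rng = np.random.default_rng(3)
template = np.sin(np.linspace(0, np.pi, 300))
beats = template + rng.normal(0, 0.5, (200, 300))
print(moving_ensemble_average(beats).shape)  # (200, 300), one average per beat
```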
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides several advantages: a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients in pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Because of the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the experimental data not used for parameter estimation.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, the autoregressive integrated moving average (ARIMA), which forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during the month. Three ARIMA models were identified for the time series analyzed, with a seasonal period of one year. They share the same seasonal behavior (a differenced moving average model) but differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and an autoregressive moving average model (Model 3). The results also show that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month, with a standard deviation that was very similar across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. Applying Model 1 to predict minimum monthly temperatures gave the best FD forecast. The procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate its cost. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
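A worked sketch of the second step under the paper's normality assumption: given the ARIMA forecast of the monthly mean of daily minima and the station/month standard deviation, the expected number of frost days follows from the normal CDF. The numbers below are illustrative, not from the study:

```python
from scipy.stats import norm

mu_forecast = 2.5    # forecast monthly mean of daily minima (deg C), from ARIMA
sigma_month = 3.0    # historical within-month std dev for this station/month
days_in_month = 31

p_frost = norm.cdf(0.0, loc=mu_forecast, scale=sigma_month)  # P(tdmin < 0)
expected_frost_days = days_in_month * p_frost
print(round(expected_frost_days, 1))  # about 6.3 expected frost days
```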
The Performance of Multilevel Growth Curve Models under an Autoregressive Moving Average Process
ERIC Educational Resources Information Center
Murphy, Daniel L.; Pituch, Keenan A.
2009-01-01
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David
2013-07-01
The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, the need to accurately quantify transportation-related emissions from vehicles is essential. The new U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.
2008-01-01
Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
NASA Astrophysics Data System (ADS)
Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.
2017-03-01
Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk (below average <2%, average 2-3.49%, above average 3.5-4.99%, moderate 5-7.99%, high >=8%) and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalized screening.
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural networks (ANNs) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented using two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
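A minimal sketch of the HSVD smoothing stage, assuming the standard Hankel-embedding / truncated-SVD / anti-diagonal-averaging mechanics of singular spectrum analysis; the window, rank, and data are illustrative rather than the paper's settings:

```python
import numpy as np

def hsvd_smooth(y: np.ndarray, window: int, rank: int) -> np.ndarray:
    """Smooth a series via truncated SVD of its Hankel matrix.

    Embed y into a Hankel matrix, keep the leading singular components,
    then recover a series by averaging the anti-diagonals.
    """
    n = len(y)
    k = n - window + 1
    H = np.column_stack([y[i : i + window] for i in range(k)])  # window x k
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):              # element (i, j) of Hr estimates y[i + j]
        out[j : j + window] += Hr[:, j]
        counts[j : j + window] += 1.0
    return out / counts

rng = np.random.default_rng(4)
t = np.arange(520)
y = 10 + 0.01 * t + 2 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, 520)
print(hsvd_smooth(y, window=52, rank=3)[:5])  # smoothed input for ARIMA/ANN
```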
ERIC Educational Resources Information Center
Gaines, Gale F.
Focused state efforts have helped teacher salaries in Southern Regional Education Board (SREB) states move toward the national average. Preliminary 2000-01 estimates put SREB's average teacher salary at its highest point in 22 years compared to the national average. The SREB average teacher salary is approximately 90 percent of the national…
Sando, Steven K.; Sando, Roy; McCarthy, Peter M.; Dutton, DeAnn M.
2016-04-05
The climatic conditions of the specific time period during which peak-flow data were collected at a given streamflow-gaging station (hereinafter referred to as gaging station) can substantially affect how well the peak-flow frequency (hereinafter referred to as frequency) results represent long-term hydrologic conditions. Differences in the timing of the periods of record can result in substantial inconsistencies in frequency estimates for hydrologically similar gaging stations. Potential for inconsistency increases with decreasing peak-flow record length. The representativeness of the frequency estimates for a short-term gaging station can be adjusted by various methods including weighting the at-site results in association with frequency estimates from regional regression equations (RREs) by using the Weighted Independent Estimates (WIE) program. Also, for gaging stations that cannot be adjusted by using the WIE program because of regulation or drainage areas too large for application of RREs, frequency estimates might be improved by using record extension procedures, including a mixed-station analysis using the maintenance of variance type I (MOVE.1) procedure. The U.S. Geological Survey, in cooperation with the Montana Department of Transportation and the Montana Department of Natural Resources and Conservation, completed a study to provide adjusted frequency estimates for selected gaging stations through water year 2011. The purpose of Chapter D of this Scientific Investigations Report is to present adjusted frequency estimates for 504 selected streamflow-gaging stations in or near Montana based on data through water year 2011. Estimates of peak-flow magnitudes for the 66.7-, 50-, 42.9-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities are reported. These annual exceedance probabilities correspond to the 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. The at-site frequency estimates were adjusted by weighting with frequency estimates from RREs using the WIE program for 438 selected gaging stations in Montana. These 438 selected gaging stations (1) had periods of record less than or equal to 40 years, (2) represented unregulated or minor regulation conditions, and (3) had drainage areas less than about 2,750 square miles. The weighted-average frequency estimates obtained by weighting with RREs generally are considered to provide improved frequency estimates. In some cases, there are substantial differences among the at-site frequency estimates, the regression-equation frequency estimates, and the weighted-average frequency estimates. In these cases, thoughtful consideration should be applied when selecting the appropriate frequency estimate. Some factors that might be considered when selecting the appropriate frequency estimate include (1) whether the specific gaging station has peak-flow characteristics that distinguish it from most other gaging stations used in developing the RREs for the hydrologic region; and (2) the length of the peak-flow record and the general climatic characteristics during the period when the peak-flow data were collected.
For critical structure-design applications, a conservative approach would be to select the higher of the at-site frequency estimate and the weighted-average frequency estimate. The mixed-station MOVE.1 procedure generally was applied in cases where three or more gaging stations were located on the same large river and some of the gaging stations could not be adjusted using the weighted-average method because of regulation or drainage areas too large for application of RREs. The mixed-station MOVE.1 procedure was applied to 66 selected gaging stations on 19 large rivers. The general approach for using mixed-station record extension procedures to adjust at-site frequencies involved (1) determining appropriate base periods for the gaging stations on the large rivers, (2) synthesizing peak-flow data for the gaging stations with incomplete peak-flow records during the base periods by using the mixed-station MOVE.1 procedure, and (3) conducting frequency analysis on the combined recorded and synthesized peak-flow data for each gaging station. Frequency estimates for the combined recorded and synthesized datasets for 66 gaging stations with incomplete peak-flow records during the base periods are presented. The uncertainties in the mixed-station record extension results are difficult to directly quantify; thus, it is important to understand the intended use of the estimated frequencies based on analysis of the combined recorded and synthesized datasets. The estimated frequencies are considered general estimates of frequency relations among gaging stations on the same stream channel that might be expected if the gaging stations had been gaged during the same long-term base period. However, because the mixed-station record extension procedures involve secondary statistical analysis with accompanying errors, the uncertainty of the frequency estimates is larger than would be obtained by collecting systematic records for the same number of years in the base period.
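A minimal sketch of the MOVE.1 record-extension step, using the standard maintenance-of-variance transfer (mean and standard deviation matched over the concurrent period); the station data are synthetic, and applying it in log space is an assumption consistent with common peak-flow practice:

```python
import numpy as np

def move1(x_concurrent, y_concurrent, x_only):
    """MOVE.1 (maintenance of variance extension, type I) record extension.

    Fit on the concurrent period at two gaging stations, then synthesize
    y for years where only x was recorded; variance is preserved, unlike
    ordinary regression, which shrinks it.
    """
    mx, my = np.mean(x_concurrent), np.mean(y_concurrent)
    sx, sy = np.std(x_concurrent, ddof=1), np.std(y_concurrent, ddof=1)
    sign = np.sign(np.corrcoef(x_concurrent, y_concurrent)[0, 1])
    return my + sign * (sy / sx) * (np.asarray(x_only) - mx)

# Hypothetical log10 annual peaks: long-record site x, short-record site y.
rng = np.random.default_rng(5)
x = rng.normal(3.0, 0.25, 30)
y = 0.9 * x[:15] + rng.normal(0.3, 0.05, 15)  # 15 concurrent years
y_synth = move1(x[:15], y, x[15:])            # extend y over the base period
print(y_synth[:5])
```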
1990-11-01
$(Q + aa')^{-1} = Q^{-1} - \frac{Q^{-1}aa'Q^{-1}}{1 + a'Q^{-1}a}$. This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [Table-of-contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; Essa, K. S.
2015-02-01
We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained even when errors are introduced in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined only approximately. In the case of practical data (the Bouguer anomaly over the Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
Challenges of Electronic Medical Surveillance Systems
2004-06-01
More sophisticated approaches, such as regression models and classical autoregressive moving average (ARIMA) models that make estimates based on... with those predicted by a mathematical model. The primary benefit of ARIMA models is their ability to correct for local trends in the data so that... works well, for example, during a particularly severe flu season, where prolonged periods of high visit rates are adjusted to by the ARIMA model, thus
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is an equivalent of the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and on a real gas turbine engine system for model identification, respectively. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
In-use activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks.
Sandhu, Gurdas S; Frey, H Christopher; Bartelt-Hunt, Shannon; Jones, Elizabeth
2015-03-01
The objectives of this study were to quantify real-world activity, fuel use, and emissions for heavy duty diesel roll-off refuse trucks; evaluate the contribution of duty cycles and emissions controls to variability in cycle average fuel use and emission rates; quantify the effect of vehicle weight on fuel use and emission rates; and compare empirical cycle average emission rates with the U.S. Environmental Protection Agency's MOVES emission factor model predictions. Measurements were made at 1 Hz on six trucks of model years 2005 to 2012, using onboard systems. The trucks traveled 870 miles, had an average speed of 16 mph, and collected 165 tons of trash. The average fuel economy was 4.4 mpg, which is approximately twice previously reported values for residential trash collection trucks. On average, 50% of time is spent idling and about 58% of emissions occur in urban areas. Newer trucks with selective catalytic reduction and diesel particulate filter had NOx and PM cycle average emission rates that were 80% lower and 95% lower, respectively, compared to older trucks without. On average, the combined can and trash weight was about 55% of chassis weight. The marginal effect of vehicle weight on fuel use and emissions is highest at low loads and decreases as load increases. Among 36 cycle average rates (6 trucks×6 cycles), MOVES-predicted values and estimates based on real-world data have similar relative trends. MOVES-predicted CO2 emissions are similar to those of the real world, while NOx and PM emissions are, on average, 43% lower and 300% higher, respectively. The real-world data presented here can be used to estimate benefits of replacing old trucks with new trucks. Further, the data can be used to improve emission inventories and model predictions. In-use measurements of the real-world activity, fuel use, and emissions of heavy-duty diesel roll-off refuse trucks can be used to improve the accuracy of predictive models, such as MOVES, and emissions inventories. Further, the activity data from this study can be used to generate more representative duty cycles for more accurate chassis dynamometer testing. Comparisons of old and new model year diesel trucks are useful in analyzing the effect of fleet turnover. The analysis of effect of haul weight on fuel use can be used by fleet managers to optimize operations to reduce fuel cost.
NASA Astrophysics Data System (ADS)
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of the trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and use crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show, first, that the fuzzy moving average strategy obtains a more stable rate of return than the moving average strategies. Second, the holding-amount series is highly sensitive to the price series. Third, simple moving average methods are more efficient. Last, the fuzzy extents of extremely low, high, and very high are the most popular. These results are helpful in investment decisions.
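A minimal sketch of how a fuzzy membership can grade a moving-average signal into a trading volume; the triangular membership, thresholds, and constants below are illustrative stand-ins for the paper's GA-optimized rule set:

```python
import numpy as np

def sma(prices: np.ndarray, n: int) -> float:
    """Simple moving average over the last n prices."""
    return prices[-n:].mean()

def fuzzy_trade_volume(prices: np.ndarray, fast: int = 5, slow: int = 20,
                       fuzzy_extent: float = 0.02, max_volume: int = 100) -> int:
    """Grade a moving-average crossover signal instead of firing all-or-nothing.

    The normalized gap between fast and slow SMAs passes through a
    triangular membership function; the membership degree scales volume.
    """
    gap = (sma(prices, fast) - sma(prices, slow)) / sma(prices, slow)
    strength = min(abs(gap) / fuzzy_extent, 1.0)  # membership in [0, 1]
    side = 1 if gap > 0 else -1                   # +1 buy, -1 sell
    return side * int(round(strength * max_volume))

rng = np.random.default_rng(6)
prices = np.cumsum(rng.normal(0.1, 1.0, 120)) + 60.0  # synthetic futures prices
print(fuzzy_trade_volume(prices))  # signed volume suggested by the fuzzy rule
```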
Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick
2009-08-01
Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated emission rates of methane along with meteorological and PIC data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Integration of social information by human groups
Granovskiy, Boris; Gold, Jason M.; Sumpter, David; Goldstone, Robert L.
2015-01-01
We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0 and 100% (e.g., ‘What percentage of Americans are left-handed?’). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move towards the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased.
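A minimal sketch of the postulated two-stage process (decide whether to attend to peers, then move toward the group mean in proportion to distance); the probability and step size are illustrative parameters, not the paper's fitted values:

```python
import numpy as np

def update_guesses(guesses: np.ndarray, p_use_social: float,
                   step: float, rng: np.random.Generator) -> np.ndarray:
    """One round of the two-stage social-influence model.

    Each person first decides whether to take peer guesses into account;
    if so, they move toward the group mean by an amount proportional to
    the distance between their own guess and that mean.
    """
    mean = guesses.mean()
    use = rng.random(len(guesses)) < p_use_social
    return np.where(use, guesses + step * (mean - guesses), guesses)

rng = np.random.default_rng(7)
guesses = np.array([5.0, 8.0, 12.0, 14.0, 55.0])  # one outlier at 55%
print(update_guesses(guesses, p_use_social=0.7, step=0.5, rng=rng))
```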
Dexter, F
2000-10-01
We examined how to program an operating room (OR) information system to assist the OR manager in deciding whether to move the last case of the day in one OR to another OR that is empty, to decrease overtime labor costs. We first developed a statistical strategy to predict whether moving the case would decrease overtime labor costs for first-shift nurses and anesthesia providers. The strategy was based on using historical case duration data stored in a surgical services information system. Second, we estimated the incremental overtime labor costs achieved if our strategy was used for moving cases versus movement of cases by an OR manager who knew in advance exactly how long each case would last. We found that if our strategy was used to decide whether to move cases, then, depending on parameter values, only 2.0 to 4.3 more min of overtime would be required per case than if the OR manager had perfect retrospective knowledge of case durations. The use of other information technologies to assist in the decision of whether to move a case, such as real-time patient tracking information systems, closed-circuit cameras, or graphical airport-style displays, can, on average, reduce overtime by no more than 2 to 4 min per case that can be moved.
An efficient estimator to monitor rapidly changing forest conditions
Raymond L. Czaplewski; Michael T. Thompson; Gretchen G. Moisen
2012-01-01
Extensive expanses of forest often change at a slow pace. In this common situation, FIA produces informative estimates of current status with the Moving Average (MA) method and post-stratification with a remotely sensed map of forest-nonforest cover. However, MA "smoothes out" estimates over time, which confounds analyses of temporal trends; and post-...
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performance of the proposed MFXDMA algorithms is compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, for which the multifractal nature is known analytically. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performance, outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performance, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract a rational multifractal nature.
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average (KARMA) models, a dynamic class of models for time series taking values in the double-bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters, and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis, and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
Shao, Ying-Hui; Gu, Gao-Feng; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Sornette, Didier
2012-01-01
Notwithstanding the significant efforts to develop estimators of long-range correlations (LRC) and to compare their performance, no clear consensus exists on what is the best method and under which conditions. In addition, synthetic tests suggest that the performance of LRC estimators varies when using different generators of LRC time series. Here, we compare the performances of four estimators [Fluctuation Analysis (FA), Detrended Fluctuation Analysis (DFA), Backward Detrending Moving Average (BDMA), and Centred Detrending Moving Average (CDMA)]. We use three different generators [Fractional Gaussian Noises, and two ways of generating Fractional Brownian Motions]. We find that CDMA has the best performance and DFA is only slightly worse in some situations, while FA performs the worst. In addition, CDMA and DFA are less sensitive to the scaling range than FA. Hence, CDMA and DFA remain “The Methods of Choice” in determining the Hurst index of time series.
Statistical description of turbulent transport for flux driven toroidal plasmas
NASA Astrophysics Data System (ADS)
Anderson, J.; Imadera, K.; Kishimoto, Y.; Li, J. Q.; Nordman, H.
2017-06-01
A novel methodology for analyzing non-Gaussian probability distribution functions (PDFs) of intermittent turbulent transport in global full-f gyrokinetic simulations is presented. In this work, the autoregressive integrated moving average (ARIMA) model is applied to time series data of intermittent turbulent heat transport to separate noise and oscillatory trends, allowing for the extraction of non-Gaussian features of the PDFs. It is shown that the non-Gaussian tails of the PDFs from first-principles gyrokinetic simulations agree with an analytical estimation based on a two-fluid model.
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated using arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches is investigated through three nonlinear MIMO case studies.
Compatible estimators of the components of change for a rotating panel forest inventory design
Francis A. Roesch
2007-01-01
This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...
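As a point of reference for the estimator family named here, a plain exponentially weighted moving average over successive panel estimates looks like the sketch below; the weighting constant and the data are illustrative, and the paper's particular variant is not specified in the abstract:

```python
def ewma(panel_means, alpha=0.3):
    """Exponentially weighted moving average of successive panel estimates."""
    est = panel_means[0]
    for m in panel_means[1:]:
        est = alpha * m + (1 - alpha) * est   # newer panels get weight alpha
    return est

print(ewma([102.0, 98.5, 101.2, 99.8]))       # illustrative volumes per ha
```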
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequentially dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
Detrending moving average algorithm for multifractals
NASA Astrophysics Data System (ADS)
Gu, Gao-Feng; Zhou, Wei-Xing
2010-07-01
The detrending moving average (DMA) algorithm is a widely used technique to quantify the long-term correlations of nonstationary time series and the long-range correlations of fractal surfaces, which contains a parameter θ determining the position of the detrending window. We develop multifractal detrending moving average (MFDMA) algorithms for the analysis of one-dimensional multifractal measures and higher-dimensional multifractals, which is a generalization of the DMA method. The performance of the one-dimensional and two-dimensional MFDMA methods is investigated using synthetic multifractal measures with analytical solutions for backward (θ=0), centered (θ=0.5), and forward (θ=1) detrending windows. We find that the estimated multifractal scaling exponent τ(q) and the singularity spectrum f(α) are in good agreement with the theoretical values. In addition, the backward MFDMA method has the best performance, providing the most accurate estimates of the scaling exponents with the lowest error bars, while the centered MFDMA method has the worst performance. The backward MFDMA algorithm is also found to outperform multifractal detrended fluctuation analysis. The one-dimensional backward MFDMA method is applied to the time series of the Shanghai Stock Exchange Composite Index, and its multifractal nature is confirmed.
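A simplified one-dimensional backward (θ=0) MFDMA sketch, assuming the standard construction of q-th order fluctuation functions from moving-average residuals; the scales, q values, and the monofractal test signal are illustrative (a monofractal input should give h(q) near 0.5 for all q):

```python
import numpy as np

def mfdma_backward(x, q_list=(-2, 2), scales=(8, 16, 32, 64, 128)):
    """Simplified 1-D backward (theta=0) MFDMA: returns h(q) estimates."""
    y = np.cumsum(x - x.mean())                       # profile
    h = {}
    for q in q_list:                                  # q = 0 needs a log average
        logs, logF = [], []
        for n in scales:
            # backward moving average uses only past points
            ma = np.convolve(y, np.ones(n) / n, mode="full")[:y.size]
            e = (y - ma)[n:]                          # residual series
            seg = e[: e.size // n * n].reshape(-1, n)
            F2 = (seg**2).mean(axis=1)                # per-segment variance
            Fq = (F2 ** (q / 2)).mean() ** (1 / q)    # q-th order average
            logs.append(np.log(n)); logF.append(np.log(Fq))
        h[q], _ = np.polyfit(logs, logF, 1)
    return h

print(mfdma_backward(np.random.default_rng(2).standard_normal(20_000)))
```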
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to the criterion of residual non-correlation, and the goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008 and to evaluate the validity of the model by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA (1, 1, 1) (0, 1, 2)(12) fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, would be 9.390 per 100,000. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a prediction model of high precision for short-term forecasting.
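The reported ARIMA (1, 1, 1) (0, 1, 2)(12) structure can be fitted directly with standard tooling; the sketch below uses a placeholder monthly series, since the Shanghai data are not reproduced here:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
months = 216                                    # 18 years of monthly rates
season = 1.5 + np.sin(2 * np.pi * np.arange(months) / 12)
y = season + 0.1 * rng.standard_normal(months)  # placeholder incidence series

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12))
res = model.fit(disp=False)
print(res.aic, res.forecast(steps=12))          # one-year-ahead forecast
```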
NASA Astrophysics Data System (ADS)
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-10-01
Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models in forecasting dengue epidemic specific to the young and adult population of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since the least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
Hinds, Aynslie M; Bechtel, Brian; Distasio, Jino; Roos, Leslie L; Lix, Lisa M
2018-06-05
Residence in public housing, a subsidized and managed government program, may affect health and healthcare utilization. We compared healthcare use in the year before individuals moved into public housing with usage during their first year of tenancy. We also described trends in use. We used linked population-based administrative data housed in the Population Research Data Repository at the Manitoba Centre for Health Policy. The cohort consisted of individuals who moved into public housing in 2009 and 2010. We counted the number of hospitalizations, general practitioner (GP) visits, specialist visits, emergency department visits, and prescription drugs dispensed in the twelve 30-day intervals (i.e., months) immediately preceding and following the public housing move-in date. Generalized linear models with generalized estimating equations tested for a period (pre/post-move-in) by month interaction. Odds ratios (ORs), incident rate ratios (IRRs), and means are reported along with 95% confidence intervals (95% CIs). The cohort included 1942 individuals; the majority were female (73.4%), lived in low-income areas, and received government assistance (68.1%). On average, the cohort had more than four health conditions. Over the 24 30-day intervals, the percentage of the cohort that visited a GP, specialist, and an emergency department ranged between 37.0% and 43.0%, 10.0% and 14.0%, and 6.0% and 10.0%, respectively, while the percentage of the cohort hospitalized ranged from 1.0% to 5.0%. Generally, these percentages were highest in the few months before the move-in date and lowest in the few months after the move-in date. The period by month interaction was statistically significant for hospitalizations, GP visits, and prescription drug use. The average change in the odds, rate, or mean was smaller in the post-move-in period than in the pre-move-in period. Use of some healthcare services declined after people moved into public housing; however, the decrease was only observed in the first few months and utilization rebounded. Knowledge of healthcare trends before individuals move in is informative for ensuring that appropriate supports are available to new public housing residents. Further study is needed to determine whether decreased healthcare utilization following a move is attributable to decreased access.
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
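A moving-average type-token ratio is simple enough to state in code; this is the textbook construction with an illustrative window size, not necessarily the authors' exact implementation:

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: mean TTR over all windows of fixed size."""
    if len(tokens) < window:
        raise ValueError("sample shorter than the window")
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

text = "the cat sat on the mat and the dog sat on the rug".split()
print(mattr(text, window=5))
```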
49 CFR 24.301 - Payment for actual reasonable moving and related expenses.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...
49 CFR 24.301 - Payment for actual reasonable moving and related expenses.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...
49 CFR 24.301 - Payment for actual reasonable moving and related expenses.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...
49 CFR 24.301 - Payment for actual reasonable moving and related expenses.
Code of Federal Regulations, 2011 CFR
2011-10-01
.... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...
49 CFR 24.301 - Payment for actual reasonable moving and related expenses.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...
NASA Astrophysics Data System (ADS)
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the fractionally autoregressive integrated moving average process (ARFIMA) to demonstrate the effectiveness of MFDMA in detecting auto-correlation at different sample lengths and to simulate artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also offer several recommendations.
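One standard way to simulate the ARFIMA benchmark process used here is through the binomial-series weights of the fractional integration operator; a sketch for ARFIMA(0, d, 0), with the value of d illustrative (the resulting long-memory series has Hurst index d + 0.5):

```python
import numpy as np

def arfima_0d0(n, d, rng=None):
    """Simulate ARFIMA(0, d, 0) via the MA(inf) expansion of (1-B)^(-d)."""
    rng = rng or np.random.default_rng()
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k      # binomial-series weights
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]               # long-memory series

x = arfima_0d0(10_000, d=0.3)
print(x[:5])
```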
Mixed Estimation for a Forest Survey Sample Design
Francis A. Roesch
1999-01-01
Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...
ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.
Lee, Keunbaik; Baek, Changryong; Daniels, Michael J
2017-11-01
In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote as ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
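The AR-side decomposition the paper builds on can be made concrete in a few lines: with T unit lower triangular holding the negated autoregressive parameters and D the diagonal of innovation variances, the covariance is Sigma = T^{-1} D T^{-T}, positive definite by construction. A minimal sketch with an AR(1)-style fill, not the authors' full ARMACD:

```python
import numpy as np

m, phi = 5, 0.6                      # repeated measures, AR(1)-style coefficient
T = np.eye(m)
for j in range(1, m):
    T[j, j - 1] = -phi               # y_j - phi * y_{j-1} = innovation
D = np.diag(np.full(m, 1.0))         # innovation variances

Sigma = np.linalg.inv(T) @ D @ np.linalg.inv(T).T
print(np.all(np.linalg.eigvalsh(Sigma) > 0))   # positive definite by construction
```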
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aa_M during the declining portion of cycle 23, R_M for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aa_m for cycle 24 (greater than or equal to 14.6), R_M for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
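The MOVE1 (Maintenance of Variance Extension, type 1) relation mentioned above transfers statistics by matching the means and variances of the concurrent samples; a minimal sketch, with variable names and the log-space example values ours:

```python
import numpy as np

def move1(x_index, y_partial, x_new):
    """MOVE1 record extension: estimate y at the partial-record site from
    index-site flows, preserving the variance of the measured sample.
    Assumes positively correlated sites."""
    mx, my = np.mean(x_index), np.mean(y_partial)
    sx, sy = np.std(x_index, ddof=1), np.std(y_partial, ddof=1)
    return my + (sy / sx) * (np.asarray(x_new) - mx)

# ~10 concurrent measurements (log-space is typical for low flows)
x = np.log([2.1, 1.8, 3.0, 2.5, 1.2, 0.9, 1.6, 2.8, 2.2, 1.4])
y = np.log([0.50, 0.41, 0.77, 0.60, 0.25, 0.18, 0.36, 0.70, 0.52, 0.30])
print(np.exp(move1(x, y, np.log([1.0, 2.0]))))
```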
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold can adapt to the changes of diverse illumination conditions throughout the day. This leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce the false estimation of vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate these problems by adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22%, together with a low false detection rate of about 3.92%.
Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin
Shrestha, M.S.; Artan, G.A.; Bajracharya, S.R.; Gautam, D.K.; Tokar, S.A.
2011-01-01
In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge observed rainfall inputs using kriging interpolation from 2003 was used for calibration and 2004 for validation to simulate stream flow, with both having a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Centre's rainfall estimates (CPC-RFE2.0), using the same calibrated parameters, the model performance for 2003 deteriorated but improved after recalibration with CPC-RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting CPC-RFE2.0 by a seasonal, monthly and 7-day moving average ratio improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction. © 2011 The Authors. Journal of Flood Risk Management © 2011 The Chartered Institution of Water and Environmental Management.
Bias-adjusted satellite-based rainfall estimates for predicting floods: Narayani Basin
Artan, Guleid A.; Tokar, S.A.; Gautam, D.K.; Bajracharya, S.R.; Shrestha, M.S.
2011-01-01
In Nepal, as the spatial distribution of rain gauges is not sufficient to provide a detailed perspective on the highly varied spatial nature of rainfall, satellite-based rainfall estimates provide the opportunity for timely estimation. This paper presents the flood prediction of Narayani Basin at the Devghat hydrometric station (32 000 km2) using bias-adjusted satellite rainfall estimates and the Geospatial Stream Flow Model (GeoSFM), a spatially distributed, physically based hydrologic model. The GeoSFM with gridded gauge observed rainfall inputs using kriging interpolation from 2003 was used for calibration and 2004 for validation to simulate stream flow, with both having a Nash-Sutcliffe efficiency above 0.7. With the National Oceanic and Atmospheric Administration Climate Prediction Centre's rainfall estimates (CPC_RFE2.0), using the same calibrated parameters, the model performance for 2003 deteriorated but improved after recalibration with CPC_RFE2.0, indicating the need to recalibrate the model with satellite-based rainfall estimates. Adjusting CPC_RFE2.0 by a seasonal, monthly and 7-day moving average ratio improved model performance. Furthermore, a new gauge-satellite merged rainfall estimate obtained from ingestion of local rain gauge data resulted in significant improvement in flood predictability. The results indicate the applicability of satellite-based rainfall estimates in flood prediction with appropriate bias correction.
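One plausible reading of the 7-day moving-average ratio adjustment, sketched on synthetic gauge and satellite series; the window, the distributions, and the bias factor are assumptions, not the study's values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
days = pd.date_range("2003-06-01", periods=120, freq="D")
gauge = pd.Series(rng.gamma(2.0, 5.0, days.size), index=days)   # mm/day
satellite = 0.7 * gauge + rng.normal(0, 2, days.size)           # biased estimate

ratio = gauge.rolling(7).mean() / satellite.rolling(7).mean()   # 7-day MA ratio
adjusted = satellite * ratio                                    # bias-corrected
print(adjusted.tail())
```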
Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.
2017-07-10
The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single-determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% when the locality approximation is used. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error dominates the locality error when a single determinant is used.
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving-window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
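For orientation, the classical Tikhonov step that the improved penalty replaces can be written in a few lines; the force vector imitating a DFS-SAV signal and the random system matrix below are illustrative stand-ins for the bridge-vehicle model, not the paper's setup:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Classical Tikhonov solution: min ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 50))          # stand-in force-to-response map
x_true = np.zeros(50); x_true[10:40] = 1.0  # flat force history (DFS-SAV-like)
b = A @ x_true + 0.05 * rng.standard_normal(200)
print(np.linalg.norm(tikhonov(A, b, 1e-1) - x_true))
```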
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies.
Chan Phooi M'ng, Jacinta; Zainudin, Rozaimah
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA') in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA'. The Efficacy Ratio adjusts the AMA' to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA' is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA' are superior to the passive buy-and-hold strategy. Specifically, AMA' outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets.
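The exact Efficacy Ratio inside AMA' is not given in the abstract; a Kaufman-style adaptive moving average, which likewise scales its smoothing constant by a net-move-to-total-move ratio, conveys the idea (all parameters illustrative):

```python
import numpy as np

def adaptive_ma(price, n=10, fast=2, slow=30):
    """Kaufman-style adaptive moving average: the smoothing constant tracks
    an efficiency ratio (net move / total move), analogous in spirit to the
    paper's volatility-adjusted AMA'."""
    price = np.asarray(price, dtype=float)
    ama = np.empty_like(price); ama[:n] = price[:n]
    f, s = 2 / (fast + 1), 2 / (slow + 1)
    for t in range(n, price.size):
        change = abs(price[t] - price[t - n])
        volatility = np.abs(np.diff(price[t - n:t + 1])).sum()
        er = change / volatility if volatility else 0.0   # efficiency ratio
        sc = (er * (f - s) + s) ** 2                      # smoothing constant
        ama[t] = ama[t - 1] + sc * (price[t] - ama[t - 1])
    return ama

print(adaptive_ma(np.cumsum(np.random.default_rng(6).standard_normal(100)))[-5:])
```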
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.
On nonstationarity and antipersistency in global temperature series
NASA Astrophysics Data System (ADS)
Kärner, O.
2002-10-01
Statistical analysis is carried out for satellite-based global daily tropospheric and stratospheric temperature anomaly and solar irradiance data sets. Behavior of the series appears to be nonstationary with stationary daily increments. Estimating long-range dependence between the increments reveals a remarkable difference between the two temperature series. Global average tropospheric temperature anomaly behaves similarly to the solar irradiance anomaly. Their daily increments show antipersistency for scales longer than 2 months. The property points at a cumulative negative feedback in the Earth climate system governing the tropospheric variability during the last 22 years. The result emphasizes a dominating role of the solar irradiance variability in variations of the tropospheric temperature and gives no support to the theory of anthropogenic climate change. The global average stratospheric temperature anomaly proceeds like a 1-dim random walk at least up to 11 years, allowing good presentation by means of the autoregressive integrated moving average (ARIMA) models for monthly series.
Dynamical features of hazardous near-Earth objects
NASA Astrophysics Data System (ADS)
Emel'yanenko, V. V.; Naroenkov, S. A.
2015-07-01
We discuss the dynamical features of near-Earth objects moving in dangerous proximity to Earth. We report the computation results for the motions of all observed near-Earth objects over a 600-year-long time period: 300 years in the past and 300 years in the future. We analyze the dynamical features of Earth-approaching objects. In particular, we established that the observed distribution of geocentric velocities of dangerous objects depends on their size. No bodies with geocentric velocities smaller than 5 km s-1 have been found among hazardous objects with absolute magnitudes H < 18, whereas 9% of observed objects with H < 27 pass near Earth moving at such velocities. On the other hand, we found a tendency for geocentric velocities to increase at H > 29. We estimated the distribution of absolute magnitudes of hazardous objects based on our analysis of the data for the asteroids that have passed close to Earth. We inferred the Earth-impact frequencies for objects of different sizes. Impacts of objects with H < 18 occur on average once every 0.53 Myr, and impacts of objects with H < 27 once every 130-240 years. We show that currently about 0.1% of all near-Earth objects with diameters greater than 10 m have been discovered. We point out the discrepancies between the estimates of impact rates of Chelyabinsk-type objects, determined from fireball observations and from the data of telescopic asteroid tracking surveys. These estimates can be reconciled assuming that Chelyabinsk-sized asteroids have very low albedos (about 0.02 on average).
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with a high signal-to-noise ratio in real time. The approach has also been tested on a few patient data sets from the prostate region, and the results are encouraging.
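The spectral step described here, average power over the excitation band converted to an RMS displacement measure, can be sketched with Welch's method; the frame rate, excitation frequency, and window length are assumed values:

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # ultrasound frame rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
# Displacement time sequence at one location: band-limited excitation + noise
disp = np.sin(2 * np.pi * 7 * t) + 0.2 * rng.standard_normal(t.size)

f, Pxx = welch(disp, fs=fs, nperseg=256)     # Welch periodogram
band = (f >= 0) & (f <= 20)                  # excitation band 0-20 Hz
df = f[1] - f[0]
amplitude = np.sqrt(Pxx[band].sum() * df)    # RMS motion measure
print(amplitude)
```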
Using Baidu Search Index to Predict Dengue Outbreak in China
NASA Astrophysics Data System (ADS)
Liu, Kangkang; Wang, Tao; Yang, Zhicong; Huang, Xiaodong; Milinovich, Gabriel J.; Lu, Yi; Jing, Qinlong; Xia, Yao; Zhao, Zhengyang; Yang, Yang; Tong, Shilu; Hu, Wenbiao; Lu, Jiahai
2016-12-01
This study identified the possible threshold to predict dengue fever (DF) outbreaks using the Baidu Search Index (BSI). Time-series classification and regression tree models based on BSI were used to develop a predictive model for DF outbreaks in Guangzhou and Zhongshan, China. In the regression tree models, the mean autochthonous DF incidence rate increased approximately 30-fold in Guangzhou when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 382. When the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 91.8, there was an approximately 9-fold increase in the mean autochthonous DF incidence rate in Zhongshan. In the classification tree models, the results showed that when the weekly BSI for DF at the lagged moving average of 1-3 weeks was more than 99.3, there was an 89.28% chance of a DF outbreak in Guangzhou, while in Zhongshan, when the weekly BSI for DF at the lagged moving average of 1-5 weeks was more than 68.1, the chance of a DF outbreak rose to 100%. The study indicated that low-cost internet-based surveillance systems can be a valuable complement to traditional DF surveillance in China.
Pennington, Audrey Flak; Strickland, Matthew J.; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G.; Hansen, Craig; Darrow, Lyndsey A.
2018-01-01
Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a research line-source dispersion model (RLINE) at 250 meter resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from −2% to −10% bias). PMID:27966666
Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A
2017-09-01
Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
Estimating hydraulic properties using a moving-model approach and multiple aquifer tests
Halford, K.J.; Yobbi, D.
2006-01-01
A new method was developed for characterizing geohydrologic columns that extended >600 m deep at sites with as many as six discrete aquifers. This method was applied at 12 sites within the Southwest Florida Water Management District. Sites typically were equipped with multiple production wells, one for each aquifer and one or more observation wells per aquifer. The average hydraulic properties of the aquifers and confining units within radii of 30 to >300 m were characterized at each site. Aquifers were pumped individually and water levels were monitored in stressed and adjacent aquifers during each pumping event. Drawdowns at a site were interpreted using a radial numerical model that extended from land surface to the base of the geohydrologic column and simulated all pumping events. Conceptually, the radial model moves between stress periods and recenters on the production well during each test. Hydraulic conductivity was assumed homogeneous and isotropic within each aquifer and confining unit. Hydraulic property estimates for all of the aquifers and confining units were consistent and reasonable because results from multiple aquifers and pumping events were analyzed simultaneously. Copyright © 2005 National Ground Water Association.
Estimating hydraulic properties using a moving-model approach and multiple aquifer tests.
Halford, Keith J; Yobbi, Dann
2006-01-01
A new method was developed for characterizing geohydrologic columns that extended >600 m deep at sites with as many as six discrete aquifers. This method was applied at 12 sites within the Southwest Florida Water Management District. Sites typically were equipped with multiple production wells, one for each aquifer and one or more observation wells per aquifer. The average hydraulic properties of the aquifers and confining units within radii of 30 to >300 m were characterized at each site. Aquifers were pumped individually and water levels were monitored in stressed and adjacent aquifers during each pumping event. Drawdowns at a site were interpreted using a radial numerical model that extended from land surface to the base of the geohydrologic column and simulated all pumping events. Conceptually, the radial model moves between stress periods and recenters on the production well during each test. Hydraulic conductivity was assumed homogeneous and isotropic within each aquifer and confining unit. Hydraulic property estimates for all of the aquifers and confining units were consistent and reasonable because results from multiple aquifers and pumping events were analyzed simultaneously.
An Estimate of North Atlantic Basin Tropical Cyclone Activity for 2008
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2008-01-01
The statistics of North Atlantic basin tropical cyclones for the interval 1945-2007 are examined and estimates are given for the frequencies of occurrence of the number of tropical cyclones, number of hurricanes, number of major hurricanes, number of category 4/5 hurricanes, and number of U.S. land-falling hurricanes for the 2008 hurricane season. Also examined are the variations of peak wind speed, average peak wind speed per storm, lowest pressure, average lowest pressure per storm, recurrence rate and duration of extreme events (El Nino and La Nina), the variation of 10-yr moving averages of parametric first differences, and the association of decadal averages of frequencies of occurrence of North Atlantic basin tropical cyclones against decadal averages of Armagh Observatory, Northern Ireland, annual mean temperature (found to be extremely important for number of tropical cyclones and number of hurricanes). Because the 2008 hurricane season seems destined to be one that is non-El Nino-related and is a post-1995 season, estimates of the frequencies of occurrence for the various subsets of storms should be above long-term averages.
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph
2005-12-01
Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.
Time series modelling of increased soil temperature anomalies during long period
NASA Astrophysics Data System (ADS)
Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar
2015-10-01
Soil temperature just beneath the soil surface is highly dynamic, has a direct impact on plant seed germination, and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods of January 1986-December 2011 and January 2012-December 2013 were taken into consideration to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum Akaike information criterion value, and its estimated coefficients were different from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data (0.99 for a lead time of 1 week). Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.
Mehta, Amar J.; Kloog, Itai; Zanobetti, Antonella; Coull, Brent A.; Sparrow, David; Vokonas, Pantel; Schwartz, Joel
2014-01-01
Background The underlying mechanisms of the association between ambient temperature and cardiovascular morbidity and mortality are not well understood, particularly for daily temperature variability. We evaluated whether daily mean temperature and the standard deviation of temperature were associated with heart rate-corrected QT interval (QTc) duration, a marker of ventricular repolarization, in a prospective cohort of older men. Methods This longitudinal analysis included 487 older men participating in the VA Normative Aging Study with up to three visits between 2000 and 2008 (n = 743). We analyzed associations between QTc and moving averages (1–7, 14, 21, and 28 days) of the 24-hour mean and standard deviation of temperature as measured from a local weather monitor, and the 24-hour mean temperature estimated from a spatiotemporal prediction model, in time-varying linear mixed-effect regression. Effect modification by season, diabetes, coronary heart disease, obesity, and age was also evaluated. Results Higher mean temperature, as measured from the local monitor and estimated from the prediction model, was associated with longer QTc at moving averages of 21 and 28 days. Increased 24-hr standard deviation of temperature was associated with longer QTc at moving averages from 4 up to 28 days; a 1.9°C interquartile range increase in the 4-day moving average standard deviation of temperature was associated with a 2.8 msec (95%CI: 0.4, 5.2) longer QTc. Associations between the 24-hr standard deviation of temperature and QTc were stronger in colder months and in participants with diabetes and coronary heart disease. Conclusion/Significance In this sample of older men, elevated mean temperature was associated with longer QTc, and increased variability of temperature was associated with longer QTc, particularly during colder months and among individuals with diabetes and coronary heart disease. These findings may offer insight into an important underlying mechanism of temperature-related cardiovascular morbidity and mortality in an older population. PMID:25238150
Assessing the Efficacy of Adjustable Moving Averages Using ASEAN-5 Currencies
2016-01-01
The objective of this research is to examine the trends in the exchange rate markets of the ASEAN-5 countries (Indonesia (IDR), Malaysia (MYR), the Philippines (PHP), Singapore (SGD), and Thailand (THB)) through the application of dynamic moving average trading systems. This research offers evidence of the usefulness of the time-varying volatility technical analysis indicator, Adjustable Moving Average (AMA′) in deciphering trends in these ASEAN-5 exchange rate markets. This time-varying volatility factor, referred to as the Efficacy Ratio in this paper, is embedded in AMA′. The Efficacy Ratio adjusts the AMA′ to the prevailing market conditions by avoiding whipsaws (losses due, in part, to acting on wrong trading signals, which generally occur when there is no general direction in the market) in range trading and by entering early into new trends in trend trading. The efficacy of AMA′ is assessed against other popular moving-average rules. Based on the January 2005 to December 2014 dataset, our findings show that the moving averages and AMA′ are superior to the passive buy-and-hold strategy. Specifically, AMA′ outperforms the other models for the United States Dollar against PHP (USD/PHP) and USD/THB currency pairs. The results show that different length moving averages perform better in different periods for the five currencies. This is consistent with our hypothesis that a dynamic adjustable technical indicator is needed to cater for different periods in different markets. PMID:27574972
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, M; Rockhill, J; Phillips, M
Purpose: To investigate a spatiotemporally optimal radiotherapy prescription scheme and its potential benefit for glioblastoma (GBM) patients using the proliferation and invasion (PI) glioma model. Methods: The standard prescription for GBM was assumed to deliver 46Gy in 23 fractions to GTV1+2cm margin and an additional 14Gy in 7 fractions to GTV2+2cm margin. We simulated tumor proliferation and invasion in 2D according to the PI glioma model with a moving velocity of 0.029 (slow-move), 0.079 (average-move), and 0.13 (fast-move) mm/day for GTV2 with a radius of 1 and 2cm. For each tumor, the margin around GTV1 and GTV2 was varied over 0–6 cm and 1–3 cm respectively. Total dose to GTV1 was constrained such that the equivalent uniform dose (EUD) to normal brain equals the EUD with the standard prescription. A non-stationary dose policy, where the fractional dose varies, was investigated to estimate the temporal effect of the radiation dose. The efficacy of an optimal prescription scheme was evaluated by tumor cell-surviving fraction (SF), EUD, and the expected survival time. Results: The optimal prescription for the slow-move tumors was to use 3.0 (small)-3.5 (large) cm margins to GTV1 and a 1.5cm margin to GTV2. For the average- and fast-move tumors, it was optimal to use a 6.0cm margin for GTV1, suggesting that whole-brain therapy is optimal, and then 1.5cm (average-move) and 1.5–3.0cm (fast-move, small-large) margins for GTV2. It was optimal to deliver the boost sequentially using a linearly decreasing fractional dose for all tumors. The optimal prescription reduced the tumor SF to 0.001–0.465% of that obtained with the standard prescription, and increased tumor EUD by 25.3–49.3% and the estimated survival time by 7.6–22.2 months. Conclusion: It is feasible to optimize a prescription scheme depending on individual tumor characteristics. A personalized prescription scheme could potentially increase tumor EUD and the expected survival time significantly without increasing EUD to normal brain.
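The PI model underlying these simulations is of Fisher-KPP reaction-diffusion type, with front speed 2*sqrt(D*rho). A minimal 1-D explicit finite-difference sketch, with D and rho chosen here only so that the front speed matches the slow-move value of about 0.029 mm/day; all values are illustrative, not the study's parameters:

```python
import numpy as np

# 1-D proliferation-invasion sketch: dc/dt = D c_xx + rho c (1 - c)
D, rho = 0.0042, 0.05         # mm^2/day, 1/day -> 2*sqrt(D*rho) ~ 0.029 mm/day
dx, dt = 0.2, 0.5             # mm, day (dt below the dx^2/(2D) stability limit)
x = np.arange(0, 100, dx)
c = np.where(x < 10, 0.8, 0.0)                # initial tumor cell density

for _ in range(int(300 / dt)):                # 300 simulated days
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = c + dt * (D * lap + rho * c * (1 - c))
    c[0], c[-1] = c[1], c[-2]                 # approximate no-flux boundaries

print("front speed ~", 2 * np.sqrt(D * rho), "mm/day")
```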
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
Kocur, Dušan; Švecová, Mária; Rovňáková, Jana
2013-01-01
In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improve through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, the novel method of the cooperative localization referred to as joining intersections of the ellipses is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with the M-sequence UWB radars. The direct calculation method is applied for the target localization by particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method is considered. PMID:24021968
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
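A minimal sketch of the workflow the report describes (order identification, maximum-likelihood fitting in the time domain, residual testing), run here on a synthetic ARMA(2,1) stand-in for a digitized acceleration record rather than the California data.

```python
# Sketch: identify/fit/check an ARMA model with statsmodels, using a synthetic
# ARMA(2,1) series as a stand-in for a digitized acceleration record.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):                       # AR(2) + MA(1) ground truth
    x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + e[t] + 0.4 * e[t - 1]

fit = ARIMA(x, order=(2, 0, 1)).fit()          # maximum-likelihood, time domain
print(fit.params)                              # AR/MA coefficients, noise variance
print(acorr_ljungbox(fit.resid, lags=[10]))    # whiteness test of the residuals
```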
Queues with Choice via Delay Differential Equations
NASA Astrophysics Data System (ADS)
Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth
Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model, however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model based on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight on how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
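A small numerical sketch of the first fluid model: two queues fed by a Multinomial Logit choice applied to queue lengths delayed by Δ, drained at a constant rate. Euler stepping with a history buffer stands in for a proper DDE solver; the parameter values are assumptions chosen to put Δ past the oscillation threshold.

```python
# Euler simulation of the delayed-information fluid model (assumed parameters):
#   dq_i/dt = lam * p_i(q(t - delay)) - mu * q_i,  with p_i a Multinomial Logit.
import numpy as np

lam, mu, delay, dt, T = 10.0, 1.0, 2.0, 0.01, 200.0
n, lag = int(T / dt), int(delay / dt)
q = np.zeros((n, 2))
q[0] = [4.0, 5.0]                                   # asymmetric start
for t in range(1, n):
    qd = q[max(t - 1 - lag, 0)]                     # delayed queue lengths
    p1 = np.exp(-qd[0]) / (np.exp(-qd[0]) + np.exp(-qd[1]))
    arrivals = np.array([lam * p1, lam * (1.0 - p1)])
    q[t] = q[t - 1] + dt * (arrivals - mu * q[t - 1])
print(q[-5:])   # with delay past the Hopf threshold, the queues keep oscillating
```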
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals obtained with the continuous-time estimation model were much narrower than those of the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, in which the time series is decomposed into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This supports a seasonal occurrence of childhood leukaemia in Hungary.
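A brief sketch of the two techniques the paper demonstrates, ARIMA modelling of a mortality-rate series and seasonal decomposition of monthly case counts, using statsmodels on synthetic data in place of the Hungarian registry data.

```python
# Sketch of both techniques on synthetic data: ARIMA on a mortality-like rate,
# seasonal decomposition on monthly case counts with a winter peak.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("1990-01", periods=120, freq="MS")
rng = np.random.default_rng(1)
rate = pd.Series(100 - 0.1 * np.arange(120) + rng.normal(0, 2, 120), index=idx)
print(ARIMA(rate, order=(1, 1, 0)).fit().params)     # AR(1) on differenced rates

cases = pd.Series(10 + 3 * np.cos(2 * np.pi * np.arange(120) / 12)
                  + rng.poisson(2, 120), index=idx)
dec = seasonal_decompose(cases, model="additive", period=12)
print(dec.seasonal.head(12))   # the winter months carry the largest component
```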
Successful technical trading agents using genetic programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Othling, Andrew S.; Kelly, John A.; Pryor, Richard J.
2004-10-01
Genetic programming (GP) has proved to be a highly versatile and useful tool for identifying relationships in data for which a more precise theoretical construct is unavailable. In this project, we use a GP search to develop trading strategies for agent based economic models. These strategies use stock prices and technical indicators, such as the moving average convergence/divergence and various exponentially weighted moving averages, to generate buy and sell signals. We analyze the effect of complexity constraints on the strategies as well as the relative performance of various indicators. We also present innovations in the classical genetic programming algorithm that appear to improve convergence for this problem. Technical strategies developed by our GP algorithm can be used to control the behavior of agents in economic simulation packages, such as ASPEN-D, adding variety to the current market fundamentals approach. The exploitation of arbitrage opportunities by technical analysts may help increase the efficiency of the simulated stock market, as it does in the real world. By improving the behavior of simulated stock markets, we can better estimate the effects of shocks to the economy due to terrorism or natural disasters.
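A hedged sketch of the indicators named above, computed with pandas; the GP-evolved rules are replaced here by a toy MACD/signal-line crossover.

```python
# MACD and EWMAs with pandas; a toy crossover rule stands in for the GP strategy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

ema12 = price.ewm(span=12, adjust=False).mean()     # exponentially weighted MAs
ema26 = price.ewm(span=26, adjust=False).mean()
macd = ema12 - ema26                                # MA convergence/divergence
signal = macd.ewm(span=9, adjust=False).mean()

buy = (macd > signal) & (macd.shift() <= signal.shift())
sell = (macd < signal) & (macd.shift() >= signal.shift())
print(int(buy.sum()), int(sell.sum()))              # number of crossover signals
```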
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. The ARIMA model relied on the past 3 months of discharges, whereas nearest neighbor forecasting estimated the next-day discharge from the median of similar past discharges. The ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with the random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
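A reduced sketch of one of the compared models, the ARMAX: discharges regressed on ARMA dynamics plus an exogenous day-of-week signal. A single weekday indicator stands in for the study's exogenous variables, and the data are synthetic.

```python
# ARMAX sketch: ARMA(1,1) dynamics plus a weekday indicator as the exogenous input.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
days = pd.date_range("2013-01-01", periods=400, freq="D")
weekday = (days.dayofweek < 5).astype(float)
discharges = pd.Series(8 + 4 * weekday + rng.poisson(2, 400),
                       index=days, dtype=float)
exog = pd.Series(weekday, index=days)

fit = ARIMA(discharges[:-30], exog=exog[:-30], order=(1, 0, 1)).fit()
fc = fit.forecast(30, exog=exog[-30:])              # next-day discharges, 30 days
print(f"ARMAX 30-day MAE: {np.mean(np.abs(fc.values - discharges[-30:].values)):.2f}")
```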
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot handle this nonlinearity to a great extent. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this can further improve fault detection performance by reducing the FAR using the exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea is to combine the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection of the Cad System in E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
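A minimal sketch of the exponential weighting at the core of EWMA-GLRT: residuals in the moving window enter the statistic with geometrically decaying weights, so recent samples dominate. The weight λ and the toy fault are illustrative, not the paper's calibrated settings.

```python
# EWMA-weighted detection statistic on window residuals (illustrative lambda/fault).
import numpy as np

def ewma_stat(residuals: np.ndarray, lam: float = 0.2) -> float:
    """EWMA of squared residuals; the most recent sample weighs the most."""
    w = (1 - lam) ** np.arange(len(residuals))[::-1]
    w /= w.sum()
    return float(np.sum(w * residuals ** 2))

rng = np.random.default_rng(5)
clean = rng.normal(0, 1, 50)
faulty = clean.copy()
faulty[-10:] += 3.0                                 # mean shift near the window end
print(ewma_stat(clean), ewma_stat(faulty))          # the fault inflates the statistic
```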
Effect of environmental factors on Internet searches related to sinusitis.
Willson, Thomas J; Lospinoso, Joshua; Weitzel, Erik K; McMains, Kevin C
2015-11-01
Sinusitis significantly affects the population of the United States, exacting direct cost and lost productivity. Patients are likely to search the Internet for information related to their health before seeking care from a healthcare professional. Data generated from these searches may therefore serve as an epidemiologic surrogate. A retrospective time series analysis was performed. Google search trend data from the Dallas-Fort Worth metro region for the years 2012 and 2013 were collected from www.google.com/trends for terms related to sinusitis, based on literature outlining the most important symptoms for diagnosis. Additional terms were selected based on common English language terms used to describe the disease. Twelve months of data from the same time period and location for common pollutants (nitrogen dioxide, ozone, sulfur dioxide, and particulates), pollen and mold counts, and influenza-like illness were also collected. Statistical analysis was performed using Pearson correlation coefficients, and potential search activity predictors were assessed using an autoregressive integrated moving average model. Pearson correlation was strongest between the terms congestion and influenza-like illness (r=0.615), and sinus and influenza-like illness (r=0.534) and nitrogen dioxide (r=0.487). Autoregressive integrated moving average analysis revealed ozone, influenza-like illness, and nitrogen dioxide levels to be potential predictors for sinus pressure searches, with estimates of 0.118, 0.349, and 0.438, respectively. Nitrogen dioxide was also a potential predictor for the terms congestion and sinus, with estimates of 0.191 and 0.272, respectively. Google search activity for related terms follows the pattern of seasonal influenza-like illness and nitrogen dioxide. These data highlight the epidemiologic potential of this novel surveillance method.
This paper addresses the general problem of estimating at arbitrary locations the value of an unobserved quantity that varies over space, such as ozone concentration in air or nitrate concentrations in surface groundwater, on the basis of approximate measurements of the quantity ...
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach
Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz
2017-01-01
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
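A simplified one-dimensional illustration of the moving-horizon idea (the paper fuses full 3-D GNSS/IMU data): at each step the states in a fixed-length window are re-estimated by nonlinear least squares over IMU process terms and GNSS measurement terms, warm-started from the previous solve. Noise levels and weights are assumptions.

```python
# 1-D moving-horizon estimation sketch: re-solve a windowed least-squares problem
# as new GNSS/IMU samples arrive (weights and noise levels are assumptions).
import numpy as np
from scipy.optimize import least_squares

dt, N = 0.1, 15
rng = np.random.default_rng(2)
t = np.arange(100) * dt
pos_true = 0.5 * 0.3 * t ** 2                       # constant 0.3 m/s^2 motion
acc = 0.3 + rng.normal(0, 0.05, t.size)             # IMU accelerations
gnss = pos_true + rng.normal(0, 0.5, t.size)        # GNSS positions

def residuals(z, a, y):
    p, v = z[:N], z[N:]
    r_proc = np.concatenate([(p[1:] - p[:-1] - v[:-1] * dt) / 0.01,
                             (v[1:] - v[:-1] - a[:-1] * dt) / 0.05])
    return np.concatenate([r_proc, (p - y) / 0.5])

z = np.zeros(2 * N)
for k in range(N, t.size + 1):                      # slide the horizon
    z = least_squares(residuals, z, args=(acc[k - N:k], gnss[k - N:k])).x
print(f"final position {z[N - 1]:.2f} m vs truth {pos_true[-1]:.2f} m")
```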
Sublithospheric flows in the mantle
NASA Astrophysics Data System (ADS)
Trifonov, V. G.; Sokolov, S. Yu.
2017-11-01
The estimated rates of upper mantle sublithospheric flows in the Hawaii-Emperor Range and Ethiopia-Arabia-Caucasus systems are reported. In the Hawaii-Emperor Range system, the calculation is based on motion of the asthenospheric flow, and of the plate moved by it, over the branch of the Central Pacific plume. The travel rate has been determined from the position of variably aged volcanoes (up to 76 Ma) with respect to the active Kilauea Volcano. For the Ethiopia-Arabia-Caucasus system, the ages of volcanic eruptions (55-2.8 Ma) have been used to estimate the northward asthenospheric flow from the Ethiopian-Afar superplume. Both systems are characterized by variations in the rate of the upper mantle flows in different epochs from 4 to 12 cm/yr, about 8 cm/yr on average. Analysis of the global seismic tomographic data has made it possible to reveal rock volumes with higher seismic wave velocities under ancient cratons; they reach depths of more than 2000 km and are interpreted as detached fragments of the thickened continental lithosphere. Such volumes on both sides of the Atlantic Ocean sank at an average velocity of 0.9-1.0 cm/yr as the ocean opened. The estimated rates of the mantle flows clarify the deformation properties of the mantle and constrain numerical models of mantle convection.
Bianca Eskelson; Temesgen Hailemariam; Tara Barrett
2009-01-01
The Forest Inventory and Analysis program (FIA) of the US Forest Service conducts a nationwide annual inventory. One panel (20% or 10% of all plots in the eastern and western United States, respectively) is measured each year. The precision of the estimates for any given year from one panel is low, and the moving average (MA), which is considered to be the default...
Ryberg, Karen R.; Vecchia, Aldo V.; Akyüz, F. Adnan; Lin, Wei
2016-01-01
Historically unprecedented flooding occurred in the Souris River Basin of Saskatchewan, North Dakota and Manitoba in 2011, during a longer term period of wet conditions in the basin. In order to develop a model of future flows, there is a need to evaluate effects of past multidecadal climate variability and/or possible climate change on precipitation. In this study, tree-ring chronologies and historical precipitation data in a four-degree buffer around the Souris River Basin were analyzed to develop regression models that can be used for predicting long-term variations of precipitation. To focus on longer term variability, 12-year moving average precipitation was modeled in five subregions (determined through cluster analysis of measures of precipitation) of the study area over three seasons (November–February, March–June and July–October). The models used multiresolution decomposition (an additive decomposition based on powers of two using a discrete wavelet transform) of tree-ring chronologies from Canada and the US and seasonal 12-year moving average precipitation based on Adjusted and Homogenized Canadian Climate Data and US Historical Climatology Network data. Results show that precipitation varies on long-term (multidecadal) time scales of 16, 32 and 64 years. Past extended pluvial and drought events, which can vary greatly with season and subregion, were highlighted by the models. Results suggest that the recent wet period may be a part of natural variability on a very long time scale.
NASA Astrophysics Data System (ADS)
Jia, Song; Xu, Tian-he; Sun, Zhang-zhen; Li, Jia-jing
2017-02-01
UT1-UTC is an important part of the Earth Orientation Parameters (EOP). High-precision predictions of UT1-UTC play a key role in practical applications such as deep space exploration, spacecraft tracking, and satellite navigation and positioning. In this paper, a new prediction method combining the Gray Model (GM(1,1)) and the Autoregressive Integrated Moving Average (ARIMA) is developed. The main idea is as follows. First, the UT1-UTC data are preprocessed by removing leap seconds and the Earth's zonal harmonic tides to obtain UT1R-TAI data. Periodic terms are estimated and removed by least squares to get UT2R-TAI. Then the linear terms of the UT2R-TAI data are modeled by the GM(1,1), and the residual terms are modeled by the ARIMA. Finally, the UT2R-TAI prediction is performed with the combined GM(1,1) and ARIMA model, and the UT1-UTC predictions are obtained by adding back the corresponding periodic terms, the leap second correction, and the Earth's zonal harmonic tide correction. The results show that the proposed model can predict UT1-UTC effectively, with higher medium- and long-term (32 to 360 days) accuracy than LS + AR, LS + MAR, and WLS + MAR.
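A self-contained sketch of the GM(1,1) step used for the linear term (the residuals would then go to the ARIMA model); the drifting series is synthetic, not UT2R-TAI.

```python
# GM(1,1) sketch for the linear/trend term; residuals would go to an ARIMA model.
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Grey model GM(1,1): fit on x0 and forecast `steps` values ahead."""
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]

trend = 50.0 + 0.8 * np.arange(20.0)                # slowly drifting series
print(gm11_forecast(trend, steps=5))                # roughly continues the drift
```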
Gauging the Nearness and Size of Cycle Maximum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2003-01-01
A simple method for monitoring the nearness and size of conventional cycle maximum for an ongoing sunspot cycle is examined. The method uses the observed maximum daily value and the maximum monthly mean value of international sunspot number, and the maximum value of the 2-mo moving average of monthly mean sunspot number, to make the estimate. For cycle 23, a maximum daily value of 246, a maximum monthly mean of 170.1, and a maximum 2-mo moving average of 148.9 were each observed in July 2000. Taken together, these values strongly suggest that the conventional maximum amplitude for cycle 23 would be approx. 124.5, occurring near July 2002 +/-5 mo, very close to the now well-established conventional maximum amplitude and occurrence date for cycle 23 (120.8 in April 2000).
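A tiny sketch of the three summary statistics the method tracks, computed from a synthetic daily sunspot-number series: the maximum daily value, the maximum monthly mean, and the maximum 2-month moving average of the monthly means.

```python
# The three tracked statistics from a (synthetic) daily sunspot-number series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
days = pd.date_range("1996-01-01", "2002-12-31", freq="D")
ssn = pd.Series(rng.gamma(4, 25, days.size), index=days)

monthly = ssn.resample("MS").mean()                 # monthly mean sunspot number
ma2 = monthly.rolling(2).mean()                     # 2-month moving average
print(ssn.max(), monthly.max(), ma2.max())          # the three maxima the method uses
```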
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovy, Jo; Hogg, David W., E-mail: jo.bovy@nyu.ed
2010-07-10
The velocity distribution of nearby stars (≲100 pc) contains many overdensities or 'moving groups', clumps of comoving stars, that are inconsistent with the standard assumption of an axisymmetric, time-independent, and steady-state Galaxy. We study the age and metallicity properties of the low-velocity moving groups based on the reconstruction of the local velocity distribution in Paper I of this series. We perform stringent, conservative hypothesis testing to establish for each of these moving groups whether it could conceivably consist of a coeval population of stars. We conclude that they do not: the moving groups are neither trivially associated with their eponymous open clusters nor with any other inhomogeneous star formation event. Concerning a possible dynamical origin of the moving groups, we test whether any of the moving groups has a higher or lower metallicity than the background population of thin disk stars, as would generically be the case if the moving groups are associated with resonances of the bar or spiral structure. We find clear evidence that the Hyades moving group has higher than average metallicity and weak evidence that the Sirius moving group has lower than average metallicity, which could indicate that these two groups are related to the inner Lindblad resonance of the spiral structure. Further, we find weak evidence that the Hercules moving group has higher than average metallicity, as would be the case if it is associated with the bar's outer Lindblad resonance. The Pleiades moving group shows no clear metallicity anomaly, arguing against a common dynamical origin for the Hyades and Pleiades groups. Overall, however, the moving groups are barely distinguishable from the background population of stars, raising the likelihood that the moving groups are associated with transient perturbations.
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-09-12
Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
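A hedged sketch of two ingredients described above: a composite channel-rank measurement built from RSSI variability and mean LQI, and a normal-equation fit predicting rank from features. The weights, scalings, and feature dimensions are illustrative only.

```python
# Composite channel-rank metric plus a normal-equation fit (illustrative scaling).
import numpy as np

def crm(rssi: np.ndarray, lqi: np.ndarray) -> float:
    """Higher mean LQI and lower RSSI deviation imply a better channel."""
    return float(np.mean(lqi) / 255.0 - np.std(rssi) / 10.0)

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (200, 3))                     # per-channel feature vectors
y = X @ np.array([0.6, -0.3, 0.2]) + rng.normal(0, 0.02, 200)  # observed ranks

Xb = np.hstack([np.ones((200, 1)), X])              # bias column
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)        # the normal equation
print(theta)                                        # ~ [0, 0.6, -0.3, 0.2]
```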
Geohydrologic reconnaissance of the upper Potomac River basin
Trainer, Frank W.; Watkins, Frank A.
1975-01-01
The upper Potomac River basin, in the central Appalachian region in Pennsylvania, Maryland, Virginia, and West Virginia, is a humid temperate region of diverse fractured rocks. Three geohydrologic terranes, which underlie large parts of the basin, are described in terms of their aquifer characteristics and of the magnitude and duration of their base runoff: (1) fractured rock having a thin regolith, (2) fractured rock having a thick regolith, and (3) carbonate rock. Crystalline rock in the mountainous part of the Blue Ridge province and shale with tight sandstone in the folded Appalachians are covered with thin regolith. Water is stored in and moves through fairly unmodified fractures. Average transmissivity (T) is estimated to be 150 feet squared per day, and average storage coefficient (S), 0.005. Base runoff declines rapidly from its high levels during spring and is poorly sustained during the summer season of high evapotranspiration. The rocks in this geohydrologic terrane are the least effective in the basin for the development of water supplies and as a source of dry-weather streamflow. Crystalline and sedimentary rocks in the Piedmont province and in the lowland part of the Blue Ridge province are covered with thick regolith. Water is stored in and moves through both the regolith and the underlying fractured rock. Estimated average values for aquifer characteristics are T, 200 feet squared per day, and S, 0.01. Base runoff is better sustained in this terrane than in the thin-regolith terrane and on the average is about twice as great. Carbonate rock, in which fractures have been widened selectively by solution, especially near streams, has estimated average aquifer characteristics of T, 500 feet squared per day, and S, 0.03-0.04. This rock is the most effective in the basin in terms of water supply and base runoff. Where its fractures have not been widened by solution, the carbonate rock is a fractured-rock aquifer much like the noncarbonate rock. At low values the frequency of specific capacities of wells is much the same in all rocks in the basin, but high values of specific capacity are as much as 10 times more frequent in carbonate rock than in noncarbonate rock. Nearly all the large springs and high-capacity wells in the basin are in carbonate rock. Base runoff from the carbonate rock is better sustained during dry weather and on the average is about three times as great as base runoff from fractured rock having a thin regolith. The potential role of these water-bearing terranes in water management probably lies in the local development of large water supplies from the carbonate rock and in the possible manipulation of underground storage for such purposes as providing space for artificial recharge of ground water and providing ground water to be used for the augmentation of low streamflow. The chief water-quality problems in the basin--acidic mine-drainage water in the western part of the basin, local highly mineralized ground water, and the high nitrate content of ground water in some of the densely populated parts of the basin--would probably have little adverse effect on the use of ground water for low-flow augmentation.
An Information Retrieval Approach for Robust Prediction of Road Surface States.
Park, Jae-Hyung; Kim, Kwanho
2017-01-28
Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provision of such information to drivers in advance, have recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
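A schematic version of the two-stage scheme: retrieve the most similar past instances to score the current road state, then smooth the score sequence with a moving average before thresholding. Features, labels, and window size are invented for illustration.

```python
# Retrieve-then-smooth sketch: k-NN over past radar instances, then a moving
# average over the instantaneous scores (toy features and labels).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(10)
X_hist = rng.normal(0, 1, (300, 5))                 # past radar feature vectors
y_hist = (X_hist[:, 0] > 0).astype(int)             # 0 = dry, 1 = icy (toy labels)
knn = KNeighborsClassifier(n_neighbors=7).fit(X_hist, y_hist)

stream = rng.normal(0, 1, (50, 5))                  # incoming observations
raw = knn.predict_proba(stream)[:, 1]               # instantaneous state scores
w = 5
smoothed = np.convolve(raw, np.ones(w) / w, mode="valid")
print((smoothed > 0.5).astype(int))                 # calibrated road-state sequence
```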
Bao, Guanqun; Mi, Liang; Geng, Yishuang; Zhou, Mingda; Pahlavan, Kaveh
2014-01-01
Wireless Capsule Endoscopy (WCE) is progressively emerging as one of the most popular non-invasive imaging tools for gastrointestinal (GI) tract inspection. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of intestinal disease. For the WCE, the position of the capsule is defined as the linear distance it is away from certain fixed anatomical landmarks. In order to measure the distance the capsule has traveled, a precise knowledge of how fast the capsule moves is urgently needed. In this paper, we present a novel computer vision based speed estimation technique that is able to extract the speed of the endoscopic capsule by analyzing the displacements between consecutive frames. The proposed approach is validated using a virtual testbed as well as the real endoscopic images. Results show that the proposed method is able to precisely estimate the speed of the endoscopic capsule with 93% accuracy on average, which enhances the localization accuracy of the WCE to less than 2.49 cm.
Wisconsin's forest resources, 2005
Charles, H. (Hobie) Perry; Gary J. Brand
2006-01-01
The annual forest inventory of Wisconsin continues, and this document reports 2001-05 moving averages for most variables and comparisons between 2000 and 2005 for growth, removals, and mortality. Summary resource tables can be generated through the Forest Inventory Mapmaker website at http://ncrs2.fs.fed.us/4801/fiadb/index.htm. Estimates from this inventory show a...
Opportunities to improve monitoring of temporal trends with FIA panel data
Raymond Czaplewski; Michael Thompson
2009-01-01
The Forest Inventory and Analysis (FIA) Program of the Forest Service, Department of Agriculture, is an annual monitoring system for the entire United States. Each year, an independent "panel" of FIA field plots is measured. To improve accuracy, FIA uses the "Moving Average" or "Temporally Indifferent" method to combine estimates from...
2016-11-22
… compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. We developed analytical techniques for pseudo-Lagrangian … condition in a constant density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e. …
On Estimating End-to-End Network Path Properties
NASA Technical Reports Server (NTRS)
Allman, Mark; Paxson, Vern
1999-01-01
The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
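A minimal sketch of the standard EWMA-based RTO estimator the paper evaluates (in the style of Jacobson/Karels, as later codified in RFC 6298), with the minimum and timer granularity made explicit since those are the knobs the study found to dominate.

```python
# EWMA-based RTO in the Jacobson/Karels style; min_rto and granularity exposed
# because the study found the minimum (then the granularity) dominates behavior.
def rto_estimates(rtts, alpha=1 / 8, beta=1 / 4, min_rto=1.0, granularity=0.1):
    srtt, rttvar = rtts[0], rtts[0] / 2
    for r in rtts[1:]:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
        srtt = (1 - alpha) * srtt + alpha * r
        yield max(min_rto, srtt + max(granularity, 4 * rttvar))

samples = [0.20, 0.22, 0.35, 0.21, 0.60, 0.25]      # round-trip times, seconds
print([round(v, 3) for v in rto_estimates(samples)])
# With a 1 s floor every estimate clamps to 1.0 -- the minimum dominates.
```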
Needle Steering in Biological Tissue using Ultrasound-based Online Curvature Estimation
Moreira, Pedro; Patil, Sachin; Alterovitz, Ron; Misra, Sarthak
2014-01-01
Percutaneous needle insertions are commonly performed for diagnostic and therapeutic purposes. Accurate placement of the needle tip is important to the success of many needle procedures. Current needle steering systems depend on needle-tissue-specific data, such as maximum curvature, that are unavailable prior to an interventional procedure. In this paper, we present a novel three-dimensional adaptive steering method for flexible bevel-tipped needles that is capable of performing accurate tip placement without previous knowledge of the needle curvature. The method steers the needle by integrating duty-cycled needle steering, online curvature estimation, ultrasound-based needle tracking, and sampling-based motion planning. The needle curvature estimation is performed online and used to adapt the path and duty cycling. We evaluated the method using experiments in a homogeneous gelatin phantom, a two-layer gelatin phantom, and a biological tissue phantom composed of a gelatin layer and in vitro chicken tissue. In all experiments, virtual obstacles and targets move in order to represent the disturbances that might occur due to tissue deformation and physiological processes. The average targeting error using our new adaptive method is 40% lower than using the conventional non-adaptive duty-cycled needle steering method. PMID:26229729
Simultaneous Estimation of Electromechanical Modes and Forced Oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, Jim; Pierre, John W.; Martin, Russell
Over the past several years, great strides have been made in the effort to monitor the small-signal stability of power systems. These efforts focus on estimating electromechanical modes, which are a property of the system that dictates how generators in different parts of the system exchange energy. Though the algorithms designed for this task are powerful and important for reliable operation of the power system, they are susceptible to severe bias when forced oscillations are present in the system. Forced oscillations are fundamentally different from electromechanical oscillations in that they are the result of a rogue input to the system, rather than a property of the system itself. To address the presence of forced oscillations, the frequently used AutoRegressive Moving Average (ARMA) model is adapted to include sinusoidal inputs, resulting in the AutoRegressive Moving Average plus Sinusoid (ARMA+S) model. From this model, a new Two-Stage Least Squares algorithm is derived to incorporate the forced oscillations, thereby enabling the simultaneous estimation of the electromechanical modes and the amplitude and phase of the forced oscillations. The method is validated using simulated power system data as well as data obtained from the western North American power system (wNAPS) and Eastern Interconnection (EI).
Documentation of a spreadsheet for time-series analysis and drawdown estimation
Halford, Keith J.
2006-01-01
Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
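A condensed sketch of the spreadsheet's core computation: scale candidate stress series by least squares against water levels from a period unaffected by pumping, then difference the synthetic from measured levels to expose drawdown. Series, coefficients, and noise levels are synthetic assumptions.

```python
# Fit stress-series amplitudes on pre-test data, then difference synthetic from
# measured levels to expose drawdown (synthetic series, assumed coefficients).
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(0, 30, 0.1)                           # days
baro = 0.3 * np.sin(2 * np.pi * t)                  # barometric stress series
tide = 0.1 * np.sin(2 * np.pi * 1.9323 * t)         # O1-like tidal constituent
level = 10 - 0.8 * baro - 1.2 * tide + rng.normal(0, 0.005, t.size)
level[t > 20] -= 0.4                                # pumping starts at day 20

fit = t <= 20                                       # period unaffected by the test
A = np.column_stack([np.ones(fit.sum()), baro[fit], tide[fit]])
c = np.linalg.lstsq(A, level[fit], rcond=None)[0]
synthetic = c[0] + c[1] * baro + c[2] * tide
print(f"mean drawdown after day 20: {(synthetic - level)[t > 20].mean():.2f} m")
```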
The Spin Move: A Reliable and Cost-Effective Gowning Technique for the 21st Century.
Ochiai, Derek H; Adib, Farshad
2015-04-01
Operating room efficiency (ORE) and utilization are considered one of the most crucial components of quality improvement in every hospital. We introduced a new gowning technique that could optimize ORE. The Spin Move quickly and efficiently wraps a surgical gown around the surgeon's body. This saves the operative time expended through the traditional gowning techniques. In the Spin Move, while the surgeon is approaching the scrub nurse, he or she uses the left heel as the fulcrum. The torque, which is generated by twisting the right leg around the left leg, helps the surgeon to close the gown as quickly and safely as possible. From 2003 to 2012, the Spin Move was performed in 1,725 consecutive procedures with no complication. The estimated average time was 5.3 and 7.8 seconds for the Spin Move and traditional gowning, respectively. The estimated time saving for the senior author during this period was 71.875 minutes. Approximately 20,000 orthopaedic surgeons practice in the United States. If this technique had been used, 23,958 hours could have been saved. The money saving could have been $14,374,800.00 (23,958 hours × $600/operating room hour) during the past 10 years. The Spin Move is easy to perform and reproducible. It saves operating room time and increases ORE.
NASA Astrophysics Data System (ADS)
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy-field cropping patterns is important for maintaining a sustainable paddy field area, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time series MODIS data are difficult to classify directly because of temporal noise, so the wavelet transform and the moving average are needed as filtering methods. The objective of this study is to recognize paddy cropping patterns with the wavelet transform and the moving average in West Java using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed that both methods yield almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of the cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. Differences in the planting schedule are caused by the availability of irrigation water.
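A hedged sketch comparing the two smoothing routes on a synthetic NDVI-like series (PyWavelets assumed available): soft-thresholded wavelet denoising versus a simple moving average, scored against the clean annual cycle.

```python
# Wavelet denoise vs. moving average on a synthetic 16-day NDVI series
# (PyWavelets assumed available; threshold and window are illustrative).
import numpy as np
import pywt

rng = np.random.default_rng(13)
t = np.arange(92)                                   # four years of 16-day composites
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t / 23)       # paddy-fallow annual cycle
noisy = ndvi + rng.normal(0, 0.08, t.size)

coeffs = pywt.wavedec(noisy, "db4", level=2)
coeffs[1:] = [pywt.threshold(c, 0.1, mode="soft") for c in coeffs[1:]]
wave = pywt.waverec(coeffs, "db4")[: t.size]

ma = np.convolve(noisy, np.ones(5) / 5, mode="same")
rmse = lambda s: float(np.sqrt(np.mean((s - ndvi) ** 2)))
print(rmse(wave), rmse(ma))                         # both should beat the 0.08 noise
```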
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crimes forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
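A bare-bones sketch of the hybrid: ARIMA captures the linear component and an SVR models its residuals, with the two forecasts summed. The paper tunes both models with particle swarm optimization, which is omitted here in favor of fixed hyperparameters; the data are synthetic.

```python
# ARIMA for the linear part, SVR on its residuals for the nonlinear part; PSO
# tuning from the paper is replaced by fixed hyperparameters.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(9)
t = np.arange(200)
y = 50 + 0.1 * t + 5 * np.tanh(np.sin(0.3 * t)) + rng.normal(0, 0.5, 200)

arima = ARIMA(y[:-20], order=(1, 1, 1)).fit()
linear_fc = arima.forecast(20)

resid = arima.resid[1:]                             # drop the start-up residual
L = 4                                               # SVR lag order
Xr = np.column_stack([resid[i:len(resid) - L + i] for i in range(L)])
svr = SVR(C=10.0, gamma="scale").fit(Xr, resid[L:])

window, corr = list(resid[-L:]), []
for _ in range(20):                                 # recursive residual forecasts
    c = svr.predict(np.array(window[-L:])[None])[0]
    corr.append(c)
    window.append(c)
print(linear_fc + np.array(corr))                   # hybrid forecast
```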
This report focuses on the methodology for estimating growth in NR engine populations as used in the MOVES201X-NONROAD emission inventory model. MOVES NR growth rates start with base year engine populations and estimate growth in the populations of NR engines, while applying cons...
Impact of the Illinois Seat Belt Use Law on Accidents, Deaths, and Injuries.
ERIC Educational Resources Information Center
Rock, Steven M.
1992-01-01
The impact of the 1985 Illinois seat belt law is explored using Box-Jenkins autoregressive integrated moving average (ARIMA) techniques and monthly accident statistics from the state department of transportation for January-July 1990. A conservative estimate is that the law provides benefits of $15 million per month in Illinois. (SLD)
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires a computation of the eigenvectors of a correlation matrix, which often incurs a high computational cost. In the case of a moving source this becomes a crucial drawback, because the estimation must be conducted at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix must be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) algorithm is applied to sequentially estimate the eigenvectors spanning the subspace. PAST does not require an eigen-decomposition, and therefore it is possible to reduce the computational cost. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
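A single PAST update can be written in a few lines. The sketch below follows Yang's classic recursion under the usual assumptions (forgetting factor beta, n-by-r subspace estimate W, auxiliary matrix P initialized to the identity); it is a schematic of the recursion, not the authors' implementation.

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One step of projection approximation subspace tracking (PAST):
    refines an n-by-r signal-subspace estimate W from one snapshot x
    with no eigen-decomposition (O(n*r) work per snapshot)."""
    y = W.conj().T @ x                      # project snapshot onto subspace
    h = P @ y
    g = h / (beta + np.vdot(y, h).real)     # RLS-style gain vector
    P = (P - np.outer(g, h.conj())) / beta  # update inverse correlation of y
    W = W + np.outer(x - W @ y, g.conj())   # rank-one subspace correction
    return W, P
```

For MUSIC-style localization, the orthogonal complement of the tracked columns of W then plays the role of the noise subspace at each observation time.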
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car's length are presented in this paper, together with the results achieved by a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on the moving average and standard deviation, a two-extreme-peak detection method and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were obtained by analyzing changes in the magnitude and in the absolute z-component of the magnetic field. The tests, which were performed in four different Earth directions, show differences in the values of the estimated lengths. When cars drove from south to north, the magnitude-based results were up to 1.2 m higher than the other results obtained using the threshold methods. Smaller differences in lengths were observed when the distances were measured between the two extreme peaks in the car magnetic signatures. The results are summarized in tables together with the errors of the estimated lengths. The maximal errors, relative to the real lengths, were up to 22%. PMID:28771171
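The second of the four methods, thresholding against a running mean and standard deviation, can be sketched compactly; the window length and the multiplier k below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def vehicle_present(signal, window=50, k=3.0):
    """Adaptive-threshold detection on an AMR magnitude signal: a
    sample counts as 'vehicle present' when it deviates from the
    running mean by more than k running standard deviations."""
    kernel = np.ones(window) / window
    mean = np.convolve(signal, kernel, mode="same")
    sq_mean = np.convolve(signal**2, kernel, mode="same")
    std = np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
    return np.abs(signal - mean) > k * std
```

Length then follows from the detection's occupancy time multiplied by the speed estimated from the time offset between the two sensors.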
Consistent and efficient processing of ADCP streamflow measurements
Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. Consistent computational algorithms, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.
Application of image processing to calculate the number of fish seeds using raspberry-pi
NASA Astrophysics Data System (ADS)
Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.
2018-03-01
Many fish cultivators in Indonesia suffer losses because the number of fish seeds bought and sold does not match the agreed amount. The losses arise because fish seeds are still counted manually. To overcome this problem, this study designed an automatic, real-time fish counting system using image processing based on a Raspberry Pi. Image processing was used because it can count moving objects and eliminate noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, and the approach used is the "double difference image". The double-difference approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using these methods, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector width of 96 pixels and a test-plane slope of 1 degree.
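The double-difference rule itself is a one-liner over three consecutive frames. The sketch below is a minimal rendering of that idea, with an illustrative intensity threshold.

```python
import numpy as np

def double_difference_mask(prev, curr, nxt, thresh=25):
    """Double-difference moving-object mask: a pixel is flagged only
    if it changed from the previous frame AND changes again toward
    the next frame, suppressing the ghosts left by single differencing."""
    d1 = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    d2 = np.abs(nxt.astype(np.int16) - curr.astype(np.int16)) > thresh
    return d1 & d2
```

Fish crossing the virtual detector strip can then be counted frame by frame from the connected components of this mask.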
Software simulations of the detection of rapidly moving asteroids by a charge-coupled device
NASA Astrophysics Data System (ADS)
McMillan, R. S.; Stoll, C. P.
1982-10-01
A rendezvous of an unmanned probe with an earth-approaching asteroid has been given a high priority in the planning of interplanetary missions for the 1990s. Even without a space mission, much could be learned about the history of asteroids and comet nuclei if more information were available concerning asteroids with orbits that cross or approach the orbit of earth. It is estimated that the total number of earth-crossers accessible to ground-based survey telescopes should be approximately 1000. However, because of the small size and rapid angular motion expected of many of these objects, an average of only one object is discovered per year. Attention is given to the development of the software necessary to distinguish such rapidly moving asteroids from stars and noise in continuously scanned CCD exposures of the night sky. Model and input parameters are considered along with detector sensitivity, aspects of minimum detectable displacement, and the point-spread function of the CCD.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
Damage evaluation by a guided wave-hidden Markov model based method
NASA Astrophysics Data System (ADS)
Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin
2016-02-01
Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges for practical engineering applications is the accurate interpretation of guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model (HMM) based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and on a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
NASA Astrophysics Data System (ADS)
Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan
2018-06-01
In this paper, we studied the parametric resonance issue of an axially moving viscoelastic nanobeam with varying velocity. Based on the nonlocal strain gradient theory, we established the transversal vibration equation of the axially moving nanobeam and the corresponding boundary condition. By applying the average method, we obtained a set of self-governing ordinary differential equations when the excitation frequency of the moving parameters is twice the intrinsic frequency or near the sum of certain second-order intrinsic frequencies. On the plane of parametric excitation frequency and excitation amplitude, we can obtain the instability region generated by the resonance, and through numerical simulation, we analyze the influence of the scale effect and system parameters on the instability region. The results indicate that the viscoelastic damping decreases the resonance instability region, and the average velocity and stiffness make the instability region move to the left- and right-hand sides. Meanwhile, the scale effect of the system is obvious. The nonlocal parameter exhibits not only the stiffness softening effect but also the damping weakening effect, while the material characteristic length parameter exhibits the stiffness hardening effect and damping reinforcement effect.
An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.
Huang, Jiyan; Zhang, Ying; Luo, Shan
2017-12-15
Localization of a moving target in a dual-frequency radar system has gained considerable attention. A noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radar system. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramer-Rao lower bound (CRLB) are derived. The simulation results verify the proposed method.
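The two-step weighted least squares idea can be sketched generically: solve once with nominal weights, then rebuild the weight matrix around the first solution and re-solve. The skeleton below illustrates that pattern only; it is not the paper's exact estimator, and the residual-based reweighting rule is an assumption made for illustration.

```python
import numpy as np

def two_step_wls(A, b, noise_cov):
    """Generic two-step weighted least squares for a linearized
    localization model A @ theta ~ b with measurement covariance
    noise_cov. Step 2 refines the weighting using step-1 residuals."""
    W1 = np.linalg.inv(noise_cov)                     # step-1 weights
    theta1 = np.linalg.solve(A.T @ W1 @ A, A.T @ W1 @ b)
    r = b - A @ theta1                                # step-1 residuals
    W2 = np.linalg.inv(noise_cov + np.diag(r**2))     # reweight (illustrative)
    theta2 = np.linalg.solve(A.T @ W2 @ A, A.T @ W2 @ b)
    return theta2
```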
Dynamics of actin-based movement by Rickettsia rickettsii in vero cells.
Heinzen, R A; Grieshaber, S S; Van Kirk, L S; Devin, C J
1999-08-01
Actin-based motility (ABM) is a virulence mechanism exploited by invasive bacterial pathogens in the genera Listeria, Shigella, and Rickettsia. Due to experimental constraints imposed by the lack of genetic tools and their obligate intracellular nature, little is known about rickettsial ABM relative to Listeria and Shigella ABM systems. In this study, we directly compared the dynamics and behavior of ABM of Rickettsia rickettsii and Listeria monocytogenes. A time-lapse video of moving intracellular bacteria was obtained by laser-scanning confocal microscopy of infected Vero cells synthesizing beta-actin coupled to green fluorescent protein (GFP). Analysis of time-lapse images demonstrated that R. rickettsii organisms move through the cell cytoplasm at an average rate of 4.8 +/- 0.6 micrometers/min (mean +/- standard deviation). This speed was 2.5 times slower than that of L. monocytogenes, which moved at an average rate of 12.0 +/- 3.1 micrometers/min. Although rickettsiae moved more slowly, the actin filaments comprising the actin comet tail were significantly more stable, with an average half-life approximately three times that of L. monocytogenes (100.6 +/- 19.2 s versus 33.0 +/- 7.6 s, respectively). The actin tail associated with intracytoplasmic rickettsiae remained stationary in the cytoplasm as the organism moved forward. In contrast, actin tails of rickettsiae trapped within the nucleus displayed dramatic movements. The observed phenotypic differences between the ABM of Listeria and Rickettsia may indicate fundamental differences in the mechanisms of actin recruitment and polymerization.
A spline-based non-linear diffeomorphism for multimodal prostate registration.
Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice
2012-08-01
This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
Past observable dynamics of a continuously monitored qubit
NASA Astrophysics Data System (ADS)
García-Pintos, Luis Pedro; Dressel, Justin
2017-12-01
Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
Influence of exposure differences on city-to-city heterogeneity ...
Multi-city population-based epidemiological studies have observed heterogeneity between city-specific fine particulate matter (PM2.5)-mortality effect estimates. These studies typically use ambient monitoring data as a surrogate for exposure, leading to potential exposure misclassification. The level of exposure misclassification can differ by city, affecting the observed health effect estimate. The objective of this analysis is to evaluate whether previously developed residential infiltration-based city clusters can explain city-to-city heterogeneity in PM2.5 mortality risk estimates. In a prior paper, 94 cities were clustered based on residential infiltration factors (e.g., home age/size, prevalence of air conditioning (AC)), resulting in 5 clusters. For this analysis, the association between PM2.5 and all-cause mortality was first determined in 77 cities across the United States for 2001-2005. Next, a second-stage analysis was conducted evaluating the influence of cluster assignment on heterogeneity in the risk estimates. Associations between a 2-day (lag 0-1 days) moving average of PM2.5 concentrations and non-accidental mortality were determined for each city. Estimated effects ranged from -3.2 to 5.1%, with a pooled estimate of a 0.33% (95% CI: 0.13, 0.53) increase in mortality per 10 μg/m3 increase in PM2.5. The second-stage analysis determined that cluster assignment was marginally significant in explaining the city-to-city heterogeneity.
NASA Astrophysics Data System (ADS)
Uysal, Fatih; Kilinc, Enes; Kurt, Huseyin; Celik, Erdal; Dugenci, Muharrem; Sagiroglu, Selami
2017-08-01
Thermoelectric generators (TEGs) convert heat into electrical energy. These energy-conversion systems do not involve any moving parts and are made of thermoelectric (TE) elements connected electrically in series and thermally in parallel; however, they are currently not suitable for use in regular operations due to their low efficiency levels. In order to produce high-efficiency TEGs, there is a need for highly heat-resistant thermoelectric materials (TEMs) with an improved figure of merit (ZT). The production and test methods used for TEMs today are highly expensive. This study attempts to estimate the Seebeck coefficient of TEMs using the values of existing materials in the literature. The estimation is made with an artificial neural network (ANN) based on the amount of doping and the production method. Results show that the estimated Seebeck coefficients approximate the real values with an average accuracy of 94.4%. In addition, the ANN detected that any change in production method is followed by a change in the Seebeck coefficient.
Recovering the 3d Pose and Shape of Vehicles from Stereo Images
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2018-05-01
The precise reconstruction and pose estimation of vehicles plays an important role, e.g. for autonomous driving. We tackle this problem on the basis of street level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations and we achieve encouraging results with up to 82 % correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.
Kinesin-microtubule interactions during gliding assays under magnetic force
NASA Astrophysics Data System (ADS)
Fallesen, Todd L.
Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface which microtubules are gliding over. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is possible to calculate the fraction of motors able to move microtubules as well, which is determined to be ˜ 5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to impart a uniform magnetic field on a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that there are an average 2-7 kinesins acting against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc motors acting together. When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
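Since later-migrating (slower) analytes produce lower-frequency peaks, the averaging window can safely grow with migration time. The sketch below renders that idea with a window that scales linearly with migration time; the linear window law and its parameters are assumptions for illustration, as the abstract does not give the exact rule.

```python
import numpy as np

def adaptive_moving_average(signal, times, base_window=5, t_ref=60.0):
    """Migration-time-adaptive moving average: each point is averaged
    over a window proportional to its migration time, so fast early
    peaks keep narrow windows and slow late peaks get stronger smoothing."""
    out = np.empty(len(signal))
    for i, t in enumerate(times):
        half = max(1, int(base_window * t / t_ref))
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = np.mean(signal[lo:hi])
    return out
```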
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) detection is performed efficiently on the samples in the time-frequency domain, given the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the target motion parameters can be estimated in the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
Mills, Patrick C.; Healy, Richard W.
1993-01-01
The movement of water and tritium through the unsaturated zone was studied at a low-level radioactive-waste disposal site near Sheffield, Bureau County, Illinois, from 1981 to 1985. Water and tritium movement occurred in an annual, seasonally timed cycle; recharge to the saturated zone generally occurred in the spring and early summer. Mean annual precipitation (1982-85) was 871 mm (millimeters); mean annual recharge to the disposal trenches (July 1982 through June 1984) was estimated to be 107 mm. Average annual tritium flux below the study trenches was estimated to be 3.4 mCi/yr (millicuries per year). Site geology, climate, and waste-disposal practices influenced the spatial and temporal variability of water and tritium movement. Of the components of the water budget, evapotranspiration contributed most to the temporal variability of water and tritium movement. Disposal trenches are constructed in complexly layered glacial and postglacial deposits that average 17 m (meters) in thickness and overlie a thick sequence of Pennsylvanian shale. The horizontal saturated hydraulic conductivity of the clayey-silt to sand-sized glacial and postglacial deposits ranges from 4.8x10 -1 to 3.4x10 4 mm/d (millimeters per day). A 120-m-long horizontal tunnel provided access for hydrologic measurements and collection of sediment and water samples from the unsaturated and saturated geologic deposits below four disposal trenches. Trench-cover and subtrench deposits were monitored with soil-moisture tensiometers, vacuum and gravity lysimeters, piezometers, and a nuclear soil-moisture gage. A cross-sectional, numerical ground-water-flow model was used to simulate water movement in the variably saturated geologic deposits in the tunnel area. Concurrent studies at the site provided water-budget data for estimating recharge to the disposal trenches. Vertical water movement directly above the trenches was impeded by a zone of compaction within the clayey-silt trench covers. Water entered the trenches primarily at the trench edges where the compacted zone was absent and the cover was relatively thin. Collapse holes in the trench covers that resulted from inadequate compaction of wastes within the trenches provided additional preferential pathways for surface-water drainage into the trenches; drainage into one collapse hole during a rainstorm was estimated to be 1,700 L (liters). Till deposits near trench bases induced lateral water and tritium movement. Limited temporal variation in water movement and small flow gradients (relative to the till deposits) were detected in the unsaturated subtrench sand deposit; maximum gradients during the spring recharge period averaged 1.62 mm/mm (millimeter per millimeter). Time-of-travel of water moving from the trench covers to below the trenches was estimated to be as rapid as 41 days (assuming individual water molecules move this distance in one recharge cycle). Tritium concentrations in water from the unsaturated zone ranged from 200 (background) to 10,000,000 pCi/L (picocuries per liter). Tritium concentrations generally were higher below trench bases (averaging 91,000 pCi/L) than below intertrench sediments (averaging 3,300 pCi/L), and in the subtrench Toulon Member of the Glasford Formation (sand) (averaging 110,000 pCi/L) than in the Hulick Till Member of the Glasford Formation (clayey silt) (averaging 59,000 pCi/L). Average subtrench tritium concentration increased from 28,000 to 100,000 pCi/L during the study period. 
Within the trench covers, there was a strong seasonal trend in tritium concentrations; the highest concentrations occurred in late summer when soil-moisture contents were at a minimum. Subtrench tritium movement occurred in association with the annual cycle of water movement, as well as independently of the cycle, in apparent response to continuous water movement through the subtrench sand deposits and to the deterioration of trench-waste containers.
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is thereby produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
Arena, Umberto; Ardolino, Filomena; Di Gregorio, Fabrizio
2015-07-01
An attributional life cycle analysis (LCA) was developed to compare the environmental performances of two waste-to-energy (WtE) units, which utilize the predominant technologies among those available for combustion and gasification processes: a moving grate combustor and a vertical shaft gasifier coupled with direct melting. The two units were assumed to be fed with the same unsorted residual municipal waste, having a composition estimated as a European average. Data from several plants in operation were processed by means of mass and energy balances, and on the basis of the flows and stocks of materials and elements inside and throughout the two units, as provided by a specific substance flow analysis. The potential life cycle environmental impacts related to the operations of the two WtE units were estimated by means of the Impact 2002+ methodology. They indicate that both the technologies have sustainable environmental performances, but those of the moving grate combustion unit are better for most of the selected impact categories. The analysis of the contributions from all the stages of each specific technology suggests where improvements in technological solutions and management criteria should be focused to obtain further and remarkable environmental improvements. Copyright © 2015 Elsevier Ltd. All rights reserved.
Experimental comparisons of hypothesis test and moving average based combustion phase controllers.
Gao, Jinwu; Wu, Yuhu; Shen, Tielong
2016-11-01
For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z- and T-tests. For comparison, a moving average based control strategy is also presented and implemented in this study. Experiments on a spark ignition gasoline engine at various operating conditions show that the hypothesis test based controller is able to regulate LPP close to the set point while maintaining rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
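The hypothesis-test idea contrasts with moving-average control by acting only when the observed deviation is statistically significant. The sketch below shows a Z-test-flavored control step; the critical value, gain, and noise level are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def lpp_control_step(lpp_samples, setpoint, sigma, z_crit=1.96, gain=0.5):
    """Hypothesis-test combustion-phase control: adjust spark timing
    only when the sample mean of LPP differs from the set point with
    statistical significance (two-sided Z-test); otherwise treat the
    deviation as cycle-to-cycle noise and hold the timing."""
    n = len(lpp_samples)
    mean = float(np.mean(lpp_samples))
    z = (mean - setpoint) / (sigma / np.sqrt(n))
    return -gain * (mean - setpoint) if abs(z) > z_crit else 0.0
```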
Effects of improved spatial and temporal modeling of on-road vehicle emissions.
Lindhjem, Christian E; Pollack, Alison K; DenBleyker, Allison; Shaw, Stephanie L
2012-04-01
Numerous emission and air quality modeling studies have suggested the need to accurately characterize the spatial and temporal variations in on-road vehicle emissions. The purpose of this study was to quantify the impact that using detailed traffic activity data has on emission estimates used to model air quality impacts. The on-road vehicle emissions are estimated by multiplying the vehicle miles traveled (VMT) by the fleet-average emission factors determined by road link and hour of day. Changes in the fraction of VMT from heavy-duty diesel vehicles (HDDVs) can have a significant impact on estimated fleet-average emissions because the emission factors for HDDV nitrogen oxides (NOx) and particulate matter (PM) are much higher than those for light-duty gas vehicles (LDGVs). Through detailed road link-level on-road vehicle emission modeling, this work investigated two scenarios for better characterizing mobile source emissions: (1) improved spatial and temporal variation of vehicle type fractions, and (2) use of Motor Vehicle Emission Simulator (MOVES2010) instead of MOBILE6 exhaust emission factors. Emissions were estimated for the Detroit and Atlanta metropolitan areas for summer and winter episodes. The VMT mix scenario demonstrated the importance of better characterizing HDDV activity by time of day, day of week, and road type. More HDDV activity occurs on restricted access road types on weekdays and at nonpeak times, compared to light-duty vehicles, resulting in 5-15% higher NOx and PM emission rates during the weekdays and 15-40% lower rates on weekend days. Use of MOVES2010 exhaust emission factors resulted in increases of more than 50% in NOx and PM for both HDDVs and LDGVs, relative to MOBILE6. Because LDGV PM emissions have been shown to increase with lower temperatures, the most dramatic increase from MOBILE6 to MOVES2010 emission rates occurred for PM2.5 from LDGVs that increased 500% during colder wintertime conditions found in Detroit, the northernmost city modeled.
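At the core of the link-level calculation is a VMT-weighted average of per-vehicle-type emission factors, which makes the sensitivity to the HDDV share easy to see. The numbers below are illustrative only, not MOVES or MOBILE6 outputs.

```python
def fleet_avg_emission_rate(vmt_fractions, emission_factors):
    """Fleet-average emission rate (g/mi) for one road link and hour:
    the VMT-weighted mean of the vehicle-type emission factors."""
    return sum(f * ef for f, ef in zip(vmt_fractions, emission_factors))

# Illustrative NOx factors: 10% HDDV at 8.0 g/mi, 90% LDGV at 0.5 g/mi
print(fleet_avg_emission_rate([0.1, 0.9], [8.0, 0.5]))   # 1.25 g/mi
# Doubling the HDDV share to 20% raises the fleet average by 60%
print(fleet_avg_emission_rate([0.2, 0.8], [8.0, 0.5]))   # 2.00 g/mi
```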
Estimating the Length of the North Atlantic Basin Hurricane Season
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2012-01-01
For the interval 1945-2011, the length of the hurricane season in the North Atlantic basin averages about 130 +/- 42 days (the +/-1 standard deviation interval), having a range of 47 to 235 days. Runs-testing reveals that the annual length of season varies nonrandomly at the 5% level of significance. In particular, its trend, as described using 10-yr moving averages, generally has been upward since about 1979, increasing from about 113 to 157 days (in 2003). Based on annual values, one finds a highly statistically important inverse correlation at the 0.1% level of significance between the length of season and the occurrence of the first storm day of the season. For the 2012 hurricane season, based on the reported first storm day of May 19, 2012 (i.e., DOY = 140), the inferred preferential regression predicts that the length of the current season likely will be about 173 +/- 23 days, suggesting that it will end about November 8 +/- 23 days, with only about a 5% chance that it will end either before about September 23, 2012 or after about December 24, 2012.
Granger causality for state-space models
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Seth, Anil K.
2015-04-01
Granger causality has long been a prominent method for inferring causal interactions between stochastic variables for a broad range of complex physical systems. However, it has been recognized that a moving average (MA) component in the data presents a serious confound to Granger causal analysis, as routinely performed via autoregressive (AR) modeling. We solve this problem by demonstrating that Granger causality may be calculated simply and efficiently from the parameters of a state-space (SS) model. Since SS models are equivalent to autoregressive moving average models, Granger causality estimated in this fashion is not degraded by the presence of a MA component. This is of particular significance when the data has been filtered, downsampled, observed with noise, or is a subprocess of a higher dimensional process, since all of these operations—commonplace in application domains as diverse as climate science, econometrics, and the neurosciences—induce a MA component. We show how Granger causality, conditional and unconditional, in both time and frequency domains, may be calculated directly from SS model parameters via solution of a discrete algebraic Riccati equation. Numerical simulations demonstrate that Granger causality estimators thus derived have greater statistical power and smaller bias than AR estimators. We also discuss how the SS approach facilitates relaxation of the assumptions of linearity, stationarity, and homoscedasticity underlying current AR methods, thus opening up potentially significant new areas of research in Granger causal analysis.
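A minimal rendering of the state-space route: the innovation covariance of a steady-state Kalman filter comes from a discrete algebraic Riccati equation, and Granger causality compares the full model's innovation variance with that of a reduced model observing only the target variable. The sketch assumes white, mutually uncorrelated process and observation noise and skips the paper's conditional and spectral machinery.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def innovation_cov(A, C, Q, R):
    """Steady-state Kalman innovation covariance for x' = Ax + w,
    y = Cx + v: solve the filtering DARE for the prediction
    covariance P, then V = C P C' + R."""
    P = solve_discrete_are(A.T, C.T, Q, R)
    return C @ P @ C.T + R

def gc_to_y1(A, C, Q, R):
    """Granger causality (from the remaining observables to y1):
    log-ratio of the reduced model's innovation variance, observing
    y1 alone, to the full model's innovation variance for y1."""
    V_full = innovation_cov(A, C, Q, R)
    V_red = innovation_cov(A, C[:1, :], Q, R[:1, :1])
    return float(np.log(V_red[0, 0] / V_full[0, 0]))
```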
Littman, Alyson J; Damschroder, Laura J; Verchinina, Lilia; Lai, Zongshan; Kim, Hyungjin Myra; Hoerster, Katherine D; Klingaman, Elizabeth A; Goldberg, Richard W; Owen, Richard R; Goodrich, David E
2015-01-01
The objective was to determine whether obesity screening and weight management program participation and outcomes are equitable for individuals with serious mental illness (SMI) and depressive disorder (DD) compared to those without SMI/DD in Veterans Health Administration (VHA), the largest integrated US health system, which requires obesity screening and offers weight management to all in need. We used chart-reviewed, clinical and administrative VHA data from fiscal years 2010-2012 to estimate obesity screening and participation in the VHA's weight management program (MOVE!) across groups. Six- and 12-month weight changes in MOVE! participants were estimated using linear mixed models adjusted for confounders. Compared to individuals without SMI/DD, individuals with SMI or DD were less frequently screened for obesity (94%-94.7% vs. 95.7%) but had greater participation in MOVE! (10.1%-10.4% vs. 7.4%). MOVE! participants with SMI or DD lost approximately 1 lb less at 6 months. At 12 months, average weight loss for individuals with SMI or neither SMI/DD was comparable (-3.5 and -3.3 lb, respectively), but individuals with DD lost less weight (mean=-2.7 lb). Disparities in obesity screening and treatment outcomes across mental health diagnosis groups were modest. However, participation in MOVE! was low for every group, which limits population impact. Published by Elsevier Inc.
A Case Study to Improve Emergency Room Patient Flow at Womack Army Medical Center
2009-06-01
...use just the previous month; the moving average 2-month period (MA2) uses the average from the previous two months; the moving average 3-month period (MA3) uses the average from the previous three months. Glossary: MA2/MA3/MA4 - moving averages of 2-4 months in length; MAD - mean absolute deviation (a measure of forecast accuracy).
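The rules named in the excerpt reduce to a few lines; the monthly arrival numbers below are invented solely to show the mechanics of an MA-n forecast scored by MAD.

```python
def ma_forecast(history, n):
    """MA-n forecast: next month's estimate is the mean of the
    previous n observed months (MA2: n=2, MA3: n=3, MA4: n=4)."""
    return sum(history[-n:]) / n

def mad(actuals, forecasts):
    """Mean absolute deviation, the accuracy measure for comparing rules."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

arrivals = [410, 432, 398, 425, 441, 417]   # illustrative monthly ER volumes
print(ma_forecast(arrivals, 2))             # (441 + 417) / 2 = 429.0
print(ma_forecast(arrivals, 3))             # (425 + 441 + 417) / 3
```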
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at full frame resolution. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates of the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors being based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap was greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes of cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will have Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
Buckingham-Jeffery, Elizabeth; Morbey, Roger; House, Thomas; Elliot, Alex J; Harcourt, Sally; Smith, Gillian E
2017-05-19
As service provision and patient behaviour varies by day, healthcare data used for public health surveillance can exhibit large day of the week effects. These regular effects are further complicated by the impact of public holidays. Real-time syndromic surveillance requires the daily analysis of a range of healthcare data sources, including family doctor consultations (called general practitioners, or GPs, in the UK). Failure to adjust for such reporting biases during analysis of syndromic GP surveillance data could lead to misinterpretations including false alarms or delays in the detection of outbreaks. The simplest smoothing method to remove a day of the week effect from daily time series data is a 7-day moving average. Public Health England developed the working day moving average in an attempt also to remove public holiday effects from daily GP data. However, neither of these methods adequately account for the combination of day of the week and public holiday effects. The extended working day moving average was developed. This is a further data-driven method for adding a smooth trend curve to a time series graph of daily healthcare data, that aims to take both public holiday and day of the week effects into account. It is based on the assumption that the number of people seeking healthcare services is a combination of illness levels/severity and the ability or desire of patients to seek healthcare each day. The extended working day moving average was compared to the seven-day and working day moving averages through application to data from two syndromic indicators from the GP in-hours syndromic surveillance system managed by Public Health England. The extended working day moving average successfully smoothed the syndromic healthcare data by taking into account the combined day of the week and public holiday effects. In comparison, the seven-day and working day moving averages were unable to account for all these effects, which led to misleading smoothing curves. The results from this study make it possible to identify trends and unusual activity in syndromic surveillance data from GP services in real-time independently of the effects caused by day of the week and public holidays, thereby improving the public health action resulting from the analysis of these data.
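The underlying idea, comparing each day with recent days of the same type so that weekend and holiday dips do not drag the trend down, can be sketched simply. The code below is a simplified illustration of that principle, not Public Health England's exact extended working day moving average algorithm.

```python
import numpy as np

def daytype_moving_average(counts, is_workday, window=7):
    """Day-type-aware smoother: each day's trend value is the mean of
    the most recent `window` days of the same type (working vs
    non-working), looked up within a bounded history."""
    counts = np.asarray(counts, dtype=float)
    out = np.empty_like(counts)
    for i in range(len(counts)):
        same = [j for j in range(max(0, i - 3 * window), i + 1)
                if is_workday[j] == is_workday[i]]
        out[i] = counts[same[-window:]].mean()
    return out
```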
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
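The Poisson arithmetic behind those probabilities is compact enough to verify directly. The ~6.5 eruptions/decade rate for VEI>=4 comes from the text; the VEI>=5 and VEI>=6 decadal rates below are back-calculated from the quoted probabilities and are therefore assumptions.

```python
import math

def p_at_least_one(rate_per_decade):
    """Poisson probability of at least one event in a decade:
    P(N >= 1) = 1 - exp(-lambda)."""
    return 1.0 - math.exp(-rate_per_decade)

print(p_at_least_one(6.5))    # ~0.9985 -> >99% for VEI>=4
print(p_at_least_one(0.67))   # ~0.49   -> ~49% for VEI>=5 (back-calculated)
print(p_at_least_one(0.20))   # ~0.18   -> ~18% for VEI>=6 (back-calculated)
```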
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadaland SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
On-line algorithms for forecasting hourly loads of an electric utility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vemuri, S.; Huang, W.L.; Nelson, D.J.
A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages in comparison with the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
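The sequential least-squares stage can be sketched as a standard recursive least-squares AR fit; the forgetting factor and initialization below are illustrative, and the subsequent reduction to a parsimonious ARMA model is omitted.

```python
import numpy as np

def rls_ar(y, order=24, lam=0.99):
    """Sequential (recursive) least-squares identification of an AR(p)
    model for hourly loads: the coefficient vector theta is refined
    one sample at a time, which suits on-line operation."""
    theta = np.zeros(order)
    P = 1e3 * np.eye(order)              # large initial covariance
    for t in range(order, len(y)):
        phi = y[t - order:t][::-1]       # regressor: p most recent loads
        k = P @ phi / (lam + phi @ P @ phi)
        theta += k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam
    return theta
```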
A new image segmentation method based on multifractal detrended moving average analysis
NASA Astrophysics Data System (ADS)
Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le
2015-08-01
In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined by the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5), and forward (θ = 1) moving averages with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(−10)) when using the MF-DMS-based method. An interesting finding is that D(h(−10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
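A one-dimensional sketch of the detrended moving average fluctuation analysis behind h(q), with a centered moving average (θ = 0.5); the image method applies the same analysis to pixel surfaces. Scales and q here are illustrative, and this is a simplification, not the authors' 2-D code.

```python
import numpy as np

def hurst_dma(series, q, scales):
    y = np.cumsum(series - np.mean(series))          # profile of the series
    log_n, log_F = [], []
    for n in scales:
        ma = np.convolve(y, np.ones(n) / n, mode="valid")   # moving average
        start = (n - 1) // 2                         # align centered window
        eps = y[start:start + ma.size] - ma          # detrended residuals
        m = eps.size // n
        F2 = np.mean(eps[:m * n].reshape(m, n) ** 2, axis=1)  # per-segment var.
        Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)   # q-th order fluctuation
        log_n.append(np.log(n)); log_F.append(np.log(Fq))
    return np.polyfit(log_n, log_F, 1)[0]            # slope = h(q)

rng = np.random.default_rng(3)
print(hurst_dma(rng.standard_normal(10000), q=2, scales=[16, 32, 64, 128, 256]))
# approximately 0.5 for uncorrelated noise
```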
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26 to 100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7 to 1.1 mm for real-time tracking, and 3.7 to 7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0 to 23% for real-time tracking, and 10 to 47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracy of the moving average algorithm fell between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
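The core idea, smoothing only the motion component that triggers beam holds, can be sketched as follows; the trajectory, sampling rate, and window length are invented for illustration and do not reproduce the clinical system.

```python
import numpy as np

def moving_average_tracking(parallel, perpendicular, window=20):
    # Follow the parallel component in real time; smooth the perpendicular
    # component (the one that causes beam holds) with a moving average.
    smoothed = np.convolve(perpendicular, np.ones(window) / window, mode="same")
    return parallel, smoothed                # leaf-position commands

t = np.linspace(0, 30, 3000)                 # 30 s of motion at 100 Hz
par = 5 * np.sin(2 * np.pi * t / 4)          # parallel motion (mm)
perp = 3 * np.sin(2 * np.pi * t / 4) + 0.1 * t   # perpendicular + baseline drift
cmd_par, cmd_perp = moving_average_tracking(par, perp)
rms_err = np.sqrt(np.mean((cmd_perp - perp) ** 2))
print(f"geometric error from smoothing: {rms_err:.2f} mm RMS")
```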
Full-chain health impact assessment of traffic-related air pollution and childhood asthma.
Khreis, Haneen; de Hoogh, Kees; Nieuwenhuijsen, Mark J
2018-05-01
Asthma is the most common chronic disease in children. Traffic-related air pollution (TRAP) may be an important exposure contributing to its development. In the UK, Bradford is a deprived city suffering from childhood asthma rates higher than national and regional averages and TRAP is of particular concern to the local communities. We estimated the burden of childhood asthma attributable to air pollution and specifically TRAP in Bradford. Air pollution exposures were estimated using a newly developed full-chain exposure assessment model and an existing land-use regression model (LUR). We estimated childhood population exposure to NO x and, by conversion, NO 2 at the smallest census area level using a newly developed full-chain model knitting together distinct traffic (SATURN), vehicle emission (COPERT) and atmospheric dispersion (ADMS-Urban) models. We compared these estimates with measurements and estimates from ESCAPE's LUR model. Using the UK incidence rate for childhood asthma, meta-analytical exposure-response functions, and estimates from the two exposure models, we estimated annual number of asthma cases attributable to NO 2 and NO x in Bradford, and annual number of asthma cases specifically attributable to traffic. The annual average census tract levels of NO 2 and NO x estimated using the full-chain model were 15.41 and 25.68 μg/m 3 , respectively. On average, 2.75 μg/m 3 NO 2 and 4.59 μg/m 3 NO x were specifically contributed by traffic, without minor roads and cold starts. The annual average census tract levels of NO 2 and NO x estimated using the LUR model were 21.93 and 35.60 μg/m 3 , respectively. The results indicated that up to 687 (or 38% of all) annual childhood asthma cases in Bradford may be attributable to air pollution. Up to 109 cases (6%) and 219 cases (12%) may be specifically attributable to TRAP, with and without minor roads and cold starts, respectively. This is the first study undertaking full-chain health impact assessment of TRAP and childhood asthma in a disadvantaged population with public concern about TRAP. It further adds to scarce literature exploring the impact of different exposure assessments. In conservative estimates, air pollution and TRAP are estimated to cause a large, but largely preventable, childhood asthma burden. Future progress with childhood asthma requires a move beyond the prevalent disease control-based approach toward asthma prevention. Copyright © 2018 Elsevier Ltd. All rights reserved.
Anderson, Kimberly R.; Anthony, T. Renée
2014-01-01
An understanding of how particles are inhaled into the human nose is important for developing samplers that measure biologically relevant estimates of exposure in the workplace. While previous computational mouth-breathing investigations of particle aspiration have been conducted in slow moving air, nose breathing still required exploration. Computational fluid dynamics was used to estimate nasal aspiration efficiency for an inhaling humanoid form in low velocity wind speeds (0.1–0.4 m s−1). Breathing was simplified as continuous inhalation through the nose. Fluid flow and particle trajectories were simulated over seven discrete orientations relative to the oncoming wind (0, 15, 30, 60, 90, 135, 180°). Sensitivities of the model simplification and methods were assessed, particularly the placement of the recessed nostril surface and the size of the nose. Simulations identified higher aspiration (13% on average) when compared to published experimental wind tunnel data. Significant differences in aspiration were identified between nose geometry, with the smaller nose aspirating an average of 8.6% more than the larger nose. Differences in fluid flow solution methods accounted for 2% average differences, on the order of methodological uncertainty. Similar trends to mouth-breathing simulations were observed including increasing aspiration efficiency with decreasing freestream velocity and decreasing aspiration with increasing rotation away from the oncoming wind. These models indicate nasal aspiration in slow moving air occurs only for particles <100 µm. PMID:24665111
Estimation of inhaled airborne particle number concentration by subway users in Seoul, Korea.
Kim, Minhae; Park, Sechan; Namgung, Hyeong-Gyu; Kwon, Soon-Bark
2017-12-01
Exposure to airborne particulate matter (PM) causes several diseases in the human body. Smaller particles, which have relatively large surface areas, are actually more harmful since they can penetrate deeper parts of the lungs or become secondary pollutants by bonding with other atmospheric pollutants, such as nitrogen oxides. The purpose of this study is to present the number of PM particles inhaled by subway users as a possible reference for analyses of the hazards to the human body arising from PM inhalation. Two transfer stations in Seoul, Korea, which have the greatest number of users, were selected for this study. For 0.3-0.422 μm PM, the particle number concentration (PNC) was highest outdoors but decreased as the tester moved deeper underground. On the other hand, the PNC between 1 and 10 μm increased as the tester moved deeper underground and was also high inside the subway train. An analysis of the particles to which subway users are actually exposed (the inhaled particle number), using the particle concentration at each measurement location, the average inhalation rate of an adult, and the average stay time at each location, showed that particles sized 0.01-0.422 μm are mostly inhaled from outdoor air, whereas particles sized 1-10 μm are inhaled as passengers move deeper underground. Based on these findings, we expect that the inhaled particle number of subway users can be used as reference data for evaluating the health hazards caused by PM inhalation. Copyright © 2017 Elsevier Ltd. All rights reserved.
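A toy version of the exposure calculation described above (concentration times inhalation rate times stay time, summed over locations); all numbers are made up, not the study's measurements.

```python
# Illustrative inhaled-particle-number calculation; values are invented.
locations = {                 # PNC (particles/cm^3), stay time (min)
    "outdoor":   (12000, 10),
    "concourse": (8000, 5),
    "platform":  (6000, 7),
    "in_train":  (7000, 25),
}
inhalation_rate_m3_per_min = 0.012   # assumed average adult inhalation rate
CM3_PER_M3 = 1e6

inhaled = sum(pnc * CM3_PER_M3 * inhalation_rate_m3_per_min * minutes
              for pnc, minutes in locations.values())
print(f"{inhaled:.2e} particles inhaled over the trip")
```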
Class III correction using an inter-arch spring-loaded module
2014-01-01
Background A retrospective study was conducted to determine the cephalometric changes in a group of Class III patients treated with the inter-arch spring-loaded module (CS2000®, Dynaflex, St. Ann, MO, USA). Methods Thirty Caucasian patients (15 males, 15 females) with an average pre-treatment age of 9.6 years were treated consecutively with this appliance and compared with a control group of subjects from the Bolton-Brush Study who were matched in age, gender, and craniofacial morphology to the treatment group. Lateral cephalograms were taken before treatment and after removal of the CS2000® appliance. The treatment effects of the CS2000® appliance were calculated by subtracting the changes due to growth (control group) from the treatment changes. Results All patients were improved to a Class I dental arch relationship with a positive overjet. Significant sagittal, vertical, and angular changes were found between the pre- and post-treatment radiographs. With an average treatment time of 1.3 years, the maxillary base moved forward by 0.8 mm, while the mandibular base moved backward by 2.8 mm, together with improvements in the ANB and Wits measurements. The maxillary incisor moved forward by 1.3 mm and the mandibular incisor moved forward by 1.0 mm. The maxillary molar moved forward by 1.0 mm while the mandibular molar moved backward by 0.6 mm. The average overjet correction was 3.9 mm; 92% of the correction was due to skeletal contribution and 8% to dental contribution. The average molar correction was 5.2 mm; 69% of the correction was due to skeletal contribution and 31% to dental contribution. Conclusions Mild to moderate Class III malocclusion can be corrected using the inter-arch spring-loaded appliance with minimal patient compliance. The overjet correction resulted from forward movement of the maxilla, backward and downward movement of the mandible, and proclination of the maxillary incisors. The molar relationship was corrected by mesialization of the maxillary molars and distalization of the mandibular molars, together with a rotation of the occlusal plane. PMID:24934153
SEM Based CARMA Time Series Modeling for Arbitrary N.
Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C
2018-01-01
This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
Lu, Mang; Gu, Li-Peng; Xu, Wen-Hao
2013-01-01
In this study, a novel suspended ceramsite was prepared, which has high strength, optimum density (close to that of water), and high porosity. The ceramsite was used to feed a moving-bed biofilm reactor (MBBR) system with an anaerobic-aerobic (A/O) arrangement to treat petroleum refinery wastewater for simultaneous removal of chemical oxygen demand (COD) and ammonium. The hydraulic retention time (HRT) of the anaerobic-aerobic MBBR system was varied from 72 to 18 h. The anaerobic-aerobic system had a strong tolerance to shock loading. Compared with the professional emission standard of China, the effluent concentrations of COD and NH3-N in the system could satisfy grade I at HRTs of 72 and 36 h, and grade II at an HRT of 18 h. The average sludge yield of the anaerobic reactor was estimated to be 0.0575 g suspended solid/g CODremoved. This work demonstrated that the anaerobic-aerobic MBBR system using the suspended ceramsite as bio-carrier could be applied to achieve high wastewater treatment efficiency.
Integrating WEPP into the WEPS infrastructure
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...
Motion tracing system for ultrasound guided HIFU
NASA Astrophysics Data System (ADS)
Xiao, Xu; Jiang, Tingyi; Corner, George; Huang, Zhihong
2017-03-01
One main limitation in HIFU treatment is the abdominal movement of the liver and kidney caused by respiration. This study set up a tracking model that mainly comprises a target-carrying box and a motion-driving balloon. A real-time B-mode ultrasound guidance method suitable for tracking abdominal organ motion in 2D was established and tested. For the setup, phantoms mimicking moving organs were carefully prepared with agar surrounding round-shaped egg white as the target of focused ultrasound ablation. The physiological phantoms and animal tissues were driven to move reciprocally along the main axial direction of the ultrasound imaging probe, with slight motion perpendicular to the axial direction. The moving speed and range could be adjusted by controlling the inflation and deflation speed and volume of the balloon, which was driven by a medical ventilator. A 6-DOF robotic arm was used to position the focused ultrasound transducer. The overall system was designed to simulate the actual movement caused by human respiration. HIFU ablation experiments using phantoms and animal organs were conducted to test the tracking performance. Ultrasound strain elastography was used afterwards to assess the efficiency of the tracking algorithms and system. In the moving state, the axial size of the lesion (perpendicular to the movement direction) averaged 4 mm, about one third larger than the lesion obtained when the target was not moving. This demonstrates the possibility of developing a low-cost real-time method for tracking organ motion during HIFU treatment of the liver or kidney.
High-Fidelity Simulations of Moving and Flexible Airfoils at Low Reynolds Numbers (Postprint)
2010-02-01
Phase-averaged structures for both values of Reynolds number are found to be in good agreement with the experimental data. Finally, the effect of
Traffic-Related Air Pollution, Blood Pressure, and Adaptive Response of Mitochondrial Abundance.
Zhong, Jia; Cayir, Akin; Trevisi, Letizia; Sanchez-Guerra, Marco; Lin, Xinyi; Peng, Cheng; Bind, Marie-Abèle; Prada, Diddier; Laue, Hannah; Brennan, Kasey J M; Dereix, Alexandra; Sparrow, David; Vokonas, Pantel; Schwartz, Joel; Baccarelli, Andrea A
2016-01-26
Exposure to black carbon (BC), a tracer of vehicular-traffic pollution, is associated with increased blood pressure (BP). Identifying biological factors that attenuate BC effects on BP can inform prevention. We evaluated the role of mitochondrial abundance, an adaptive mechanism compensating for cellular-redox imbalance, in the BC-BP relationship. At ≥ 1 visits among 675 older men from the Normative Aging Study (observations=1252), we assessed daily BP and ambient BC levels from a stationary monitor. To determine blood mitochondrial abundance, we used whole blood to analyze mitochondrial-to-nuclear DNA ratio (mtDNA/nDNA) using quantitative polymerase chain reaction. Every standard deviation increase in the 28-day BC moving average was associated with 1.97 mm Hg (95% confidence interval [CI], 1.23-2.72; P<0.0001) and 3.46 mm Hg (95% CI, 2.06-4.87; P<0.0001) higher diastolic and systolic BP, respectively. Positive BC-BP associations existed throughout all time windows. BC moving averages (5-day to 28-day) were associated with increased mtDNA/nDNA; every standard deviation increase in 28-day BC moving average was associated with 0.12 standard deviation (95% CI, 0.03-0.20; P=0.007) higher mtDNA/nDNA. High mtDNA/nDNA significantly attenuated the BC-systolic BP association throughout all time windows. The estimated effect of 28-day BC moving average on systolic BP was 1.95-fold larger for individuals at the lowest mtDNA/nDNA quartile midpoint (4.68 mm Hg; 95% CI, 3.03-6.33; P<0.0001), in comparison with the top quartile midpoint (2.40 mm Hg; 95% CI, 0.81-3.99; P=0.003). In older adults, short-term to moderate-term ambient BC levels were associated with increased BP and blood mitochondrial abundance. Our findings indicate that increased blood mitochondrial abundance is a compensatory response and attenuates the cardiac effects of BC. © 2015 American Heart Association, Inc.
Josso, Nicolas F; Ioana, Cornel; Mars, Jérôme I; Gervaise, Cédric
2010-12-01
Acoustic channel properties in a shallow water environment with a moving source and receiver are difficult to investigate. In fact, when the source-receiver relative position changes, the underwater environment causes multipath and Doppler scale changes on the transmitted signal over low-to-medium frequencies (300 Hz-20 kHz). This is the result of a combination of multiple-path propagation, source and receiver motion, as well as sea-surface motion or fast water-column changes. This paper investigates underwater acoustic channel properties in shallow water (up to 150 m depth) under moving source-receiver conditions using extracted time-scale features of the propagation channel model for low-to-medium frequencies. An average impulse response of one transmission is estimated using the physical characteristics of propagation and the wideband ambiguity plane. Since a different Doppler scale should be considered for each propagating signal, a time-warping filtering method is proposed to estimate the channel time delay and Doppler scale attributes for each propagating path. The proposed method enables the estimation of motion-compensated impulse responses, where different Doppler scaling factors are considered for the different time delays. It was validated for channel profiles using real data from the BASE'07 experiment conducted by the North Atlantic Treaty Organization Undersea Research Center in the shallow water environment of the Malta Plateau, South Sicily. This paper provides a contribution to many field applications, including passive ocean tomography with unknown natural source positions and movements. Another example is active ocean tomography, where source motion enables an operational area to be covered rapidly for rapid environmental assessment and where hydrophones may be drifting in order to avoid additional flow noise.
Robust Semi-Active Ride Control under Stochastic Excitation
2014-01-01
broad classes of time-series models which are of practical importance: the Auto-Regressive (AR) models, the Integrated (I) models, and the Moving Average (MA) models [12]. Combinations of these models result in autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The four semi-active switching cases, 1) Up Up, 2) Up Down, 3) Down Up, 4) Down Down, can be written in compact form in Eq. (20) using the Heaviside step function.
Joint level-set and spatio-temporal motion detection for cell segmentation.
Boukari, Fatima; Makrogiannis, Sokratis
2016-08-10
Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that anesthetic brain state tracking performance of linear models is comparable to that of a high performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average-ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observers assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models and Higuchi fractal dimension achieved 52%, however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach, however, it performs well compared with a distribution approach based on a high performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
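A simplified sketch of tracking a distribution of linear-model parameters rather than a short-term average: ARMA(2,1) fits over sliding windows, with the windowed parameter distributions of two synthetic "states" compared by a KS test instead of the paper's classifier. Window sizes and the AR(1) stand-ins for EEG states are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp
from statsmodels.tsa.arima.model import ARIMA

def windowed_ar1(x, win=250, step=125):
    coefs = []
    for start in range(0, len(x) - win, step):
        fit = ARIMA(x[start:start + win], order=(2, 0, 1)).fit()
        coefs.append(fit.arparams[0])        # first AR coefficient per window
    return np.array(coefs)

rng = np.random.default_rng(4)
def simulate(phi, n=2000):                   # AR(1) stand-in for a brain state
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

light, deep = simulate(0.5), simulate(0.9)   # illustrative sedation levels
print(ks_2samp(windowed_ar1(light), windowed_ar1(deep)))
```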
A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Wendy; Ren, Lei, E-mail: lei.ren@duke.edu; Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
Purpose: The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. Methods and Materials: One phase of a 4-dimensional MRI acquired during patient simulation is used as the patient prior images. Three major respiratory deformation patterns of the patient are extracted from the 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of ground truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to the ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against noise levels up to SNR = 20. For patient data, average tracking errors were less than 2 mm in all directions for all patients. Conclusions: Preliminary studies demonstrated the feasibility of generating real-time VC-MRI for on-board localization of moving targets in radiation therapy.
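The coefficient-solving step lends itself to a compact linear-algebra sketch: with the deformation field written as a linear combination of three principal modes, the data fidelity constraint against the single 2-D cine slice becomes a least-squares problem. The linearized intensity model below is an illustrative simplification; the actual method warps the prior volume rather than adding mode intensities.

```python
import numpy as np

rng = np.random.default_rng(5)
n_vox, n_modes = 5000, 3
prior = rng.random(n_vox)                        # flattened prior MRI volume
modes = rng.standard_normal((n_vox, n_modes))    # principal deformation modes
slice_idx = np.arange(0, n_vox, 10)              # voxels covered by the slice

w_true = np.array([0.8, -0.3, 0.1])              # "ground-truth" weights
cine = (prior + modes @ w_true)[slice_idx]       # measured 2-D cine intensities

A = modes[slice_idx, :]                          # data-fidelity system
w, *_ = np.linalg.lstsq(A, cine - prior[slice_idx], rcond=None)
vc_mri = prior + modes @ w                       # estimated on-board volume
print(np.round(w, 3))                            # recovers [0.8, -0.3, 0.1]
```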
On the dynamics of jellyfish locomotion via 3D particle tracking velocimetry
NASA Astrophysics Data System (ADS)
Piper, Matthew; Kim, Jin-Tae; Chamorro, Leonardo P.
2016-11-01
The dynamics of jellyfish (Aurelia aurita) locomotion is experimentally studied via 3D particle tracking velocimetry. 3D locations of the bell tip are tracked over 1.5 cycles to describe the jellyfish path. Multiple positions of the jellyfish bell margin are initially tracked in 2D from four independent planes and individually projected in 3D based on the jellyfish path and geometrical properties of the setup. A cubic spline interpolation and the exponentially weighted moving average are used to estimate derived quantities, including velocity and acceleration of the jellyfish locomotion. We will discuss distinctive features of the jellyfish 3D motion at various swimming phases, and will provide insight on the 3D contraction and relaxation in terms of the locomotion, the steadiness of the bell margin eccentricity, and local Reynolds number based on the instantaneous mean diameter of the bell.
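A minimal sketch of the derived-quantity step described above, assuming pandas and SciPy: exponentially weighted moving average smoothing of one tracked coordinate, followed by spline differentiation for velocity and acceleration. The trajectory is synthetic.

```python
import numpy as np
import pandas as pd
from scipy.interpolate import CubicSpline

t = np.linspace(0, 1.5, 150)                        # ~1.5 swimming cycles (s)
z = 0.02 * t + 0.005 * np.sin(2 * np.pi * t) \
    + 0.0005 * np.random.default_rng(6).standard_normal(t.size)

z_smooth = pd.Series(z).ewm(span=10).mean().to_numpy()  # EWMA smoothing
spline = CubicSpline(t, z_smooth)
velocity = spline(t, 1)                             # first derivative (m/s)
acceleration = spline(t, 2)                         # second derivative (m/s^2)
print(velocity[:3], acceleration[:3])
```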
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or segmentation separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area around a moving object rather than searching the whole frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
Air pollution and daily mortality: A new approach to an old problem
NASA Astrophysics Data System (ADS)
Lipfert, Frederick W.; Murray, Christian J.
2012-08-01
Many time-series studies find associations between acute health effects and ambient air quality under current conditions. However, few such studies link mortality with morbidity to provide rational bases for improving public health. This paper describes a research project that developed and validated a new modeling approach directly addressing changes in life expectancies and the prematurity of deaths associated with transient changes in air quality. We used state-space modeling and Kalman filtering of elderly Philadelphia mortality counts from 1974-88 to estimate the size of the population at highest risk of imminent death. This subpopulation appears stable over time but is sensitive to season and to environmental factors: ambient temperature, ozone, and total suspended particulate matter (TSP), as an index of airborne particles in this demonstration of methodology. This population at extreme risk averages fewer than 0.1% of the elderly. By considering successively longer lags or moving averages of TSP, we find that cumulative short-term effects on entry to the at-risk pool tend to level off and decrease as periods of exposure longer than a few days are considered. These estimated environmental effects on the elderly are consistent with previous analyses using conventional time-series methods. However, this new model suggests that such environmentally linked deaths comprise only about half of the subjects whose frailty is associated with environmental factors. The average life expectancy of persons in the at-risk pool is estimated to be 5-7 days, which may be reduced by less than one day by environmental effects. These results suggest that exposures leading up to severe acute frailty and subsequent risk of imminent death may be more important from a public health perspective than those directly associated with subsequent mortality.
Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.
La, Hung Manh; Sheng, Weihua
2013-04-01
In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
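A simplified ratio-consensus sketch in the spirit of the two filters described above: neighbour averaging is run both on confidence-weighted measurements and on the confidences themselves, so every node converges to the same weighted estimate. The graph, noise levels, and step size are illustrative, not the authors' filter design.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
A = np.zeros((n, n))                     # ring communication graph
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

field_value = 25.0                       # true scalar-field value at this point
noise_var = rng.uniform(0.1, 2.0, n)     # heterogeneous sensor quality
z = field_value + np.sqrt(noise_var) * rng.standard_normal(n)
w = 1.0 / noise_var                      # confidence = inverse noise variance

num, den = w * z, w.copy()
eps = 0.15                               # consensus step size (stable here)
for _ in range(300):                     # repeated neighbour averaging
    num = num + eps * (A @ num - A.sum(1) * num)
    den = den + eps * (A @ den - A.sum(1) * den)

print(num / den)                         # every node near the weighted mean
print(np.sum(w * z) / np.sum(w))         # centralized reference value
```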
Essays in the California electricity reserves markets
NASA Astrophysics Data System (ADS)
Metaxoglou, Konstantinos
This dissertation examines inefficiencies in the California electricity reserves markets. In Chapter 1, I use the information released during the investigation of the state's electricity crisis of 2000 and 2001 by the Federal Energy Regulatory Commission to diagnose allocative inefficiencies. Building upon the work of Wolak (2000), I calculate a lower bound for the sellers' price-cost margins using the inverse elasticities of their residual demand curves. The downward bias in my estimates stems from the fact that I don't account for the hierarchical substitutability of the reserve types. The margins averaged at least 20 percent for the two highest quality types of reserves, regulation and spinning, generating millions of dollars in transfers to a handful of sellers. I provide evidence that the deviations from marginal cost pricing were due to the markets' high concentration and a principal-agent relationship that emerged from their design. In Chapter 2, I document systematic differences between the markets' day- and hour-ahead prices. I use a high-dimensional vector moving average model to estimate the premia and conduct correct inferences. To obtain exact maximum likelihood estimates of the model, I employ the EM algorithm that I develop in Chapter 3. I uncover significant day-ahead premia, which I attribute to market design characteristics too. On the demand side, the market design established a principal-agent relationship between the markets' buyers (principal) and their supervisory authority (agent). The agent had very limited incentives to shift reserve purchases to the lower priced hour-ahead markets. On the supply side, the market design raised substantial entry barriers by precluding purely speculative trading and by introducing a complicated code of conduct that induced uncertainty about which actions were subject to regulatory scrutiny. In Chapter 3, I introduce a state-space representation for vector autoregressive moving average models that enables exact maximum likelihood estimation using the EM algorithm. Moreover, my algorithm uses only analytical expressions; it requires the Kalman filter and a fixed-interval smoother in the E step and least squares-type regression in the M step. In contrast, existing maximum likelihood estimation methods require numerical differentiation, both for univariate and multivariate models.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
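The underlying arithmetic is compact: recharge is approximated by average annual base flow divided by drainage area. The numbers below are invented but fall inside the ranges reported above.

```python
# Illustrative recharge estimate; values are made up, not from the report.
base_flow_cfs = 40.0          # average annual base flow, ft^3/s
area_mi2 = 50.0               # drainage area, mi^2

SECONDS_PER_YEAR = 365.25 * 24 * 3600
FT2_PER_MI2 = 5280.0 ** 2
recharge_ft = base_flow_cfs * SECONDS_PER_YEAR / (area_mi2 * FT2_PER_MI2)
print(f"{recharge_ft * 12:.1f} inches/year")   # ~10.9 in/yr for these numbers
```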
NASA Astrophysics Data System (ADS)
Ahmed, F.; Teferle, F. N.; Bingley, R. M.
2012-04-01
Since September 2011 the University of Luxembourg in collaboration with the University of Nottingham has been setting up two near real-time processing systems for ground-based GNSS data for the provision of zenith total delay (ZTD) and integrated water vapour (IWV) estimates. Both systems are based on Bernese v5.0, use the double-differenced network processing strategy and operate with a 1-hour (NRT1h) and 15-minutes (NRT15m) update cycle. Furthermore, the systems follow the approach of the E-GVAP METO and IES2 systems in that the normal equations for the latest data are combined with those from the previous four updates during the estimation of the ZTDs. NRT1h currently takes the hourly data from over 130 GNSS stations in Europe whereas NRT15m is primarily using the real-time streams of EUREF-IP. Both networks include additional GNSS stations in Luxembourg, Belgium and France. The a priori station coordinates for all of these stem from a moving average computed over the last 20 to 50 days and are based on the precise point positioning processing strategy. In this study we present the first ZTD and IWV estimates obtained from the NRT1h and NRT15m systems in development at the University of Luxembourg. In a preliminary evaluation we compare their performance to the IES2 system at the University of Nottingham and find the IWV estimates to agree at the sub-millimetre level.
Structural equation modeling of the inflammatory response to traffic air pollution
Baja, Emmanuel S.; Schwartz, Joel D.; Coull, Brent A.; Wellenius, Gregory A.; Vokonas, Pantel S.; Suh, Helen H.
2015-01-01
Several epidemiological studies have reported conflicting results on the effect of traffic-related pollutants on markers of inflammation. In a Bayesian framework, we examined the effect of traffic pollution on inflammation using structural equation models (SEMs). We studied measurements of C-reactive protein (CRP), soluble vascular cell adhesion molecule-1 (sVCAM-1), and soluble intracellular adhesion molecule-1 (sICAM-1) for 749 elderly men from the Normative Aging Study. Using repeated measures SEMs, we fit a latent variable for traffic pollution that is reflected by levels of black carbon, carbon monoxide, nitrogen monoxide and nitrogen dioxide to estimate its effect on a latent variable for inflammation that included sICAM-1, sVCAM-1 and CRP. Exposure periods were assessed using 1-, 2-, 3-, 7-, 14- and 30-day moving averages previsit. We compared our findings using SEMs with those obtained using linear mixed models. Traffic pollution was related to increased inflammation for 3-, 7-, 14- and 30-day exposure periods. An inter-quartile range increase in traffic pollution was associated with a 2.3% (95% posterior interval (PI): 0.0–4.7%) increase in inflammation for the 3-day moving average, with the most significant association observed for the 30-day moving average (23.9%; 95% PI: 13.9–36.7%). Traffic pollution adversely impacts inflammation in the elderly. SEMs in a Bayesian framework can comprehensively incorporate multiple pollutants and health outcomes simultaneously in air pollution–cardiovascular epidemiological studies. PMID:23232970
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)
2000-01-01
Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
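The quoted Poisson probabilities can be reproduced with per-decade rates of roughly 6.5, 0.67, and 0.2 events for VEI >= 4, 5, and 6 respectively; these rates are back-of-the-envelope choices, not the paper's fitted values.

```python
from scipy.stats import poisson

# P(at least one event in a decade) = 1 - P(zero events).
for label, rate in [("VEI >= 4", 6.5), ("VEI >= 5", 0.67), ("VEI >= 6", 0.20)]:
    p = 1.0 - poisson.pmf(0, rate)
    print(f"P(>=1 {label} eruption in 10 yr) = {p:.0%}")
# prints roughly >99%, 49%, and 18%, matching the values quoted above
```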
A stochastic post-processing method for solar irradiance forecasts derived from NWPs models
NASA Astrophysics Data System (ADS)
Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.
2010-09-01
Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually considered in these models, leading to considerable errors in Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Irradiance (GHI) and DNI forecasts derived from NWPs. In particular, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a set of one-month-long, three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. In particular, the relative improvement (in terms of the RMSE) for DNI during summer is about 20%. A similar value is obtained for GHI during winter.
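A sketch of the ARMAX post-processing idea, assuming statsmodels: the NWP forecast residuals are modeled with an ARMA structure plus exogenous regressors for the previous day's cloud fraction and aerosol load. All series below are synthetic stand-ins, and the orders are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n = 720                                       # about one month of hourly errors
cloud = rng.uniform(0, 1, n)                  # previous-day cloud fraction
aod = rng.uniform(0, 0.6, n)                  # previous-day aerosol optical depth
resid = 50 * cloud + 30 * aod + rng.standard_normal(n)  # forecast error (W/m2)

exog = np.column_stack([cloud, aod])
armax = ARIMA(resid, exog=exog, order=(1, 0, 1)).fit()   # ARMAX(1,1)

rmse_before = np.sqrt(np.mean(resid ** 2))
rmse_after = np.sqrt(np.mean((resid - armax.fittedvalues) ** 2))
print(f"RMSE {rmse_before:.1f} -> {rmse_after:.1f}")     # post-processing gain
```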
Hidden Markov analysis of mechanosensitive ion channel gating.
Khan, R Nazim; Martinac, Boris; Madsen, Barry W; Milne, Robin K; Yeo, Geoffrey F; Edeson, Robert O
2005-02-01
Patch clamp data from the large conductance mechanosensitive channel (MscL) in E. coli were studied with the aim of developing a strategy for statistical analysis based on hidden Markov models (HMMs) and determining the number of conductance levels of the channel, together with the mean current, mean dwell time, and equilibrium probability of occupancy for each level. The models incorporated state-dependent white noise and a moving average adjustment for filtering, with maximum likelihood parameter estimates obtained using an EM (expectation-maximisation) based iteration. Adjustment for filtering was included as it could be expected that the electronic filter used in recording would have a major effect on obviously brief intermediate conductance level sojourns. Preliminary data analysis revealed that the brevity of intermediate level sojourns caused difficulties in assignment of data points to levels as a result of over-estimation of noise variances. When reasonable constraints were placed on these variances using the better determined noise variances for the closed and fully open levels, idealisation anomalies were eliminated. Nevertheless, simulations suggested that mean sojourn times for the intermediate levels were still considerably over-estimated, and that recording bandwidth was a major limitation; improved results were obtained with higher bandwidth data (10 kHz sampled at 25 kHz). The simplest model consistent with these data had four open conductance levels, intermediate levels being approximately 20%, 51% and 74% of fully open. The mean lifetime at the fully open level was about 1 ms; estimates for the three intermediate levels were 54-92 μs, probably still over-estimates.
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were reduced to 86.95% and 86.53% (object points) and to 34.99% and 32.30% (calculation time), respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal
Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang
2015-01-01
An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output; however, the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), RWE-based adaptive KF with gain correction (RWE-AKFG), and AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that this proposed algorithm outperforms all the considered methods in denoising FOG signals. PMID:26512665
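A minimal innovation-adaptive Kalman filter in the same spirit: the measurement-noise covariance is re-estimated from a sliding window of innovations. This is a generic simplification, not the paper's AMA-RWE double-factor scheme; the drift signal and tuning values are invented.

```python
import numpy as np

def adaptive_kf(z, q=1e-6, window=50):
    x, p, r = 0.0, 1.0, 1.0                  # state, state var., meas. var.
    innovations, out = [], []
    for zi in z:
        p = p + q                            # predict (random-constant model)
        nu = zi - x                          # innovation
        innovations.append(nu)
        if len(innovations) >= window:       # adapt R from windowed innovations
            r = max(np.var(innovations[-window:]) - p, 1e-9)
        k = p / (p + r)                      # Kalman gain
        x = x + k * nu                       # update
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(9)
drift = 0.02 + 0.002 * np.cumsum(rng.standard_normal(2000)) / 100
signal = drift + 0.05 * rng.standard_normal(2000)   # noisy FOG-like signal
print(adaptive_kf(signal)[-5:])                     # denoised drift estimate
```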
John R. Brooks
2007-01-01
A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
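The comparison can be illustrated numerically: given assumed signal and noise covariances, the mean squared errors of the composite (simple) average and of the minimum-MSE weighted estimate of a time average follow in closed form. The correlation scale, variances, and sampling times below are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
tau, sig2, noise2 = 5.0, 1.0, 0.5        # correlation scale and variances (assumed)
t_obs = np.sort(rng.uniform(0, 30, 12))  # irregular observation times
t_grid = np.linspace(0, 30, 301)         # dense grid defining the true time average

corr = lambda a, b: sig2 * np.exp(-((a[:, None] - b[None, :]) / tau) ** 2)
C = corr(t_obs, t_obs) + noise2 * np.eye(t_obs.size)  # observation covariance
c = corr(t_obs, t_grid).mean(axis=1)                  # Cov(observation, time average)
var_avg = corr(t_grid, t_grid).mean()                 # Var(time average)

w_comp = np.full(t_obs.size, 1.0 / t_obs.size)        # composite-average weights
w_opt = np.linalg.solve(C, c)                         # optimal (minimum-MSE) weights
mse = lambda w: var_avg - 2 * w @ c + w @ C @ w
print(f"composite MSE {mse(w_comp):.3f}  optimal MSE {mse(w_opt):.3f}")
```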
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained.
Both of these areas also had steep slopes in the potentiometric surface. Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
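The velocity calculation itself is a one-liner, assuming the standard relation: average linear velocity equals hydraulic conductivity times the potentiometric-surface gradient, divided by porosity. The gradient below is assumed; the conductivity and porosity values are picks from the mapped ranges above.

```python
# Worked example; specific numbers are illustrative.
K = 220.0        # hydraulic conductivity, ft/day (one of the mapped zones)
gradient = 0.01  # slope of the potentiometric surface, ft/ft (assumed)
porosity = 0.25  # one of the five mapped porosity values

v = K * gradient / porosity
print(f"average linear velocity = {v:.1f} ft/day")   # 8.8 ft/day
```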
Jung, Sungwoon; Kim, Jounghwa; Kim, Jeongsoo; Hong, Dahee; Park, Dongjoo
2017-04-01
The objective of this study is to estimate vehicle kilometers traveled (VKT) and on-road emissions in an urban area using traffic volumes. We estimated two VKT values: one based on registered vehicles and the other based on traffic volumes. The VKT for registered vehicles was 2.11 times greater than that from the applied traffic volumes because the two estimation methods differ; we therefore defined the inner VKT, the VKT actually traveled within the urban area, in order to compare the two values. We also focused on freight modes because they discharge large amounts of air pollutant emissions. The analysis showed that middle and large trucks registered in other regions traveled to the target city to carry freight, as the target city includes many industrial and logistics areas. Freight is transferred through harbors, large logistics centers, or intermediate locations before being moved to its final destination. During this process, most freight for import and export is moved by middle and large trucks and trailers rather than small trucks, so the inflow of these trucks from other areas exceeds the travel of registered vehicles. Most emissions from diesel trucks had been overestimated in comparison with the VKT from the applied traffic volumes in the target city. From these findings, VKT estimation based on traffic volumes and travel speeds on road links is essential for accurately estimating the emissions of diesel trucks in the target city. Our findings support the estimation of the effect of on-road emissions on urban air quality in Korea. Copyright © 2016. Published by Elsevier B.V.
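The traffic-volume-based VKT in such studies is essentially a sum of volume times length over road links; the toy numbers below are illustrative only, not the study's data.

```python
# Illustrative link-based VKT calculation: VKT = sum(volume x link length).
links = [
    # (daily traffic volume [veh/day], link length [km])
    (24000, 1.8),
    (15500, 3.2),
    (31000, 0.9),
]
vkt = sum(volume * length for volume, length in links)
print(f"daily VKT = {vkt:,.0f} veh-km/day")   # 120,700 for these numbers
```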
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection for synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained and stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the cross-range shift is estimated from the Doppler frequency center (DFC), which is in turn estimated using the Wigner-Ville Distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that our algorithms perform well; with them, moving target parameters can be estimated accurately.
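A rough sketch of the two-look cancellation idea, assuming a complex single-look image with azimuth along axis 0; here the two looks are formed by splitting the azimuth spectrum, and a simple global threshold stands in for the paper's CFAR detector:

    import numpy as np

    def two_look_moving_target_mask(slc, k=5.0):
        # slc: complex SAR image (azimuth x range), hypothetical input
        spec = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)
        half = spec.shape[0] // 2
        s1, s2 = spec.copy(), spec.copy()
        s1[half:, :] = 0.0  # look 1: first half of the aperture
        s2[:half, :] = 0.0  # look 2: second half of the aperture
        look1 = np.abs(np.fft.ifft(np.fft.ifftshift(s1, axes=0), axis=0))
        look2 = np.abs(np.fft.ifft(np.fft.ifftshift(s2, axes=0), axis=0))
        residual = np.abs(look1 - look2)  # stationary scene largely cancels
        return residual > k * np.median(residual)  # crude stand-in for CFAR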
Industrial Based Migration in India. A Case Study of Dumdum "Dunlop Industrial Zone"
NASA Astrophysics Data System (ADS)
Das, Biplab; Bandyopadhyay, Aditya; Sen, Jayashree
2012-10-01
Migration is a very important part of our present society. Millions of people moved during the industrial revolution. Some simply moved from a village to a town in the hope of finding work, whilst others moved from one country to another in search of a better way of life. The main reason for moving home during the 19th century was to find work, which chiefly involved migration from the countryside to the growing industrial cities. Migration was not just people moving out of the country; it also involved a lot of people moving into Britain. In the 1840s Ireland suffered a terrible famine. Faced with the massive cost of feeding the starving population, many local landowners paid for labourers to emigrate. There was a shift away from agriculturally based rural dwelling towards urban habitation to meet the mass demand for labour that new industry required. Great regional differences arose in population levels and in the structure of their demography. This was due to rates of migration, emigration, and the social changes that were drastically affecting factors such as marriage, birth and death rates. These social changes, taking place as a result of capitalism, had far-ranging effects, such as lowering the average age of marriage and increasing the size of the average family. There is no serious disagreement as to the extent of the population changes that occurred, but one key question that always arouses debate is whether an expanding population resulted in economic growth or vice versa, i.e., was industrialization a catalyst for population growth? A clear answer is difficult to decipher as the two variables are so closely and fundamentally interlinked, but it seems that both factors provided impetus for each other's take-off. If anything, population and economic growth were complementary to one another rather than simply being causative factors.
Parameter estimation of an ARMA model for river flow forecasting using goal programming
NASA Astrophysics Data System (ADS)
Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene
2006-11-01
River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the best known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. Shaloo Bridge station on the Karun River, with 68 years of observed stream flow data, was selected to evaluate the performance of the proposed method. When compared with the usual method of maximum likelihood estimation, the results were favorable to the proposed algorithm.
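The goal-programming formulation is not reproduced here; the sketch below only illustrates the underlying idea, weighting the forecast errors of a chosen season separately from those of the whole series, for a simple AR(1) model (function names and weights are hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    def fit_ar1_seasonal_goal(flow, season_mask, w_season=2.0, w_all=1.0):
        # flow: monthly flows; season_mask: True for months in the target season
        flow = np.asarray(flow, dtype=float)
        season_mask = np.asarray(season_mask, dtype=bool)

        def objective(params):
            c, phi = params
            err = np.abs(flow[1:] - (c + phi * flow[:-1]))  # 1-step errors
            return w_season * err[season_mask[1:]].mean() + w_all * err.mean()

        res = minimize(objective, x0=[flow.mean() * 0.5, 0.5],
                       method="Nelder-Mead")
        return res.x  # (c, phi)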
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.
Accounting for seasonal patterns in syndromic surveillance data for outbreak detection.
Burr, Tom; Graves, Todd; Klamann, Richard; Michalak, Sarah; Picard, Richard; Hengartner, Nicolas
2006-12-04
Syndromic surveillance (SS) can potentially contribute to outbreak detection capability by providing timely, novel data sources. One SS challenge is that some syndrome counts vary with season in a manner that is not identical from year to year. Our goal is to evaluate the impact of inconsistent seasonal effects on performance assessments (false and true positive rates) in the context of detecting anomalous counts in data that exhibit seasonal variation. To evaluate the impact of inconsistent seasonal effects, we injected synthetic outbreaks into real data and into data simulated from each of two models fit to the same real data. Using real respiratory syndrome counts collected in an emergency department from 2/1/94-5/31/03, we varied the length of training data from one to eight years, applied a sequential test to the forecast errors arising from each of eight forecasting methods, and evaluated their detection probabilities (DP) on the basis of 1000 injected synthetic outbreaks. We did the same for each of two corresponding simulated data sets. The less realistic, nonhierarchical model's simulated data set assumed that "one season fits all," meaning that each year's seasonal peak has the same onset, duration, and magnitude. The more realistic simulated data set used a hierarchical model to capture violation of the "one season fits all" assumption. This experiment demonstrated optimistic bias in DP estimates for some of the methods when data simulated from the nonhierarchical model were used for DP estimation, thus suggesting that at least for some real data sets and methods, it is not adequate to assume that "one season fits all." For the data we analyze, the "one season fits all" assumption is violated, and DP performance claims based on simulated data that assume "one season fits all" tend, for the forecast methods considered except moving average methods, to be optimistic. Moving average methods based on relatively short amounts of training data are competitive on all three data sets, but are particularly competitive on the real data and on data from the hierarchical model, which are the two data sets that violate the "one season fits all" assumption.
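As an illustration of the general scheme of forecasting counts and testing the forecast errors, here is a minimal moving-average forecaster with a crude threshold test standing in for the study's sequential tests (window and threshold are hypothetical):

    import numpy as np

    def moving_average_alarms(counts, window=28, z=4.0):
        # Forecast each day's count as the mean of the preceding `window` days
        # and flag days whose standardized forecast error exceeds `z`.
        counts = np.asarray(counts, dtype=float)
        alarms = []
        for t in range(window, len(counts)):
            hist = counts[t - window:t]
            sd = hist.std(ddof=1)
            if sd > 0 and (counts[t] - hist.mean()) / sd > z:
                alarms.append(t)
        return alarms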
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
25 CFR 700.173 - Average net earnings of business or farm.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false Average net earnings of business or farm. 700.173 Section... PROCEDURES Moving and Related Expenses, Temporary Emergency Moves § 700.173 Average net earnings of business or farm. (a) Computing net earnings. For purposes of this subpart, the average annual net earnings of...
Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin
Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.
2007-01-01
The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records and unregulated flow and that provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration, and base-flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage, and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins.
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through linear mixed effects modeling that accounts for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. Further work is required to validate this approach.
Environmental Assessment: Installation Development at Sheppard Air Force Base, Texas
2007-05-01
... column, or in topographic depressions. Water is then utilized by plants and is respired, or it moves slowly into groundwater and/or eventually to surface-water bodies where it slowly moves through the hydrologic cycle. Removal of vegetation decreases infiltration into the soil column and thereby...
NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a characteristic vibration frequency, the control parameters of the moving average method should be optimized to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light-receiving part with a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: total number of raw traces, M; number of averaged traces, N; and step size of moving, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
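A minimal sketch of the moving average operation with the control parameters M (raw traces), N (traces per average), and n (step size); the final differencing step, which exposes changes between averaged traces, is an assumption about how events are made visible:

    import numpy as np

    def moving_average_traces(raw, N, n):
        # raw: (M, L) array of M raw OTDR traces, each of length L
        M, L = raw.shape
        starts = range(0, M - N + 1, n)
        avg = np.array([raw[s:s + N].mean(axis=0) for s in starts])
        return np.abs(np.diff(avg, axis=0))  # event signal along the fiber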
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
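A minimal sketch of casting the Lomb-Scargle periodogram into WOSA using scipy; the polynomial-trend handling and the CARMA-based significance testing of the paper are omitted:

    import numpy as np
    from scipy.signal import lombscargle

    def wosa_lombscargle(t, y, ang_freqs, n_seg=4, overlap=0.5):
        # t, y: irregularly sampled series; ang_freqs: angular frequencies
        seg_len = (t[-1] - t[0]) / (1 + (n_seg - 1) * (1 - overlap))
        step = seg_len * (1 - overlap)
        pgrams = []
        for k in range(n_seg):
            lo = t[0] + k * step
            m = (t >= lo) & (t < lo + seg_len)
            if m.sum() > 3:
                pgrams.append(lombscargle(t[m], y[m] - y[m].mean(), ang_freqs))
        return np.mean(pgrams, axis=0)  # variance-reduced periodogram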
Hydrology of Eagle Creek Basin and effects of groundwater pumping on streamflow, 1969-2009
Matherne, Anne Marie; Myers, Nathan C.; McCoy, Kurt J.
2010-01-01
Urban and resort development and drought conditions have placed increasing demands on the surface-water and groundwater resources of the Eagle Creek Basin, in south-central New Mexico. The Village of Ruidoso, New Mexico, obtains 60-70 percent of its water from the Eagle Creek Basin. The village drilled four production wells on Forest Service land along North Fork Eagle Creek; three of the four wells were put into service in 1988 and remain in use. Local citizens have raised questions as to the effects of North Fork well pumping on flow in Eagle Creek. In response to these concerns, the U.S. Geological Survey, in cooperation with the Village of Ruidoso, conducted a hydrologic investigation from 2007 through 2009 of the potential effect of the North Fork well field on streamflow in North Fork Eagle Creek. Mean annual precipitation for the period of record (1942-2008) at the Ruidoso climate station is 22.21 inches per year with a range from 12.27 inches in 1970 to 34.81 inches in 1965. Base-flow analysis indicates that the 1970-80 mean annual discharge, direct runoff, and base flow were 2,260, 1,440, and 819 acre-ft/yr, respectively, and for 1989-2008 were 1,290, 871, and 417 acre-ft/yr, respectively. These results indicate that mean annual discharge, direct runoff, and base flow were less during the 1989-2008 period than during the 1970-80 period. Mean annual precipitation volume for the study area was estimated to be 12,200 acre-feet. Estimated annual evapotranspiration for the study area ranged from 8,730 to 8,890 acre-feet. Estimated annual basin yield for the study area was 3,390 acre-ft, or about 28 percent of precipitation. On the basis of basin-yield computations, annual recharge was estimated to be 1,950 acre-ft, about 16 percent of precipitation. Using a chloride mass-balance method, groundwater recharge over the study area was estimated to average 490 acre-ft, about 4.0 percent of precipitation. Because the North Fork wells began pumping in 1988, 1969-80 represents the pre-groundwater-pumping period, and 1988-2009 represents the groundwater-pumping period. The 5-year moving average for precipitation at the Ruidoso climate station shows years of below-average precipitation during both time periods, but no days of zero flow were recorded for the 11-year period 1970-80, whereas no-flow days were recorded in 11 of 20 years for the 1988-2009 period.
Monitoring the Migrations of Wild Snake River Spring/Summer Chinook Salmon Juveniles, 2007-2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achord, Stephen; Sandford, Benjamin P.; Hockersmith, Eric E.
2009-07-09
This report provides results from an ongoing project to monitor the migration behavior and survival of wild juvenile spring/summer Chinook salmon in the Snake River Basin. Data reported are from detections of PIT-tagged fish during late summer 2007 through mid-2008. Fish were tagged in summer 2007 by the National Marine Fisheries Service (NMFS) in Idaho and by the Oregon Department of Fish and Wildlife (ODFW) in Oregon. Our analyses include migration behavior and estimated survival of fish at instream PIT-tag monitors and arrival timing and estimated survival to Lower Granite Dam. Principal results from tagging and interrogation during 2007-2008 are: (1) In July and August 2007, we PIT tagged and released 7,390 wild Chinook salmon parr in 12 Idaho streams or sample areas. (2) Overall observed mortality from collection, handling, tagging, and after a 24-hour holding period was 1.4%. (3) Of the 2,524 Chinook salmon parr PIT tagged and released in Valley Creek in summer 2007, 218 (8.6%) were detected at two instream PIT-tag monitoring systems in lower Valley Creek from late summer 2007 to the following spring 2008. Of these, 71.6% were detected in late summer/fall, 11.9% in winter, and 16.5% in spring. Estimated parr-to-smolt survival to Lower Granite Dam was 15.5% for the late summer/fall group, 48.0% for the winter group, and 58.5% for the spring group. Based on detections at downstream dams, the overall efficiency of the VC1 (upper) or VC2 (lower) Valley Creek monitors for detecting these fish was 21.1%. Using this VC1 or VC2 efficiency, an estimated 40.8% of all summer-tagged parr survived to move out of Valley Creek, and their estimated survival from that point to Lower Granite Dam was 26.5%. Overall estimated parr-to-smolt survival at the dam for all summer-tagged parr from this stream was 12.1%. Development and improvement of instream PIT-tag monitoring systems continued throughout 2007 and 2008. (4) Testing of PIT-tag antennas in lower Big Creek during 2007-2008 showed these antennas (and anchoring method) are not adequate to withstand high spring flows in this drainage. Future plans involve removing these antennas before high spring flows. (5) At Little Goose Dam in 2008, length and/or weight were taken on 505 recaptured fish from 12 Idaho stream populations. Fish had grown an average of 40.1 mm in length and 10.6 g in weight over an average of 288 d. Their mean condition factor declined from 1.25 at release (parr) to 1.05 at recapture (smolt). (6) Mean release lengths for detected fish were significantly larger than for fish not detected the following spring and summer (P < 0.0001). (7) Fish that migrated through Lower Granite Dam in April and May were significantly larger at release than fish that migrated after May (P < 0.0001) (only 12 fish migrated after May). (8) In 2008, peak detections at Lower Granite Dam of parr tagged during summer 2007 (from the 12 stream populations in Idaho and 4 streams in Oregon) occurred during moderate flows of 87.5 kcfs on 7 May and high flows of 197.3 kcfs on 20 May. The 10th, 50th, and 90th percentile passage occurred on 30 April, 11 May, and 23 May, respectively. (9) In 2007-2008, estimated parr-to-smolt survival to Lower Granite Dam for Idaho and Oregon streams (combined) averaged 19.4% (range 6.2-38.4% depending on stream of origin). In Idaho streams the estimated parr-to-smolt survival averaged 21.0%. This survival was the second highest since 1993 for Idaho streams.
Relative parr densities were lower in 2007 (2.4 parr/100 m2) than in all previous years since 2000. In 2008, we observed low-to-moderate flows prior to mid-May and relatively cold weather conditions throughout the spring migration season. These conditions moved half of the fish through Lower Granite Dam prior to mid-May; then high flows moved 50 to 90% of the fish through the dam in only 12 days. Clearly, complex interrelationships of several factors drive the annual migrational timing of the stocks.
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
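A minimal sketch of the ASA selection-and-averaging step, assuming the spatially varying posterior parameter values and their ensemble spreads have already been produced by the assimilation; the spread cutoff is hypothetical:

    import numpy as np

    def adaptive_spatial_average(param_post, spread, keep_quantile=0.3):
        # param_post: posterior parameter estimate at each grid point
        # spread: ensemble spread at the same points ("good" = small spread)
        cutoff = np.quantile(spread, keep_quantile)
        return param_post[spread <= cutoff].mean()  # global uniform value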
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
Video-Assisted Thoracic Surgical Lobectomy for Lung Cancer: Description of a Learning Curve.
Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Cao, Shiqi; Cao, Yongke
2017-07-01
Video-assisted thoracic surgical (VATS) lobectomy is gaining popularity in the treatment of lung cancer. The aim of this study is to investigate the learning curve of VATS lobectomy by using multidimensional methods and to compare the learning curve groups with respect to perioperative clinical outcomes. We retrospectively reviewed a prospective database to identify 67 consecutive patients who underwent VATS lobectomy for lung cancer by a single surgeon. The learning curve was analyzed by using moving average and the cumulative sum (CUSUM) method. With the moving average and CUSUM analyses for the operation time, patients were stratified into two groups, with chronological order defining early and late experiences. Perioperative clinical outcomes were compared between the two learning curve groups. According to the moving average method, the peak point for operation time occurred at the 26th case. The CUSUM method also showed the operation time peak point at the 26th case. When results were compared between early- and late-experience periods, the operation time, duration of chest drainage, and postoperative hospital stay were significantly longer in the early-experience group (cases 1 to 26). The intraoperative estimated blood loss was significantly less in the late-experience group (cases 27 to 67). CUSUM charts showed a decreasing duration of chest drainage after the 36th case and shortening postoperative hospital stay after the 37th case. Multidimensional statistical analyses suggested that the learning curve for VATS lobectomy for lung cancer required ∼26 cases. Favorable intraoperative and postoperative care parameters for VATS lobectomy were observed in the late-experience group.
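The CUSUM analysis used for learning curves has a standard form; a minimal sketch:

    import numpy as np

    def cusum_learning_curve(op_times):
        # S_i = sum_{j<=i} (x_j - mean(x)): rises while cases run slower
        # than average, falls once they run faster; the peak marks the
        # end of the learning phase.
        x = np.asarray(op_times, dtype=float)
        s = np.cumsum(x - x.mean())
        return s, int(np.argmax(s)) + 1  # curve and peak case number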
NASA Astrophysics Data System (ADS)
Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan
2018-04-01
In recent years the Convolutional Neural Network (CNN) has been widely used in the computer vision field and has driven great progress in tasks such as object detection and classification. Beyond that, combining CNNs, i.e., making multiple CNN frameworks work synchronously and share their output information, can yield useful information that none of them can provide singly. Here we introduce a method to estimate the speed of objects in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, object location, and object type, while FlowNet provides the optical flow of the whole image. On one hand, object size and object location help select the object's portion of the optical-flow image, so that the average optical flow of every object can be calculated. On the other hand, object type and object size help establish the relationship between optical flow and true speed by means of optics theory and prior knowledge. With these two pieces of information, the speed of an object can be estimated. This method manages to estimate the speed of multiple objects in real time using only a normal camera, even when the camera is moving, with an error that is acceptable in most application fields such as driverless driving or robot vision.
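A minimal sketch of the fusion step, assuming a FlowNet-style dense flow field and a YOLO-style bounding box, and using an assumed physical width for the detected class as the pixel-to-metre scale (a simplification of the paper's optics-based reasoning):

    import numpy as np

    def object_speed(flow, bbox, real_width_m, fps):
        # flow: (H, W, 2) optical flow in px/frame; bbox: (x, y, w, h) in px
        x, y, w, h = bbox
        mean_flow = flow[y:y + h, x:x + w].reshape(-1, 2).mean(axis=0)
        px_per_frame = np.hypot(mean_flow[0], mean_flow[1])
        metres_per_px = real_width_m / w  # scale from assumed object width
        return px_per_frame * metres_per_px * fps  # speed in m/s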
Vitamin D Requirements for the Future-Lessons Learned and Charting a Path Forward.
Cashman, Kevin D
2018-04-25
Estimates of dietary requirements for vitamin D or Dietary Reference Values (DRV) are crucial from a public health perspective in providing a framework for prevention of vitamin D deficiency and optimizing vitamin D status of individuals. While these important public health policy instruments were developed with the evidence-base and data available at the time, there are some issues that need to be clarified or considered in future iterations of DRV for vitamin D. This is important as it will allow for more fine-tuned and truer estimates of the dietary requirements for vitamin D and thus provide for more population protection. The present review will overview some of the confusion that has arisen in relation to the application and/or interpretation of the definitions of the Estimated Average Requirement (EAR) and Recommended Dietary Allowance (RDA). It will also highlight some of the clarifications needed and, in particular, how utilization of a new approach in terms of using individual participant-level data (IPD), over and beyond aggregated data, from randomised controlled trials with vitamin D may have a key role in generating these more fine-tuned and truer estimates, which is of importance as we move towards the next iteration of vitamin D DRVs.
Direct determination approach for the multifractal detrending moving average analysis
NASA Astrophysics Data System (ADS)
Xu, Hai-Chuan; Gu, Gao-Feng; Zhou, Wei-Xing
2017-11-01
In the canonical framework, we propose an alternative approach for the multifractal analysis based on the detrending moving average method (MF-DMA). We define a canonical measure such that the multifractal mass exponent τ(q) is related to the partition function and the multifractal spectrum f(α) can be directly determined. The performances of the direct determination approach and the traditional approach of the MF-DMA are compared based on three synthetic multifractal and monofractal measures generated from the one-dimensional p-model, the two-dimensional p-model, and the fractional Brownian motions. We find that both approaches have comparable performances to unveil the fractal and multifractal nature. In other words, without loss of accuracy, the multifractal spectrum f(α) can be directly determined using the new approach with less computation cost. We also apply the new MF-DMA approach to the volatility time series of stock prices and confirm the presence of multifractality.
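For orientation, a minimal sketch of the traditional (backward, θ = 0) MF-DMA fluctuation analysis against which the direct approach is compared; the canonical-measure computation of the paper is not shown:

    import numpy as np

    def mf_dma_hurst(x, scales, qs):
        # Returns h(q); the mass exponent follows as tau(q) = q * h(q) - 1.
        y = np.cumsum(x - np.mean(x))
        hq = []
        for q in qs:
            logF = []
            for s in scales:
                ma = np.convolve(y, np.ones(s) / s, mode="valid")
                eps = y[s - 1:] - ma  # residual after moving-average detrending
                nseg = len(eps) // s
                f2 = np.array([np.mean(eps[v * s:(v + 1) * s] ** 2)
                               for v in range(nseg)])
                logF.append(0.5 * np.mean(np.log(f2)) if q == 0
                            else np.log(np.mean(f2 ** (q / 2))) / q)
            hq.append(np.polyfit(np.log(scales), logF, 1)[0])
        return np.array(hq)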
Driving-forces model on individual behavior in scenarios considering moving threat agents
NASA Astrophysics Data System (ADS)
Li, Shuying; Zhuang, Jun; Shen, Shifei; Wang, Jia
2017-09-01
The individual behavior model is a contributory factor to improve the accuracy of agent-based simulation in different scenarios. However, few studies have considered moving threat agents, which often occur in terrorist attacks caused by attackers with close-range weapons (e.g., sword, stick). At the same time, many existing behavior models lack validation from cases or experiments. This paper builds a new individual behavior model based on seven behavioral hypotheses. The driving-forces model is an extension of the classical social force model considering scenarios including moving threat agents. An experiment was conducted to validate the key components of the model. Then the model is compared with an advanced Elliptical Specification II social force model, by calculating the fitting errors between the simulated and experimental trajectories, and being applied to simulate a specific circumstance. Our results show that the driving-forces model reduced the fitting error by an average of 33.9% and the standard deviation by an average of 44.5%, which indicates the accuracy and stability of the model in the studied situation. The new driving-forces model could be used to simulate individual behavior when analyzing the risk of specific scenarios using agent-based simulation methods, such as risk analysis of close-range terrorist attacks in public places.
Statistical estimation via convex optimization for trending and performance monitoring
NASA Astrophysics Data System (ADS)
Samar, Sikandar
This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. Moving horizon approach is used to estimate time varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. An excellent performance is demonstrated in the presence of winds and turbulence.
Relations between Precipitation and Shallow Groundwater in Illinois.
NASA Astrophysics Data System (ADS)
Changnon, Stanley A.; Huff, Floyd A.; Hsu, Chin-Fei
1988-12-01
The statistical relationships between monthly precipitation (P) and shallow groundwater levels (GW) in 20 wells scattered across Illinois with data for 1960-84 were defined using autoregressive integrated moving average (ARIMA) modeling. A lag of 1 month between P and GW was the strongest temporal relationship found across Illinois, followed by no (0) lag in the northern two-thirds of Illinois where mollisols predominate, and a lag of 2 months in the alfisols of southern Illinois. Spatial comparison of the 20 P-GW correlations with several physical conditions (aquifer types, soils, and physiography) revealed that the parent soil materials of outwash alluvium, glacial till, thick loess (>2.1 m), and thin loess (<2.1 m) best defined regional relationships for drought assessment. Equations developed from ARIMA using 1960-79 data for each region were used to estimate GW levels during the 1980-81 drought, and estimates averaged within 25 to 45 cm of actual levels. These estimates are considered adequate to allow a useful assessment of drought onset, severity, and termination in other parts of the state. The techniques and equations should be transferable to regions of comparable soils and climate.
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
Adult survival of Black-legged Kittiwakes Rissa tridactyla in a Pacific colony
Hatch, Scott A.; Roberts, Bay D.; Fadely, Brian S.
1993-01-01
Breeding Black-legged Kittiwakes Rissa tridactyla survived at a mean annual rate of 0.926 in four years at a colony in Alaska. Survival rates observed in sexed males (0.930) and females (0.937) did not differ significantly. The rate of return among nonbreeding Kittiwakes (0.839) was lower than that of known breeders, presumably because more nonbreeders moved away from the study plots where they were marked. Individual nonbreeders frequented sites up to 5 km apart on the same island, while a few established breeders moved up to 2.5 km between years. Mate retention in breeding Kittiwakes averaged 69% in three years. Among pairs that split, the cause of changing mates was about equally divided between death (46%) and divorce (54%). Average adult life expectancy was estimated at 13.0 years. Combined with annual productivity averaging 0.17 chick per nest, the observed survival was insufficient for maintaining population size. Rather, an irregular decline observed in the study colony since 1981 is consistent with the model of a closed population with little or no recruitment. Compared to their Atlantic counterparts, Pacific Kittiwakes have low productivity and high survival. The question arises whether differences reflect phenotypic plasticity or genetically determined variation in population parameters.
Short-term forecasts gain in accuracy [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compares with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to produce forecasts of load demand as short as hourly. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
Peak Running Intensity of International Rugby: Implications for Training Prescription.
Delaney, Jace A; Thornton, Heidi R; Pryor, John F; Stewart, Andrew M; Dascombe, Ben J; Duthie, Grant M
2017-09-01
To quantify the duration and position-specific peak running intensities of international rugby union for the prescription and monitoring of specific training methodologies. Global positioning systems (GPS) were used to assess the activity profile of 67 elite-level rugby union players from 2 nations across 33 international matches. A moving-average approach was used to identify the peak relative distance (m/min), average acceleration/deceleration (AveAcc; m/s²), and average metabolic power (Pmet) for a range of durations (1-10 min). Differences between positions and durations were described using a magnitude-based network. Peak running intensity increased as the length of the moving average decreased. There were likely small to moderate increases in relative distance and AveAcc for outside backs, halfbacks, and loose forwards compared with the tight 5 group across all moving-average durations (effect size [ES] = 0.27-1.00). Pmet demands were at least likely greater for outside backs and halfbacks than for the tight 5 (ES = 0.86-0.99). Halfbacks demonstrated the greatest relative distance and Pmet outputs but were similar to outside backs and loose forwards in AveAcc demands. The current study has presented a framework to describe the peak running intensities achieved during international rugby competition by position, which are considerably higher than previously reported whole-period averages. These data provide further knowledge of the peak activity profiles of international rugby competition, and this information can be used to assist coaches and practitioners in adequately preparing athletes for the most demanding periods of play.
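A minimal sketch of the moving-average approach for peak relative distance, using pandas; the GPS sampling rate and window lengths are hypothetical:

    import pandas as pd

    def peak_relative_distance(speed_mps, hz=10, windows_min=(1, 2, 5, 10)):
        # speed_mps: per-sample speed from GPS at `hz` samples per second.
        # For each duration, the peak intensity is the maximum of the
        # rolling mean, converted from m/s to m/min.
        s = pd.Series(speed_mps)
        return {w: s.rolling(window=w * 60 * hz).mean().max() * 60
                for w in windows_min}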
Zhu, Yu; Xia, Jie-lai; Wang, Jing
2009-09-01
To apply the single autoregressive integrated moving average (ARIMA) model and the ARIMA-generalized regression neural network (GRNN) combination model to research on the incidence of scarlet fever. An ARIMA model was established based on the monthly incidence of scarlet fever in one city from 2000 to 2006. The fitted values of the ARIMA model were used as input to the GRNN, and the actual values were used as output. After training the GRNN, the performance of the single ARIMA model and the ARIMA-GRNN combination model was compared. The mean error rates (MER) of the single ARIMA model and the ARIMA-GRNN combination model were 31.6% and 28.7%, respectively, and the determination coefficients (R2) of the two models were 0.801 and 0.872, respectively. The fitting efficacy of the ARIMA-GRNN combination model was better than that of the single ARIMA model, which has practical value in research on time-series data such as the incidence of scarlet fever.
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
1975-08-28
favorable to the model. Parameter estimates from this fitting process, carried out in the nature of a "moving-average" throughout the entire series of...
Measuring multiple spike train synchrony.
Kreuz, Thomas; Chicharro, Daniel; Andrzejak, Ralph G; Haas, Julie S; Abarbanel, Henry D I
2009-10-15
Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided into multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals (ISIs). In this study we propose two extensions of the ISI-distance, the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure these extensions combine many properties desirable in applications to real data. In particular, they are parameter-free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled configuration we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods.
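A minimal sketch of the bivariate ISI-distance in time-resolved form, written here via the min/max ratio of the two instantaneous ISIs (equivalent to the usual normalized difference):

    import numpy as np

    def isi_distance(spikes1, spikes2, t_grid):
        # spikes1, spikes2: sorted spike times; t_grid: evaluation times
        def current_isi(spikes, t):
            i = np.searchsorted(spikes, t)
            if i == 0 or i == len(spikes):
                return np.nan  # outside the recorded train
            return spikes[i] - spikes[i - 1]
        ratios = []
        for t in t_grid:
            a, b = current_isi(spikes1, t), current_isi(spikes2, t)
            if not (np.isnan(a) or np.isnan(b)):
                ratios.append(min(a, b) / max(a, b))  # in (0, 1]
        return 1.0 - np.mean(ratios)  # 0 for identical trains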
Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders
2017-09-01
Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where a low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as the gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based, and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFR(cystatin C) and an eGFR(creatinine) estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.
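The averaging itself is trivial; a minimal sketch, including a simple Shrunken Pore Syndrome screen (the 60% cutoff is one published choice, assumed here rather than taken from this paper):

    def egfr_average(egfr_cys, egfr_crea, sps_ratio=0.60):
        # egfr_cys, egfr_crea: cystatin C- and creatinine-based estimates
        # (mL/min/1.73 m2); SPS is suspected when eGFR(cystatin C) is much
        # lower than eGFR(creatinine).
        mean_egfr = (egfr_cys + egfr_crea) / 2.0
        suspect_sps = egfr_cys < sps_ratio * egfr_crea
        return mean_egfr, suspect_sps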
Hernandez, Ivan; Preston, Jesse Lee; Hepler, Justin
2014-01-01
Research on the timescale bias has found that observers perceive more capacity for mind in targets moving at an average speed, relative to slow or fast moving targets. The present research revisited the timescale bias as a type of halo effect, where normal-speed people elicit positive evaluations and abnormal-speed (slow and fast) people elicit negative evaluations. In two studies, participants viewed videos of people walking at a slow, average, or fast speed. We find evidence for a timescale halo effect: people walking at an average-speed were attributed more positive mental traits, but fewer negative mental traits, relative to slow or fast moving people. These effects held across both cognitive and emotional dimensions of mind and were mediated by overall positive/negative ratings of the person. These results suggest that, rather than eliciting greater perceptions of general mind, the timescale bias may reflect a generalized positivity toward average speed people relative to slow or fast moving people. PMID:24421882
Huang, Lei
2015-01-01
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to obtain the estimated mean and variance of the observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied to gyro random noise modeling applications in which a fast and accurate ARMA modeling method is required. PMID:26437409
NASA Astrophysics Data System (ADS)
Liang, Zhang; Yanqing, Hou; Jie, Wu
2016-12-01
The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. The line bias (LB) parameter (isolating the fractional bias) must therefore be calibrated in the single-differenced phase equations. In past decades, researchers estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of the physical length and permittivity changes of the cables, caused by environmental temperature variation and the instability of the receiver's inner circuit transmitting delay. Considering the LB drift (or colored LB) in practical circumstances, this paper initiates a real-time estimator using an autoregressive moving average (ARMA)-based prediction/whitening filter model or a moving average (MA)-based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2), and ARMA(2, 1), are applied for LB prediction. The real-time relative positioning model using the ARMA-predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-differenced carrier phase (DDCP) model. The drifting LB is defined by an integral of the phase temperature changing rate, which is a random walk process if the phase temperature changing rate is white noise, as validated by the analysis of the AR model coefficient. The autocovariance function shows that the LB indeed varies in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during a zero- and short-baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40% and 50% improvement in the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy, while both the MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models performed relatively better, improving the elevation-angle accuracy by 55% for the ARMA(1, 1) model and 48% for the MA model for GPS. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in units of length are almost identical on carrier phases of different frequencies, so the LB variation does not show obvious correlation between frequencies. Consequently, the wide-lane LB in units of cycles is very stable, while the narrow-lane LB varies considerably in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that modeling the drifting LB can further improve precise point positioning accuracy.
NASA Astrophysics Data System (ADS)
Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela
2015-10-01
An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water-level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement-availability scenarios and rain events. All simulations have been carried out using a detailed physically based model of a real case-study network as virtual reality.
NASA Astrophysics Data System (ADS)
Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You
2017-02-01
Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
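A minimal sketch of a running Gaussian average background model; the paper's fuzzy weighting is replaced by a fixed learning rate, and the particle-filter track-before-detect stage is not shown:

    import numpy as np

    def update_background(frame, mu, var, alpha=0.01, k=2.5):
        # mu, var: per-pixel background mean and variance (float arrays)
        frame = frame.astype(float)
        fg = np.abs(frame - mu) > k * np.sqrt(var)  # foreground mask
        mu = np.where(fg, mu, alpha * frame + (1 - alpha) * mu)
        var = np.where(fg, var, alpha * (frame - mu) ** 2 + (1 - alpha) * var)
        return fg, mu, var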
Little, Mark P; Tatalovich, Zaria; Linet, Martha S; Fang, Michelle; Kendall, Gerald M; Kimlin, Michael G
2018-06-13
Solar ultraviolet radiation is the primary risk factor for skin cancers and sun-related eye disorders. Estimates of individual ambient ultraviolet irradiance derived from ground-based solar measurements and from satellite measurements have rarely been compared. Using self-reported residential history from 67,189 persons in a nationwide US occupational cohort of radiologic technologists, we estimated ambient solar irradiance using data from ground-based meters and noontime satellite measurements. The mean distance moved from the city of longest childhood residence increased from 137.6 km at ages 13-19 to 870.3 km at ages ≥65, with corresponding increases in the absolute latitude difference moved. At ages 20/40/60/80, the Pearson/Spearman correlation coefficients of ground-based and satellite-derived potential solar ultraviolet exposure, using irradiance and cumulative radiant-exposure metrics, were high (0.87-0.92). There was also moderate correlation (Pearson/Spearman correlation coefficients = 0.51-0.60) between irradiance at birth and at the last-known address, for both ground-based and satellite data. Satellite-based lifetime estimates of ultraviolet radiation were generally 14-15% lower than ground-based estimates, albeit with substantial uncertainties, possibly because ground-based estimates incorporate fluctuations in cloud and ozone, which are incompletely captured by the single noontime satellite-overpass ultraviolet value. If confirmed elsewhere, the findings suggest that ground-based estimates may improve exposure-assessment accuracy and potentially provide new insights into ultraviolet-radiation-disease relationships in epidemiologic studies.
Feinberg, M; Soler, L; Contenot, S; Verger, P
2011-04-01
According to the European Food Safety Authority (EFSA) guidance on uncertainties in dietary exposure assessment, exposure assessments based on short-term food-consumption surveys, such as 24-h recalls or 2-day records, tend to overestimate long-term exposure because of the assumption that the dietary pattern will be repeated day after day over a lifetime. The aim of this study was to assess dietary exposure to polychlorinated dibenzodioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs), also called 'dioxins', and 'dioxin-like PCBs' (dl-PCBs), using long-term household purchase and consumption survey data collected by TNS-Secodip. Weekly purchases of the major food vectors of these contaminants were collected for 328 households that participated in the TNS-Secodip consumption surveys from 2003 to 2005; only single-person households were retained so that individual consumption could be estimated more reliably. These data were combined with average contamination levels of food products. Weekly gross average exposure was estimated at 10.2 pg toxic equivalent (WHO TEQ) kg(-1) bw week(-1) (95% confidence interval [9.6, 10.9]). Given the typical shape of the distribution of individual weekly exposures, it is sensible to fit an exponential law to these data; the fitted mean was 12.1 pg WHO TEQ kg(-1) bw week(-1), higher than the arithmetic mean because it better accounts for inter-individual variability. It was estimated that about 20% of persons in this sample exceeded the current health-based guidance value, mainly due to high consumption of seafood and/or dairy products. Thanks to the long survey duration (3 years) and the weekly recording of food consumption, it was possible to demonstrate the actual seasonality of dietary exposure to dioxins and dl-PCBs, with a maximum between March and September; similar seasonality is observable for fish consumption. Autoregressive integrated moving average (ARIMA) models were fitted to the time series, and it was shown that the upper limit of the confidence interval exceeds the provisional tolerable weekly intake (PTWI) about 15 weeks per year on average. Finally, compared with the results obtained from data collected in short-term (1-week) surveys, this study does not suggest that short-term consumption surveys overestimate long-term exposure.
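As a hedged illustration of the ARIMA step, the sketch below fits a seasonal ARIMA to a synthetic weekly exposure series with statsmodels; the model orders and data are placeholders, not those of the study.

```python
# Fit a seasonal ARIMA to a synthetic 3-year weekly exposure series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
weeks = np.arange(156)                                  # 3 years of weekly data
exposure = (10.2 + 2.0 * np.sin(2 * np.pi * (weeks - 10) / 52)  # seasonal peak
            + rng.standard_normal(156))                 # pg WHO-TEQ/kg bw/week

model = ARIMA(exposure, order=(1, 0, 1),                # ARMA(1,1) with mean
              seasonal_order=(1, 0, 0, 52))             # weak annual AR term
res = model.fit()
print("next 12 weeks:", np.round(res.forecast(steps=12), 1))
```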
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; Wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we analyze preamble-based joint estimation of the channel and the laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on estimation accuracy, we propose an estimation method based on inter-frame averaging, which averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The LFO is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging the channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of the method is demonstrated numerically for different fiber lengths and LFO values. The results show that the proposed method can improve transmission performance significantly.
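The inter-frame averaging idea can be illustrated generically: two identical pilot blocks separated by D samples acquire a phase rotation of 2*pi*f_off*D/fs under a frequency offset, so averaging their cross-correlation over frames and taking the angle yields the offset (unambiguously for |f_off| < fs/(2D)). The sketch below is a generic single-carrier stand-in, not the paper's FBMC/OQAM preamble design.

```python
# Frequency-offset estimation from the phase of an averaged pilot correlation.
import numpy as np

rng = np.random.default_rng(3)
fs, D = 1e9, 256                      # sample rate and pilot spacing (assumed)
f_off = 1.2e6                         # true offset, within +/- fs/(2D)
pilot = np.exp(1j * 2 * np.pi * rng.random(64))   # unit-modulus pilot block

acc = 0.0 + 0.0j
for frame in range(50):               # average correlation over many frames
    n0 = frame * 1000
    n1, n2 = np.arange(64) + n0, np.arange(64) + n0 + D
    rot1 = np.exp(1j * 2 * np.pi * f_off * n1 / fs)
    rot2 = np.exp(1j * 2 * np.pi * f_off * n2 / fs)
    noise = lambda: 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    rx1, rx2 = pilot * rot1 + noise(), pilot * rot2 + noise()
    acc += np.vdot(rx1, rx2)          # sum conj(rx1)*rx2: phase = 2*pi*f_off*D/fs

f_est = np.angle(acc) * fs / (2 * np.pi * D)
print(f"true {f_off/1e6:.2f} MHz, estimated {f_est/1e6:.2f} MHz")
```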
Zhai, Zhiqiang; Song, Guohua; Lu, Hongyu; He, Weinan; Yu, Lei
2017-09-01
Vehicle-specific power (VSP) has been found to be highly correlated with vehicle emissions and is used in many emission-modeling studies, such as the MOVES (Motor Vehicle Emissions Simulator) model. Existing studies develop specific VSP distributions (or OpMode distributions in MOVES) for different road types and various average speeds to represent on-road vehicle operating modes. However, it is still not clear whether the facility- and speed-specific VSP distributions are consistent temporally and spatially. For instance, is it necessary to update the database of VSP distributions in the emission model periodically? Are VSP distributions developed in the city central business district (CBD) area applicable to its suburbs? In this context, this study examined the temporal and spatial consistency of the facility- and speed-specific VSP distributions in Beijing. VSP distributions for different years and different areas were developed based on real-world vehicle activity data, and the root mean square error (RMSE) was employed to quantify the difference between distributions. The maximum differences of the VSP distributions between years and between areas are approximately 20% of those between road types. Analysis of the carbon dioxide (CO2) emission factor indicates that the temporal and spatial differences of the VSP distributions have no significant impact on vehicle emission estimation, with a relative error of less than 3%. Thus, the database of specific VSP distributions in VSP-based emission models can be maintained over time: it is unnecessary to update it regularly, and it is reliable to use historical vehicle activity data to forecast future emissions. Within one city, areas with sparse data can still develop accurate VSP distributions based on better data from other areas.
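The RMSE comparison between binned VSP (OpMode) distributions is straightforward to compute; the bin fractions below are made-up placeholders, not data from the study.

```python
# Bin-by-bin RMSE between two binned operating-mode distributions.
import numpy as np

def rmse(p, q):
    """Root mean square error between two binned distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.mean((p - q) ** 2)))

dist_year_a = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])  # hypothetical bins
dist_year_b = np.array([0.12, 0.24, 0.28, 0.21, 0.10, 0.05])
dist_other_road = np.array([0.30, 0.30, 0.20, 0.10, 0.06, 0.04])

print("year-to-year RMSE:", rmse(dist_year_a, dist_year_b))
print("road-type RMSE:   ", rmse(dist_year_a, dist_other_road))
```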
An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation
Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin
2014-01-01
In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving-target detection is proposed, based on the distinct spectral features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. We then present a knowledge-aided block-size detection algorithm that can discriminate between moving targets and clutter based on the extracted spectral features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw-data processing results. PMID:25222035
Automatic load forecasting. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, D.J.; Vemuri, S.
A method which lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model, which in turn is used to obtain a parsimonious autoregressive moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages over the Box-Jenkins method, including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base, this method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust, in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.
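The identification step can be sketched with a recursive (sequential) least-squares AR estimator; the AR order, forgetting factor, and test series below are illustrative, and the subsequent ARMA-reduction step is not shown.

```python
# Sequential (recursive) least squares for AR coefficients.
import numpy as np

def rls_ar(y, order=4, lam=0.999):
    """Fit AR coefficients one sample at a time with a forgetting factor."""
    theta = np.zeros(order)                 # AR coefficient estimates
    P = np.eye(order) * 1e3                 # inverse information matrix
    for t in range(order, len(y)):
        phi = y[t - order:t][::-1]          # regressor: last `order` samples
        k = P @ phi / (lam + phi @ P @ phi) # gain
        theta += k * (y[t] - phi @ theta)   # update on prediction error
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

rng = np.random.default_rng(4)
true = np.array([1.2, -0.5])                # AR(2) ground truth
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = true @ y[t - 2:t][::-1] + 0.1 * rng.standard_normal()
print("estimated leading AR coefficients:", np.round(rls_ar(y, order=4)[:2], 3))
```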
Examination of the Armagh Observatory Annual Mean Temperature Record, 1844-2004
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
The long-term annual mean temperature record (1844-2004) of the Armagh Observatory (Armagh, Northern Ireland, United Kingdom) is examined for evidence of systematic variation, in particular as related to solar/geomagnetic forcing and secular variation; both are apparent in the record. Ten-year moving averages of temperature are found to correlate strongly with 10-year moving averages of both the aa geomagnetic index and sunspot number, with correlation coefficients of approximately 0.7, implying that nearly half the variance in the 10-year moving average of temperature can be explained by solar/geomagnetic forcing. The residuals appear episodic in nature, with cooling seen in the 1880s and again near 1980. Seven of the last 10 years of the temperature record have exceeded 10 C, unprecedented in the overall record. Variations of sunspot-cycle averages and 2-cycle moving averages of temperature associate strongly with similar averages for the solar/geomagnetic cycle, with the residuals displaying an apparent 9-cycle variation and a steep rise in temperature associated with cycle 23. Hale-cycle averages of temperature for even-odd pairs of sunspot cycles correlate with similar averages for the solar/geomagnetic cycle and, especially, with the length of the Hale cycle. Indications are that the annual mean temperature will likely exceed 10 C over the next decade.
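The 10-year moving-average correlation can be reproduced in miniature on synthetic stand-ins for the temperature and sunspot series (the Armagh data themselves are not included here):

```python
# Correlate 10-year moving averages of two annual series.
import numpy as np

def moving_average(x, window):
    """Moving average via convolution; trims the edges."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

rng = np.random.default_rng(5)
years = np.arange(1844, 2005)
solar = 50 + 30 * np.sin(2 * np.pi * (years - 1850) / 11)    # ~11-yr cycle
temp = 9.0 + 0.01 * solar + 0.4 * rng.standard_normal(len(years))

t10, s10 = moving_average(temp, 10), moving_average(solar, 10)
r = np.corrcoef(t10, s10)[0, 1]
print(f"correlation of 10-yr moving averages: r = {r:.2f}")
```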
A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
Lin, Qing; Han, Youngjoon
2014-01-01
A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to estimate jointly the ground plane, object locations and object types, by using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level that is inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested under various local pathway scenes, and the results confirm its efficiency in assisting blind people to attain autonomous mobility. PMID:25302812
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognizing nuclear power plant (NPP) transients by combining the autoregressive integrated moving average (ARIMA) model with a neural network trained by the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data; ARIMA processes, including autoregressive (AR), moving average (MA), or autoregressive moving average (ARMA), are then used to forecast the time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier developed using recent advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are used to assess the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to those of the reference, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependence of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to the identification of more transients without unfavorable effects are other merits of the proposed identifier.
Ye, Yu; Kerr, William C
2011-01-01
To explore various model specifications for estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equations, generalized least squares, fixed-effects, and multilevel models. Various specifications of the error-term structure under each type of model were also examined, as were different approaches to controlling for time trends and to using concurrent or accumulated consumption as predictors. When cirrhosis mortality was predicted by total alcohol consumption, highly consistent estimates were found between the ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were obtained using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings, and we suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.
Moran, John L; Solomon, Patricia J
2011-02-01
Time series analysis has seen limited application in the biomedical literature. The utility of conventional and advanced time series estimators was explored for intensive care unit (ICU) outcome series. Monthly mean time series, 1993-2006, for hospital mortality, severity-of-illness score (APACHE III), ventilation fraction, and patient type (medical and surgical) were generated from the Australia and New Zealand Intensive Care Society adult patient database. Analyses encompassed geographical seasonal mortality patterns, structural time changes in the series, mortality series volatility using autoregressive moving average and generalized autoregressive conditional heteroscedasticity (GARCH) models, in which predicted variances are updated adaptively, and bivariate and multivariate (vector error correction models) cointegrating relationships between series. The mortality series exhibited marked seasonality, a declining mortality trend, and substantial autocorrelation beyond 24 lags. Mortality increased in the winter months (July-August); the medical series featured annual cycling, whereas the surgical series demonstrated long and short (3-4 month) cycling. Structural breaks in the series were apparent in January 1995 and December 2002. The covariance-stationary first-differenced mortality series was consistent with a seasonal autoregressive moving average process; the observed conditional-variance volatility (1993-1995) and residual ARCH effects motivated a GARCH model, preferred by information criteria and mean-model forecast performance. Bivariate cointegration, indicating long-term equilibrium relationships, was established between mortality and severity-of-illness scores at the database level and for categories of ICUs. Multivariate cointegration was demonstrated for {log APACHE III score, log ICU length of stay, ICU mortality, and ventilation fraction}. A systems approach to understanding the time-dependence of such series may be established using conventional and advanced econometric time series estimators. © 2010 Blackwell Publishing Ltd.
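A hedged sketch of an AR-mean/GARCH(1,1)-variance fit of the kind described above, using the third-party Python `arch` package (an assumption: the study itself is not tied to any particular software) on a synthetic monthly series:

```python
# AR(1) mean model with GARCH(1,1) conditional variance on synthetic data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
n = 168                                   # 14 years of monthly observations
vol = np.ones(n) * 0.5
shocks = np.zeros(n)
for t in range(1, n):                     # simulate GARCH(1,1)-like volatility
    vol[t] = np.sqrt(0.05 + 0.2 * shocks[t-1]**2 + 0.7 * vol[t-1]**2)
    shocks[t] = vol[t] * rng.standard_normal()
series = 10 + shocks                      # percent mortality around a mean

am = arch_model(series, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params.round(3))                # omega, alpha[1], beta[1], etc.
```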
Digging behaviors of radio-tagged black-footed ferrets near Meeteetse, Wyoming, 1981-1984
Biggins, Dean E.; Hanebury, Louis R.; Fagerstone, Kathleen A.
2012-01-01
Intensive radio-tracking during August–December enabled us to collect detailed information on digging behaviors of a small sample of black-footed ferrets (Mustela nigripes) occupying colonies of white-tailed prairie dogs (Cynomys leucurus). A sample of 33 prairie dogs, also radio-tagged, progressively ceased aboveground activity during late summer and fall, presumably as they descended into burrows to hibernate. Most of the time ferrets spent digging was in November–December when >95% of the radio-tagged prairie dogs were inactive, suggesting that digging was primarily to excavate hibernating prey. Although 43.9% of the burrow openings were estimated to be in large mounds, which are common on colonies of white-tailed prairie dogs, all of a sample of 17 deposits of soil (diggings) made by ferrets were excavated at small mounds or nonmounded openings. The average duration of 23 nocturnal sessions of digging by ferrets was 112.2 minutes. A digging session consisted of multiple bouts of soil movement typically lasting about 5 min, and sessions were separated by pauses above- or belowground lasting several minutes. Bouts of moving soil from a burrow involved round-trips of 12.5–30.3 s to remove an average of 35 cm3 of soil per trip. These digging bouts are energetically costly for ferrets. One female moved 16.8 kg of soil an estimated 3.3 m during bouts having a cumulative duration of 178 minutes, removing a soil plug estimated to be 178 cm long. Increasing evidence suggests that some behaviors of ferrets and prairie dogs are coevolutionary responses between this highly specialized predator and its prairie dog prey.
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
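The moving-object step can be illustrated with a simplified sketch: given a depth map and an egomotion estimate, predict the optical flow the static scene would produce, then flag pixels whose measured flow deviates from the prediction. The pure forward-translation camera model, synthetic data, and threshold are simplifying assumptions, not the paper's full 6-DOF formulation.

```python
# Residual-flow segmentation of independently moving objects.
import numpy as np

H, W, f = 120, 160, 200.0                      # image size, focal length (px)
tz = 1.0                                       # forward egomotion (m/frame)

rng = np.random.default_rng(7)
depth = 10.0 + 2.0 * rng.random((H, W))        # stereo depth map (m)
xs = np.tile(np.arange(W) - W / 2, (H, 1))     # pixel coords about the center
ys = np.tile((np.arange(H) - H / 2)[:, None], (1, W))

# Predicted flow for a static scene under forward translation: points expand
# radially from the focus of expansion at a rate proportional to tz/Z.
pred_u, pred_v = xs * tz / depth, ys * tz / depth

meas_u, meas_v = pred_u.copy(), pred_v.copy()  # "measured" flow (synthetic)
meas_u += 0.05 * rng.standard_normal((H, W))   # sensor noise
meas_u[40:60, 70:90] += 3.0                    # an independently moving object

residual = np.hypot(meas_u - pred_u, meas_v - pred_v)
moving_mask = residual > 1.0                   # threshold is illustrative
print("moving pixels detected:", int(moving_mask.sum()))
```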
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
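For reference, the nonparametric sample autocovariance that such approaches build on can be computed directly (the censoring and Gaussian-imputation machinery of the paper is not shown):

```python
# Biased sample autocovariance gamma_hat(h) for h = 0..max_lag.
import numpy as np

def sample_autocovariance(x, max_lag):
    x = np.asarray(x, float)
    n, xbar = len(x), np.mean(x)
    return np.array([np.sum((x[:n-h] - xbar) * (x[h:] - xbar)) / n
                     for h in range(max_lag + 1)])

rng = np.random.default_rng(8)
y = np.zeros(1000)
for t in range(1, 1000):                       # AR(1) test series
    y[t] = 0.6 * y[t-1] + rng.standard_normal()
gamma = sample_autocovariance(y, 5)
print("lag-1 autocorrelation:", round(gamma[1] / gamma[0], 2), "(theory 0.6)")
```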
Nonlinear filtering properties of detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Tsujimoto, Yutaka
2016-11-01
Detrended fluctuation analysis (DFA) has been widely used for quantifying long-range correlation and fractal scaling behavior. In DFA, to avoid spurious detection of scaling behavior caused by a nonstationary trend embedded in the analyzed time series, a detrending procedure using piecewise least-squares fitting has been applied. However, it has been pointed out that the nonlinear filtering properties involved with detrending may induce instabilities in the scaling exponent estimation. To understand this issue, we investigate the adverse effects of the DFA detrending procedure on the statistical estimation. We show that the detrending procedure using piecewise least-squares fitting results in the nonuniformly weighted estimation of the root-mean-square deviation and that this property could induce an increase in the estimation error. In addition, for comparison purposes, we investigate the performance of a centered detrending moving average analysis with a linear detrending filter and sliding window DFA and show that these methods have better performance than the standard DFA.
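A compact implementation of standard first-order DFA, as discussed above, shows the estimation pipeline: integrate the series, remove a piecewise linear trend in each window, and regress log fluctuation on log scale to read off the scaling exponent.

```python
# First-order detrended fluctuation analysis (DFA1).
import numpy as np

def dfa(x, scales):
    """Return the fluctuation function F(s) for first-order DFA."""
    y = np.cumsum(x - np.mean(x))              # profile (integrated series)
    F = []
    for s in scales:
        n_seg = len(y) // s
        ms = []
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # piecewise linear fit
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

rng = np.random.default_rng(9)
noise = rng.standard_normal(4096)              # white noise: alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
Fs = dfa(noise, scales)
alpha = np.polyfit(np.log(scales), np.log(Fs), 1)[0]
print(f"estimated DFA exponent alpha = {alpha:.2f} (expected ~0.5)")
```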
Low, Diana H P; Motakis, Efthymios
2013-10-01
Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task using moving averages at certain decorrelation times, assuming that the system comes from a single conformational description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelet denoising, and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package, which performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.
1994-05-01
A Picture Archiving and Communications System (PACS) must be able to support the image rate of the medical treatment facility; in addition, it must have adequate working-storage and archive-storage capacity. The calculation of the number of images per minute and of the required working- and archive-storage capacity is discussed. The calculation takes into account the distribution of images over the different sizes of radiological images, the distribution between inpatients and outpatients, and the distribution over plain-film CR images and other modality images. The indirect clinical image load is difficult to estimate and is considered in some detail. The result of the exercise for a particular hospital is an estimate of the average size of the images and exams on the system, the number of gigabytes of working storage, the number of images moved per minute, the size of the archive in gigabytes, and the number of images the archive must move per minute. The types of storage required to support these image rates and capacities are discussed.
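A back-of-envelope version of the sizing exercise can be scripted; all workload numbers below are hypothetical placeholders, not figures from the paper.

```python
# Rough PACS capacity and rate estimate from assumed workload parameters.
exams_per_day = 400            # assumed radiology workload
images_per_exam = 12           # assumed average across modalities
mb_per_image = 8.0             # assumed average image size (MB)
retention_days = 365           # assumed on-line archive retention

images_per_day = exams_per_day * images_per_exam
gb_per_day = images_per_day * mb_per_image / 1024

# Peak image rate, assuming work concentrates into an 8-hour day with a
# peak-to-average factor of 3 (both assumptions).
images_per_min_peak = images_per_day / (8 * 60) * 3

print(f"daily volume:    {gb_per_day:7.1f} GB/day")
print(f"annual archive:  {gb_per_day * retention_days / 1024:7.1f} TB")
print(f"peak image rate: {images_per_min_peak:7.1f} images/min")
```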
The detrimental influence of attention on time-to-contact perception.
Baurès, Robin; Balestra, Marianne; Rosito, Maxime; VanRullen, Rufin
2018-04-23
To what extent is attention necessary to estimate the time-to-contact (TTC) of a moving object, that is, to determine when the object will reach a specific point? While numerous studies have aimed at determining the visual cues and gaze strategies that allow this estimation, little is known about whether and how attention is involved in this process. To answer this question, we carried out an experiment in which participants estimated the TTC of a moving ball, either alone (single-task condition) or concurrently with a Rapid Serial Visual Presentation task embedded within the ball (dual-task condition). The results showed that participants produced better estimates when attention was drawn away from the TTC task. This suggests that drawing attention away from TTC estimation limits the cognitive interference, intrusion of knowledge, or expectations that significantly modify the visually based TTC estimate, and argues that only limited attention is needed to estimate TTC correctly.
Three methods for estimating a range of vehicular interactions
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana
2018-02-01
We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All three methods reveal that the minimum number of actively followed vehicles is two, supporting the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimates is surprisingly good. In all cases we find that the interaction range (the number of actively followed vehicles) drops with traffic density: whereas drivers moving in congested regimes of lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.
Simulations of moving effect of coastal vegetation on tsunami damping
NASA Astrophysics Data System (ADS)
Tsai, Ching-Piao; Chen, Ying-Chi; Octaviani Sihombing, Tri; Lin, Chang
2017-05-01
A coupled wave-vegetation simulation is presented for the effect of coastal vegetation motion on tsunami wave-height damping. The problem is idealized as solitary wave propagation through a group of emergent cylinders. The numerical model is based on the Reynolds-averaged Navier-Stokes equations with a renormalization-group turbulence closure model, using the volume-of-fluid technique. The general moving object (GMO) model in the computational fluid dynamics (CFD) code Flow-3D is applied to simulate the coupled motion of the vegetation with the waves dynamically. The damping of wave height and the turbulent kinetic energy along moving and stationary cylinders are discussed. The simulated results show that the damping of wave height and turbulent kinetic energy by moving cylinders is clearly less than that by stationary cylinders, implying that wave decay by coastal vegetation may be overestimated if the vegetation is represented as stationary.
Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.
2015-01-01
The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from the RRM ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesians ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, indicating that the assumption underlying LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings of top sires and cows were observed in moving from the traditional LAM to RRM evaluations. PMID:26194217
New evidence for the Hawaiian hotspot plume motion since the Eocene
NASA Astrophysics Data System (ADS)
Parés, Josep M.; Moore, Ted C.
2005-09-01
A thick mound of fossiliferous sediments, reflecting high biogenic productivity at the Equator, can be used to determine the latitudinal motion of the Pacific lithospheric plate. Plate-motion estimates based on the latitudinal movement of Equatorial facies are independent of paleomagnetic data and hotspot tracks and thus permit further testing of kinematic models. We have determined the northward motion of the Pacific Plate for the last 53 Myr based on the position of the paleoequator as shown by Equatorial sediment facies. Between 26 and 69 DSDP and ODP Sites sample the past 53 Myr in the tropical Pacific. Based on the mapped patterns of accumulation rates at these sites, we were able not only to determine the position of the paleoequator but also to estimate the Equatorial great circle and hence the relative position of the spin axis since the early Eocene. The northward motion of the Pacific Plate inferred from the change in latitude of dated Hawaiian Chain seamounts relative to the Hawaiian hotspot is consistently higher than that deduced from the analyses of Equatorial sediment facies. Such a difference results from a latitudinal shift of the Hawaiian hotspot during the last 53 Myr. Altogether, our observations and recent paleomagnetic results from the Detroit, Nintoku and Koko seamounts [J.A. Tarduno, R.A. Duncan, D.W. Scholl, R.D. Cottrell, B. Steinberger, T. Thordarson, B.C. Kerr, C.R. Neal, F.A. Frey, M. Torii, C. Carvallo. The Emperor Seamounts: Southward motion of the Hawaiian hotspot plume in Earth's mantle. Science 301 (2003) 1064-1069.] [1] are consistent with a progressive southward motion of the Hawaiian mantle plume since the Late Cretaceous. Our results suggest that the Hawaiian hotspot moved at ~32 mm/yr to the SE during the past 43 million years and that the Pacific Plate has moved ~12° northward since 53 Ma at an average rate of 25 mm/yr.
Tropical Cyclone Activity in the North Atlantic Basin During the Weather Satellite Era, 1960-2014
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2016-01-01
This Technical Publication (TP) extends previous work concerning tropical cyclone activity in the North Atlantic basin during the weather satellite era, 1960-2014, in particular an article published in The Journal of the Alabama Academy of Science. With the launch of the TIROS-1 polar-orbiting satellite in April 1960, a new era of global weather observation and monitoring began. Prior to this, conditions in the North Atlantic basin were determined only from ship reports, island reports, and long-range aircraft reconnaissance. Consequently, storms that formed far from land, away from shipping lanes, and beyond the reach of aircraft could be missed altogether, leading to an underestimate of the true number of tropical cyclones forming in the basin. Additionally, new analysis techniques have come into use, which has sometimes led to the inclusion of one or more storms at the end of a nominal hurricane season that otherwise would not have been included. In this TP, the yearly (or seasonal) and 10-year moving-average values are examined for (1) the first storm day (FSD), last storm day (LSD), and length of season (LOS); (2) the frequencies of tropical cyclones (by class); and (3) the average peak 1-minute sustained wind speed (
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead, where the lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model-error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
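The composite estimate can be sketched as follows; the linear weighting across the gap is an illustrative assumption rather than the report's exact scheme.

```python
# Blend a forward forecast and a backcast across a gap in a daily record.
import numpy as np

def composite(forecast, backcast):
    """Weighted average across a gap of length n: the weight moves linearly
    from the forward forecast (start of gap) to the backcast (end of gap)."""
    n = len(forecast)
    w = 1.0 - np.arange(n) / max(n - 1, 1)     # 1 -> 0 across the gap
    return w * forecast + (1.0 - w) * backcast

# Hypothetical 10-day gap in a log-flow record:
fwd = np.linspace(2.00, 2.45, 10)              # TFN-style forward forecast
bwd = np.linspace(2.10, 2.40, 10)              # ARIMA-style backcast
print(np.round(composite(fwd, bwd), 3))
```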
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Dang, Yaoguo; Gu, Rongbao
2013-03-01
We apply multifractal detrending moving average (MFDMA) analysis to investigate and compare the efficiency and multifractality of the 5-min high-frequency China Securities Index 300 (CSI 300). The results show that the CSI 300 market moved closer to weak-form efficiency after the introduction of the CSI 300 index future. We find that the CSI 300 is characterized by multifractality and that there is less complexity and risk after the index future was introduced. Through shuffling, surrogate, and extreme-value-removal procedures, we show that extreme events and fat-tailed distributions are the main origins of the multifractality. In addition, we discuss the knotting phenomenon in multifractality and find that the scaling range and the irregular fluctuations at large scales in the Fq(s) versus s plot can cause a knot.
An Algorithm for Testing the Efficient Market Hypothesis
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH). PMID:24205148
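For reference, the indicators named above can be computed in a few lines with pandas; the parameter choices (12/26/9 for MACD, 14 for RSI) are common textbook defaults, not necessarily those evolved by the genetic algorithm, and the RSI here uses the simple-average variant.

```python
# EMA, MACD, and RSI on a synthetic EUR/USD-like price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
price = pd.Series(1.10 + np.cumsum(0.001 * rng.standard_normal(500)))

ema12 = price.ewm(span=12, adjust=False).mean()
ema26 = price.ewm(span=26, adjust=False).mean()
macd = ema12 - ema26
signal = macd.ewm(span=9, adjust=False).mean()

delta = price.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

# Example rule of the kind such systems evolve: long when MACD crosses above
# its signal line while RSI is not overbought.
long_entry = (macd > signal) & (macd.shift() <= signal.shift()) & (rsi < 70)
print("long entries generated:", int(long_entry.sum()))
```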
A GIS-based numerical simulation of the March 2014 Oso landslide fluidized motion
NASA Astrophysics Data System (ADS)
Fukuoka, H.; Ogbonnaya, I.; Wang, C.
2014-12-01
Sliding and flowing are the major movement types after slope failure. Landslides occur when slope-forming material moves downhill after failing along a sliding surface, and most debris flows originate as rainfall-induced landslides before they move into a valley channel. Landslides that mobilize into debris flows are usually characterized by high-speed movement and long run-out distances and may present the greatest risk to human life. The 22 March 2014 Oso landslide is a typical case of a landslide transforming into a debris flow. The landslide was triggered on the edge of a plateau about 200 m high, composed of glacial sediments, after excessive prolonged rainfall of 348 in March 2014. After its initiation, portions of the landslide material transitioned into a rapidly moving debris flow, which traveled a long distance across the downslope floodplain. The U.S. Geological Survey estimated the volume of the slide at about 7 million m3, and it traveled about 1 km from the toe of the slope. The apparent friction angle, measured by the energy line drawn from the crown of the head scarp to the toe of the deposits that reached the largest distance, was only 5-6 degrees. We performed two numerical modeling exercises to predict the runout distance and to gain insight into the behaviour of the landslide movement. One uses a GIS-based revision of Hovland's 3D limit-equilibrium model to simulate the movement and stoppage of a landslide: sliding is defined by a slip surface cutting through the slope, causing the mass of earth above it to move; the factor of safety is calculated step by step during the sliding simulation; and stoppage is defined by a factor of safety much greater than one together with zero velocity. The other is a GIS-based depth-averaged 2D numerical model using a coupled viscous and Coulomb-type law to simulate a debris flow from initiation to deposition. We compared our simulation results with the preliminary computer simulation of the Oso landslide movement produced by David L. George and Richard M. Iverson on April 10, 2014.
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
An autoregressive integrated moving average (ARIMA) model was used to forecast the demand for construction skills in Hong Kong. The underlying report surveys candidate forecasting techniques, including decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, and Markovian models, noting that rule-based methods suit cases where results are needed as a set of easily interpretable rules.
ERIC Educational Resources Information Center
Patel, Reshma; Valenzuela, Ireri
2013-01-01
While postsecondary completion rates are a concern among many student populations across the country, college graduation rates for Latino students, especially Latino male students, are even lower than the national average. Low-income Latino men face many barriers to postsecondary success, including both financial and personal obstacles. This…
Tillman, Fred D.; Gangopadhyay, Subhrendu; Pruitt, Tom
2017-01-01
In evaluating potential impacts of climate change on water resources, water managers seek to understand how future conditions may differ from the recent past. Studies of climate impacts on groundwater recharge often compare simulated recharge from future and historical time periods on an average monthly or overall average annual basis, or compare average recharge from future decades to that from a single recent decade. Baseline historical recharge estimates, against which future conditions are compared, are often from simulations using observed historical climate data. Comparison of average monthly results, average annual results, or even averages over selected historical decades may mask the true variability in historical results and lead to misinterpretation of future conditions. Comparison of future recharge simulated using general circulation model (GCM) climate data with recharge simulated using actual historical climate data may also result in an incomplete understanding of the likelihood of future changes. In this study, groundwater recharge is estimated in the upper Colorado River basin, USA, using a distributed-parameter soil-water-balance groundwater recharge model for the period 1951-2010. Recharge simulations are performed using precipitation, maximum temperature, and minimum temperature data from observed climate records and from 97 CMIP5 (Coupled Model Intercomparison Project, phase 5) projections. Results indicate that average monthly and average annual simulated recharge are similar using observed and GCM climate data. However, 10-year moving-average recharge results show substantial differences between observed and simulated climate data, particularly during the period 1970-2000, with much greater variability seen in results using observed climate data.
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to project the situation of the coconut industry in the Philippines for future years by applying the autoregressive integrated moving average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation (PACF) functions were calculated for the data, and an appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques, and the forecasting power of the autoregressive moving average (ARMA) model was used to forecast coconut production for the eight leading years.
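The Box-Jenkins identification step mentioned above can be sketched by computing the ACF and PACF of an annual series; the data below are synthetic, not the Philippine coconut series.

```python
# ACF/PACF inspection for Box-Jenkins model-order identification.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(11)
y = np.zeros(23)                         # 1990-2012: 23 annual observations
for t in range(1, 23):
    y[t] = 0.7 * y[t-1] + rng.standard_normal()

lags = 6
print("ACF: ", np.round(acf(y, nlags=lags), 2))
print("PACF:", np.round(pacf(y, nlags=lags), 2))
# A sharp PACF cutoff after lag 1 with a tailing ACF would point to AR(1).
```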
The TW Hydrae association: trigonometric parallaxes and kinematic analysis
NASA Astrophysics Data System (ADS)
Ducourant, C.; Teixeira, R.; Galli, P. A. B.; Le Campion, J. F.; Krone-Martins, A.; Zuckerman, B.; Chauvin, G.; Song, I.
2014-03-01
Context. The nearby TW Hydrae association (TWA) is currently a benchmark for the study of the formation and evolution of young low-mass stars, circumstellar disks, and the imaging detection of planetary companions. For these studies, it is crucial to evaluate the distance to group members in order to assess their physical properties. The membership of several stars is strongly debated, and age estimates vary from one author to another, with doubts about coevality. Aims: We revisit the kinematic properties of the TWA in light of new trigonometric parallaxes and proper motions to derive the dynamical age of the association and the physical parameters of kinematic members. Methods: Using observations performed with the New Technology Telescope (NTT) at ESO, we measured trigonometric parallaxes and proper motions for 13 stars in the TWA. Results: With the convergent point method we identify a co-moving group of 31 TWA stars. We deduce kinematic distances for seven members of the moving group that lack trigonometric parallaxes. A traceback strategy applied to the stellar space motions of 16 of the co-moving objects with accurate and reliable data yields a dynamical age for the association of t ≃ 7.5 ± 0.7 Myr. Using our new parallaxes and photometry available in the literature, we derive stellar ages and masses from theoretical evolutionary models. Conclusions: With the new parallax and proper motion measurements from this work and current astrometric catalogs, we provide an improved and accurate database for TWA stars to be used in kinematic analyses. We conclude that the dynamical age obtained via the traceback strategy is consistent with previous age estimates for the TWA and is also compatible with the average ages derived in the present paper from evolutionary models for pre-main-sequence stars. Based on observations performed at the European Southern Observatory, Chile (79.C-0229, 81.C-0143, 82.C-0103, 83.C-0102, 84.C-0014).
Buchalski, M R; Chaverri, G; Vonhof, M J
2014-02-01
For species characterized by philopatry of both sexes, mate selection represents an important behaviour for inbreeding avoidance, yet the implications for gene flow are rarely quantified. Here, we present evidence of male gamete-mediated gene flow resulting from extra-group mating in Spix's disc-winged bat, Thyroptera tricolor, a species that demonstrates all-offspring philopatry. We used microsatellite and capture-recapture data to characterize social group structure and the distribution of mated pairs at two sites in southwestern Costa Rica over four breeding seasons. Relatedness and genetic spatial autocorrelation analyses indicated strong kinship within groups and over short distances (<50 m), resulting from matrilineal group structure and small roosting home ranges (~0.2 ha). Despite high relatedness among group members, observed inbreeding coefficients were low (FIS = 0.010 and 0.037). Parentage analysis indicated that mothers and offspring belonged to the same social group, while fathers belonged to different groups separated by large distances (~500 m) relative to roosting home ranges. Simulated random mating indicated that mate choice was not based on intermediate levels of relatedness, and mated pairs were on average less related than adults within social groups. Isolation-by-distance (IBD) models of genetic neighbourhood area based on father-offspring distances provided direct estimates of mean gamete dispersal distances (r̂) of more than 10 roosting-home-range equivalents. Indirect estimates based on genetic distance were even larger, indicating that the direct estimates were biased low. These results suggest that extra-group mating reduces the incidence of inbreeding in T. tricolor, and that male gamete dispersal facilitates gene flow in lieu of natal dispersal of young. © 2013 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Ishikawa, Tetsuo; Yasumura, Seiji; Ohtsuru, Akira; Sakai, Akira; Akahane, Keiichi; Yonai, Shunsuke; Sakata, Ritsu; Ozasa, Kotaro; Hayashi, Masayuki; Ohira, Tetsuya; Kamiya, Kenji; Abe, Masafumi
2016-06-01
Many studies have been conducted on radiation doses to residents after the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident. Time spent outdoors is an influential factor in external dose estimation. Since little information was available on the actual time residents spent outdoors, different values of average time spent outdoors per day have been used in dose-estimation studies of the FDNPP accident: the most conservative value of 24 h was sometimes used, while 2.4 h was adopted for indoor workers in the UNSCEAR 2013 report. Fukushima Medical University has been estimating individual external doses received by residents as part of the Fukushima Health Management Survey by collecting records of each resident's moves and activities after the accident (the Basic Survey). In the present study, these records were analyzed to estimate the average time spent outdoors per day. As an example, in Iitate Village the arithmetic mean was 2.08 h (95% CI: 1.64-2.51) for a total of 170 persons selected from respondents to the Basic Survey, a much smaller value than commonly assumed. When 2.08 h is used for external dose estimation, the estimated dose is about 25% (23-26%, using the above 95% CI) lower than the dose estimated with the commonly used value of 8 h.
Yi, Dong-Hoon; Lee, Tae-Jae; Cho, Dong-Il Dan
2015-05-13
This paper introduces a novel afocal optical flow sensor (OFS) system for odometry estimation in indoor robotic navigation. The OFS used in computer optical mice has been adopted for mobile robots because it is not affected by wheel slippage. Variance in vertical height is thought to be a dominant source of systematic error when estimating distances moved by mobile robots driving on uneven surfaces. We propose an approach to mitigate this error by using an afocal (infinite effective focal length) system. We conducted experiments on a linear guide on carpet and three other materials, with sensor heights varying from 30 to 50 mm and a moving distance of 80 cm; each experiment was repeated 10 times. For the proposed afocal OFS module, a 1 mm change in sensor height induces a 0.1% systematic error; for comparison, the error for a conventional fixed-focal-length OFS module is 14.7%. Finally, the proposed afocal OFS module was installed on a mobile robot and tested 10 times on carpet over distances of 1 m. The average distance-estimation error and standard deviation are 0.02% and 17.6%, respectively, whereas those for a conventional OFS module are 4.09% and 25.7%, respectively.
On the statistical and transport properties of a non-dissipative Fermi-Ulam model
NASA Astrophysics Data System (ADS)
Livorati, André L. P.; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.
2015-10-01
The transport and diffusion properties for the velocity of a Fermi-Ulam model were characterized using the decay rate of the survival probability. The system consists of an ensemble of non-interacting particles confined between two infinitely heavy walls, moving along a line and experiencing elastic collisions with them. One wall is fixed, working as a returning mechanism for the colliding particles, while the other moves periodically in time. The diffusion equation is solved, and the diffusion coefficient is numerically estimated by means of the averaged square velocity. Our results show remarkably good agreement between theory and simulation for the chaotic sea below the first elliptic island in the phase space. From the decay rates of the survival probability, we obtained transport properties that can be extended to other nonlinear mappings, as well as to billiard problems.
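For readers who want to reproduce the flavor of this analysis, the sketch below iterates the simplified (static-wall) Fermi-Ulam map, a standard approximation in which the wall's oscillation changes the particle velocity but not its position, and estimates a diffusion coefficient from the growth of the averaged square velocity. The map form and the parameter values are conventional choices, not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3                  # illustrative wall-oscillation amplitude
n_particles, n_iter = 5000, 10000

v = np.full(n_particles, 1e-2)                # start in the low-velocity chaotic sea
phi = rng.uniform(0, 2 * np.pi, n_particles)  # random wall phases

v2_mean = np.empty(n_iter)
for n in range(n_iter):
    phi = (phi + 2.0 / v) % (2 * np.pi)       # free flight between the walls
    v = np.abs(v - 2 * eps * np.sin(phi))     # elastic kick from the moving wall
    v2_mean[n] = np.mean(v ** 2)

# For normal diffusion in velocity, <v^2> grows linearly with n: <v^2> ~ 2*D*n
D = (v2_mean[-1] - v2_mean[0]) / (2 * n_iter)
print(f"estimated diffusion coefficient: {D:.3e}")
```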
Elevational ranges of birds on a tropical montane gradient lag behind warming temperatures.
Forero-Medina, German; Terborgh, John; Socolar, S Jacob; Pimm, Stuart L
2011-01-01
Species may respond to a warming climate by moving to higher latitudes or elevations. Shifts in geographic ranges are common responses in temperate regions. In the tropics, latitudinal temperature gradients are shallow; the only escape for species may be to move to higher elevations. There are few data to suggest that they do. Yet the greatest loss of species from climate disruption may be among tropical montane species. We repeat a historical transect in Peru and find an average upward shift of 49 m for 55 bird species over a 41-year interval. This shift is significantly upward, but also significantly smaller than the 152 m one expects from warming in the region. To estimate the expected shift in elevation, we first determined the magnitude of warming in the locality from historical data. We then used the temperature lapse rate to infer the shift in altitude required to compensate for the warming. The range shifts in elevation were similar across different trophic guilds. Endothermy may provide birds with some flexibility to temperature changes and allow them to move less than expected. Instead of being directly dependent on temperature, birds may be responding to gradual changes in the nature of the habitat or availability of food resources, and presence of competitors. If so, this has important implications for estimates of mountaintop extinctions from climate change.
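The expected shift is simply the observed warming divided by the local temperature lapse rate. A worked version of that arithmetic follows; both numbers here are illustrative assumptions (the warming value is back-calculated to reproduce the 152 m expectation, and 5.5 degrees C per km is a typical tropical lapse rate), not values quoted from the paper.

```python
warming_C = 0.84               # assumed local warming over the interval (illustrative)
lapse_rate_C_per_m = 0.0055    # assumed lapse rate, ~5.5 degrees C per km

expected_shift_m = warming_C / lapse_rate_C_per_m   # ~152 m
observed_shift_m = 49.0
print(f"expected: {expected_shift_m:.0f} m, observed: {observed_shift_m:.0f} m, "
      f"lag: {expected_shift_m - observed_shift_m:.0f} m")
```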
A novel algorithm for Bluetooth ECG.
Pandya, Utpal T; Desai, Uday B
2012-11-01
In wireless transmission of ECG, data latency becomes significant when battery power and transmission distance are not maintained. In applications like home monitoring or personalized care, a filtering strategy is required that jointly overcomes these wireless transmission issues and other ECG measurement noise. Here, a novel algorithm, identified as the peak rejection adaptive sampling modified moving average (PRASMMA) algorithm for wireless ECG, is introduced. This algorithm first removes bit-pattern errors in the received data, if any occurred during wireless transmission, and then removes baseline drift. Afterward, a modified moving average is applied everywhere except in the region of each QRS complex. The algorithm also sets its filtering parameters according to the sampling rate selected for signal acquisition. To demonstrate the work, a prototype Bluetooth-based ECG module was used to capture ECG at different sampling rates and in different patient positions. This module transmits ECG wirelessly to Bluetooth-enabled devices, where the PRASMMA algorithm is applied to the captured ECG. The performance of the PRASMMA algorithm is compared with moving average and Savitzky-Golay algorithms, both visually and numerically. The results show that the PRASMMA algorithm can significantly improve ECG reconstruction by efficiently removing noise, and its use can be extended to any signal in which peaks are important for diagnostic purposes.
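The core idea, smoothing everywhere except in a guard region around each detected R peak, can be sketched compactly. The snippet below is a simplified illustration only: the actual PRASMMA algorithm additionally repairs bit-pattern errors, removes baseline drift, and adapts its parameters to the sampling rate, and its peak detection is more robust than the percentile rule assumed here.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_rejection_moving_average(ecg, fs, window_ms=40, guard_ms=100):
    """Moving average applied everywhere except around detected R peaks."""
    w = max(1, int(fs * window_ms / 1000))
    smoothed = np.convolve(ecg, np.ones(w) / w, mode="same")
    # crude R-peak detection; a production implementation would be more robust
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99), distance=int(0.3 * fs))
    out = smoothed.copy()
    guard = int(fs * guard_ms / 1000)
    for p in peaks:                      # keep raw samples near each QRS complex
        lo, hi = max(0, p - guard), min(len(ecg), p + guard)
        out[lo:hi] = ecg[lo:hi]
    return out
```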
Wu, Yan; Aarts, Ronald M.
2018-01-01
A recurring problem with conventional comb filter approaches for eliminating periodic waveforms is the degree of selectivity achieved by the filtering process. Some applications, such as gradient artefact correction in EEG recordings during coregistered EEG-fMRI, require a highly selective comb filter that provides effective attenuation in the stopbands and gain close to unity in the passbands. In this paper, we present a novel comb filtering implementation whereby the iterative application of FIR moving-average-based filters is exploited to enhance comb filtering selectivity. Our results indicate that the proposed approach can effectively approximate the FIR moving average filter characteristics to those of an ideal filter. A cascaded implementation of the proposed approach is shown to further increase the attenuation in the filter stopbands. Moreover, the novel method can broaden the bandwidth of the comb filtering stopbands around −3 dB according to the fundamental frequency of the stopband, an important characteristic for accounting for broadening of the harmonic gradient artefact spectral lines. In parallel, the proposed filtering implementation can also be used to design a novel notch filtering approach with enhanced selectivity.
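One plausible reading of such a moving-average comb is sketched below: averaging the signal over the last M artefact periods and subtracting that average yields notches at the harmonics of the fundamental with near-unity passband gain, and cascading the filter squares the magnitude response, deepening the stopbands. The sampling rate, fundamental, and epoch count are illustrative assumptions, and the paper's exact iteration scheme may differ.

```python
import numpy as np
from scipy.signal import freqz

fs, f0, M = 5000.0, 10.0, 10   # sampling rate, artefact fundamental, epochs averaged
N = int(round(fs / f0))        # samples per artefact period

# FIR comb: subtract the moving average of the last M artefact periods.
# Notches fall at harmonics of f0; gain stays close to unity elsewhere.
h = np.zeros(M * N + 1)
h[0] = 1.0
h[N::N] -= 1.0 / M

w, H = freqz(h, worN=2 ** 15, fs=fs)
for f in (f0, 2 * f0, 1.5 * f0):   # stopband, stopband, mid-passband
    k = np.argmin(np.abs(w - f))
    print(f"|H({f:5.1f} Hz)| = {np.abs(H[k]):.3f}")

# Cascading the filter (applying it twice) squares |H(f)|, deepening the
# stopband attenuation, in the spirit of the iterative scheme described above.
```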
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ono, Tomohiro; Miyabe, Yuki, E-mail: miyabe@kuhp.kyoto-u.ac.jp; Yamada, Masahiro
Purpose: The Vero4DRT system has the capability for dynamic tumor-tracking (DTT) stereotactic irradiation using a unique gimbaled x-ray head. The purposes of this study were to develop DTT conformal arc irradiation and to estimate its geometric and dosimetric accuracy. Methods: The gimbaled x-ray head, supported on an O-ring gantry, was moved in the pan and tilt directions during gantry rotation. To evaluate mechanical accuracy, the gimbaled x-ray head was moved during gantry rotation according to input command signals without target tracking, and a machine log analysis was performed. The difference between a commanded and a measured position was calculated as the mechanical error. To evaluate beam-positioning accuracy, a moving phantom with a steel ball fixed at its center was driven according to a sinusoidal wave (amplitude [A]: 20 mm, period [T]: 4 s), a patient breathing motion with a regular pattern (A: 16 mm, average T: 4.5 s), and an irregular pattern (A: 7.2-23.0 mm, T: 2.3-10.0 s), and was irradiated with DTT during gantry rotation. The beam-positioning error was evaluated as the difference between the centroid position of the irradiated field and the steel ball on images from an electronic portal imaging device. For dosimetric accuracy, dose distributions in static and moving targets were evaluated with DTT conformal arc irradiation. Results: The root mean squares (RMSs) of the mechanical error were up to 0.11 mm for pan motion and up to 0.14 mm for tilt motion. The RMSs of the beam-positioning error were within 0.23 mm for each pattern. The dose distribution in a moving phantom with tracking arc irradiation was in good agreement with that in static conditions. Conclusions: The gimbal positional accuracy was not degraded by gantry motion. As in the case of a fixed port, the Vero4DRT system showed adequate accuracy for DTT conformal arc irradiation.
Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro
2017-01-01
Population aging requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data, acquired from a depth sensor, of a user with a walking assistive system. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.
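A stripped-down version of the forecasting stage might look like the following, with statsmodels standing in for the ARMA fit and a simple threshold standing in for the HMM classifier; the data, model order, and threshold are all illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# toy stand-in for a tracked body-sway coordinate from the depth sensor
rng = np.random.default_rng(1)
t = np.arange(400)
sway = 0.5 * np.sin(0.1 * t) + 0.1 * rng.standard_normal(400)

# Fit an ARMA(2,1) model (ARIMA with d=0) to the observed walking data ...
res = ARIMA(sway, order=(2, 0, 1)).fit()
forecast = res.forecast(steps=10)   # ... and forecast the upcoming states

# Stand-in for the HMM stage: flag a predicted fall when the forecast
# leaves a plausible stability envelope (the threshold is illustrative).
print("fall predicted" if np.any(np.abs(forecast) > 1.0) else "stable")
```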
STREAMFLOW LOSSES IN THE SANTA CRUZ RIVER, ARIZONA.
Aldridge, B.N.
1985-01-01
The discharge and volume of flow in a peak decrease as the peak moves through an 89-mile (143 km) reach of the Santa Cruz River. An average of three peaks per year flow the length of the reach. Of 17,500 acre-ft (21,600 dam³) that entered the upstream end of the reach, 2,300 acre-ft (2,840 dam³), 13 percent of the inflow, left the reach as streamflow. The remainder was lost through infiltration. Losses in a reach of channel were estimated by relating losses to the discharge at the upstream end of the reach. Tributary inflow was estimated through the use of synthesized duration curves. Streamflow losses along mountain fronts were estimated through the use of an electric analog model and by relating losses shown by the model to the median altitude of the contributing area.
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented in which drift, the random fluctuation of the signal intensity, is compensated for based on estimation of the drift function by a moving average. Using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs, it was shown that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculation of the particle size distribution (PSD), but that the present method can restore the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models the signal distributions of dissolved signals, fails in some cases when standards and samples are affected by drift, but the present method was shown to restore accuracy here as well. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3-sigma method; these signals are treated separately and added back afterwards. The method can also correct for flicker noise, which increases when the signal intensity increases because of drift. Accuracy was improved in many cases when flicker correction was used, and when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful for extracting results from experimental runs that would otherwise have to be repeated. Graphical abstract: a spICPMS signal affected by drift is corrected by adjusting the local (moving) averages and standard deviations to their respective values at a reference time; combined with removal of particle events in calibration standards, this method recovers particle size distributions that would otherwise be unobtainable, even when the deconvolution method is used to discriminate dissolved and particle signals.
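A minimal sketch of the drift-correction idea, assuming a 3-sigma cut for particle events and a simple rescaling of the signal by its moving-average trend, is shown below; the published method also corrects the local standard deviation (flicker noise), which is omitted here.

```python
import numpy as np

def drift_correct(signal, window=501, ref_index=0):
    """Rescale a spICPMS dissolved signal so its moving average matches
    the level at a reference time (simplified sketch of the idea above)."""
    x = np.asarray(signal, dtype=float)

    # 1) mask particle events with an iterated 3-sigma rule
    mask = np.ones(len(x), dtype=bool)
    for _ in range(5):
        mu, sd = x[mask].mean(), x[mask].std()
        mask = x < mu + 3 * sd

    # 2) estimate the dissolved-background drift trend by moving average
    dissolved = np.where(mask, x, mu)        # plug particle events with the mean
    kernel = np.ones(window) / window
    trend = np.convolve(dissolved, kernel, mode="same")

    # 3) divide out the trend, re-anchored at the reference time;
    #    particle events are corrected with the same local factor
    return x * trend[ref_index] / trend
```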
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales; averaging them therefore makes no sense. The associated sums of AIC model weights recommended for assessing the relative importance of individual predictors are really a measure of the relative importance of models, carrying little information about contributions by individual predictors compared with other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations of their variables makes the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates, or equivalently the t statistics on unstandardized estimates, also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and the conservation of natural resources.
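A sketch of the partial-standard-deviation computation described above follows, using the Bring (1994) formula that Cade employs; treat the exact degrees-of-freedom convention as an assumption to verify against the original papers.

```python
import numpy as np

def partial_sds(X):
    """Partial standard deviations (Bring 1994): sd_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)),
    used to put regression estimates on commensurate scales under multicollinearity."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        resid = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        r2 = 1 - resid.var() / X[:, j].var()    # R^2 of x_j regressed on the others
        vif = 1.0 / (1.0 - r2)
        out[j] = X[:, j].std(ddof=1) * np.sqrt(1 / vif) * np.sqrt((n - 1) / (n - p))
    return out

# standardized coefficient: beta_j * partial_sd_j, comparable across models
```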
Forecasting the mortality rates of Malaysian population using Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd
2017-08-01
Actuaries, demographers, and other professionals have long been aware of the critical importance of mortality forecasting, given the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers for modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software is used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) is applied to forecast all the parameters using an autoregressive integrated moving average (ARIMA) model. The empirical data sets of the Malaysian population for the period 1981 to 2015 for both genders are considered; the period 1981 to 2010 is used as the training set and the period 2011 to 2015 as the testing set. To investigate the accuracy of the estimation, the forecast results are compared against actual mortality rates. The results show that the Heligman-Pollard model fits well for the male population at all ages, while the model seems to underestimate mortality rates for the female population at older ages.
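For reference, the Heligman-Pollard law expresses the odds of death at age x as the sum of a childhood term, an accident-hump term, and a senescence term. The sketch below encodes the standard form of the law; the parameter values in the example are purely illustrative, not the estimates obtained in this paper.

```python
import numpy as np

def heligman_pollard_qx(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard law: q_x / (1 - q_x) = A**((x+B)**C)          (childhood)
                                             + D*exp(-E*ln(x/F)**2)  (accident hump)
                                             + G*H**x                (senescence)."""
    x = np.asarray(x, dtype=float)
    ratio = A ** ((x + B) ** C) + D * np.exp(-E * np.log(x / F) ** 2) + G * H ** x
    return ratio / (1.0 + ratio)

# illustrative parameter values only, not fitted estimates from this paper
ages = np.arange(1, 91)
qx = heligman_pollard_qx(ages, A=5e-4, B=0.01, C=0.1, D=1e-3, E=10, F=20, G=5e-5, H=1.1)
```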
NASA Astrophysics Data System (ADS)
Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis
2010-05-01
Self-propelled particle models (SPPs) are a class of agent-based simulations that have been used successfully to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations in which each particle moves with constant speed but changes its orientation in response to local averages of the positions and orientations of its neighbors within some interaction region. These local averages are based on 'social interactions', which include avoidance of collisions, attraction, and polarization, and are designed to generate configurations that move as a single object. Errors made by individuals in estimating the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPPs have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and 'informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed 'preferred' direction shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (msd). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales: first, there is an anomalous transient shared by both purely naïve and informed configurations, i.e., the scaling exponent lies between 1 and 2. The long-time behavior of the msd of the centroid walk scales linearly with time for naïve groups (diffusion) but shows a sharp transition to quadratic scaling (advection) for informed ones. These observations suggest that the mesoscopic variables of interest are the magnitude of the drift, the diffusion coefficient, and the time scales at which the anomalous and the asymptotic behavior respectively dominate transport, the latter being linked to the time scale at which the group reaches a decision. In order to estimate these summary statistics from the msd, we assumed that the configuration centroid follows an uncoupled continuous time random walk (CTRW) with smooth jump and waiting time pdfs. The mesoscopic transport equation for this type of random walk corresponds to an advection-diffusion equation with memory (ADEM). The introduction of the memory, and thus of non-Markovian effects, is necessary in order to correctly account for the two time scales present. Although we were not able to calculate the memory directly from the individual-level rules, we show that it can be estimated from a single, relatively short simulation run using a Mittag-Leffler function as template. With this function it is possible to predict accurately the behavior of the msd, as well as the full pdf for the position of the centroid.
The resulting ADEM is self-consistent in the sense that transport parameters estimated from the memory via a Kubo relationship coincide with those estimated from the moments of the jump size pdf of the associated CTRW for a large number of group sizes, proportions of informed individuals, and degrees of bias along the preferred direction. We also discuss the phase diagrams for the transport coefficients estimated from this method, where we notice velocity-precision trade-offs, where precision is a measure of the deviation of realized group orientations with respect to the informed direction. We also note that the time scale to collective decision is invariant with respect to group size, and depends only on the proportion of informed individuals and the strength of the coupling along the informed direction.
Population size, survival, and movements of white-cheeked pintails in Eastern Puerto Rico
Collazo, J.A.; Bonilla-Martinez, G.
2001-01-01
We estimated numbers and survival of White-cheeked Pintails (Anas bahamensis) in eastern Puerto Rico during 1996-1999. We also quantified their movements between Culebra Island and the Humacao Wildlife Refuge, Puerto Rico. Mark-resight population size estimates averaged 1020 pintails during nine 3-month sampling periods from January 1997 to June 1999. On average, minimum regional counts were 38% lower than mark-resight estimates (mean = 631). Adult survival was 0.51 ± 0.09 (SE). This estimate is similar to those for other anatids of similar size but broader geographic distribution. The probability of pintails surviving and staying in Humacao was higher (67%) than for counterparts on Culebra (31%). The probability of surviving and moving from Culebra to Humacao (41%) was higher than from Humacao to Culebra (20%). These findings, and available information on reproduction, indicate that the Humacao Wildlife Refuge has an important role in the regional demography of pintails. Our findings on population numbers and regional survival are encouraging, given concerns about the species' status due to habitat loss and hunting. However, our outlook for the species is tempered by the remaining gaps in our knowledge of pintail population dynamics; for example, survival estimates of broods and fledglings (age 0-1) are needed for a comprehensive status assessment. Until additional data are obtained, White-cheeked Pintails should continue to be protected from hunting in Puerto Rico.
Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling
2014-01-01
Background: Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or 'demand' for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods: A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings: Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion: A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems.
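A compressed, time-stepped analogue of such a simulation is sketched below: Poisson admissions with a sinusoidal seasonal modulation, a fixed treatment duration, and a fixed flow rate per patient. All parameter values are illustrative; the paper varies each factor over wide ranges.

```python
import numpy as np

rng = np.random.default_rng(42)

# illustrative parameters (the paper explores wide ranges for each)
annual_admissions = 1000      # pneumonia admissions per year
hypox_prev = 0.15             # fraction of admissions needing oxygen
seasonality = 0.5             # relative amplitude of the seasonal cycle
los_days, flow_lpm = 4, 1.0   # treatment duration and oxygen flow rate

days = np.arange(365)
daily_rate = (annual_admissions / 365) * hypox_prev * \
             (1 + seasonality * np.sin(2 * np.pi * days / 365))

load = np.zeros(365)                       # concurrent patients on oxygen
for d in days:
    for _ in range(rng.poisson(daily_rate[d])):
        load[d:d + los_days] += 1          # each patient occupies los_days

demand_lpm = load * flow_lpm
print(f"mean demand {demand_lpm.mean():.1f} L/min, peak {demand_lpm.max():.0f} L/min, "
      f"time above mean: {100 * np.mean(demand_lpm > demand_lpm.mean()):.0f}%")
```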
Prinos, Scott T.
2017-07-11
The inland extent of saltwater at the base of the Biscayne aquifer in the Model Land Area of Miami-Dade County, Florida, was mapped in 2011. Since that time, the saltwater interface has continued to move inland. The interface is near several active well fields; therefore, an updated approximation of the inland extent of saltwater and an improved understanding of the rate of movement of the saltwater interface are necessary. A geographic information system was used to create a map using the data collected by the organizations that monitor water salinity in this area. An average rate of saltwater interface movement of 140 meters per year was estimated by dividing the distance between two monitoring wells (TPGW-7L and Sec34-MW-02-FS) by the travel time. The travel time was determined by estimating the dates of arrival of the saltwater interface at the wells and computing the difference. This estimate assumes that the interface is traveling east to west between the two monitoring wells. Although monitoring is spatially limited in this area and some of the wells are not ideally designed for salinity monitoring, the monitoring network in this area is improving in spatial distribution and most of the new wells are well designed for salinity monitoring. The approximation of the inland extent of the saltwater interface and the estimated rate of movement of the interface are dependent on existing data. Improved estimates could be obtained by installing uniformly designed monitoring wells in systematic transects extending landward of the advancing saltwater interface.
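The rate calculation itself is elementary, as the sketch below shows. The arrival dates and the well separation used here are hypothetical placeholders chosen only to reproduce the reported order of magnitude, since the actual values are not given in this summary.

```python
from datetime import date

# hypothetical arrival dates and separation, chosen only to illustrate the calculation
arrival_at_TPGW_7L  = date(2011, 6, 1)   # interface reaches monitoring well TPGW-7L
arrival_at_Sec34_MW = date(2014, 6, 1)   # interface reaches well Sec34-MW-02-FS
distance_m = 420.0                       # assumed separation along the flow path

years = (arrival_at_Sec34_MW - arrival_at_TPGW_7L).days / 365.25
print(f"interface velocity ~ {distance_m / years:.0f} m/yr")   # ~140 m/yr
```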
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, W; Jiang, M; Yin, F
Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ can change considerably as the respiratory pattern varies over time. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiratory position alternately, and the training sample is updated over time. First, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Second, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120-150 s respiratory positions using the 0-120 s training signals. At the same time, MLP 2 was trained using the 30-150 s training signals and then used to predict the 150-180 s positions. The respiratory position was predicted in this alternating way until the signal was finished. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). In addition, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098, and the mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results demonstrate that the ASMLP respiratory prediction method is more accurate than the MLP method and can improve respiration forecast accuracy.
The Economic Impact of Malignant Catarrhal Fever on Pastoralist Livelihoods
Lankester, Felix; Lugelo, Ahmed; Kazwala, Rudovick; Keyyu, Julius; Cleaveland, Sarah; Yoder, Jonathan
2015-01-01
This study is the first to partially quantify the potential economic benefits that a vaccine, effective at protecting cattle against malignant catarrhal fever (MCF), could accrue to pastoralists living in East Africa. The benefits would result from the removal of household resource and management costs that are traditionally incurred avoiding the disease. MCF, a fatal disease of cattle caused by a virus transmitted from wildebeest calves, has plagued Maasai communities in East Africa for generations. The threat of the disease forces the Maasai to move cattle to less productive grazing areas to avoid wildebeest during the calving season, when forage quality is critical. To assess the management and resource costs associated with moving, we used household survey data. To estimate the costs associated with changes in livestock body condition that result from being herded away from wildebeest calving grounds, we exploited an ongoing MCF vaccine field trial and used a hedonic price regression, a statistical model that allows estimation of the marginal contribution of a good's attributes to its market price. We found that 90 percent of households move, on average, 82 percent of all cattle away from home to avoid MCF. In doing so, a herd's productive contributions to the household were reduced, with 64 percent of milk being unavailable for sale or consumption by the family members remaining at the boma (the children, women, and the elderly). In contrast, cattle that remained on the wildebeest calving grounds during the calving season (and survived MCF) remained fully productive to the family and gained body condition compared with cattle that moved away. This gain was, however, short-lived. We estimated the market value of these condition gains and losses using hedonic regression. The value of a vaccine for MCF is the removal of the costs incurred in avoiding the disease.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
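Under the squared-Euclidean cost, the estimation reduces to a small linear program over candidate transport matrices. The sketch below uses scipy's linprog with made-up site counts and coordinates; row sums are constrained to the time-1 counts and column sums to the time-2 counts, and the minimizing plan is row-normalized into transition probabilities.

```python
import numpy as np
from scipy.optimize import linprog

# counts at each site at two survey times (totals must match)
a = np.array([30., 50., 20.])                    # time 1
b = np.array([25., 40., 35.])                    # time 2
xy = np.array([[0., 0.], [1., 0.], [0., 2.]])    # site coordinates

# squared Euclidean distance as the transport cost
C = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)

n, m = len(a), len(b)
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1               # row sums = counts at time 1
for j in range(m):
    A_eq[n + j, j::m] = 1                        # column sums = counts at time 2
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None), method="highs")

T = res.x.reshape(n, m)                          # optimal transport plan
P = T / T.sum(axis=1, keepdims=True)             # transition probabilities
print(np.round(P, 3))
```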
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2016-01-01
A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the strain time histories using a parameter estimation technique together with an autoregressive moving average model. From the deflection, slope, and frequencies of the structure, its acceleration and velocity can then be obtained using the proposed approach.
Unsteady Aerodynamic Force Sensing from Strain Data
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2017-01-01
A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using the autoregressive moving average model, on-line parameter estimator, low-pass filter, and a least-squares curve fitting method together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
Finite-Dimensional Representations for Controlled Diffusions with Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federico, Salvatore, E-mail: salvatore.federico@unimi.it; Tankov, Peter, E-mail: tankov@math.univ-paris-diderot.fr
2015-02-15
We study stochastic delay differential equations (SDDE) where the coefficients depend on the moving averages of the state process. As a first contribution, we provide sufficient conditions under which the solution of the SDDE and a linear path functional of it admit a finite-dimensional Markovian representation. As a second contribution, we show how approximate finite-dimensional Markovian representations may be constructed when these conditions are not satisfied, and provide an estimate of the error corresponding to these approximations. These results are applied to optimal control and optimal stopping problems for stochastic systems with delay.
Hasselback, Leah; Crawford, Jessica; Chaluco, Timoteo; Rajagopal, Sharanya; Prosser, Wendy; Watson, Noel
2014-08-02
Malaria rapid diagnostic tests (RDTs) are particularly useful in low-resource settings where follow-through on traditional laboratory diagnosis is challenging or lacking. The availability of these tests depends on supply chain processes within the distribution system. In Mozambique, stock-outs of malaria RDTs are fairly common at health facilities. A longitudinal cross-sectional study was conducted to evaluate drivers of stock shortages in the Cabo Delgado province. Data were collected from purposively sampled health facilities, using monthly cross-sectional surveys between October 2011 and May 2012. Estimates of lost consumption (consumption not met due to stock-outs) served as the primary quantitative indicator of stock shortages. This is a better measure of the magnitude of stock-outs than binary indicators that only measure the frequency of stock-outs at a given facility. Using a case-study-based methodology, distribution system characteristics were qualitatively analysed to examine causes of stock-outs at the provincial, district, and health centre levels. Fifteen health facilities were surveyed over 120 time points. Stock-out patterns varied by data source; average monthly proportions of 59%, 17%, and 17% of health centres reported a stock-out on stock cards, laboratory forms, and pharmacy forms, respectively. Estimates of lost consumption percentage were significantly high, ranging from 0% to 149%, with a weighted average of 78%. Each ten-unit increase in monthly observed consumption was associated with a nine-unit increase in lost consumption percentage, indicating that higher rates of stock-outs occurred at higher levels of observed consumption. Causes of stock-outs included inaccurate tracking of lost consumption, insufficient sophistication in inventory management and replenishment, and poor process compliance by facility workers, all arguably stemming from inadequate attention to the design and implementation of the distribution system. Substantially high levels of RDT stock-outs were found in Cabo Delgado. Study findings point to a supply chain with a commendable degree of sophistication; however, insufficient attention paid to system design and implementation resulted in deteriorating performance in areas of increased need. In such settings, fast-moving commodities like malaria RDTs can call attention to supply chain vulnerabilities, and the findings can be used to address other, slower-moving health commodities.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is, however, defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Markov chain Monte Carlo methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coruh, M; Ewell, L; Demez, N
Purpose: To estimate the dose delivered to a moving lung tumor by proton therapy beams with different modulation types, and to compare with Monte Carlo predictions. Methods: A Radiology Support Devices (RSD) phantom was irradiated with therapeutic proton beams using two different types of modulation: uniform scanning (US) and double scattered (DS). The Eclipse© dose plan was designed to deliver 1.00 Gy to the isocenter of a static ∼3×3×3 cm (27 cc) tumor in the phantom with 100% coverage. The peak-to-peak amplitude of tumor motion varied from 0.0 to 2.5 cm. The radiation dose was measured with an ion chamber (CC-13) located within the tumor. The time required to deliver the radiation dose varied from an average of 65 s for the DS beams to an average of 95 s for the US beams. Results: The delivered dose varied from 100% (both US and DS) for the static tumor down to approximately 92% for the moving tumor. The ratio of US dose to DS dose ranged from approximately 1.01 for the static tumor down to 0.99 for the tumor moving with 2.5 cm amplitude. A Monte Carlo simulation using TOPAS included a lung tumor with 4.0 cm of peak-to-peak motion; in this simulation, the dose received by the tumor varied by ∼40% as the period of the motion varied from 1 s to 4 s. Conclusion: The radiation dose deposited in a moving tumor was less than in a static tumor, as expected. At large (2.5 cm) amplitudes, the DS proton beams gave a dose closer to the desired dose than the US beams, though the two were equal within experimental uncertainty. TOPAS Monte Carlo simulation can give insight into the relationship between tumor motion and dose. This work was supported in part by the Philips Corporation.
3-D in vitro estimation of temperature using the change in backscattered ultrasonic energy.
Arthur, R Martin; Basu, Debomita; Guo, Yuzheng; Trobaugh, Jason W; Moros, Eduardo G
2010-08-01
Temperature imaging with a non-invasive modality to monitor the heating of tumors during hyperthermia treatment is an attractive alternative to sparse invasive measurement. Previously, we predicted monotonic changes in backscattered energy (CBE) of ultrasound with temperature for certain sub-wavelength scatterers. We also measured CBE values similar to our predictions in bovine liver, turkey breast muscle, and pork rib muscle in 2-D in vitro studies and in nude mice during 2-D in vivo studies. To extend these studies to three dimensions, we compensated for motion and measured CBE in turkey breast muscle. 3-D data sets were assembled from images formed by a phased-array imager with a 7.5-MHz linear probe moved in 0.6-mm steps in elevation during uniform heating from 37 to 45 degrees C in 0.5 degrees C increments. We used cross-correlation as a similarity measure in RF signals to automatically track feature displacement as a function of temperature. Feature displacement was non-rigid. Envelopes of image regions, compensated for non-rigid motion, were found with the Hilbert transform and then smoothed with a 3 × 3 running average filter before forming the backscattered energy at each pixel. CBE in 3-D motion-compensated images was nearly linear with an average sensitivity of 0.30 dB/degree C. 3-D estimation of temperature in separate tissue regions had errors with a maximum standard deviation of about 0.5 degrees C over 1-cm³ volumes. Success of CBE temperature estimation based on 3-D non-rigid tracking and compensation for real and apparent motion of image features could serve as the foundation for the eventual generation of 3-D temperature maps in soft tissue in a non-invasive, convenient, and low-cost way in clinical hyperthermia.
Li, Guang; Wei, Jie; Huang, Hailiang; Gaebler, Carl Philipp; Yuan, Amy; Deasy, Joseph O
2015-12-01
Purpose: To automatically estimate the average diaphragm motion trajectory (ADMT) based on four-dimensional computed tomography (4DCT), facilitating clinical assessment of respiratory motion and motion variation and retrospective motion studies. Methods: We developed an effective motion extraction approach and a machine-learning-based algorithm to estimate the ADMT. Eleven patients with 22 sets of 4DCT images (4DCT1 at simulation and 4DCT2 at treatment) were studied. After automatically segmenting the lungs, the differential volume-per-slice (dVPS) curves of the left and right lungs were calculated as a function of slice number for each phase with respect to full exhalation. After a 5-slice moving average was performed, the discrete cosine transform (DCT) was applied to analyze the dVPS curves in the frequency domain. The dimensionality of the spectral data was reduced by retaining the several lowest frequency coefficients f_v that account for most of the spectral energy (sum of f_v^2). Multiple linear regression (MLR) was then applied to determine the weights of these frequencies by fitting the ground truth, the measured ADMT, represented by three pivot points of the diaphragm on each side. The leave-one-out cross-validation method was employed to analyze the statistical performance of the prediction in three image sets: 4DCT1, 4DCT2, and 4DCT1 + 4DCT2. Results: The seven lowest frequencies in the DCT domain were found to be sufficient to approximate the patient dVPS curves (R = 91%-96% in the MLR fitting). The mean error in the predicted ADMT using the leave-one-out method was 0.3 ± 1.9 mm for the left-side diaphragm and 0.0 ± 1.4 mm for the right-side diaphragm. The prediction error was lower in 4DCT2 than in 4DCT1, and lowest in 4DCT1 and 4DCT2 combined. Conclusion: This frequency-analysis-based machine learning technique predicts the ADMT automatically with an acceptable error (0.2 ± 1.6 mm). This volumetric approach is not affected by the presence of lung tumors, providing a robust automatic tool to evaluate diaphragm motion.
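The feature pipeline (dVPS curve, 5-slice moving average, truncated DCT, then MLR) can be outlined in a few lines. The sketch below uses synthetic stand-ins for the lung volumes and the measured ADMT ground truth, so the shapes and values are assumptions for illustration only.

```python
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import LinearRegression

def dvps_features(lung_volume_per_slice, n_coeff=7):
    """5-slice moving average of a dVPS curve, then its lowest DCT coefficients."""
    dvps = np.diff(lung_volume_per_slice)        # differential volume per slice
    kernel = np.ones(5) / 5
    smooth = np.convolve(dvps, kernel, mode="same")
    return dct(smooth, norm="ortho")[:n_coeff]   # keep most of the spectral energy

# X: one feature row per 4DCT phase/patient; y: measured diaphragm pivot position (mm)
# (synthetic stand-ins; the paper fits measured ADMT ground truth)
rng = np.random.default_rng(0)
X = np.array([dvps_features(np.cumsum(rng.random(60))) for _ in range(22)])
y = rng.normal(0, 10, 22)
model = LinearRegression().fit(X, y)             # the MLR stage
print("fitted R^2 on training data:", model.score(X, y))
```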
How social influence can undermine the wisdom of crowd effect.
Lorenz, Jan; Rauhut, Heiko; Schweitzer, Frank; Helbing, Dirk
2011-05-31
Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others' responses was provided. Although groups are initially "wise," knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The "social influence effect" diminishes the diversity of the crowd without improvements of its collective error. The "range reduction effect" moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The "confidence effect" boosts individuals' confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.
Variation in leader length of bitterbrush
Richard L. Hubbard; David. Dunaway
1958-01-01
The estimation of herbage production and utilization in browse plants has been a problem for many years. Most range technicians have simply estimated the average length of twigs or leaders, then expressed use by deer and livestock as a percentage thereof, based on the estimated average length left after grazing. Riordan used this method on mountain mahogany (
Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang
2018-03-27
Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential vehicle movement events based on a small Micro-Electro-Mechanical System (MEMS) microphone array. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. Field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
Kumar, M Kishore; Sreekanth, V; Salmon, Maëlle; Tonne, Cathryn; Marshall, Julian D
2018-08-01
This study uses spatiotemporal patterns in ambient concentrations to infer the contribution of regional versus local sources. We collected 12 months of monitoring data for outdoor fine particulate matter (PM2.5) in rural southern India. Rural India includes more than one-tenth of the global population and annually accounts for around half a million air pollution deaths, yet little is known about the relative contribution of local sources to outdoor air pollution. We measured 1-min averaged outdoor PM2.5 concentrations during June 2015-May 2016 in three villages, which varied in population size, socioeconomic status, and type and usage of domestic fuel. The daily geometric-mean PM2.5 concentration was ∼30 μg m⁻³ (geometric standard deviation: ∼1.5). Concentrations exceeded the Indian National Ambient Air Quality standard (60 μg m⁻³) during 2-5% of observation days. Average concentrations were ∼25 μg m⁻³ higher during winter than during the monsoon and ∼8 μg m⁻³ higher during morning hours than the diurnal average. A moving average subtraction method based on the 1-min average PM2.5 concentrations indicated that local contributions (e.g., nearby biomass combustion, brick kilns) were greater in the most populated village, and that overall the majority of ambient PM2.5 in our study was regional, implying that local air pollution control strategies alone may have limited influence on local ambient concentrations. We compared the relatively new moving average subtraction method against a more established approach; both methods broadly agree on the relative contribution of local sources across the three sites. The moving average subtraction method has broad applicability across locations.
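A minimal sketch of a moving-average subtraction split is given below: the long-window moving average is taken as the regional background, and fast positive excursions above it are attributed to local sources. The window length and the clipping rule are assumptions; published implementations differ in such details.

```python
import pandas as pd

def local_regional_split(pm25_1min, window_min=480):
    """Moving-average subtraction: the slowly varying moving average approximates
    the regional background; fast positive excursions are attributed to local sources."""
    s = pd.Series(pm25_1min)
    regional = s.rolling(window_min, center=True, min_periods=1).mean()
    local = (s - regional).clip(lower=0)   # local spikes sit above the baseline
    return regional, local

# share of measured PM2.5 attributed to local sources:
# regional, local = local_regional_split(series)
# local_fraction = local.sum() / (regional + local).sum()
```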
NASA Astrophysics Data System (ADS)
Arsyad, Muhammad; Ihsan, Nasrul; Tiwow, Vistarani Arini
2016-02-01
The Maros karst region, covering an area of 43,750 hectares, has water resources that sustain the life around it. Water resources in the Maros karst are found in rock layers or in underground rivers in caves. The data used in this study are primary and secondary. Primary data include characteristics of the medium. Secondary data comprise rainfall data from BMKG, water discharge data for 1990-2010 from the PSDA, South Sulawesi province, and other characteristics of the Maros karst, namely the caves, flora, and fauna of the Bantimurung Bulusaraung National Park. Data analysis was conducted using laboratory tests for the medium characteristics of the Maros karst, while rainfall and water discharge were analyzed using the Minitab 1.5 program to determine their profiles. Average rainfall above 200 mm per year occurred in the range 1999 to 2005. Water discharge above 50 m³/s occurred in 1993 and 1995. Prediction was done with an autoregressive integrated moving average (ARIMA) model; the rainfall modeling shows that the average precipitation for the four years 2011-2014 will fluctuate sharply. The prediction of water discharge in the Maros karst region was done for the period January to August 2011 (model type 0).
Photo-z-SQL: Photometric redshift estimation framework
NASA Astrophysics Data System (ADS)
Beck, Róbert; Dobos, László; Budavári, Tamás; Szalay, Alexander S.; Csabai, István
2017-04-01
Photo-z-SQL is a flexible template-based photometric redshift estimation framework that can be seamlessly integrated into a SQL database (or DB) server and executed on demand in SQL. The DB integration eliminates the need to move large photometric datasets outside a database for redshift estimation, and uses the computational capabilities of DB hardware. Photo-z-SQL performs both maximum likelihood and Bayesian estimation and handles inputs of variable photometric filter sets and corresponding broad-band magnitudes.
Nelms, David L.; Messinger, Terence; McCoy, Kurt J.
2015-07-14
As part of the U.S. Geological Survey’s Groundwater Resources Program study of the Appalachian Plateaus aquifers, annual and average estimates of water-budget components, based on hydrograph separation and precipitation data from the parameter-elevation regressions on independent slopes model (PRISM), were determined at 849 continuous-record streamflow-gaging stations from Mississippi to New York, covering the period 1900 to 2011. Only complete calendar years (January to December) of streamflow record at each gage were used to determine estimates of base flow, which is that part of streamflow attributed to groundwater discharge; such estimates can serve as a proxy for annual recharge. For each year, estimates of annual base flow, runoff, and base-flow index were determined using computer programs—PART, HYSEP, and BFI—that have automated the separation procedures. These streamflow-hydrograph analysis methods are provided with version 1.0 of the U.S. Geological Survey Groundwater Toolbox, which is a new program that provides graphing, mapping, and analysis capabilities in a Windows environment. Annual values of precipitation were estimated by calculating the average of cell values intercepted by basin boundaries as previously defined in the GAGES–II dataset. Estimates of annual evapotranspiration were then calculated from the difference between precipitation and streamflow.
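PART, HYSEP, and BFI each implement specific published separation rules; as a generic stand-in, a sliding-minimum filter conveys the flavor of hydrograph separation. The sketch below is not a reimplementation of those programs, and the 5-day half-window and synthetic daily flows are assumptions.

```python
# Generic sliding-minimum baseflow separation (illustrative stand-in only).
import numpy as np

def baseflow_sliding_min(q, half_window=5):
    """Lower envelope of daily streamflow q as a crude baseflow estimate."""
    n = len(q)
    bf = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        bf[i] = q[lo:hi].min()
    return np.minimum(bf, q)

rng = np.random.default_rng(3)
q = 10 + np.cumsum(rng.normal(0, 1, 365)).clip(-5, 50) + rng.exponential(3, 365)
bf = baseflow_sliding_min(q)
print(f"base-flow index: {bf.sum() / q.sum():.2f}")  # proxy for recharge share
```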
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achord, Stephen; Sandford, Benjamin P.; Hockersmith, Eric E.
2009-05-26
This report provides results from an ongoing project to monitor the migration behavior and survival of wild juvenile spring/summer Chinook salmon in the Snake River Basin. Data reported are from detections of PIT-tagged fish during late summer 2007 through mid-2008. Fish were tagged in summer 2007 by the National Marine Fisheries Service (NMFS) in Idaho and by the Oregon Department of Fish and Wildlife (ODFW) in Oregon. Our analyses include migration behavior and estimated survival of fish at instream PIT-tag monitors and arrival timing and estimated survival to Lower Granite Dam. Principal results from tagging and interrogation during 2007-2008 are listed below: (1) In July and August 2007, we PIT tagged and released 7,390 wild Chinook salmon parr in 12 Idaho streams or sample areas. (2) Overall observed mortality from collection, handling, tagging, and after a 24-hour holding period was 1.4%. (3) Of the 2,524 Chinook salmon parr PIT tagged and released in Valley Creek in summer 2007, 218 (8.6%) were detected at two instream PIT-tag monitoring systems in lower Valley Creek from late summer 2007 to the following spring 2008. Of these, 71.6% were detected in late summer/fall, 11.9% in winter, and 16.5% in spring. Estimated parr-to-smolt survival to Lower Granite Dam was 15.5% for the late summer/fall group, 48.0% for the winter group, and 58.5% for the spring group. Based on detections at downstream dams, the overall efficiency of the VC1 (upper) or VC2 (lower) Valley Creek monitors for detecting these fish was 21.1%. Using this VC1 or VC2 efficiency, an estimated 40.8% of all summer-tagged parr survived to move out of Valley Creek, and their estimated survival from that point to Lower Granite Dam was 26.5%. Overall estimated parr-to-smolt survival for all summer-tagged parr from this stream at the dam was 12.1%. Development and improvement of instream PIT-tag monitoring systems continued throughout 2007 and 2008. (4) Testing of PIT-tag antennas in lower Big Creek during 2007-2008 showed these antennas (and anchoring method) are not adequate to withstand high spring flows in this drainage. Future plans involve removing these antennas before high spring flows. (5) At Little Goose Dam in 2008, length and/or weight were taken on 505 recaptured fish from 12 Idaho stream populations. Fish had grown an average of 40.1 mm in length and 10.6 g in weight over an average of 288 d. Their mean condition factor declined from 1.25 at release (parr) to 1.05 at recapture (smolt). (6) Mean release lengths for detected fish were significantly larger than for fish not detected the following spring and summer (P < 0.0001). (7) Fish that migrated through Lower Granite Dam in April and May were significantly larger at release than fish that migrated after May (P < 0.0001) (only 12 fish migrated after May). (8) In 2008, peak detections at Lower Granite Dam of parr tagged during summer 2007 (from the 12 stream populations in Idaho and 4 streams in Oregon) occurred during moderate flows of 87.5 kcfs on 7 May and high flows of 197.3 kcfs on 20 May. The 10th, 50th, and 90th percentile passage occurred on 30 April, 11 May, and 23 May, respectively. (9) In 2007-2008, estimated parr-to-smolt survival to Lower Granite Dam for Idaho and Oregon streams (combined) averaged 19.4% (range 6.2-38.4% depending on stream of origin). In Idaho streams the estimated parr-to-smolt survival averaged 21.0%. This survival was the second highest since 1993 for Idaho streams.
Relative parr densities were lower in 2007 (2.4 parr/100 m²) than in all previous years since 2000. In 2008, we observed low-to-moderate flows prior to mid-May and relatively cold weather conditions throughout the spring migration season. These conditions moved half of the fish through Lower Granite Dam prior to mid-May; then high flows moved 50 to 90% of the fish through the dam in only 12 days. Clearly, complex interrelationships of several factors drive the annual migrational timing of the stocks.
Tempo Rubato: Animacy Speeds Up Time in the Brain
Carrozzo, Mauro; Moscatelli, Alessandro; Lacquaniti, Francesco
2010-01-01
Background How do we estimate time when watching an action? The idea that events are timed by a centralized clock has recently been called into question in favour of distributed, specialized mechanisms. Here we provide evidence for a critical specialization: animate and inanimate events are separately timed by humans. Methodology/Principal Findings In different experiments, observers were asked to intercept a moving target or to discriminate the duration of a stationary flash while viewing different scenes. Time estimates were systematically shorter in the sessions involving human characters moving in the scene than in those involving inanimate moving characters. Remarkably, the animate/inanimate context also affected randomly intermingled trials which always depicted the same still character. Conclusions/Significance The existence of distinct time bases for animate and inanimate events might be related to the partial segregation of the neural networks processing these two categories of objects, and could enhance our ability to predict critically timed actions. PMID:21206749
Rhodes, G; Yoshikawa, S; Clark, A; Lee, K; McKay, R; Akamatsu, S
2001-01-01
Averageness and symmetry are attractive in Western faces and are good candidates for biologically based standards of beauty. A hallmark of such standards is that they are shared across cultures. We examined whether facial averageness and symmetry are attractive in non-Western cultures. Increasing the averageness of individual faces, by warping those faces towards an averaged composite of the same race and sex, increased the attractiveness of both Chinese (experiment 1) and Japanese (experiment 2) faces, for Chinese and Japanese participants, respectively. Decreasing averageness by moving the faces away from an average shape decreased attractiveness. We also manipulated the symmetry of Japanese faces by blending each original face with its mirror image to create perfectly symmetric versions. Japanese raters preferred the perfectly symmetric versions to the original faces (experiment 2). These findings show that preferences for facial averageness and symmetry are not restricted to Western cultures, consistent with the view that they are biologically based. Interestingly, it made little difference whether averageness was manipulated by using own-race or other-race averaged composites and there was no preference for own-race averaged composites over other-race or mixed-race composites (experiment 1). We discuss the implications of these results for understanding what makes average faces attractive. We also discuss some limitations of our studies, and consider other lines of converging evidence that may help determine whether preferences for average and symmetric faces are biologically based.
The Mathematical Analysis of Style: A Correlation-Based Approach.
ERIC Educational Resources Information Center
Oppenheim, Rosa
1988-01-01
Examines mathematical models of style analysis, focusing on the pattern in which literary characteristics occur. Describes an autoregressive integrated moving average model (ARIMA) for predicting sentence length in different works by the same author and comparable works by different authors. This technique is valuable in characterizing stylistic…
An ensemble forecast of the South China Sea monsoon
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.
1999-05-01
This paper presents a generalized ensemble forecast procedure for the tropical latitudes. Here we propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and the temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. This South China Sea domain includes the passage of Tropical Storm Gary, which moved eastward to the north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at a resolution of triangular truncation 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and the wind fields. The ensemble average is shown to have a higher skill compared to a control experiment, which was based on a first analysis of operational data sets, over both the global tropical and South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates. The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.
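An EOF-based ensemble definition of this kind can be sketched as follows: compute the leading EOFs of past anomalies and add and subtract scaled versions of them to the analysis state. The field size, the three-EOF truncation, and the 0.5 scaling below are assumptions; only the seven-member count follows the abstract.

```python
# Sketch of an EOF-based ensemble definition via SVD of past anomalies.
import numpy as np

rng = np.random.default_rng(4)
history = rng.normal(size=(60, 500))             # 60 past states, 500 grid points
anom = history - history.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:3]                                    # three leading EOFs

analysis = history[-1]
scale = 0.5 * s[:3] / np.sqrt(len(history) - 1)  # assumed perturbation amplitude
members = [analysis] + [analysis + sign * scale[k] * eofs[k]
                        for k in range(3) for sign in (+1, -1)]
print(len(members))                              # 7-member ensemble
```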
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice
2017-01-01
Objective The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX, using four years of historical influenza-like illness (ILI) data from three surveillance data capture mechanisms. Background Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of increases and decreases in influenza activity. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods Data were obtained from three surveillance systems, Flu Near You, ILINet, and hospital emergency center (EC) visits, with diverse data capture mechanisms. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in a lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the smallest differences. ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion Though the three forecasted estimates did not agree on the trend directions, and thus were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretation may be helpful for short-term intervention efforts. Further work is needed to improve forecast accuracy, considering the promise forecasting holds for seasonal influenza prevention and control and pandemic preparedness.
ERIC Educational Resources Information Center
Epstein, Diana; Miller, Raegen T.
2011-01-01
In August 2010 the "Los Angeles Times" published a special report on their website featuring performance ratings for nearly 6,000 Los Angeles Unified School District teachers. The move was controversial because the ratings were based on so-called value-added estimates of teachers' contributions to student learning. As with most…
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data-recording practices and missing data hamper the coding of data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank correlation trend analysis, and moving averages.
Tallon, Lindsay A; Manjourides, Justin; Pun, Vivian C; Mittleman, Murray A; Kioumourtzoglou, Marianthi-Anna; Coull, Brent; Suh, Helen
2017-02-17
Little is known about the association between air pollution and erectile dysfunction (ED), a disorder occurring in 64% of men over the age of 70, and to date no studies of this association have been published. To address this significant knowledge gap, we explored the relationship between ED and air pollution in a group of older men who were part of the National Social Life, Health, and Aging Project (NSHAP), a nationally representative cohort study of older Americans. We obtained incident ED status and participant data for 412 men (age 57-85). Fine particulate matter (PM 2.5) exposures were estimated using spatio-temporal models based on participants' geocoded addresses, while nitrogen dioxide (NO 2) and ozone (O 3) concentrations were estimated using the nearest measurements from the Environmental Protection Agency's Air Quality System. The association between air pollution and incident ED (newly developed in Wave 2) was examined using logistic regression models, with adjusted models controlling for race, education, season, smoking, obesity, diabetes, depression, and median household income of the census tract. We found positive, although statistically insignificant, associations between PM 2.5, NO 2, and O 3 exposures and odds of incident ED for each of our examined exposure windows, including 1- to 7-year moving averages. Odds ratios (OR) for the 1- and 7-year moving averages equaled 1.16 (95% CI: 0.87, 1.55) and 1.16 (95% CI: 0.92, 1.46), respectively, for an IQR increase in PM 2.5 exposures. Observed associations were robust to model specifications and were not significantly modified by any of the examined risk factors for ED. We found associations between PM 2.5, NO 2, and O 3 exposures and odds of developing ED that did not reach nominal statistical significance, although exposures to each pollutant were consistently associated with higher odds of developing ED. While more research is needed, our findings suggest a relationship between air pollutant exposure and incident cases of ED, a common condition in older men.
Variations in magma supply rate at Kilauea Volcano, Hawaii
Dvorak, John J.; Dzurisin, Daniel
1993-01-01
When an eruption of Kilauea lasts more than 4 months, so that a well-defined conduit has time to develop, magma moves freely through the volcano from a deep source to the eruptive site at a constant rate of 0.09 km3/yr. At other times, the magma supply rate to Kilauea, estimated from geodetic measurements of surface displacements, may be different. For example, after a large withdrawal of magma from the summit reservoir, such as during a rift zone eruption, the magma supply rate is high initially but then lessens and exponentially decays as the reservoir refills. Different episodes of refilling may have different average rates of magma supply. During four year-long episodes in the 1960s, the annual rate of refilling varied from 0.02 to 0.18 km3/yr, bracketing the sustained eruptive rate of 0.09 km3/yr. For decade-long or longer periods, our estimate of magma supply rate is based on long-term changes in eruptive rate. We use eruptive rate because after a few dozen eruptions the volume of magma that passes through the summit reservoir is much larger than the net change of volume of magma stored within Kilauea. The low eruptive rate of 0.009 km3/yr between 1840 and 1950, compared to an average eruptive rate of 0.05 km3/yr since 1950, suggests that the magma supply rate was lower between 1840 and 1950 than it has been since 1950. An obvious difference in activity before and since 1950 was the frequency of rift zone eruptions: eight rift zone eruptions occurred between 1840 and 1950, but more than 20 rift zone eruptions have occurred since 1950. The frequency of rift zone eruptions influences magma supply rate by suddenly lowering pressure of the summit magma reservoir, which feeds magma to rift zone eruptions. A temporary drop of reservoir pressure means a larger-than-normal pressure difference between the reservoir and a deeper source, so magma is forced to move upward into Kilauea at a faster rate.
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
Korenromp, Eline L; Mahiané, Guy; Rowley, Jane; Nagelkerke, Nico; Abu-Raddad, Laith; Ndowa, Francis; El-Kettani, Amina; El-Rhilani, Houssine; Mayaud, Philippe; Chico, R Matthew; Pretorius, Carel; Hecht, Kendall; Wi, Teodora
2017-01-01
Objective To develop a tool for estimating national trends in adult prevalence of sexually transmitted infections for low- and middle-income countries, using standardised, routinely collected programme indicator data. Methods The Spectrum-STI model fits time trends in the prevalence of active syphilis through logistic regression on prevalence data from antenatal clinic-based surveys, routine antenatal screening and general population surveys where available, weighting data by their national coverage and representativeness. Gonorrhoea prevalence was fitted as a moving average on population surveys (from the country, neighbouring countries and historic regional estimates), with trends informed additionally by urethral discharge case reports, where these were considered to have reasonably stable completeness. Prevalence data were adjusted for diagnostic test performance, high-risk populations not sampled, urban/rural and male/female prevalence ratios, using WHO's assumptions from the latest global and regional-level estimations. Uncertainty intervals were obtained by bootstrap resampling. Results Estimated syphilis prevalence (in men and women) declined from 1.9% (95% CI 1.1% to 3.4%) in 2000 to 1.5% (1.3% to 1.8%) in 2016 in Zimbabwe, and from 1.5% (0.76% to 1.9%) to 0.55% (0.30% to 0.93%) in Morocco. At these time points, gonorrhoea estimates for women aged 15–49 years were 2.5% (95% CI 1.1% to 4.6%) and 3.8% (1.8% to 6.7%) in Zimbabwe; and 0.6% (0.3% to 1.1%) and 0.36% (0.1% to 1.0%) in Morocco, with male gonorrhoea prevalence 14% lower than female prevalence. Conclusions This epidemiological framework facilitates data review, validation and strategic analysis, prioritisation of data collection needs and surveillance strengthening by national experts. We estimated ongoing syphilis declines in both Zimbabwe and Morocco. For gonorrhoea, time trends were less certain, lacking recent population-based surveys. PMID:28325771
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses using various wage estimates applied to poor health in working aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wages estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6 % were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children where average wage overestimated wages by 15 % and for 40-year-olds where it underestimated wages by 14 %. Large differences in projecting productivity losses exist when using the average wage applied over a lifetime. Specifically, use of average wages overestimates productivity losses between 8 and 15 % for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14 %. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
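The direction of the biases reported above can be reproduced with a toy human-capital calculation: discount future wages from the onset age using either a flat average wage or an age-specific profile. The hump-shaped wage profile and 3% discount rate below are invented for illustration; only the method follows the abstract.

```python
# Toy human-capital comparison: flat average wage versus age-specific profile.
import numpy as np

ages = np.arange(18, 68)                                     # working ages
age_wage = 15000 + 25000 * np.sin(np.pi * (ages - 18) / 50)  # hump-shaped profile
avg_wage = np.full(ages.size, age_wage.mean())

def pv_loss(onset_age, wages, rate=0.03):
    """Discounted lifetime wages lost from onset_age onward."""
    mask = ages >= onset_age
    disc = (1.0 + rate) ** -(ages[mask] - onset_age)
    return float(np.sum(wages[mask] * disc))

for onset in (18, 40, 60):
    ratio = pv_loss(onset, avg_wage) / pv_loss(onset, age_wage)
    print(f"onset {onset}: average-wage estimate {ratio - 1:+.0%} vs age-specific")
```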
Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H
2004-01-01
Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations on cross-shift changes in FEV1 and exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) was revealed for grouping by a combination of task and factory size, the lowest slope and SE was revealed for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SE, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies studying exposure-response relations. It is possible to optimise exposure assessment combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as a mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations with surface air temperature recorded each minute to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 2, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not reduce significantly the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
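A small simulation makes the comparison concrete: generate a smooth diurnal cycle plus noise, then compare the (Tmax + Tmin)/2 estimate and the mean of 24 hourly samples against the full 1440-observation mean. The synthetic temperature model below is an assumption; SURFRAD data are not used, so only the qualitative behavior should be expected to match.

```python
# (Tmax+Tmin)/2 versus the mean of 24 hourly samples, scored against the
# full 1440-observation daily mean, over 238 simulated days.
import numpy as np

rng = np.random.default_rng(5)
minutes = np.arange(1440)
errs_maxmin, errs_hourly = [], []
for _ in range(238):                 # same number of days as the study's sample
    temp = 15 + 8 * np.sin(2 * np.pi * (minutes - 540) / 1440) \
           + rng.normal(0, 0.5, 1440)
    truth = temp.mean()
    errs_maxmin.append((temp.max() + temp.min()) / 2 - truth)
    errs_hourly.append(temp[::60].mean() - truth)

print(f"(Tmax+Tmin)/2: {np.mean(errs_maxmin):+.2f} ± {np.std(errs_maxmin):.2f} °C")
print(f"24 hourly obs: {np.mean(errs_hourly):+.2f} ± {np.std(errs_hourly):.2f} °C")
```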
Disruption of State Estimation in the Human Lateral Cerebellum
Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James
2007-01-01
The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990
The Subjective Well-Being Method of Valuation: An Application to General Health Status.
Brown, Timothy T
2015-12-01
To introduce the subjective well-being (SWB) method of valuation and provide an example by valuing health status. The SWB method allows monetary valuations to be performed in the absence of market relationships. Data are from the 1975-2010 General Social Survey. The value of health status is determined via the estimation of an implicit derivative based on a happiness equation. Two-stage least-squares was used to estimate happiness as a function of poor-to-fair health status, annual household income adjusted for household size, age, sex, race, marital status, education, year, and season. Poor-to-fair health status and annual household income are instrumented using a proxy for intelligence, a temporal version of the classic distance instrument, and the average health status of individuals who are demographically similar but geographically separated. Instrument validity is evaluated. Moving from good/excellent health to poor/fair health (1 year of lower health status) is equivalent to the loss of $41,654 of equivalized household income (2010 constant dollars) per annum, which is larger than median equivalized household income. The SWB method may be useful in making monetary valuations where fundamental market relationships are not present. © Health Research and Educational Trust.
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-08
...: Centers for Medicare & Medicaid Services (CMS), HHS. ACTION: Proposed rule. SUMMARY: This proposed rule..., especially the teaching status adjustment factor. Therefore, we implemented a 3-year moving average approach... moving average to calculate the facility-level adjustment factors. For FY 2011, we issued a notice to...
Aging and the Visual Perception of Motion Direction: Solving the Aperture Problem.
Shain, Lindsey M; Norman, J Farley
2018-07-01
An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, where each motion signal was locally ambiguous with respect to the true direction of pattern motion. Thus, accurate performance required the successful integration of motion signals across space (i.e., accurate performance required solution of the aperture problem). The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction (i.e., were subject to the aperture problem). Following 2.4 seconds of pattern motion on each trial (true motion directions ranged over the entire range of 360° in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers' precision markedly deteriorated as the number of apertures was reduced from 64 to 9.
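Solving the aperture problem amounts to pooling one-dimensional constraints: each line only reveals the velocity component along its unit normal, so the global velocity is the least-squares solution of the stacked normal-component equations. The sketch below illustrates that pooled solution with an assumed velocity and random aperture orientations; it is not the stimulus model used in the experiment.

```python
# Least-squares pooling of locally ambiguous (normal-component) motion signals.
import numpy as np

rng = np.random.default_rng(6)
v_true = np.array([1.0, 0.5])                  # global pattern velocity
theta = rng.uniform(0, np.pi, 64)              # line normal directions
normals = np.stack([np.cos(theta), np.sin(theta)], axis=1)
obs = normals @ v_true + rng.normal(0, 0.05, 64)   # normal speed per aperture

v_est, *_ = np.linalg.lstsq(normals, obs, rcond=None)
print(v_est.round(2), "vs true", v_true)
```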
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
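A bias detection curve of this kind can be simulated directly: draw consecutive results, add a fixed bias, run the moving average, and record when it first crosses a control limit. In the sketch below the sodium-like analyte mean, truncation limits, 20-result window, and control limit are all illustrative assumptions rather than the optimized settings the paper derives.

```python
# Simulated bias detection for a moving-average QC procedure: the median
# number of results until the MA crosses its control limit, per bias level.
import numpy as np

rng = np.random.default_rng(7)

def median_results_to_detection(bias, window=20, limit=1.0, n_runs=500):
    counts = []
    for _ in range(n_runs):
        x = rng.normal(140.0, 2.0, 2000) + bias     # biased assay results
        x = np.clip(x, 120.0, 160.0)                # truncation limits
        csum = np.cumsum(x)
        ma = (csum[window:] - csum[:-window]) / window
        hits = np.nonzero(np.abs(ma - 140.0) > limit)[0]
        counts.append(hits[0] + window if hits.size else np.inf)
    return np.median(counts)

for bias in (0.5, 1.0, 2.0):
    print(f"bias {bias}: median {median_results_to_detection(bias):.0f} results")
```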
Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo
2012-01-01
The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992 - 2009
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Giovinetto, Mario B.
2011-01-01
Published mass balance estimates for the Antarctic Ice Sheet (AIS) lie between approximately +50 and -250 Gt/year for 1992 to 2009, which span a range equivalent to 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar-altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. Although recent reports of large and accelerating rates of mass loss from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion. We find that the extrapolation used in the published IOM estimates for the 15% of the periphery for which discharge velocities are not observed gives twice the rate of discharge per unit of associated ice-sheet area as that of the 85% faster-moving parts. Our calculations show that the published extrapolation overestimates the ice discharge by 282 Gt/yr compared to our assumption that the slower moving areas have 70% as much discharge per area as the faster moving parts. Also, published data on the time-series of discharge velocities and accumulation/precipitation do not support mass output increases or input decreases with time, respectively. Our modified IOM estimate, using the 70% discharge assumption and substituting input from a field-data compilation for input from an atmospheric model over 6% of area, gives a loss of only 13 Gt/year (versus 136 Gt/year) for the period around 2000. Two ERS-based estimates, our modified IOM, and a GRACE-based estimate for observations within 1992 to 2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area).
76 FR 41828 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
.... Based on conversations with fund representatives, it is estimated that rule 31a- 1 imposes an average... hours. Based on conversations with fund representatives, however, the Commission staff estimates that...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stokoe, Kenneth H.; Li, Song Cheng; Cox, Brady R.
2007-06-06
In this volume (IV), all S-wave measurements are presented that were performed in Borehole C4993 at the Waste Treatment Plant (WTP) with T-Rex as the seismic source and the Lawrence Berkeley National Laboratory (LBNL) 3-D wireline geophone as the at-depth borehole receiver. S-wave measurements were performed over the depth range of 370 to 1300 ft, typically in 10-ft intervals. However, in some interbeds, 5-ft depth intervals were used, while below about 1200 ft, depth intervals of 20 ft were used. Shear (S) waves were generated by moving the base plate of T-Rex for a given number of cycles at a fixed frequency, as discussed in Section 2. This process was repeated so that signal averaging in the time domain was performed using 3 to about 15 averages, with 5 averages typically used. In addition, a second average shear wave record was recorded by reversing the polarity of the motion of the T-Rex base plate. In this sense, all the signals recorded in the field were averaged signals. In all cases, the base plate was moving perpendicular to a radial line between the base plate and the borehole, which is in and out of the plane of the figure shown in Figure 1.1. The definition of “in-line”, “cross-line”, “forward”, and “reversed” directions in items 2 and 3 of Section 2 was based on the moving direction of the base plate. In addition to the LBNL 3-D geophone, called the lower receiver herein, a 3-D geophone from Redpath Geophysics was fixed at a depth of 22 ft in Borehole C4993, and a 3-D geophone from the University of Texas (UT) was embedded near the borehole at about 1.5 ft below the ground surface. The Redpath geophone and the UT geophone were properly aligned so that one of the horizontal components in each geophone was aligned with the direction of horizontal shaking of the T-Rex base plate. This volume is organized into 12 sections as follows. Section 1: Introduction, Section 2: Explanation of Terminology, Section 3: Vs Profile at Borehole C4993, Sections 4 to 6: Unfiltered S-wave records of lower horizontal receiver, reaction mass, and reference receiver, respectively, Sections 7 to 9: Filtered S-wave signals of lower horizontal receiver, reaction mass and reference receiver, respectively, Section 10: Expanded and filtered S-wave signals of lower horizontal receiver, and Sections 11 and 12: Waterfall plots of unfiltered and filtered lower horizontal receiver signals, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Yan, H; Jia, X
2014-06-01
Purpose: A moving blocker based strategy has shown promising results for scatter correction in cone-beam computed tomography (CBCT). Different parameters of the system design affect its performance in scatter estimation and image reconstruction accuracy. The goal of this work is to optimize the geometric design of the moving blocker system. Methods: In the moving blocker system, a blocker consisting of lead strips is inserted between the x-ray source and the imaging object and moves back and forth along the rotation axis during CBCT acquisition. A CT image of an anthropomorphic pelvic phantom was used in the simulation study. Scatter signal was simulated by Monte Carlo calculation with various combinations of the lead strip width and the gap between neighboring lead strips, ranging from 4 mm to 80 mm (projected at the detector plane). Scatter signal in the unblocked region was estimated by cubic B-spline interpolation from the blocked region. Scatter estimation accuracy was quantified as relative root mean squared error by comparing the interpolated scatter to the Monte Carlo simulated scatter. CBCT was reconstructed by total variation minimization from the unblocked region, under various combinations of the lead strip width and gap. Reconstruction accuracy in each condition was quantified by CT number error compared to a CBCT reconstructed from unblocked full projection data. Results: Scatter estimation error varied from 0.5% to 2.6% as the lead strip width and the gap varied from 4 mm to 80 mm. CT number error in the reconstructed CBCT images varied from 12 to 44. The highest reconstruction accuracy was achieved when the blocker lead strip width was 8 mm and the gap was 48 mm. Conclusions: Accurate scatter estimation can be achieved over a large range of combinations of lead strip width and gap. However, image reconstruction accuracy is greatly affected by the geometric design of the blocker.
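The interpolation step at the heart of the method can be sketched simply: sample a smooth scatter field at the blocked strip positions and reconstruct the rest by cubic-spline interpolation (a stand-in for the cubic B-spline used in the paper), then score the relative RMSE. The one-dimensional geometry, strip/gap sizes, and scatter model below are assumptions.

```python
# Scatter interpolation from blocked strips, scored by relative RMSE.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(400)                          # detector column (pixels)
scatter = 100 + 30 * np.sin(x / 60.0)       # smooth, low-frequency scatter

strip, gap = 8, 48                          # projected blocker geometry
centers = np.arange(strip // 2, 400, strip + gap)   # blocked sample positions
est = CubicSpline(centers, scatter[centers])(x)

rrmse = np.sqrt(np.mean((est - scatter) ** 2)) / scatter.mean()
print(f"relative RMSE: {rrmse:.2%}")
```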
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
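As a baseline for the equivalence the paper establishes, conventional ARMA parameter estimation on simulated data looks like the sketch below; the ARMA(2,1) orders and coefficients are invented, and the neural-network counterpart is not shown.

```python
# Conventional ARMA(2,1) estimation on simulated data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

ar = np.array([1.0, -0.6, 0.2])   # y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + ...
ma = np.array([1.0, 0.4])         # ... + e_t + 0.4*e_{t-1}
y = ArmaProcess(ar, ma).generate_sample(nsample=2000)

fit = ARIMA(y, order=(2, 0, 1), trend="n").fit()
print(fit.params.round(2))        # ~[0.6, -0.2, 0.4, sigma^2]
```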
Estimating Perturbation and Meta-Stability in the Daily Attendance Rates of Six Small High Schools
NASA Astrophysics Data System (ADS)
Koopmans, Matthijs
This paper discusses the daily attendance rates in six small high schools over a ten-year period and evaluates how stable those rates are. “Stability” is approached from two vantage points: pulse models are fitted to estimate the impact of sudden perturbations and their reverberation through the series, and Autoregressive Fractionally Integrated Moving Average (ARFIMA) techniques are used to detect dependencies over the long range of the series. The analyses are meant to (1) exemplify the utility of time series approaches in educational research, which lacks a time series tradition, (2) discuss some time series features that seem to be particular to daily attendance rate trajectories such as the distinct downward pull coming from extreme observations, and (3) present an analytical approach to handle the important yet distinct patterns of variability that can be found in these data. The analysis also illustrates why the assumption of stability that underlies the habitual reporting of weekly, monthly and yearly averages in the educational literature is questionable, as it reveals dynamical processes (perturbation, meta-stability) that remain hidden in such summaries.
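The long-memory half of this analysis rests on the fractional difference operator (1 - L)^d at the core of ARFIMA models. The sketch below expands that operator into its binomial weights and applies it to an invented attendance series; d = 0.3 and the series itself are assumptions.

```python
# Fractional differencing (1-L)^d via its binomial expansion.
import numpy as np

def frac_diff(x, d):
    """Apply the fractional difference (1 - L)^d to series x."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k   # pi_k = -pi_{k-1} * (d-k+1)/k
    return np.array([np.dot(w[:i + 1], x[i::-1]) for i in range(n)])

rng = np.random.default_rng(8)
attendance = 0.90 + rng.normal(0, 0.02, 300)   # daily attendance rates
print(frac_diff(attendance, d=0.3)[:5].round(3))
```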
The Lateral Tracking Control for the Intelligent Vehicle Based on Adaptive PID Neural Network.
Han, Gaining; Fu, Weiping; Wang, Wen; Wu, Zongsheng
2017-05-30
The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least square estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm achieve high real-time performance and robustness in path tracking control. This provides a theoretical basis for intelligent vehicle autonomous navigation tracking control and lays the foundation for vertical and lateral coupling control.
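The FFRLS identification step can be sketched for a generic second-order discrete model of the kind described: simulate input-output data, then run recursive least squares with a forgetting factor. The true parameters, noise level, and forgetting factor 0.98 below are assumptions, not the paper's identified steering dynamics.

```python
# Forgetting-factor recursive least squares (FFRLS) on a second-order model
# y_k = a1*y_{k-1} + a2*y_{k-2} + b1*u_{k-1} + b2*u_{k-2} + noise.
import numpy as np

rng = np.random.default_rng(9)
a1, a2, b1, b2 = 1.5, -0.7, 0.5, 0.25
u = rng.normal(size=1000)
y = np.zeros(1000)
for k in range(2, 1000):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2] \
           + rng.normal(0, 0.01)

theta = np.zeros(4)                 # parameter estimates
P = np.eye(4) * 1000.0              # inverse information matrix
lam = 0.98                          # forgetting factor
for k in range(2, 1000):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y[k] - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

print(theta.round(3))               # approaches [1.5, -0.7, 0.5, 0.25]
```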
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Parameter interdependence and uncertainty induced by lumping in a hydrologic model
NASA Astrophysics Data System (ADS)
Gallagher, Mark R.; Doherty, John
2007-05-01
Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.
STOCK MARKET CRASH AND EXPECTATIONS OF AMERICAN HOUSEHOLDS*
HUDOMIET, PÉTER; KÉZDI, GÁBOR; WILLIS, ROBERT J.
2011-01-01
SUMMARY This paper utilizes data on subjective probabilities to study the impact of the stock market crash of 2008 on households’ expectations about the returns on the stock market index. We use data from the Health and Retirement Study that was fielded in February 2008 through February 2009. The effect of the crash is identified from the date of the interview, which is shown to be exogenous to previous stock market expectations. We estimate the effect of the crash on the population average of expected returns, the population average of the uncertainty about returns (subjective standard deviation), and the cross-sectional heterogeneity in expected returns (disagreement). We show estimates from simple reduced-form regressions on probability answers as well as from a more structural model that focuses on the parameters of interest and separates survey noise from relevant heterogeneity. We find a temporary increase in the population average of expectations and uncertainty right after the crash. The effect on cross-sectional heterogeneity is more significant and longer lasting, which implies substantial long-term increase in disagreement. The increase in disagreement is larger among the stockholders, the more informed, and those with higher cognitive capacity, and disagreement co-moves with trading volume and volatility in the market. PMID:21547244
Grid occupancy estimation for environment perception based on belief functions and PCR6
NASA Astrophysics Data System (ADS)
Moras, Julien; Dezert, Jean; Pannetier, Benjamin
2015-05-01
In this contribution, we propose to improve the grid map occupancy estimation method developed so far based on belief function modeling and the classical Dempster's rule of combination. Grid map offers a useful representation of the perceived world for mobile robotics navigation. It will play a major role for the security (obstacle avoidance) of next generations of terrestrial vehicles, as well as for future autonomous navigation systems. In a grid map, the occupancy of each cell representing a small piece of the surrounding area of the robot must be estimated at first from sensors measurements (typically LIDAR, or camera), and then it must also be classified into different classes in order to get a complete and precise perception of the dynamic environment where the robot moves. So far, the estimation and the grid map updating have been done using fusion techniques based on the probabilistic framework, or on the classical belief function framework thanks to an inverse model of the sensors; the latter is used mainly because it offers an interesting management of uncertainties when the quality of available information is low and when the sources of information conflict. To improve the performances of the grid map estimation, we propose in this paper to replace Dempster's rule of combination by the PCR6 rule (Proportional Conflict Redistribution rule #6) proposed in DSmT (Dezert-Smarandache Theory). As an illustrating scenario, we consider a platform moving in a dynamic area and we compare our new realistic simulation results (based on a LIDAR sensor) with those obtained by the probabilistic and the classical belief-based approaches.
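A minimal sketch of the two combination rules contrasted above, on the frame {O (occupied), E (empty)} with mass also allowed on the ignorance set OE. For two sources PCR6 coincides with PCR5: each partial conflict m1(X)m2(Y), X∩Y=∅, is redistributed to X and Y in proportion to the masses that produced it. The masses below are illustrative placeholders, not sensor-derived values.

```python
def combine(m1, m2, rule="pcr6"):
    sets = ["O", "E", "OE"]
    inter = {("O", "O"): "O", ("O", "OE"): "O", ("OE", "O"): "O",
             ("E", "E"): "E", ("E", "OE"): "E", ("OE", "E"): "E",
             ("OE", "OE"): "OE", ("O", "E"): None, ("E", "O"): None}
    m = {s: 0.0 for s in sets}
    conflict = 0.0
    for a in sets:
        for b in sets:
            prod = m1[a] * m2[b]
            tgt = inter[(a, b)]
            if tgt is None:                      # conflicting pair
                if rule == "pcr6":               # proportional redistribution
                    m[a] += prod * m1[a] / (m1[a] + m2[b])
                    m[b] += prod * m2[b] / (m1[a] + m2[b])
                else:
                    conflict += prod
            else:
                m[tgt] += prod
    if rule == "dempster":                       # renormalize by 1 - K
        m = {s: v / (1.0 - conflict) for s, v in m.items()}
    return m

m1 = {"O": 0.7, "E": 0.2, "OE": 0.1}             # e.g., LIDAR inverse-model masses
m2 = {"O": 0.1, "E": 0.8, "OE": 0.1}             # e.g., a second, conflicting scan
print("Dempster:", combine(m1, m2, "dempster"))
print("PCR6:    ", combine(m1, m2, "pcr6"))
```

With these conflicting sources, Dempster's renormalization pushes almost all mass onto O and E, while PCR6 redistributes each partial conflict locally and keeps the ignorance mass intact.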
Dual-filter estimation for rotating-panel sample designs
Francis Roesch
2017-01-01
Dual-filter estimators are described and tested for use in the annual estimation for national forest inventories. The dual-filter approach involves the use of a moving window estimator in the first pass, which is used as input to Theil's mixed estimator in the second pass. The moving window and dual-filter estimators are tested along with two other estimators in a...
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
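A sketch of the core of the spatial-matching step described above: estimating the shift between two 1-D intensity profiles via FFT-based cross-correlation, O(n log n) instead of the O(n²) direct search. The signal and shift are synthetic; subpixel refinement and the FPP combining are omitted.

```python
import numpy as np

def fft_shift_estimate(ref, moved):
    """Return the integer displacement of `moved` relative to `ref` (circular)."""
    n = len(ref)
    # Cross-correlation computed in the frequency domain
    xcorr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(ref))).real
    lag = int(np.argmax(xcorr))
    return lag if lag <= n // 2 else lag - n     # map to a signed shift

rng = np.random.default_rng(1)
profile = rng.normal(size=1024)
shifted = np.roll(profile, 37)                   # simulate an object moved 37 px
print(fft_shift_estimate(profile, shifted))      # -> 37
```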
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which is one we had previously proposed, improved on the original method that hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, which is that the average color of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas in the color space. The approach we propose in this paper combines our previous method and one using high chroma and low chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High chroma gamuts are used for adding appropriate colors to the original image and low chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area in the color space, the illuminant colors are accurately estimated, with smaller estimation error average than that generated in the conventional method.
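A minimal sketch of the baseline gray-world estimator the method builds on: the illuminant color is taken proportional to the per-channel image mean, which, as noted above, fails when a few chromatic colors dominate the scene.

```python
import numpy as np

def gray_world_illuminant(img):
    """img: H x W x 3 float array in [0, 1]. Returns the normalized RGB illuminant."""
    mean_rgb = img.reshape(-1, 3).mean(axis=0)   # average of all object colors
    return mean_rgb / np.linalg.norm(mean_rgb)   # direction (chromaticity) only

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3)) * np.array([1.0, 0.8, 0.6])   # simulated reddish cast
print(gray_world_illuminant(img))                          # ~ normalized (1, 0.8, 0.6)
```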
NASA Astrophysics Data System (ADS)
Chen, Feier; Tian, Kang; Ding, Xiaoxu; Miao, Yuqi; Lu, Chunxia
2016-11-01
Analysis of freight rate volatility characteristics has attracted more attention since 2008 due to the effect of the credit crunch and the slowdown in marine transportation. The multifractal detrended fluctuation analysis technique is employed to analyze the time series of the Baltic Dry Bulk Freight Rate Index and the market trend of two bulk ship sizes, namely Capesize and Panamax, for the period March 1, 1999-February 26, 2015. In this paper, the degree of the multifractality with different fluctuation sizes is calculated. Besides, a multifractal detrending moving average (MF-DMA) counting technique has been developed to quantify the components of the multifractal spectrum with the finite-size effect taken into consideration. Numerical results show that both Capesize and Panamax freight rate index time series are of multifractal nature. The origin of multifractality for the bulk freight rate market series is found mostly due to nonlinear correlation.
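A sketch of the detrending-moving-average fluctuation function that MF-DMA generalizes: detrend the integrated series with a centered moving average and measure the RMS residual F(n) per window size n; the log-log slope of F(n) versus n estimates the scaling exponent, and the multifractal version additionally raises residuals to varying orders q. The test series is synthetic white noise, for which a slope near 0.5 is expected.

```python
import numpy as np

def dma_fluctuation(x, window_sizes):
    profile = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = []
    for n in window_sizes:
        kernel = np.ones(n) / n
        trend = np.convolve(profile, kernel, mode="same")   # centered moving average
        resid = (profile - trend)[n:-n]          # drop edge effects
        F.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(F)

rng = np.random.default_rng(2)
x = rng.normal(size=4096)
sizes = np.array([8, 16, 32, 64, 128])
F = dma_fluctuation(x, sizes)
slope = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print(f"estimated scaling exponent ~ {slope:.2f}")
```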
Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.
Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih
2016-10-01
In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
Moving in the Right Direction: Helping Children Cope with a Relocation
ERIC Educational Resources Information Center
Kruse, Tricia
2012-01-01
According to national figures, 37.1 million people moved in 2009 (U.S. Census Bureau, 2010). In fact, the average American will move 11.7 times in their lifetime. Why are Americans moving so much? There are a variety of reasons. Regardless of the reason, moving is a common experience for children. If one looks at the developmental characteristics…
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties endowed in geofields such as geochemical and geophysical anomalies, and they are commonly investigated by using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method was proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrated that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density-value in density-area fractal modeling of singular geochemical distributions. Accordingly, we presented a novel local singularity analysis (LSA) using the WMD algorithm, which extends the conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with the LSA implemented using the moving averaging method, the novel LSA using WMD identified improved weak geochemical anomalies associated with mineralization in the covered area.
Force balance on two-dimensional superconductors with a single moving vortex
NASA Astrophysics Data System (ADS)
Chung, Chun Kit; Arahata, Emiko; Kato, Yusuke
2014-03-01
We study forces on two-dimensional superconductors with a single moving vortex based on a recent fully self-consistent calculation of DC conductivity in an s-wave superconductor (E. Arahata and Y. Kato, arXiv:1310.0566). By considering momentum balance of the whole liquid, we attempt to identify various contributions to the total transverse force on the vortex. This provides an estimation of the effective Magnus force based on the quasiclassical theory generalized by Kita [T. Kita, Phys. Rev. B, 64, 054503 (2001)], which allows for the Hall effect in vortex states.
Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...
Low-Cost 3-D Flow Estimation of Blood With Clutter.
Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali
2017-05-01
Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme to combine the low-complexity sum-of-absolute-difference and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter for beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
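A sketch of the SVD substitute named above: power iteration for the dominant eigenvector of a slow-time sample covariance, with the dominant (clutter) component projected out. Data shapes and the single-component removal are illustrative assumptions; the paper's subgrouping and subspace iteration are omitted.

```python
import numpy as np

def dominant_eigvec(C, iters=50):
    v = np.random.default_rng(3).normal(size=C.shape[0]) + 0j
    for _ in range(iters):
        v = C @ v
        v = v / np.linalg.norm(v)                # renormalize each iteration
    return v

# X: ensemble of complex IQ slow-time signals, shape (samples, pixels)
rng = np.random.default_rng(4)
X = rng.normal(size=(64, 256)) + 1j * rng.normal(size=(64, 256))
C = (X @ X.conj().T) / X.shape[1]                # slow-time covariance estimate
v1 = dominant_eigvec(C)
X_filtered = X - np.outer(v1, v1.conj() @ X)     # remove the dominant (clutter) subspace
```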
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, Cumulative Sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for data that are autocorrelated. One researcher stated that this chart is not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to residual processes. This procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process, derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
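A minimal EWMA control-chart sketch: z_i = λ·x_i + (1−λ)·z_{i−1} with the usual time-varying control limits. For autocorrelated data the same recursion is typically applied to the residuals of a fitted time series model, as discussed above; plain independent data are used here only to illustrate the mechanics.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    mu, sigma = np.mean(x), np.std(x, ddof=1)    # in practice: in-control estimates
    z = np.empty(len(x))
    z_prev = mu
    limits = []
    for i, xi in enumerate(x):
        z_prev = lam * xi + (1 - lam) * z_prev
        z[i] = z_prev
        var_z = sigma**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1)))
        limits.append((mu - L * np.sqrt(var_z), mu + L * np.sqrt(var_z)))
    return z, np.array(limits)

x = np.random.default_rng(6).normal(loc=10.0, scale=1.0, size=100)
z, lim = ewma_chart(x)
out = np.where((z < lim[:, 0]) | (z > lim[:, 1]))[0]
print("out-of-control points:", out)
```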
NASA Astrophysics Data System (ADS)
Mackay, S. L.; Marchant, D. R.
2017-12-01
The McMurdo Dry Valleys (MDV) region of Antarctica is considered to be one of the most geomorphically stable regions on Earth. The extreme landscape stability is attributed primarily to persistent cold-polar desert conditions, and has enabled the multi-million-year preservation of near-surface terrestrial archives that are critical to our understanding of Antarctic ice sheet dynamics and climate change over at least the last 14 Ma. Correct interpretation of these archives requires well-constrained estimates of the rate of landscape alteration and erosion. Previous studies using tephrochronology of in situ ash deposits and terrestrial cosmogenic nuclides from bedrock and regolith on ridge crests, valley bottoms, and other low-angled, sub-horizontal surfaces have yielded inferred erosion rates of 5×10⁻⁵ to 9×10⁻⁴ mm a⁻¹. However, estimates for erosion of cliff faces in the topographically complex terrain that dominates the upland region of the MDV are largely unknown. Here we measure, for the first time in the MDV, the average rate of erosion and headwall retreat for near-vertical glaciated cirques. To accomplish this, we analyze the sediment flux through the Mullins and Friedman glaciers; these are cold-based, topographically constrained, and slow-moving debris-covered alpine glaciers that collect and transport debris sourced entirely from rockfall at the headwall cirque. Using data from 15 km of ground penetrating radar profiles, 12 shallow ice cores, and 180 shallow surface excavations, we compile an estimated total sediment load for each glacier. We then combine this sediment load with measurements of the debris source area and a glacial chronology based on cosmogenic nuclide dating and measured ice flow velocities. Results indicate average headwall erosion rates of 1×10⁻³ to 5×10⁻³ mm a⁻¹ and slope-adjusted headwall retreat rates of 9×10⁻⁴ to 4×10⁻³ mm a⁻¹ over the past 225 ka. These values are the lowest yet reported and are several orders of magnitude lower than most headwall retreat rates in temperate, sub-arctic, and arctic mountain regions. Extrapolating this average erosion rate beyond the measured time period implies that less than 100 m of headwall retreat has occurred since the Middle Miocene and supports interpretations of the upland MDV region as a nearly static landscape.
Taylor, Brian A.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason
2009-01-01
The authors investigated the performance of the iterative Steiglitz–McBride (SM) algorithm on an autoregressive moving average (ARMA) model of signals from a fast, sparsely sampled, multiecho, chemical shift imaging (CSI) acquisition using simulation, phantom, ex vivo, and in vivo experiments with a focus on its potential usage in magnetic resonance (MR)-guided interventions. The ARMA signal model facilitated a rapid calculation of the chemical shift, apparent spin-spin relaxation time (T2*), and complex amplitudes of a multipeak system from a limited number of echoes (≤16). Numerical simulations of one- and two-peak systems were used to assess the accuracy and uncertainty in the calculated spectral parameters as a function of acquisition and tissue parameters. The measured uncertainties from simulation were compared to the theoretical Cramer–Rao lower bound (CRLB) for the acquisition. Measurements made in phantoms were used to validate the T2* estimates and to validate uncertainty estimates made from the CRLB. We demonstrated application to real-time MR-guided interventions ex vivo by using the technique to monitor a percutaneous ethanol injection into a bovine liver and in vivo to monitor a laser-induced thermal therapy treatment in a canine brain. Simulation results showed that the chemical shift and amplitude uncertainties reached their respective CRLB at a signal-to-noise ratio (SNR) ≥ 5 for echo train lengths (ETLs) ≥ 4 using a fixed echo spacing of 3.3 ms. T2* estimates from the signal model possessed higher uncertainties but reached the CRLB at larger SNRs and/or ETLs. Highly accurate estimates for the chemical shift (<0.01 ppm) and amplitude (<1.0%) were obtained with ≥4 echoes and for T2* (<1.0%) with ≥7 echoes. We conclude that, over a reasonable range of SNR, the SM algorithm is a robust estimator of spectral parameters from fast CSI acquisitions that acquire ≤16 echoes for one- and two-peak systems. Preliminary ex vivo and in vivo experiments corroborated the results from simulation experiments and further indicate the potential of this technique for MR-guided interventional procedures with high spatiotemporal resolution (~1.6×1.6×4 mm³ in ≤5 s). PMID:19378736
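A sketch of the one-peak special case only: the echo train is then a single damped complex exponential s[n] = A·zⁿ with pole z = exp((i·2πf − 1/T2*)·Δt), so a least-squares one-step linear prediction of the pole yields the chemical-shift frequency and T2*. The full Steiglitz–McBride iteration generalizes this to multi-peak ARMA models; the test values below are assumptions.

```python
import numpy as np

dt = 3.3e-3                                      # echo spacing [s], as in the abstract
f_true, T2s_true, A = 150.0, 20e-3, 1.0          # assumed test values
n = np.arange(16)                                # <=16 echoes
s = A * np.exp((2j * np.pi * f_true - 1.0 / T2s_true) * dt * n)
rng = np.random.default_rng(7)
s = s + 0.01 * (rng.normal(size=16) + 1j * rng.normal(size=16))

# Least-squares pole estimate: minimize |s[1:] - z * s[:-1]|^2 over complex z
z = np.vdot(s[:-1], s[1:]) / np.vdot(s[:-1], s[:-1])
f_hat = np.angle(z) / (2 * np.pi * dt)           # chemical-shift frequency [Hz]
T2s_hat = -dt / np.log(np.abs(z))                # apparent T2* [s]
print(f"f = {f_hat:.1f} Hz, T2* = {1e3 * T2s_hat:.2f} ms")
```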
Short-Term Exposure to Air Pollution and Biomarkers of Oxidative Stress: The Framingham Heart Study.
Li, Wenyuan; Wilker, Elissa H; Dorans, Kirsten S; Rice, Mary B; Schwartz, Joel; Coull, Brent A; Koutrakis, Petros; Gold, Diane R; Keaney, John F; Lin, Honghuang; Vasan, Ramachandran S; Benjamin, Emelia J; Mittleman, Murray A
2016-04-28
Short-term exposure to elevated air pollution has been associated with higher risk of acute cardiovascular diseases, with systemic oxidative stress induced by air pollution hypothesized as an important underlying mechanism. However, few community-based studies have assessed this association. Two thousand thirty-five Framingham Offspring Cohort participants living within 50 km of the Harvard Boston Supersite who were not current smokers were included. We assessed circulating biomarkers of oxidative stress including blood myeloperoxidase at the seventh examination (1998-2001) and urinary creatinine-indexed 8-epi-prostaglandin F2α (8-epi-PGF2α) at the seventh and eighth (2005-2008) examinations. We measured fine particulate matter (PM2.5), black carbon, sulfate, nitrogen oxides, and ozone at the Supersite and calculated 1-, 2-, 3-, 5-, and 7-day moving averages of each pollutant. Measured myeloperoxidase and 8-epi-PGF2α were logₑ transformed. We used linear regression models and linear mixed-effects models with random intercepts for myeloperoxidase and indexed 8-epi-PGF2α, respectively. Models were adjusted for demographic variables, individual- and area-level measures of socioeconomic position, clinical and lifestyle factors, weather, and temporal trend. We found positive associations of PM2.5 and black carbon with myeloperoxidase across multiple moving averages. Additionally, 2- to 7-day moving averages of PM2.5 and sulfate were consistently positively associated with 8-epi-PGF2α. Stronger positive associations of black carbon and sulfate with myeloperoxidase were observed among participants with diabetes than in those without. Our community-based investigation supports an association of select markers of ambient air pollution with circulating biomarkers of oxidative stress. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
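A sketch of the exposure-metric construction described above: k-day moving averages of a daily pollutant series for the 1- to 7-day windows. The column names and values are illustrative placeholders.

```python
import pandas as pd

daily = pd.DataFrame(
    {"pm25": [8.1, 9.4, 12.3, 10.8, 7.6, 6.9, 11.2, 13.5]},
    index=pd.date_range("2000-01-01", periods=8, freq="D"),
)
for k in (1, 2, 3, 5, 7):
    # mean of the current day and the previous k-1 days
    daily[f"pm25_ma{k}"] = daily["pm25"].rolling(window=k).mean()
print(daily.round(2))
```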
DOE Office of Scientific and Technical Information (OSTI.GOV)
Streets, D. G.; Yarber, K. F.; Woo, J.-H.
Estimates of biomass burning in Asia are developed to facilitate the modeling of Asian and global air quality. A survey of national, regional, and international publications on biomass burning is conducted to yield consensus estimates of 'typical' (i.e., non-year-specific) estimates of open burning (excluding biofuels). We conclude that 730 Tg of biomass are burned in a typical year from both anthropogenic and natural causes. Forest burning comprises 45% of the total, the burning of crop residues in the field comprises 34%, and 20% comes from the burning of grassland and savanna. China contributes 25% of the total, India 18%, Indonesia 13%, and Myanmar 8%. Regionally, forest burning in Southeast Asia dominates. National, annual totals are converted to daily and monthly estimates at 1° × 1° spatial resolution using distributions based on AVHRR fire counts for 1999-2000. Several adjustment schemes are applied to correct for the deficiencies of AVHRR data, including the use of moving averages, normalization, TOMS Aerosol Index, and masks for dust, clouds, landcover, and other fire sources. Good agreement between the national estimates of biomass burning and adjusted fire counts is obtained (R² = 0.71-0.78). Biomass burning amounts are converted to atmospheric emissions, yielding the following estimates: 0.37 Tg of SO₂, 2.8 Tg of NOₓ, 1100 Tg of CO₂, 67 Tg of CO, 3.1 Tg of CH₄, 12 Tg of NMVOC, 0.45 Tg of BC, 3.3 Tg of OC, and 0.92 Tg of NH₃. Uncertainties in the emission estimates, measured as 95% confidence intervals, range from a low of ±65% for CO₂ emissions in Japan to a high of ±700% for BC emissions in India.
USDA-ARS?s Scientific Manuscript database
One of the primary variables affecting ignition and spread of wildfire is fuel moisture content (FMC), which is the ratio of water mass to dry mass in living and dead plant material. Because dead FMC may be estimated from available weather data, remote sensing is needed to monitor the spatial distr...
Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter
Liu, Xiaozhen; Frey, H. Christopher
2012-01-01
A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, windspeed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
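A hedged sketch of a generic steady-state single-compartment mass balance of the kind described: cabin concentration set by air exchange, penetration/filter efficiency, and in-cabin deposition. The parameter names and values are illustrative assumptions, not the SHEDS-PM parameterization.

```python
def in_vehicle_pm25(c_out, aer, filt_eff=0.3, k_dep=0.5):
    """c_out: near-road PM2.5 [ug/m3]; aer: air exchange rate [1/h];
    filt_eff: fraction removed on entry; k_dep: deposition rate [1/h]."""
    # Steady state of: dC_in/dt = aer*(1 - filt_eff)*c_out - (aer + k_dep)*C_in
    return aer * (1.0 - filt_eff) * c_out / (aer + k_dep)

print(in_vehicle_pm25(c_out=35.0, aer=10.0))     # ventilation on: ~23.3 ug/m3
print(in_vehicle_pm25(c_out=35.0, aer=1.0))      # recirculation:  ~16.3 ug/m3
```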
16 CFR 305.20 - Paper catalogs and Web sites.
Code of Federal Regulations, 2014 CFR
2014-01-01
... based on a [Year] national average electricity cost of [ ___ cents per kWh]. For more information, visit... estimated operating cost is based on a [Year] national average [electricity, natural gas, propane, or oil... washers] and a [Year] national average cost of ___ cents per kWh for electricity and $ ___ per therm for...
A comparison of several techniques for imputing tree level data
David Gartner
2002-01-01
As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...
NASA Astrophysics Data System (ADS)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; Chang, Ping
2017-10-01
We alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) CALIPSO-corrected BC vertical distribution. The globally averaged top of the atmosphere radiative flux perturbation of CC experiments is ˜8-50% smaller compared to uncorrected (UC) BC experiments largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Results of a large-scale randomized behavior change intervention on road safety in Kenya.
Habyarimana, James; Jack, William
2015-08-25
Road accidents kill 1.3 million people each year, most in the developing world. We test the efficacy of evocative messages, delivered on stickers placed inside Kenyan matatus, or minibuses, in reducing road accidents. We randomize the intervention, which nudges passengers to complain to their drivers directly, across 12,000 vehicles and find that on average it reduces insurance claims rates of matatus by between one-quarter and one-third and is associated with 140 fewer road accidents per year than predicted. Messages promoting collective action are especially effective, and evocative images are an important motivator. Average maximum speeds and average moving speeds are 1-2 km/h lower in vehicles assigned to treatment. We cannot reject the null hypothesis of no placebo effect. We were unable to discern any impact of a complementary radio campaign on insurance claims. Finally, the sticker intervention is inexpensive: we estimate the cost-effectiveness of the most impactful stickers to be between $10 and $45 per disability-adjusted life-year saved.
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
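A sketch under stated assumptions: an index gradient modeled as a power function of the normalized distance from lens center, n(ρ) = n_edge + (n_center − n_edge)(1 − ρ^p), averaged along the axis by numerical integration. The profile form and the exponent p are modeling choices, not values from the paper.

```python
import numpy as np

def average_axial_index(n_center=1.415, n_edge=1.37, p=4.0, samples=100001):
    rho = np.linspace(0.0, 1.0, samples)               # normalized distance from center
    n = n_edge + (n_center - n_edge) * (1.0 - rho**p)  # assumed power-function profile
    return n.mean()                                    # mean on a uniform grid ~ integral

print(f"average axial index ~ {average_axial_index():.4f}")   # ~1.406 for p = 4
```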
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.
Chang, Hsiao-Han; Worby, Colin J.; Yeka, Adoke; Nankabirwa, Joaniter; Kamya, Moses R.; Staedke, Sarah G.; Hubbart, Christina; Amato, Roberto; Kwiatkowski, Dominic P.
2017-01-01
As many malaria-endemic countries move towards elimination of Plasmodium falciparum, the most virulent human malaria parasite, effective tools for monitoring malaria epidemiology are urgent priorities. P. falciparum population genetic approaches offer promising tools for understanding transmission and spread of the disease, but a high prevalence of multi-clone or polygenomic infections can render estimation of even the most basic parameters, such as allele frequencies, challenging. A previous method, COIL, was developed to estimate complexity of infection (COI) from single nucleotide polymorphism (SNP) data, but relies on monogenomic infections to estimate allele frequencies or requires external allele frequency data, which may not be available. Estimates limited to monogenomic infections may not be representative, however, and when the average COI is high, they can be difficult or impossible to obtain. Therefore, we developed THE REAL McCOIL, Turning HEterozygous SNP data into Robust Estimates of ALlele frequency, via Markov chain Monte Carlo, and Complexity Of Infection using Likelihood, to incorporate polygenomic samples and simultaneously estimate allele frequency and COI. This approach was tested via simulations then applied to SNP data from cross-sectional surveys performed in three Ugandan sites with varying malaria transmission. We show that THE REAL McCOIL consistently outperforms COIL on simulated data, particularly when most infections are polygenomic. Using field data we show that, unlike with COIL, we can distinguish epidemiologically relevant differences in COI between and within these sites. Surprisingly, for example, we estimated high average COI in a peri-urban subregion with lower transmission intensity, suggesting that many of these cases were imported from surrounding regions with higher transmission intensity. THE REAL McCOIL therefore provides a robust tool for understanding the molecular epidemiology of malaria across transmission settings. PMID:28125584
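A sketch of the core likelihood idea, simplified relative to THE REAL McCOIL (which estimates allele frequencies and COI jointly by MCMC): if a SNP has population allele frequency p, an infection with c independent clones is heterozygous with probability 1 − p^c − (1−p)^c, so with known frequencies the COI of a sample can be estimated by maximum likelihood over the SNP panel.

```python
import numpy as np

def coi_mle(is_het, p, max_coi=10):
    """is_het: 0/1 array per SNP; p: known allele frequency per SNP."""
    best_c, best_ll = 1, -np.inf
    for c in range(1, max_coi + 1):
        p_het = 1.0 - p**c - (1.0 - p) ** c      # zero when c == 1
        p_het = np.clip(p_het, 1e-12, 1 - 1e-12) # guard the log
        ll = np.sum(is_het * np.log(p_het) + (1 - is_het) * np.log(1 - p_het))
        if ll > best_ll:
            best_c, best_ll = c, ll
    return best_c

rng = np.random.default_rng(9)
p = rng.uniform(0.1, 0.9, size=100)              # assumed known frequencies
c_true = 3
is_het = (rng.random(100) < 1 - p**c_true - (1 - p) ** c_true).astype(int)
print("estimated COI:", coi_mle(is_het, p))      # typically recovers 3
```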
Teixidó, Mercè; Pallejà, Tomàs; Font, Davinia; Tresanchez, Marcel; Moreno, Javier; Palacín, Jordi
2012-11-28
This paper presents the use of an external fixed two-dimensional laser scanner to detect cylindrical targets attached to moving devices, such as a mobile robot. This proposal is based on the detection of circular markers in the raw data provided by the laser scanner by applying an algorithm for outlier avoidance and a least-squares circular fitting. Some experiments have been developed to empirically validate the proposal with different cylindrical targets in order to estimate the location and tracking errors achieved, which are generally less than 20 mm in the area covered by the laser sensor. As a result of the validation experiments, several error maps have been obtained in order to give an estimate of the uncertainty of any location computed. This proposal has been validated with a medium-sized mobile robot with an attached cylindrical target (diameter 200 mm). The trajectory of the mobile robot was estimated with an average location error of less than 15 mm, and the real location error in each individual circular fitting was similar to the error estimated with the obtained error maps. The radial area covered in this validation experiment was up to 10 m, a value that depends on the radius of the cylindrical target and the radial density of the distance range points provided by the laser scanner, but this area can be increased by combining the information of additional external laser scanners.
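A sketch of an algebraic least-squares circle fit (the Kasa method) of the kind used to locate a cylindrical target in 2-D scan points: write x² + y² + Dx + Ey + F = 0 and solve the linear system for D, E, F. The partial-arc test data mimic a scanner seeing one side of a 200 mm target; the noise level is an assumption.

```python
import numpy as np

def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0                  # circle center
    r = np.sqrt(cx**2 + cy**2 - F)               # circle radius
    return cx, cy, r

theta = np.linspace(0.5, 2.0, 40)                # partial arc, as seen by a scanner
rng = np.random.default_rng(10)
x = 3.0 + 0.1 * np.cos(theta) + rng.normal(scale=0.002, size=40)
y = 1.0 + 0.1 * np.sin(theta) + rng.normal(scale=0.002, size=40)
print(fit_circle(x, y))                          # ~ (3.0, 1.0, 0.1) for a 200 mm target
```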
Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G
2018-05-25
Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required.
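A hedged sketch of the inpainting idea: mask the pixels occupied by cells, reconstruct the incident (background) intensity under them by inpainting, then form per-pixel transmittance. The use of OpenCV's TELEA inpainting, the mask source (a cell segmentation), and the 8-bit scaling are assumptions of this sketch, not the paper's implementation.

```python
import cv2
import numpy as np

def incident_intensity(frame_u8, cell_mask_u8):
    """frame_u8: 8-bit grayscale image; cell_mask_u8: 255 where cells are."""
    return cv2.inpaint(frame_u8, cell_mask_u8, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)

frame = np.full((64, 64), 200, dtype=np.uint8)   # synthetic bright background
frame[30:34, 10:50] = 120                        # a darker "capillary" of cells
mask = np.zeros_like(frame)
mask[30:34, 10:50] = 255
I0 = incident_intensity(frame, mask)             # background filled in from surroundings
transmittance = frame.astype(float) / np.maximum(I0.astype(float), 1.0)
```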
Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.
Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin
2017-09-01
In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details without any auxiliary equipment.
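A minimal sketch of the core matching step only: rescale each column so its mean and standard deviation match global reference moments. The published method adds the SCF/BF pre-filtering and a moving window, which are omitted here; the noise levels are illustrative.

```python
import numpy as np

def moment_match_columns(img):
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0) + 1e-9             # avoid divide-by-zero
    ref_mean, ref_std = img.mean(), img.std()
    return (img - col_mean) / col_std * ref_std + ref_mean

rng = np.random.default_rng(11)
clean = rng.normal(100.0, 5.0, size=(128, 256))
cfpn = rng.normal(0.0, 3.0, size=256)            # fixed column offsets (CFPN)
corrected = moment_match_columns(clean + cfpn)
print(corrected.mean(axis=0).std())              # column-mean spread shrinks
```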
NASA Astrophysics Data System (ADS)
Choudhary, Piyush; Srivastava, Rakesh K.; Nath Mahendra, Som; Motahhir, Saad
2017-08-01
In today’s effort to combat the effects of climate change, there are many reasons to use renewable energy sources instead of fossil fuels. Solar energy is one of the best options, given features such as environmental benefit, independence from electricity prices, use of underutilized land, grid security, and sustainable growth. This concept paper focuses primarily on the use of solar energy for crude oil heating, among many other prospective industrial applications, to reduce cost and carbon footprint and to move toward a sustainable and ecologically friendly oil and gas industry. A prototype system based on concentrated solar power technology is proposed to substitute for the present system, which burns natural gas. A hybrid system that utilizes solar energy in the oil and gas industry would strengthen overall field working conditions, safety measures, and environmental ecology. A 40% reduction in natural gas use is estimated with this hybrid system. Positive implications for the environment, working conditions, and safety precautions are an added advantage. It could also decrease air venting of CO2, CH4 and N2O by an average of 30-35%.
Rapid range shifts of species associated with high levels of climate warming.
Chen, I-Ching; Hill, Jane K; Ohlemüller, Ralf; Roy, David B; Thomas, Chris D
2011-08-19
The distributions of many terrestrial organisms are currently shifting in latitude or elevation in response to changing climate. Using a meta-analysis, we estimated that the distributions of species have recently shifted to higher elevations at a median rate of 11.0 meters per decade, and to higher latitudes at a median rate of 16.9 kilometers per decade. These rates are approximately two and three times faster than previously reported. The distances moved by species are greatest in studies showing the highest levels of warming, with average latitudinal shifts being generally sufficient to track temperature changes. However, individual species vary greatly in their rates of change, suggesting that the range shift of each species depends on multiple internal species traits and external drivers of change. Rapid average shifts derive from a wide diversity of responses by individual species.
Muñoz, María; Pong-Wong, Ricardo; Canela-Xandri, Oriol; Rawlik, Konrad; Haley, Chris S; Tenesa, Albert
2016-09-01
Genome-wide association studies have detected many loci underlying susceptibility to disease, but most of the genetic factors that contribute to disease susceptibility remain unknown. Here we provide evidence that part of the 'missing heritability' can be explained by an overestimation of heritability. We estimated the heritability of 12 complex human diseases using family history of disease in 1,555,906 individuals of white ancestry from the UK Biobank. Estimates using simple family-based statistical models were inflated on average by ∼47% when compared with those from structural equation modeling (SEM), which specifically accounted for shared familial environmental factors. In addition, heritabilities estimated using SNP data explained an average of 44.2% of the simple family-based estimates across diseases and an average of 57.3% of the SEM-estimated heritabilities, accounting for almost all of the SEM heritability for hypertension. Our results show that both genetics and familial environment make substantial contributions to familial clustering of disease.
mb Bias and Regional Magnitude and Yield
2008-09-01
established bias at the Nevada Test Site (NTS) relative to Semipalatinsk is well reproduced, which is important for moving forward. To avoid the...variations are averaged out. To monitor individual test sites during the testing era, test site corrections were obtained by various means, most notably...across broad areas where earthquakes occur. The station-based technique retains near-site effects that the event-based technique does not, thus, resolving
Federation of Malaysia. Country profile.
Newcomb, L
1985-01-01
The 1984 population of Malaysia has been estimated at 14.7 million and the population growth rate averaged 2.3% in 1970-80. Population growth is officially encouraged to form a substantial home market for economic development. Toward this end, the 1985 budget has increased tax deductions for families with 5 children. The capital city of Kuala Lumpur is the largest metropolitan area (1 million population) and the Federal Territory is the most densely populated region. Immigration is strictly controlled by the government, and the percentage of foreign-born citizens was 5% in 1980. China, India, and Pakistan are decreasing in importance as countries of origin. Internal mobility, however, is increasing. Rural-rural migration accounted for 45% of internal migration in 1970-80 and was largely motivated by family reasons. Only 7% of Malaysians are estimated to move in search of work. Racial tensions led the government to grant special economic privileges to native-born Islamic Malays. The greatest proportion of the population is centered in the lowest age groups. The percentage of females 15-29 years of age rose from 26% in 1970 to 30% in 1980 and is expected to continue to rise. Fertility is on the decline. The majority of households in the country involve nuclear families. There has been an increase in the number of men and women who delay marriage or remain single. Education is widely available for children aged 6-15 years and those who meet certain academic standards receive free education up to age 19 years. The current labor force is estimated at 5.4 million, with an annual growth rate of 3.1%. Malaysia's per capita income (US $1860 in 1982) is among the highest in Southeast Asia and the gross national product increased by an average annual rate of 8% in 1970-81. The government plans to move toward the development of heavier industries and more manufacturing concerns.
NASA Astrophysics Data System (ADS)
Nagler, P. L.; Nguyen, U.; Bateman, H. L.; Jarchow, C.; van Riper, C., III; Waugh, W.; Glenn, E.
2016-12-01
Northern saltcedar beetles (Diorhabda carinata) have spread widely in riparian zones on the Colorado Plateau since their initial release in 2002. One goal of the releases was to reduce water consumption by saltcedar in order to conserve water through reduction of evapotranspiration (ET). The beetle moved south on the Virgin River and reached Big Bend State Park in Nevada in 2014, an expansion rate of 60 km/year. This is important because the beetle's photoperiod requirement for diapause was expected to prevent them from moving south of 37°N latitude, where endangered southwest willow flycatcher habitat occurs. In addition to focusing on the rate of dispersal of the beetles, we used remote sensing estimates of ET at 13 sites on the Colorado, San Juan, Virgin and Dolores rivers and their tributaries to estimate riparian zone ET before and after beetle releases. We estimate that water savings from 2007-2015 was 31.5 million m3/yr (25,547 acre-ft/yr), amounting to 0.258 % of annual river flow from the Upper Colorado River Basin to the Lower Basin. Reasons for the relatively low potential water savings are: 1) baseline ET before beetle release was modest (0.472 m/yr); 2) reduction in ET was low (0.061 m/yr) because saltcedar stands tended to recover after defoliation; 3) riparian ET even in the absence of beetles was only 1.8 % of river flows, calculated as the before beetle average annual ET (472 mm/yr) times the total area of saltcedar (51,588 ha) divided by the combined total average annual flows (1964-2015) from the upper to lower catchment areas of the Colorado River Basin at the USGS gages (12,215 million m3/yr or 9.90 million acre-ft). Further research is suggested to concentrate on the ecological impacts (both positive and negative) of beetles on riparian zones and on identifying management options to maximize riparian health.
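A quick arithmetic check of the water-savings figure quoted above: an ET reduction of 0.061 m/yr applied over the 51,588 ha of saltcedar.

```python
# Check: does 0.061 m/yr over 51,588 ha reproduce ~31.5 million m^3/yr?
area_m2 = 51_588 * 10_000          # 1 ha = 10,000 m^2
delta_et_m = 0.061                 # ET reduction [m/yr]
savings_m3 = area_m2 * delta_et_m  # -> ~31.5e6 m^3/yr
acre_ft = savings_m3 / 1233.48     # 1 acre-ft ~ 1233.48 m^3
print(f"{savings_m3 / 1e6:.1f} million m3/yr ~ {acre_ft:,.0f} acre-ft/yr")
```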
Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.
Durdu, Omer Faruk
2010-10-01
In the present study, a seasonal and non-seasonal prediction of boron concentration time series data for the period of 1996-2004 from the Büyük Menderes river in western Turkey is addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict boron content in the Büyük Menderes catchment. Initially, box-whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model that gives the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron models from the ARIMA modeling approach could be used in a safe manner, since the predicted values from these models preserve the basic statistics of the observed data in terms of the mean. The ARIMA modeling approach is recommended for predicting the boron concentration series of a river.
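A sketch of the identification/estimation/checking loop described above: fit candidate ARIMA orders, keep the minimum-AIC model, and check residual independence with a Ljung-Box test. The boron series itself is not reproduced here, so a synthetic series stands in.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(12)
y = np.cumsum(rng.normal(size=200)) + rng.normal(size=200)   # placeholder series

best = min(
    (ARIMA(y, order=(p, d, q)).fit()
     for p in range(3) for d in range(2) for q in range(3)),
    key=lambda m: m.aic,
)
print("selected order:", best.model.order, "AIC:", round(best.aic, 1))
lb = acorr_ljungbox(best.resid, lags=[10], return_df=True)
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))  # large p => residuals ~ white
```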
Roberts, Steven; Martin, Michael A
2007-06-01
The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model will be shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model is formed. When fitted with a change-point value of 60 µg/m³, the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality for PM concentrations of 25 and 75 µg/m³ were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
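A sketch of an unconstrained distributed lag model: Poisson regression of daily deaths on PM at lags 0..L, so the total effect is the sum of the lag coefficients. The change-point/non-linear extension used above would replace the linear PM terms with threshold terms; all data here are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n, L = 1000, 5
pm = rng.gamma(shape=4.0, scale=10.0, size=n)                # daily PM series
lagged = np.column_stack([np.roll(pm, k) for k in range(L + 1)])[L:]
true_betas = np.array([4e-4, 3e-4, 2e-4, 1e-4, 5e-5, 2e-5])  # effect spread over lags
mu = np.exp(np.log(30.0) + lagged @ true_betas)              # expected daily deaths
deaths = rng.poisson(mu)

X = sm.add_constant(lagged)
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()
total = fit.params[1:].sum()                                 # cumulative PM effect
print(f"% increase per 10 ug/m3: {100 * (np.exp(10 * total) - 1):.2f}")
```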
SIMP J013656.5+093347 Is Likely a Planetary-mass Object in the Carina-Near Moving Group
NASA Astrophysics Data System (ADS)
Gagné, Jonathan; Faherty, Jacqueline K.; Burgasser, Adam J.; Artigau, Étienne; Bouchard, Sandie; Albert, Loïc; Lafrenière, David; Doyon, René; Bardalez Gagliuffi, Daniella C.
2017-05-01
We report on the discovery that the nearby (~6 pc) photometrically variable T2.5 dwarf SIMP J013656.5+093347 is a likely member of the ~200 Myr old Carina-Near moving group with a probability of >99.9% based on its full kinematics. Our v sin i measurement of 50.9 ± 0.8 km s⁻¹ combined with the known rotation period inferred from variability measurements provides a lower limit of 1.01 ± 0.02 R_Jup on the radius of SIMP 0136+0933, an independent verification that it must be younger than ~950 Myr, according to evolution models. We estimate a field interloper probability of 0.2% based on the density of field T0-T5 dwarfs. At the age of Carina-Near, SIMP 0136+0933 has an estimated mass of 12.7 ± 1.0 M_Jup and is predicted to have burned roughly half of its original deuterium. SIMP 0136+0933 is the closest known young moving group member to the Sun and is one of only a few known young T dwarfs, making it an important benchmark for understanding the atmospheres of young planetary-mass objects.
NASA Astrophysics Data System (ADS)
Hwang, Jiwon; Choi, Yong-Sang; Kim, WonMoo; Su, Hui; Jiang, Jonathan H.
2018-01-01
The high-latitude climate system contains complicated, but largely veiled physical feedback processes. Climate predictions remain uncertain, especially for the Northern High Latitudes (NHL; north of 60°N), and observational constraint on climate modeling is vital. This study estimates local radiative feedbacks for NHL based on the CERES/Terra satellite observations during March 2000-November 2014. The local shortwave (SW) and longwave (LW) radiative feedback parameters are calculated from linear regression of radiative fluxes at the top of the atmosphere on surface air temperatures. These parameters are estimated after de-seasonalization and 12-month moving averaging of the radiative fluxes over NHL. The estimated magnitudes of the SW and the LW radiative feedbacks in NHL are 1.88 ± 0.73 and 2.38 ± 0.59 W m⁻² K⁻¹, respectively. The parameters are further decomposed into individual feedback components associated with surface albedo, water vapor, lapse rate, and clouds, as a product of the change in climate variables from ERA-Interim reanalysis estimates and their pre-calculated radiative kernels. The results reveal the significant role of clouds in reducing the surface albedo feedback (1.13 ± 0.44 W m⁻² K⁻¹ in the cloud-free condition, and 0.49 ± 0.30 W m⁻² K⁻¹ in the all-sky condition), while the lapse rate feedback is predominant in LW radiation (1.33 ± 0.18 W m⁻² K⁻¹). However, a large portion of the local SW and LW radiative feedbacks is not simply explained by the sum of these individual feedbacks.
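A sketch of the feedback-parameter estimation described above: deseasonalize flux and temperature anomalies, smooth with a 12-month moving average, and regress flux on temperature so the slope is the feedback parameter in W m⁻² K⁻¹. The series below are synthetic placeholders with an assumed slope of ~2.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(17)
t = pd.date_range("2000-03-01", periods=177, freq="MS")    # Mar 2000 - Nov 2014
T = pd.Series(0.3 * np.sin(np.arange(177) / 20.0) + 0.1 * rng.normal(size=177), t)
R = 2.0 * T + 0.2 * rng.normal(size=177)                   # assume ~2 W m-2 K-1

def deseason(s):
    # remove the monthly climatology (seasonal cycle)
    return s - s.groupby(s.index.month).transform("mean")

Ts = deseason(T).rolling(12, center=True).mean().dropna()
Rs = deseason(R).rolling(12, center=True).mean().dropna()
slope = np.polyfit(Ts.values, Rs.values, 1)[0]
print(f"feedback parameter ~ {slope:.2f} W m-2 K-1")
```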
Geohydrology and simulation of ground-water flow in the aquifer system near Calvert City, Kentucky
Starn, J.J.; Arihood, L.D.; Rose, M.F.
1995-01-01
The U.S. Geological Survey, in cooperation with the Kentucky Natural Resources and Environmental Protection Cabinet, constructed a two-dimensional, steady-state ground-water-flow model to estimate hydraulic properties, contributing areas to discharge boundaries, and the average linear velocity at selected locations in an aquifer system near Calvert City, Ky. Nonlinear regression was used to estimate values of model parameters and the reliability of the parameter estimates. The regression minimizes the weighted difference between observed and calculated hydraulic heads and rates of flow. The calibrated model generally was better than alternative models considered, and although adding transmissive faults in the bedrock produced a slightly better model, fault transmissivity was not estimated reliably. The average transmissivity of the aquifer was 20,000 feet squared per day. Recharge to two outcrop areas, the McNairy Formation of Cretaceous age and the alluvium of Quaternary age, were 0.00269 feet per day (11.8 inches per year) and 0.000484 feet per day (2.1 inches per year), respectively. Contributing areas to wells at the Calvert City Water Company in 1992 did not include the Calvert City Industrial Complex. Since completing the fieldwork for this study in 1992, the Calvert City Water Company discontinued use of their wells and began withdrawing water from new wells that were located 4.5 miles east-southeast of the previous location; the contributing area moved farther from the industrial complex. The extent of the alluvium contributing water to wells was limited by the overlying lacustrine deposits. The average linear ground-water velocity at the industrial complex ranged from 0.90 feet per day to 4.47 feet per day with a mean of 1.98 feet per day.
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
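A sketch of "resampling by shifts": sample a high-resolution rain series at a fixed revisit interval with every possible phase offset, then take the rms difference between each sampled mean and the true continuous-time mean as the sampling-error estimate. The toy intermittent rain series is an assumption for illustration.

```python
import numpy as np

def sampling_error_rms(rain, step):
    """rain: rain rates at full resolution; step: samples between overpasses."""
    truth = rain.mean()
    shifted_means = np.array([rain[s::step].mean() for s in range(step)])
    return np.sqrt(np.mean((shifted_means - truth) ** 2))

rng = np.random.default_rng(14)
# Toy intermittent rain at 15-min resolution for 30 days: mostly zero, rare bursts
rain = rng.gamma(0.5, 2.0, size=30 * 96) * (rng.random(30 * 96) < 0.1)
for hours in (1, 3, 12):
    print(hours, "h revisit:", round(sampling_error_rms(rain, hours * 4), 4))
```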
Groundspeed filtering for CTAS
NASA Technical Reports Server (NTRS)
Slater, Gary L.
1994-01-01
Ground speed is one of the radar observables obtained along with position and heading from the NASA Ames Center radar. Within the Center TRACON Automation System (CTAS), groundspeed is converted into airspeed using the wind speeds which CTAS obtains from the NOAA weather grid. This airspeed is then used in the trajectory synthesis logic which computes the trajectory for each individual aircraft. The time history of the typical radar groundspeed data is generally quite noisy, with high-frequency variations on the order of five knots, and occasional 'outliers' which can be significantly different from the probable true speed. To smooth out these speeds and make the ETA estimate less erratic, filtering of the groundspeed is done within CTAS. In its base form, the CTAS filter is a 'moving average' filter which averages the last ten radar values. In addition, there is separate logic to detect and correct for 'outliers', and acceleration logic which limits the groundspeed change in adjacent time samples. As will be shown, these additional modifications do cause significant changes in the actual groundspeed filter output. The conclusion is that the current groundspeed filter logic is unable to accurately track the speed variations observed on many aircraft. The Kalman filter logic, however, appears to be an improvement to the current algorithm used to smooth groundspeed variations, while being simpler and more efficient to implement. Additional logic to test for true 'outliers' can easily be added by looking at the difference in the a priori and a posteriori Kalman estimates, and not updating if the difference in these quantities is too large.
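A sketch contrasting the two filters discussed: a 10-point moving average of radar groundspeed versus a scalar random-walk Kalman filter. The noise levels and tuning values are illustrative, not CTAS parameters.

```python
import numpy as np

def moving_average(z, n=10):
    out = np.convolve(z, np.ones(n) / n, mode="full")[: len(z)]
    out[: n - 1] = z[: n - 1]                    # warm-up: pass raw values through
    return out

def kalman_1d(z, q=1.0, r=25.0):
    """q: process variance (speed change per step); r: radar noise variance."""
    x, p = z[0], r
    xs = []
    for zi in z:
        p = p + q                                # predict (random-walk speed model)
        k = p / (p + r)                          # Kalman gain
        x = x + k * (zi - x)                     # update with the radar measurement
        p = (1 - k) * p
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(15)
true = 240.0 + np.cumsum(rng.normal(0.0, 0.5, size=120))   # slowly varying speed
radar = true + rng.normal(0.0, 5.0, size=120)              # ~5-kt radar noise
print("MA rms error:", np.sqrt(np.mean((moving_average(radar) - true) ** 2)))
print("KF rms error:", np.sqrt(np.mean((kalman_1d(radar) - true) ** 2)))
```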
Time Series Modelling of Syphilis Incidence in China from 2005 to 2012
Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau
2016-01-01
Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
The Association between Air Pollution and Outpatient and Inpatient Visits in Shenzhen, China
Liu, Yachuan; Chen, Shanen; Xu, Jian; Liu, Xiaojian; Wu, Yongsheng; Zhou, Lin; Cheng, Jinquan; Ma, Hanwu; Zheng, Jing; Lin, Denan; Zhang, Li; Chen, Lili
2018-01-01
Nowadays, air pollution is a severe environmental problem in China. To investigate the effects of ambient air pollution on health, a time series analysis of daily outpatient and inpatient visits in 2015 was conducted in Shenzhen (China). A generalized additive model was employed to analyze associations between six air pollutants (namely SO2, CO, NO2, O3, PM10, and PM2.5) and daily outpatient and inpatient visits, after adjusting for confounding meteorological factors, time, and day-of-the-week effects. Significant associations between air pollutants and two types of hospital visits were observed. The estimated increase in overall outpatient visits associated with each 10 µg/m3 increase in air pollutant concentration ranged from 0.48% (O3 at lag 2) to 11.48% (SO2 with 2-day moving average); for overall inpatient visits it ranged from 0.73% (O3 at lag 7) to 17.13% (SO2 with 8-day moving average). Our results also suggested heterogeneity of the health effects across different outcomes and in different populations. The findings of the present study indicate that even in Shenzhen, a less polluted area of China, significant associations exist between air pollution and the daily number of overall outpatient and inpatient visits. PMID:29360738
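The multi-day moving-average exposure metrics used in such studies (e.g., the 2-day and 8-day moving averages of SO2) are simple rolling means of the daily concentration series. A small pandas sketch; whether the window includes the current day is our assumption, as the abstract does not say:

    import pandas as pd

    def moving_average_lags(series: pd.Series, max_days: int = 8) -> pd.DataFrame:
        """k-day moving-average exposures (a 2-day moving average is taken
        here as the mean of today and yesterday, and so on)."""
        return pd.DataFrame(
            {f"ma_{k}d": series.rolling(window=k, min_periods=k).mean()
             for k in range(2, max_days + 1)}
        )

    # usage: df["SO2_ma2"] = moving_average_lags(df["SO2"])["ma_2d"]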
A stochastic approach to noise modeling for barometric altimeters.
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-11-18
The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes (the effects of these two are prominent for long-time and short-time motion tracking, respectively); and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component, and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
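To make the noise model concrete, the sketch below simulates the first-order Gauss-Markov component plus white noise and applies an M-point moving average filter. All parameter values (correlation time, standard deviations, window length) are illustrative assumptions, not the identified values from the paper:

    import numpy as np

    def gauss_markov(n, sigma, tau, dt=1.0, seed=0):
        """First-order Gauss-Markov process x_k = a*x_{k-1} + w_k, with
        a = exp(-dt/tau), driven so the stationary std is sigma."""
        rng = np.random.default_rng(seed)
        a = np.exp(-dt / tau)
        w = rng.normal(0.0, sigma * np.sqrt(1 - a**2), n)
        x = np.zeros(n)
        for k in range(1, n):
            x[k] = a * x[k - 1] + w[k]
        return x

    def moving_average(x, m):
        """M-point moving average filter."""
        return np.convolve(x, np.ones(m) / m, mode="same")

    # altitude noise: GM component + white (electronic/quantization) noise
    noise = gauss_markov(6000, sigma=0.3, tau=60.0) \
            + np.random.default_rng(1).normal(0, 0.1, 6000)
    smoothed = moving_average(noise, m=25)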
Population drinking and fatal injuries in Eastern Europe: a time-series analysis of six countries.
Landberg, Jonas
2010-01-01
To estimate to what extent injury mortality rates in six Eastern European countries are affected by changes in population drinking during the post-war period. The analysis included injury mortality rates and per capita alcohol consumption in Russia, Belarus, Poland, Hungary, Bulgaria and the former Czechoslovakia. Total population and gender-specific models were estimated using autoregressive integrated moving average (ARIMA) time-series modelling. The estimates for the total population were generally positive and significant. For Russia and Belarus, a 1-litre increase in per capita consumption was associated with an increase in injury mortality of 7.5 and 5.5 per 100,000 inhabitants, respectively. The estimates for the remaining countries ranged between 1.4 and 2.0. The gender-specific estimates displayed national variations similar to the total population estimates, although the estimates for males were higher than for females in all countries. The results suggest that changes in per capita consumption have a significant impact on injury mortality in these countries, but that the strength of the association tends to be stronger in countries where intoxication-oriented drinking is more common.
Sabushimike, Donatien; Na, Seung You; Kim, Jin Young; Bui, Ngoc Nam; Seo, Kyung Sik; Kim, Gil Gyeom
2016-01-01
The detection of a moving target using an IR-UWB Radar involves the core task of separating the waves reflected by the static background from those reflected by the moving target. This paper investigates the capacity of the low-rank and sparse matrix decomposition approach to separate the background and the foreground in the context of UWB Radar-based moving target detection. Robust PCA models are criticized for being batched-data-oriented, which makes them inconvenient in realistic environments where frames need to be processed as they are recorded in real time. In this paper, a novel method based on overlapping-windows processing is proposed to cope with online processing. The method consists of processing a small batch of frames which is continually updated, without changing its size, as new frames are captured. We prove that RPCA (via its Inexact Augmented Lagrange Multiplier (IALM) model) can successfully separate the two subspaces, which enhances the accuracy of target detection. The overlapping-windows processing method converges to the same optimal solution as its batch counterpart (i.e., processing batched data with RPCA), and both methods prove the robustness and efficiency of RPCA over the classic PCA and the commonly used exponential averaging method. PMID:27598159
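A minimal version of RPCA via the Inexact Augmented Lagrange Multiplier method, of the kind applied here to each window of radar frames (each frame stored as a column of D), can be written as follows. The step sizes and stopping rule follow the standard IALM recipe and are not tuned to radar data:

    import numpy as np

    def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
        """Inexact ALM RPCA: decompose D = L + S with L low-rank
        (background) and S sparse (moving target)."""
        m, n = D.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        norm_D = np.linalg.norm(D, "fro")
        norm_two = np.linalg.norm(D, 2)
        Y = D / max(norm_two, np.abs(D).max() / lam)   # dual initialization
        mu, rho = 1.25 / norm_two, 1.5
        S = np.zeros_like(D)
        for _ in range(max_iter):
            # singular value thresholding for the low-rank part
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
            # soft-thresholding (shrinkage) for the sparse part
            R = D - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
            Z = D - L - S
            Y = Y + mu * Z
            mu = mu * rho
            if np.linalg.norm(Z, "fro") / norm_D < tol:
                break
        return L, S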
A method for estimating the performance of photovoltaic systems
NASA Astrophysics Data System (ADS)
Clark, D. R.; Klein, S. A.; Beckman, W. A.
A method is presented for predicting the long-term average performance of photovoltaic systems having storage batteries and subject to any diurnal load profile. The monthly-average fraction of the load met by the system is estimated from array parameters and monthly-average meteorological data. The method is based on radiation statistics and utilizability, and can account for variability in the electrical demand as well as in solar radiation.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Spectrum-based estimators of the bivariate Hurst exponent
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2014-12-01
We discuss two alternative spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators depend on the part of the spectrum taken into consideration during estimation, a simulation study showing the performance of the estimators under varying bandwidth parameter, as well as correlation between the processes and their specification, is provided as well. These estimators are less biased than the existing averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
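As an illustration of the cross-periodogram idea, the sketch below regresses the log absolute cross-periodogram on log frequency over the first m frequencies and converts the slope to a bivariate Hurst exponent via the scaling |I_xy(f)| ~ f^(1 - 2*H_xy). The bandwidth choice is an assumption, and this is a simplified estimator rather than the authors' exact formulation:

    import numpy as np

    def cross_periodogram_hurst(x, y, m_frac=0.2):
        """Estimate the bivariate Hurst exponent H_xy from the
        low-frequency scaling of the cross-periodogram."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        fx = np.fft.rfft(x - x.mean())
        fy = np.fft.rfft(y - y.mean())
        I_xy = np.abs(fx * np.conj(fy)) / (2 * np.pi * n)  # cross-periodogram
        freqs = np.fft.rfftfreq(n)
        m = max(10, int(m_frac * (n // 2)))   # bandwidth parameter (assumed)
        sel = slice(1, m + 1)                 # skip the zero frequency
        slope, _ = np.polyfit(np.log(freqs[sel]), np.log(I_xy[sel]), 1)
        return (1.0 - slope) / 2.0            # slope = 1 - 2*H_xy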
A simple method for estimating frequency response corrections for eddy covariance systems
W. J. Massman
2000-01-01
A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates based on observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated by the inverse distance interpolated observations, raw CMAQ and fused CMAQ results can be significantly different. A population-weighted average should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
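The inverse distance weighting step that spreads monitor-minus-model residuals onto grid cells can be sketched in a few lines; the power-of-two weighting is a common default and an assumption here:

    import numpy as np

    def idw(obs_xy, obs_vals, grid_xy, power=2.0):
        """Inverse distance weighted interpolation of monitor values
        (e.g., observation-minus-CMAQ residuals) onto grid cell centers.
        obs_xy: (n_obs, 2); obs_vals: (n_obs,); grid_xy: (n_grid, 2)."""
        d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
        d = np.maximum(d, 1e-6)          # avoid division by zero at monitors
        w = 1.0 / d**power
        return (w @ obs_vals) / w.sum(axis=1)

    # fused field = raw CMAQ field + idw(monitor_xy, obs - cmaq_at_monitors, grid_xy)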
Do alcohol excise taxes affect traffic accidents? Evidence from Estonia.
Saar, Indrek
2015-01-01
This article examines the association between alcohol excise tax rates and alcohol-related traffic accidents in Estonia. Monthly time series of traffic accidents involving drunken motor vehicle drivers from 1998 through 2013 were regressed on real average alcohol excise tax rates while controlling for changes in economic conditions and the traffic environment. Specifically, regression models with autoregressive integrated moving average (ARIMA) errors were estimated in order to deal with serial correlation in the residuals. Counterfactual models were also estimated in order to check the robustness of the results, using the level of non-alcohol-related traffic accidents as a dependent variable. A statistically significant (P < .01), strong negative relationship between the real average alcohol excise tax rate and alcohol-related traffic accidents was found under alternative model specifications. For instance, the regression model with ARIMA (0, 1, 1)(0, 1, 1) errors revealed that a 1-unit increase in the tax rate is associated with a 1.6% decrease in the level of accidents per 100,000 population involving drunk motor vehicle drivers. No similar association was found in the counterfactual models for non-alcohol-related traffic accidents. This article indicates that the level of alcohol-related traffic accidents in Estonia was affected by changes in real average alcohol excise taxes during the period 1998-2013. Therefore, in addition to other measures, the use of alcohol taxation is warranted as a policy instrument in tackling alcohol-related traffic accidents.
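A regression with seasonal ARIMA errors of this form can be estimated with statsmodels' SARIMAX, which accepts exogenous regressors. The data below are synthetic stand-ins with hypothetical names, since the study's series are not reproduced here; a seasonal period of 12 is assumed for the monthly data:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # synthetic stand-ins for the 1998-2013 monthly series (assumed shapes)
    idx = pd.date_range("1998-01", "2013-12", freq="MS")
    rng = np.random.default_rng(7)
    tax = pd.Series(np.linspace(1.0, 3.0, len(idx))
                    + rng.normal(0, 0.05, len(idx)), index=idx, name="tax")
    accidents = 20 - 1.5 * tax + rng.normal(0, 1.0, len(idx))

    # regression on the tax rate with ARIMA(0,1,1)(0,1,1)_12 errors
    model = sm.tsa.SARIMAX(accidents, exog=tax, order=(0, 1, 1),
                           seasonal_order=(0, 1, 1, 12))
    res = model.fit(disp=False)
    print(res.params)   # includes the estimated tax effect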
Plans, Patterns, and Move Categories Guiding a Highly Selective Search
NASA Astrophysics Data System (ADS)
Trippen, Gerhard
In this paper we present our ideas for an Arimaa-playing program (also called a bot) that uses plans and pattern matching to guide a highly selective search. We restrict move generation to moves in certain move categories to reduce the number of moves considered by the bot significantly. Arimaa is a modern board game that can be played with a standard Chess set. However, the rules of the game are not at all like those of Chess. Furthermore, Arimaa was designed to be as simple and intuitive as possible for humans, yet challenging for computers. While all established Arimaa bots use alpha-beta search with a variety of pruning techniques and other heuristics ending in an extensive positional leaf node evaluation, our new bot, Rat, starts with a positional evaluation of the current position. Based on features found in the current position - supported by pattern matching using a directed position graph - our bot Rat decides which of a given set of plans to follow. The plan then dictates what types of moves can be chosen. This is another major difference from bots that generate "all" possible moves for a particular position. Rat is only allowed to generate moves that belong to certain categories. Leaf nodes are evaluated only by a straightforward material evaluation to help avoid moves that lose material. This highly selective search looks, on average, at only 5 moves out of 5,000 to over 40,000 possible moves in a middle game position.
microclim: Global estimates of hourly microclimate based on long-term monthly climate averages
Kearney, Michael R; Isaac, Andrew P; Porter, Warren P
2014-01-01
The mechanistic links between climate and the environmental sensitivities of organisms occur through the microclimatic conditions that organisms experience. Here we present a dataset of gridded hourly estimates of typical microclimatic conditions (air temperature, wind speed, relative humidity, solar radiation, sky radiation and substrate temperatures from the surface to 1 m depth) at high resolution (~15 km) for the globe. The estimates are for the middle day of each month, based on long-term average macroclimates, and include six shade levels and three generic substrates (soil, rock and sand) per pixel. These data are suitable for deriving biophysical estimates of the heat, water and activity budgets of terrestrial organisms. PMID:25977764
The potential human health risk(s) from exposure to chemicals under conditions for which adequate human or animal data are not available must frequently be assessed. Exposure scenario is particularly important for the acute neurotoxic effects of volatile organic compounds (VOCs)...
Neural Bases of Sequence Processing in Action and Language
ERIC Educational Resources Information Center
Carota, Francesca; Sirigu, Angela
2008-01-01
Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for…
Does Mother Know Best? Treatment Adherence as a Function of Anticipated Treatment Benefit
Glymour, M. Maria; Nguyen, Quynh; Matsouaka, Roland; Tchetgen Tchetgen, Eric J.; Schmidt, Nicole M.; Osypuk, Theresa L.
2016-01-01
Background We describe bias resulting from individualized treatment selection, which occurs when treatment has heterogeneous effects and individuals selectively choose treatments of greatest benefit to themselves. This pernicious bias may confound estimates from observational studies and lead to important misinterpretation of intent-to-treat analyses of randomized trials. Despite the potentially serious threat to inferences, individualized treatment selection has rarely been formally described or assessed. Methods The Moving to Opportunity (MTO) trial randomly assigned subsidized rental vouchers to low-income families in high-poverty public housing. We assessed the Kessler-6 psychological distress and Behavior Problems Index outcomes for 2,829 adolescents 4-7 years after randomization. Among families randomly assigned to receive vouchers, we estimated the probability of moving (treatment), predicted by pre-randomization characteristics (c-statistic=0.63). We categorized families into tertiles of this estimated probability of moving, and compared instrumental variable effect estimates for moving on the Behavior Problems Index and Kessler-6 across tertiles. Results Instrumental variable estimated effects of moving on the Behavior Problems Index were most adverse for boys least likely to move (b=0.93; 95% CI: 0.33, 1.53) compared to boys most likely to move (b=0.14; 95% CI: −0.15, 0.44; p=.02 for treatment*tertile interaction). Effects on Kessler-6 were more beneficial for girls least likely to move compared to girls most likely to move (−0.62 vs. 0.02; interaction p=.03). Conclusions Evidence of individualized treatment selection differed by child gender and outcome and should be evaluated in randomized trial reports, especially when heterogeneous treatment effects are likely and non-adherence is common. PMID:26628424
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead, with average mean absolute errors of 12.45% and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the average feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to dynamically estimate the AA requirements of each animal, taking into account the intake and growth changes of the animal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viner, Brian J.; Jannik, Tim; Stone, Daniel
Firefighters responding to wildland fires where surface litter and vegetation contain radiological contamination will receive a radiological dose by inhaling resuspended radioactive material in the smoke. This may increase their lifetime risk of contracting certain types of cancer. Using published data, we modelled hypothetical radionuclide emissions, dispersion and dose for 70th and 97th percentile environmental conditions and for average and high fuel loads at the Savannah River Site. We predicted downwind concentration and potential dose to firefighters for radionuclides of interest (137Cs, 238Pu, 90Sr and 210Po). Predicted concentrations exceeded dose guidelines in the base case scenario emissions of 1.0 × 10^7 Bq ha^-1 for 238Pu at 70th percentile environmental conditions and average fuel load levels for both 4- and 14-h shifts. Under 97th percentile environmental conditions and high fuel loads, dose guidelines were exceeded for several reported cases for 90Sr, 238Pu and 210Po. Potential for exceeding dose guidelines was mitigated by including plume rise (>2 m s^-1) or moving a small distance from the fire owing to large concentration gradients near the edge of the fire. As a result, our approach can quickly estimate potential dose from airborne radionuclides in wildland fire and assist decision-making to reduce firefighter exposure.
NASA Astrophysics Data System (ADS)
Wu, Yu-Jie; Lin, Guan-Wei
2017-04-01
Since 1999, Taiwan has experienced a rapid rise in the number of landslides, and the number reached a peak after the 2009 Typhoon Morakot. Although it has been shown that ground-motion signals induced by slope processes can be recorded by seismographs, they are difficult to distinguish in continuous seismic records owing to the lack of distinct P and S waves. In this study, we combine three common seismic detectors: the short-term average/long-term average (STA/LTA) approach, and two diagnostic functions based on the moving average and the scintillation index. Based on these detectors, we have established an auto-detection algorithm for landslide-quakes, with detection thresholds defined to distinguish landslide-quakes from earthquakes and background noise. To further evaluate the proposed detection algorithm, we apply it to seismic archives recorded by the Broadband Array in Taiwan for Seismology (BATS) during the 2009 Typhoon Morakot, and the discrete landslide-quakes detected by the automatic algorithm are then located. The results show that the landslide-detection results are consistent with those of visual inspection, and the algorithm can hence be used to automatically monitor landslide-quakes.
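The STA/LTA detector at the core of such algorithms is compact: the ratio of a short-window to a long-window average of signal power flags transient energy. A minimal sketch; the window lengths and the typical trigger threshold of 3-5 are common seismological defaults, not the paper's tuned values:

    import numpy as np

    def sta_lta(signal, fs, sta_win=1.0, lta_win=30.0):
        """Short-term average / long-term average detector. Returns the
        STA/LTA ratio of the squared signal; samples where the ratio
        exceeds a threshold (commonly 3-5) flag candidate events."""
        power = np.asarray(signal, dtype=float) ** 2
        n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
        sta = np.convolve(power, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(power, np.ones(n_lta) / n_lta, mode="same")
        return sta / np.maximum(lta, 1e-12)   # guard against division by zero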
NASA Astrophysics Data System (ADS)
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-11-01
Nowadays, the study of volatility, especially in stock markets, has gained much attention from people engaged in the financial and economic sectors. Applications of the volatility concept in financial economics include the valuation of option pricing, estimation of financial derivatives, hedging of investment risk, and so on. There are various ways to measure volatility. In this study, two methods are used: the simple standard deviation and the Exponentially Weighted Moving Average (EWMA). The focus of this study is to measure the volatility of three different sectors of business in Malaysia, called primary, secondary and tertiary, using both methods. The daily and annual volatilities of the different business sectors, based on stock prices for the period 1 January 2014 to December 2014, have been calculated in this study. Results show that the different patterns of closing stock prices and returns give different volatility values when calculated using the simple method and the EWMA method.
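The EWMA recursion is short enough to state directly: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2. A sketch with the common RiskMetrics decay factor lam = 0.94; the paper's actual decay factor and annualization convention are not given in the abstract, so both are assumptions:

    import numpy as np

    def ewma_volatility(returns, lam=0.94):
        """RiskMetrics-style EWMA volatility:
        sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2."""
        r = np.asarray(returns, dtype=float)
        sigma2 = np.empty(len(r))
        sigma2[0] = r[: min(20, len(r))].var()   # seed with a sample variance
        for t in range(1, len(r)):
            sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
        return np.sqrt(sigma2)

    # daily -> annualized, assuming ~252 trading days:
    # ann_vol = ewma_volatility(daily_returns)[-1] * np.sqrt(252)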
Linking ‘toxic outliers’ to environmental justice communities
NASA Astrophysics Data System (ADS)
Collins, Mary B.; Munoz, Ian; JaJa, Joseph
2016-01-01
Several key studies have found that a small minority of producers, polluting at levels far exceeding group averages, generate the majority of overall exposure to industrial toxics. Frequently, such patterns go unnoticed and are understudied outside of the academic community. To our knowledge, no research to date has systematically described the scope and extent of extreme variations in industrially based exposure estimates and sought to link inequities in harm produced to inequities in exposure. In an analysis of all permitted industrial facilities across the United States, we show that there exists a class of hyper-polluters—the worst-of-the-worst—that disproportionately expose communities of color and low income populations to chemical releases. This study hopes to move beyond a traditional environmental justice research frame, bringing new computational methods and perspectives aimed at the empirical study of societal power dynamics. Our findings suggest the possibility that substantial environmental gains may be made through selective environmental enforcement, rather than sweeping initiatives.
The Meandering Margin of the Meteorological Moist Tropics
NASA Astrophysics Data System (ADS)
Mapes, Brian E.; Chung, Eui Seok; Hannah, Walter M.; Masunaga, Hirohiko; Wimmers, Anthony J.; Velden, Christopher S.
2018-01-01
Bimodally distributed column water vapor (CWV) indicates a well-defined moist regime in the Tropics, above a margin value near 48 kg m-2 in current climate (about 80% of column saturation). Maps reveal this margin as a meandering, sinuous synoptic contour bounding broad plateaus of the moist regime. Within these plateaus, convective storms of distinctly smaller convective and mesoscales occur sporadically. Satellite data composites across the poleward most margin reveal its sharpness, despite the crude averaging: precipitation doubles within 100 km, marked by both enhancement and deepening of cloudiness. Transported patches and filaments of the moist regime cause consequential precipitation events within and beyond the Tropics. Distinguishing synoptic flows that
Catchment-scale groundwater recharge and vegetation water use efficiency
NASA Astrophysics Data System (ADS)
Troch, P. A. A.; Dwivedi, R.; Liu, T.; Meira, A.; Roy, T.; Valdés-Pineda, R.; Durcik, M.; Arciniega, S.; Brena-Naranjo, J. A.
2017-12-01
Precipitation undergoes a two-step partitioning when it falls on the land surface. At the land surface and in the shallow subsurface, rainfall or snowmelt can either runoff as infiltration/saturation excess or quick subsurface flow. The rest will be stored temporarily in the root zone. From the root zone, water can leave the catchment as evapotranspiration or percolate further and recharge deep storage (e.g. fractured bedrock aquifer). Quantifying the average amount of water that recharges deep storage and sustains low flows is extremely challenging, as we lack reliable methods to quantify this flux at the catchment scale. It was recently shown, however, that for semi-arid catchments in Mexico, an index of vegetation water use efficiency, i.e. the Horton index (HI), could predict deep storage dynamics. Here we test this finding using 247 MOPEX catchments across the conterminous US, including energy-limited catchments. Our results show that the observed HI is indeed a reliable predictor of deep storage dynamics in space and time. We further investigate whether the HI can also predict average recharge rates across the conterminous US. We find that the HI can reliably predict the average recharge rate, estimated from the 50th percentile flow of the flow duration curve. Our results compare favorably with estimates of average recharge rates from the US Geological Survey. Previous research has shown that HI can be reliably estimated based on aridity index, mean slope and mean elevation of a catchment (Voepel et al., 2011). We recalibrated Voepel's model and used it to predict the HI for our 247 catchments. We then used these predicted values of the HI to estimate average recharge rates for our catchments, and compared them with those estimated from observed HI. We find that the accuracies of our predictions based on observed and predicted HI are similar. This provides an estimation method of catchment-scale average recharge rates based on easily derived catchment characteristics, such as climate and topography, and free of discharge measurements.
A Simple Introduction to Moving Least Squares and Local Regression Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and finally, Local Regression Estimation (LRE).
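A one-dimensional moving least squares estimator can be written compactly: at each evaluation point, solve a weighted least-squares polynomial fit with weights that decay with distance from that point. The Gaussian weight and bandwidth below are illustrative choices, not prescriptions from the note:

    import numpy as np

    def moving_least_squares(x_eval, x_data, f_data, h=0.5, degree=1):
        """Moving least squares: at each evaluation point, fit a local
        weighted polynomial to scattered data and return its value there."""
        results = []
        for xe in np.atleast_1d(x_eval):
            w = np.exp(-((x_data - xe) / h) ** 2)    # localizing weights
            V = np.vander(x_data - xe, degree + 1)   # basis shifted to xe
            WV = V * w[:, None]
            # weighted normal equations: (V^T W V) c = V^T W f
            coef = np.linalg.solve(V.T @ WV, WV.T @ f_data)
            results.append(coef[-1])                 # constant term = value at xe
        return np.array(results)

    # e.g.: y_hat = moving_least_squares(np.linspace(0, 1, 200), x, f, h=0.1)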
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
O'Leary, D D; Lin, D C; Hughson, R L
1999-09-01
The heart rate component of the arterial baroreflex gain (BRG) was determined with auto-regressive moving-average (ARMA) analysis during each of spontaneous (SB) and random breathing (RB) protocols. Ten healthy subjects completed each breathing pattern on two different days in each of two different body positions, supine (SUP) and head-up tilt (HUT). The R-R interval, systolic arterial pressure (SAP) and instantaneous lung volume were recorded continuously. BRG was estimated from the ARMA impulse response relationship of R-R interval to SAP and from the spontaneous sequence method. The results indicated that both the ARMA and spontaneous sequence methods were reproducible (r = 0.76 and r = 0.85, respectively). As expected, BRG was significantly less in the HUT compared to SUP position for both ARMA (mean +/- SEM; 3.5 +/- 0.3 versus 11.2 +/- 1.4 ms mmHg-1; P < 0.01) and spontaneous sequence analysis (10.3 +/- 0.8 versus 31.5 +/- 2.3 ms mmHg-1; P < 0.001). However, no significant difference was found between BRG during RB and SB protocols for either ARMA (7.9 +/- 1.4 versus 6.7 +/- 0.8 ms mmHg-1; P = 0.27) or spontaneous sequence methods (21.8 +/- 2.7 versus 20.0 +/- 2.1 ms mmHg-1; P = 0.24). BRG was correlated during RB and SB protocols (r = 0.80; P < 0.0001). ARMA and spontaneous BRG estimates were correlated (r = 0.79; P < 0.0001), with spontaneous sequence values being consistently larger (P < 0.0001). In conclusion, we have shown that ARMA-derived BRG values are reproducible and that they can be determined during SB conditions, making the ARMA method appropriate for use in a wider range of patients.
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.
A (137)Cs erosion model with moving boundary.
Yin, Chuan; Ji, Hongbing
2015-12-01
A novel quantitative model of the relationship between diffused concentration changes and erosion rates, for use in assessing soil losses, was developed. It was derived from the analysis of surface soil (137)Cs flux variation under a persistent erosion effect, and is based on the moving-boundary principle of geochemical kinetics. The new moving boundary model improves on the basic simplified transport model (Zhang et al., 2008), and mainly applies to uniform-rainfall areas subject to long-term soil erosion. The simulation results show that, under long-term soil erosion, the influence on (137)Cs concentration decreases exponentially with increasing depth. Fitting the new model to the measured (137)Cs depth distribution data at the Zunyi site, Guizhou Province, China, which has typical uniform rainfall, gave a good fit with R^2 = 0.92. To compare the soil erosion rates calculated by the simple transport model and the new model, we take the Kaixian reference profile as an example. The soil losses estimated by the previous simplified transport model are greater than those estimated by the new moving boundary model, which is consistent with our expectations.
Recovering bridge deflections from collocated acceleration and strain measurements
NASA Astrophysics Data System (ADS)
Bell, M.; Ma, T. W.; Xu, N. S.
2015-04-01
In this research, an internal model based method is proposed to estimate the displacement profile of a bridge subjected to a moving traffic load using a combination of acceleration and strain measurements. The structural response is assumed to be within the linear range. The deflection profile is assumed to be dominated by the fundamental mode of the bridge, therefore requiring knowledge of only the first mode. This still holds true under multiple-vehicle loading, as the higher mode shapes do not significantly affect the overall response of the structure. Using the structural modal parameters and partial knowledge of the moving vehicle load, internal models of the structure and the moving load can be respectively established, which can be used to form an autonomous state-space representation of the system. The structural displacements, velocities, and accelerations are the states of such a system, and it is fully observable when the measured output contains structural accelerations and strains. Reliable estimates of structural displacements are obtained using the standard Kalman filtering technique. The effectiveness and robustness of the proposed method have been demonstrated and evaluated via numerical simulation of a simply supported single-span concrete bridge subjected to a moving traffic load.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassianov, Evgueni I.; Barnard, James C.; Flynn, Connor J.
Areal-averaged albedos are particularly difficult to measure in coastal regions, because the surface is not homogeneous, consisting of a sharp demarcation between land and water. With this difficulty in mind, we evaluate a simple retrieval of areal-averaged surface albedo using ground-based measurements of atmospheric transmission alone under fully overcast conditions. To illustrate the performance of our retrieval, we find the areal-averaged albedo using measurements from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (415, 500, 615, 673, and 870 nm). These MFRSR data are collected at a coastal site on Graciosa Island, Azores, supported by the U.S. Department of Energy's (DOE's) Atmospheric Radiation Measurement (ARM) Program. The areal-averaged albedos obtained from the MFRSR are compared with collocated and coincident Moderate Resolution Imaging Spectroradiometer (MODIS) white-sky albedo at four nominal wavelengths (470, 560, 670 and 860 nm). These comparisons are made during a 19-month period (June 2009 - December 2010). We also calculate composite-based spectral values of surface albedo by a weighted-average approach using estimated fractions of major surface types observed in an area surrounding this coastal site. Taken as a whole, these three methods of finding albedo show spectral and temporal similarities, and suggest that our simple, transmission-based technique holds promise, but with estimated errors of about ±0.03. Additional work is needed to reduce this uncertainty in areas with inhomogeneous surfaces.
An Indoor Continuous Positioning Algorithm on the Move by Fusing Sensors and Wi-Fi on Smartphones.
Li, Huaiyu; Chen, Xiuwan; Jing, Guifei; Wang, Yuan; Cao, Yanfeng; Li, Fei; Zhang, Xinlong; Xiao, Han
2015-12-11
Wi-Fi indoor positioning algorithms experience large positioning errors and low stability when continuously positioning terminals that are on the move. This paper proposes a novel indoor continuous positioning algorithm for terminals on the move, fusing sensors and Wi-Fi on smartphones. The main innovations include an improved Wi-Fi positioning algorithm and a novel positioning fusion algorithm named the Trust Chain Positioning Fusion (TCPF) algorithm. The improved Wi-Fi positioning algorithm was designed based on the properties of Wi-Fi signals on the move, which were found in a novel "quasi-dynamic" Wi-Fi signal experiment. The TCPF algorithm is proposed to realize the "process-level" fusion of Wi-Fi and Pedestrian Dead Reckoning (PDR) positioning, and includes three parts: trusted point determination, trust state, and the positioning fusion algorithm. An experiment was carried out for verification in a typical indoor environment, and the average positioning error on the move was 1.36 m, a decrease of 28.8% compared to an existing algorithm. The results show that the proposed algorithm can effectively reduce the influence of unstable Wi-Fi signals and improve the accuracy and stability of indoor continuous positioning on the move.
Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone
2017-06-01
The estimation of covariance matrices is of prime importance for analyzing the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of accuracy in the classification of electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
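The trimmed-average idea translates directly into code: compute the distance of each SCM from the grand mean, drop the farthest fraction, and re-average. The sketch below uses the Frobenius distance for simplicity, whereas the letter also considers Riemannian metrics; the trim fraction is an assumption:

    import numpy as np

    def trimmed_average_scm(covs, trim_frac=0.2):
        """Trimmed average of sample covariance matrices: discard the
        matrices farthest (Frobenius distance) from the grand mean,
        then average the remainder as the TSM reference matrix."""
        covs = np.asarray(covs)                  # shape (n_trials, c, c)
        grand_mean = covs.mean(axis=0)
        d = np.linalg.norm(covs - grand_mean, axis=(1, 2))
        n_keep = int(len(covs) * (1 - trim_frac))
        keep = np.argsort(d)[:n_keep]            # closest matrices survive
        return covs[keep].mean(axis=0)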
NASA Astrophysics Data System (ADS)
Hosseini, Seiyed Mossa; Ataie-Ashtiani, Behzad; Simmons, Craig T.
2018-04-01
Despite advancements in developing physics-based formulations to estimate the sheet-flow travel time (tSHF), the quantification of the relative impacts of influential parameters on tSHF has not previously been considered. In this study, a brief review of the physics-based formulations to estimate tSHF is provided, including kinematic wave (K-W) theory in combination with Manning's roughness (K-M) and with the Darcy-Weisbach friction formula (K-D) over single and multiple planes. Then, the relative significance of input parameters to the developed approaches is quantified by a density-based global sensitivity analysis (GSA). The performance of the K-M formula, considering zero-upstream and uniform flow depths (the so-called K-M1 and K-M2 variants), and of the K-D formula in estimating tSHF over a single plane surface was assessed using several sets of experimental data collected from previous studies. The compatibility of the developed models for estimating tSHF over multiple planes, considering the temporal rainfall distributions of the Natural Resources Conservation Service, NRCS (I, Ia, II, and III), is scrutinized through several real-world examples. The results obtained demonstrate that the main controlling parameters of tSHF through the K-D and K-M formulae are the length of the surface plane (mean sensitivity index T̂i = 0.72) and flow resistance (mean T̂i = 0.52), respectively. Conversely, the flow temperature and initial abstraction ratio of rainfall have the lowest influence on tSHF (mean T̂i of 0.11 and 0.12, respectively). The significant role of the flow regime in the estimation of tSHF over a single plane and a cascade of planes is also demonstrated. Results reveal that the K-D formulation provides more precise tSHF over the single plane surface, with an average percentage error (APE) of 9.23% (the APE for the K-M1 and K-M2 formulae was 13.8% and 36.33%, respectively). The superiority of Manning-jointed formulae in the estimation of tSHF is due to the incorporation of effects from different flow regimes as flow moves downgradient, which is affected by one or more factors including high excess rainfall intensities, low flow resistance, high degrees of imperviousness, long surfaces, steep slopes, and the domination of rainfall distributions such as NRCS Type I, II, or III.
Matsui, Yasuhiro; Hitosugi, Masahito; Doi, Tsutomu; Oikawa, Shoko; Takahashi, Kunio; Ando, Kenichi
2013-01-01
The objective of this study is to evaluate severe conditions in car-to-pedestrian near-miss situations using the pedestrian time-to-vehicle (pedestrian TTV), the time at which the pedestrian would reach the path of the forward-moving car. Since the information available from real-world accidents was limited, the authors focused on near-miss situations captured by driving recorders installed in passenger cars. In their previous study, the authors found similarities between accidents and near-miss incidents. It was made clear that the situations in pedestrian accidents could be estimated from the near-miss incident data, which included motion pictures capturing pedestrian behaviors. In that previous study, the vehicle time-to-collision (vehicle TTC) was investigated from the near-miss incident data. The authors analyzed data for 101 near-miss car-to-pedestrian incident events in which pedestrians were crossing the road in front of a forward-moving car at an intersection or on a straight road. Using video of near-miss car-to-pedestrian incidents captured by drive recorders and collected by the Society of Automotive Engineers of Japan (J-SAE) from 2005 to 2009, the pedestrian TTV was calculated. Based on the calculated pedestrian TTV, one of the severe conditions in car-to-pedestrian near-miss situations was evaluated for pedestrians who emerged from behind an obstruction such as a building, a parked vehicle or a moving vehicle. Focusing on the cases of pedestrians who emerged from behind an obstruction, the averages of the vehicle TTC and pedestrian TTV were 1.31 and 1.05 seconds, respectively, and did not demonstrate a significant difference. Since the averages of the vehicle TTC and pedestrian TTV were similar, there would be a higher possibility of contact between a car and a pedestrian if the driver and pedestrian were not paying attention. The authors propose that the moving speed of a pedestrian surrogate "dummy" should be determined considering the near-miss incident situations for the evaluation of a CDMBS for pedestrian detection. The authors also propose that the time-to-collision of the dummy to the tested car during the evaluation of the performance of the CDMBS for pedestrian detection should be determined considering times such as the vehicle TTC in this study. Additionally or alternatively, the pedestrian TTV should be considered, in which the worst situation is assumed: a car moving toward a pedestrian without braking due to the driver's inattentiveness, and the pedestrian neither slowing down nor stopping.
An improved moving average technical trading rule
NASA Astrophysics Data System (ADS)
Papailias, Fotis; Thomakos, Dimitrios D.
2015-06-01
This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance of this modified strategy differ from those of the standard approach, with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting a smaller maximum drawdown and smaller drawdown duration than the standard strategy.
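A sketch of the modified rule in its 'long only' form: enter on a standard fast/slow moving-average cross-over and exit when price falls below a fraction of its running maximum since entry (the dynamic trailing stop). The window lengths and stop fraction are illustrative assumptions, not the paper's calibrated values:

    import numpy as np
    import pandas as pd

    def ma_cross_with_trailing_stop(price: pd.Series, fast=20, slow=100,
                                    stop=0.95):
        """Long-only MA cross-over with a dynamic trailing stop: buy when
        the fast MA crosses above the slow MA; exit when price drops below
        `stop` times its running maximum since entry."""
        fast_ma = price.rolling(fast).mean()
        slow_ma = price.rolling(slow).mean()
        position = np.zeros(len(price), dtype=int)
        peak = np.nan
        for t in range(1, len(price)):
            if position[t - 1] == 0:
                crossed_up = (fast_ma.iloc[t] > slow_ma.iloc[t]
                              and fast_ma.iloc[t - 1] <= slow_ma.iloc[t - 1])
                if crossed_up:                     # 'buy' signal
                    position[t] = 1
                    peak = price.iloc[t]
            else:
                peak = max(peak, price.iloc[t])    # track the running maximum
                position[t] = 0 if price.iloc[t] < stop * peak else 1
        return pd.Series(position, index=price.index)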
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, while the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
NASA Astrophysics Data System (ADS)
Ibarra Espinosa, S.; Ynoue, R.; Giannotti, M., , Dr
2017-12-01
The importance of emissions inventories for air quality studies and environmental planning has been shown at local, regional (REAS), hemispheric (CLRTAP) and global (IPCC) scales. It has also been shown that vehicles are becoming the most important sources in urban centers. Several efforts have been made to model vehicular emissions and obtain more accurate emission factors: IVE and MOVES based on Vehicle Specific Power (VSP), MOBILE, VERSIT and COPERT based on average speed, and ARTEMIS and HBEFA based on traffic situations. However, little effort has been made to improve traffic activity data. In this study we propose a novel approach to developing a vehicular emissions inventory that includes point data from MAPLINK, a company that feeds traffic data to Google. This involves working with and transforming a massive amount of data to generate traffic flows and speeds. The region of study is the southeast of Brazil, including the São Paulo metropolitan area. To estimate vehicular emissions we use the open source model VEIN, available at https://CRAN.R-project.org/package=vein. We generated hourly traffic between 2010-04-21 and 2010-10-22, totaling 145 hours. The data consist of GPS readings from vehicles with insurance policies, applications and other sources. This type of data presents spatial bias, meaning that only a part of the vehicle fleet is tracked. We corrected this bias using the calculated speed as a proxy for traffic flow, based on measurements of traffic flow and speed per lane made in São Paulo. We then calibrated the total traffic by estimating fuel consumption with VEIN and comparing it against fuel sales for the region. We estimated hourly vehicular emissions and produced emission maps and databases. In addition, we ran atmospheric simulations with WRF-Chem to identify which inventory produces better agreement with air pollutant observations. New technologies and big data provide opportunities to improve vehicular emissions inventories.
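VEIN itself is an R package; the Python sketch below only illustrates the generic bottom-up calculation that any such inventory performs, multiplying link-level traffic flow by link length and a speed-dependent emission factor. The emission factor curve and its coefficients are hypothetical placeholders, not VEIN's API or values:

    def ef_co(v_kmh):
        """Hypothetical average-speed emission factor curve for CO
        (g/veh-km); the coefficients are illustrative only."""
        return 70.0 / v_kmh + 0.02 * v_kmh

    def link_emissions(flow_veh_h, link_km, speed_kmh, ef_curve=ef_co):
        """Hourly link emissions (g/h): flow x length x speed-dependent EF."""
        return flow_veh_h * link_km * ef_curve(speed_kmh)

    # one street link: 1,200 veh/h over 0.8 km at an average 25 km/h
    print(link_emissions(1200, 0.8, 25.0))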
Plasmoid growth and expulsion revealed by two-point ARTEMIS observations
NASA Astrophysics Data System (ADS)
Li, S.; Angelopoulos, V.; Runov, A.; Kiehas, S.
2012-12-01
On 12 October 2011, the two ARTEMIS probes, in lunar orbit ~7 RE north of the neutral sheet, sequentially observed a tailward-moving, expanding plasmoid. Their observations reveal a multi-layered plasma sheet composed of tailward-flowing hot plasma within the plasmoid proper enshrouded by earthward-flowing, less energetic plasma. Prior observations of similar earthward flow structures ahead of or behind plasmoids have been interpreted as earthward outflow from a continuously active distant-tail neutral line (DNL) opposite an approaching plasmoid. However, no evidence of active DNL reconnection was observed by the probes as they traversed the plasmoid's leading and trailing edges, penetrating to slightly above its core. We suggest an alternate interpretation: compression of the ambient plasma by the tailward-moving plasmoid propels the plasma lobeward and earthward, i.e., over and above the plasmoid. Using the propagation velocity obtained from timing analysis, we estimate the average plasmoid size to be 9 RE and its expansion rate to be ~ 7 RE/min at the observation locations. The velocity inside the plasmoid proper was found to be non-uniform; the core likely moves as fast as 500 km/s, yet the outer layers move more slowly (and reverse direction), possibly resulting in the observed expansion. The absence of lobe reconnection, in particular on the earthward side, suggests that plasmoid formation and expulsion result from closed plasma sheet field line reconnection.
Population and Activity of On-road Vehicles in MOVES2014 ...
This report describes the sources and derivation for on-road vehicle population and activity information and associated adjustments as stored in the MOVES2014 default databases. Motor Vehicle Emission Simulator, the MOVES2014 model, is a set of modeling tools for estimating emissions produced by on-road (cars, trucks, motorcycles, etc.) and nonroad (backhoes, lawnmowers, etc.) mobile sources. The national default activity information in MOVES2014 provides a reasonable basis for estimating national emissions. However, the uncertainties and variability in the default data contribute to the uncertainty in the resulting emission estimates. Properly characterizing emissions from the on-road vehicle subset requires a detailed understanding of the cars and trucks that make up the vehicle fleet and their patterns of operation. The MOVES model calculates emission inventories by multiplying emission rates by the appropriate emission-related activity, applying correction (adjustment) factors as needed to simulate specific situations, and then adding up the emissions from all sources (populations) and regions.
Workplace smoking related absenteeism and productivity costs in Taiwan
Tsai, S; Wen, C; Hu, S; Cheng, T; Huang, S
2005-01-01
Objective: To estimate productivity losses and financial costs to employers caused by cigarette smoking in the Taiwan workplace. Methods: The human capital approach was used to calculate lost productivity. Assuming the value of lost productivity was equal to the wage/salary rate and basing the calculations on smoking rate in the workforce, average days of absenteeism, average wage/salary rate, and increased risk and absenteeism among smokers obtained from earlier research, costs due to smoker absenteeism were estimated. Financial losses caused by passive smoking, smoking breaks, and occupational injuries were calculated. Results: Using a conservative estimate of excess absenteeism from work, male smokers took off an average of 4.36 sick days and male non-smokers took off an average of 3.30 sick days. Female smokers took off an average of 4.96 sick days and non-smoking females took off an average of 3.75 sick days. Excess absenteeism caused by employee smoking was estimated to cost US$178 million per annum for males and US$6 million for females at a total cost of US$184 million per annum. The time men and women spent taking smoking breaks amounted to nine days per year and six days per year, respectively, resulting in reduced output productivity losses of US$733 million. Increased sick leave costs due to passive smoking were approximately US$81 million. Potential costs incurred from occupational injuries among smoking employees were estimated to be US$34 million. Conclusions: Financial costs caused by increased absenteeism and reduced productivity from employees who smoke are significant in Taiwan. Based on conservative estimates, total costs attributed to smoking in the workforce were approximately US$1032 million. PMID:15923446
Workplace smoking related absenteeism and productivity costs in Taiwan.
Tsai, S P; Wen, C P; Hu, S C; Cheng, T Y; Huang, S J
2005-06-01
To estimate productivity losses and financial costs to employers caused by cigarette smoking in the Taiwan workplace. The human capital approach was used to calculate lost productivity. Assuming the value of lost productivity was equal to the wage/salary rate and basing the calculations on smoking rate in the workforce, average days of absenteeism, average wage/salary rate, and increased risk and absenteeism among smokers obtained from earlier research, costs due to smoker absenteeism were estimated. Financial losses caused by passive smoking, smoking breaks, and occupational injuries were calculated. Using a conservative estimate of excess absenteeism from work, male smokers took off an average of 4.36 sick days and male non-smokers took off an average of 3.30 sick days. Female smokers took off an average of 4.96 sick days and non-smoking females took off an average of 3.75 sick days. Excess absenteeism caused by employee smoking was estimated to cost USD 178 million per annum for males and USD 6 million for females at a total cost of USD 184 million per annum. The time men and women spent taking smoking breaks amounted to nine days per year and six days per year, respectively, resulting in reduced output productivity losses of USD 733 million. Increased sick leave costs due to passive smoking were approximately USD 81 million. Potential costs incurred from occupational injuries among smoking employees were estimated to be USD 34 million. Financial costs caused by increased absenteeism and reduced productivity from employees who smoke are significant in Taiwan. Based on conservative estimates, total costs attributed to smoking in the workforce were approximately USD 1032 million.
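The human capital calculation in these two records reduces to excess sick days times the daily wage, scaled by the number of smokers. A toy Python version; only the sick-day figures come from the abstract, and the workforce and wage numbers are invented (chosen so the output lands near the reported USD 184 million):

# Hypothetical figures for illustration; only the excess-day values
# (4.36 - 3.30 for males, 4.96 - 3.75 for females) come from the abstract.
male_excess_days   = 4.36 - 3.30
female_excess_days = 4.96 - 3.75
male_smokers, female_smokers = 2_000_000, 150_000   # assumed workforce counts
daily_wage = 80.0                                    # assumed USD per day

cost = (male_excess_days * male_smokers +
        female_excess_days * female_smokers) * daily_wage
print(f"absenteeism cost: USD {cost / 1e6:.0f} million per annum")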
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the traditional single Box-Jenkins model.
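A sketch of how a genetic algorithm can search ARIMA orders by AIC, in the spirit of the method described above; the encoding, operators, and fitness here are simplified choices of ours, not necessarily the authors':

import random
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fitness(y, order):
    """Negative AIC of an ARIMA fit; worst possible score if the fit fails."""
    try:
        return -ARIMA(y, order=order).fit().aic
    except Exception:
        return -np.inf

def ga_arima(y, pop_size=20, generations=15, p_max=5, q_max=5, seed=0):
    rng = random.Random(seed)
    new = lambda: (rng.randint(0, p_max), rng.randint(0, 2), rng.randint(0, q_max))
    pop = [new() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda o: fitness(y, o), reverse=True)
        elite = scored[: pop_size // 2]          # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = tuple(rng.choice(pair) for pair in zip(a, b))  # crossover
            if rng.random() < 0.3:                                  # mutation
                child = new()
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda o: fitness(y, o))

# Usage with synthetic data standing in for the tourist-arrival series:
y = np.cumsum(np.random.default_rng(1).normal(size=120)) + 100
print("best (p, d, q):", ga_arima(y))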
Reliability of reservoir firm yield determined from the historical drought of record
Archfield, S.A.; Vogel, R.M.
2005-01-01
The firm yield of a reservoir is typically defined as the maximum yield that could have been delivered without failure during the historical drought of record. In the future, reservoirs will experience droughts that are either more or less severe than the historical drought of record. The question addressed here is what the reliability of such systems will be when operated at the firm yield. To address this question, we examine the reliability of 25 hypothetical reservoirs sited across five locations in the central and western United States. These locations provided a continuous 756-month streamflow record spanning the same time interval. The firm yield of each reservoir was estimated from the historical drought of record at each location. To determine the steady-state monthly reliability of each firm-yield estimate, 12,000-month synthetic records were generated using the moving-blocks bootstrap method. Bootstrapping was repeated 100 times for each reservoir to obtain an average steady-state monthly reliability R, the number of months the reservoir did not fail divided by the total months. Values of R were greater than 0.99 for 60 percent of the study reservoirs; the other 40 percent ranged from 0.95 to 0.98. Estimates of R were highly correlated with both the level of development (ratio of firm yield to average streamflow) and average lag-1 monthly autocorrelation. Together these two predictors explained 92 percent of the variability in R, with the level of development alone explaining 85 percent of the variability. Copyright ASCE 2005.
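A compact illustration of the moving-blocks bootstrap reliability procedure described above, using a synthetic 756-month record and a deliberately simplified storage simulation (the actual firm-yield analysis is more involved):

import numpy as np

def moving_blocks_bootstrap(x, length_out, block_len, rng):
    """Resample a series by concatenating randomly chosen contiguous blocks."""
    starts = rng.integers(0, len(x) - block_len + 1,
                          size=length_out // block_len + 1)
    out = np.concatenate([x[s:s + block_len] for s in starts])
    return out[:length_out]

def monthly_reliability(inflow, yield_per_month, capacity):
    """Fraction of months a simple storage simulation meets the yield."""
    storage, failures = capacity, 0
    for q in inflow:
        storage = min(storage + q - yield_per_month, capacity)
        if storage < 0:
            failures += 1
            storage = 0.0
    return 1 - failures / len(inflow)

rng = np.random.default_rng(42)
flows = rng.gamma(shape=2.0, scale=50.0, size=756)   # stand-in 756-month record
R = np.mean([monthly_reliability(
        moving_blocks_bootstrap(flows, 12_000, block_len=24, rng=rng),
        yield_per_month=70.0, capacity=2_000.0)
     for _ in range(100)])
print(f"average steady-state monthly reliability R = {R:.3f}")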
A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China
Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin
2014-01-01
Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
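For reference, the Hargreaves-Samani relation underlying this family of models estimates global radiation from extraterrestrial radiation and the daily temperature range. A minimal implementation with typical default coefficients (not the coefficients fitted in the paper):

def hargreaves_samani(ra, t_max, t_min, k_rs=0.16):
    """Global solar radiation from extraterrestrial radiation Ra and the
    daily temperature range: Rs = k_rs * sqrt(Tmax - Tmin) * Ra.
    k_rs is commonly ~0.16 for interior sites and ~0.19 for coastal sites."""
    return k_rs * (t_max - t_min) ** 0.5 * ra

# Example: Ra = 35 MJ m-2 day-1, Tmax = 28 C, Tmin = 15 C
print(hargreaves_samani(35.0, 28.0, 15.0))  # about 20.2 MJ m-2 day-1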
Influence of mobile phone traffic on base station exposure of the general public.
Joseph, Wout; Verloock, Leen
2010-11-01
The influence of mobile phone traffic on temporal radiofrequency exposure due to base stations during 7 d is compared for five different sites with Erlang data (representing average mobile phone traffic intensity during a period of time). The time periods of high exposure and high traffic during a day are compared and good agreement is obtained. The minimal required measurement periods to obtain accurate estimates for maximal and average long-period exposure (7 d) are determined. It is shown that these periods may be very long, indicating the necessity of new methodologies to estimate maximal and average exposure from short-period measurement data. Therefore, a new method to calculate the fields at a time instant from fields at another time instant using normalized Erlang values is proposed. This enables the estimation of maximal and average exposure during a week from short-period measurements using only Erlang data and avoids the necessity of long measurement times.
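A sketch of the kind of Erlang-based rescaling described above, under the assumption (ours, for illustration) that received power scales linearly with traffic load so that field strength scales with its square root:

import math

def rescale_field(e_measured, erlang_at_measurement, erlang_at_target):
    """Estimate the E-field at another time instant from normalized Erlang
    values, assuming power is proportional to traffic load and E to sqrt(power)."""
    return e_measured * math.sqrt(erlang_at_target / erlang_at_measurement)

# Short measurement at a low-traffic time (0.4 of peak Erlang) -> peak estimate:
print(rescale_field(e_measured=0.8, erlang_at_measurement=0.4,
                    erlang_at_target=1.0))  # V/m, about 1.26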
2014 Gulf of Mexico Hypoxia Forecast
Scavia, Donald; Evans, Mary Anne; Obenour, Dan
2014-01-01
The Gulf of Mexico annual summer hypoxia forecasts are based on average May total nitrogen loads from the Mississippi River basin for that year. The load estimate, recently released by USGS, is 4,761 metric tons per day. Based on that estimate, we predict the area of this summer’s hypoxic zone to be 14,000 square kilometers (95% credible interval, 8,000 to 20,000) – an “average year”. Our forecast hypoxic volume is 50 km3 (95% credible interval, 20 to 77).
REVIEW ARTICLE: Hither and yon: a review of bi-directional microtubule-based transport
NASA Astrophysics Data System (ADS)
Gross, Steven P.
2004-06-01
Active transport is critical for cellular organization and function, and impaired transport has been linked to diseases such as neuronal degeneration. Much long distance transport in cells uses opposite polarity molecular motors of the kinesin and dynein families to move cargos along microtubules. It is increasingly clear that many cargos are moved by both sets of motors, and frequently reverse course. This review compares this bi-directional transport to the more well studied uni-directional transport. It discusses some bi-directionally moving cargos, and critically evaluates three different physical models for how such transport might occur. It then considers the evidence for the number of active motors per cargo, and how the net or average direction of transport might be controlled. The likelihood of a complex linking the activities of kinesin and dynein is also discussed. The paper concludes by reviewing elements of apparent universality between different bi-directionally moving cargos and by briefly considering possible reasons for the existence of bi-directional transport.
Acoustic power of a moving point source in a moving medium
NASA Technical Reports Server (NTRS)
Cole, J. E., III; Sarris, I. I.
1976-01-01
The acoustic power output of a moving point-mass source in an acoustic medium which is in uniform motion and infinite in extent is examined. The acoustic medium is considered to be a homogeneous fluid having both zero viscosity and zero thermal conductivity. Two expressions for the acoustic power output are obtained, each based on a different definition cited in the literature for the average energy-flux vector in an acoustic medium in uniform motion. The acoustic power output of the source is found by integrating the component of the acoustic intensity vector in the radial direction over the surface of an infinitely long cylinder which is within the medium and encloses the line of motion of the source. One of the power expressions is found to give unreasonable results even though the flow is uniform.
Improved modelling of ship SO2 emissions—a fuel-based approach
NASA Astrophysics Data System (ADS)
Endresen, Øyvind; Bakke, Joachim; Sørgård, Eirik; Flatlandsmo Berglen, Tore; Holmvang, Per
Significant variations are apparent between the various reported regional and global ship SO2 emission inventories. Important parameters for SO2 emission modelling are sulphur content and marine fuel consumption. Since 1993, the global average sulphur content for heavy fuel has shown an overall downward trend, while bunker sales have increased. We present an improved bottom-up approach to estimating marine sulphur emissions from ship transportation, including their geographical distribution. More than 53,000 individual bunker samples are used to establish regionally and globally (volume-) weighted average sulphur contents for heavy and distillate marine fuels. We find that the year 2002 sulphur content in heavy fuels varies regionally from 1.90% (South America) to 3.07% (Asia), with a globally weighted average of 2.68% sulphur. The calculated globally weighted average content for heavy fuels is 5% higher than the average (arithmetic mean) sulphur content commonly used. The likely reason is that larger bunker stems are mainly of high-viscosity heavy fuel, which tends to have higher sulphur values than lower viscosity fuels. The uncertainties in SO2 inventories are significantly reduced using our updated SO2 emission factors (volume-weighted sulphur content). Regional marine bunker sales figures are combined with volume-weighted sulphur contents for each region to give a global SO2 emission estimate in the range of 5.9-7.2 Tg (SO2) for international marine transportation. Taking domestic sales into account as well, the total emissions from all ocean-going transportation are estimated to be 7.0-8.5 Tg (SO2). Our estimate is significantly lower than the recent global estimate reported by Corbett and Koehler [2003. Journal of Geophysical Research: Atmospheres 108] (6.49 Tg S, or about 13.0 Tg SO2). Endresen et al. [2004. Journal of Geophysical Research 109, D23302] argue that uncertainties in the input data for the activity-based method yield emission estimates that are too high. We also indicate that this higher estimate would nearly double regional emissions compared with detailed movement-based estimates. The paper presents an alternative approach to estimating present overall SO2 ship emissions with improved accuracy.
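The key computation, a volume-weighted average sulphur content, is simple; the toy numbers below show why weighting by volume raises the average when large bunker stems are high-sulphur fuel:

# Volume-weighted vs. arithmetic-mean sulphur content (invented sample data).
samples = [  # (bunker volume [t], sulphur content [%])
    (5000, 3.0), (3000, 2.9), (500, 1.8), (200, 1.5),
]
volume_weighted = sum(v * s for v, s in samples) / sum(v for v, _ in samples)
arithmetic_mean = sum(s for _, s in samples) / len(samples)
print(f"{volume_weighted:.2f}% vs {arithmetic_mean:.2f}%")  # 2.86% vs 2.30%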
Ambient temperature and biomarkers of heart failure: a repeated measures analysis.
Wilker, Elissa H; Yeh, Gloria; Wellenius, Gregory A; Davis, Roger B; Phillips, Russell S; Mittleman, Murray A
2012-08-01
Extreme temperatures have been associated with hospitalization and death among individuals with heart failure, but few studies have explored the underlying mechanisms. We hypothesized that outdoor temperature in the Boston, Massachusetts, area (1- to 4-day moving averages) would be associated with higher levels of biomarkers of inflammation and myocyte injury in a repeated-measures study of individuals with stable heart failure. We analyzed data from a completed clinical trial that randomized 100 patients to 12 weeks of tai chi classes or to time-matched education control. B-type natriuretic peptide (BNP), C-reactive protein (CRP), and tumor necrosis factor (TNF) were measured at baseline, 6 weeks, and 12 weeks. Endothelin-1 was measured at baseline and 12 weeks. We used fixed effects models to evaluate associations with measures of temperature that were adjusted for time-varying covariates. Higher apparent temperature was associated with higher levels of BNP beginning with 2-day moving averages and reached statistical significance for 3- and 4-day moving averages. CRP results followed a similar pattern but were delayed by 1 day. A 5°C change in 3- and 4-day moving averages of apparent temperature was associated with 11.3% [95% confidence interval (CI): 1.1, 22.5; p = 0.03) and 11.4% (95% CI: 1.2, 22.5; p = 0.03) higher BNP. A 5°C change in the 4-day moving average of apparent temperature was associated with 21.6% (95% CI: 2.5, 44.2; p = 0.03) higher CRP. No clear associations with TNF or endothelin-1 were observed. Among patients undergoing treatment for heart failure, we observed positive associations between temperature and both BNP and CRP-predictors of heart failure prognosis and severity.
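The exposure metrics in this study are plain k-day moving averages of temperature. A short pandas sketch with hypothetical data reproduces the idea:

import pandas as pd

# Hypothetical daily apparent temperatures; the 1- to 4-day exposure
# windows described above are simple rolling means.
temps = pd.Series([21.0, 23.5, 25.0, 24.0, 26.5, 27.0, 25.5],
                  index=pd.date_range("2008-06-01", periods=7))
windows = {k: temps.rolling(k).mean() for k in (1, 2, 3, 4)}
print(windows[3])  # 3-day moving average, aligned to the last day of each window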
Food price seasonality in Africa: Measurement and extent.
Gilbert, Christopher L; Christiaensen, Luc; Kaminski, Jonathan
2017-02-01
Everyone knows about seasonality. But what exactly do we know? This study systematically measures seasonal price gaps at 193 markets for 13 food commodities in seven African countries. It shows that the commonly used dummy variable or moving average deviation methods to estimate the seasonal gap can yield substantial upward bias. This can be partially circumvented using trigonometric and sawtooth models, which are more parsimonious. Among staple crops, seasonality is highest for maize (33 percent on average) and lowest for rice (16½ percent). This is two and a half to three times larger than in the international reference markets. Seasonality varies substantially across market places but maize is the only crop in which there are important systematic country effects. Malawi, where maize is the main staple, emerges as exhibiting the most acute seasonal differences. Reaching the Sustainable Development Goal of Zero Hunger requires renewed policy attention to seasonality in food prices and consumption.
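A sketch of the trigonometric alternative mentioned above: fitting a single annual harmonic to (synthetic) log prices by least squares and reading the seasonal gap off the harmonic's amplitude. With two seasonal parameters instead of eleven monthly dummies, the model is more parsimonious:

import numpy as np

# Trigonometric seasonal model for log prices:
# log p_t = c + a*cos(2*pi*t/12) + b*sin(2*pi*t/12) + e_t.
rng = np.random.default_rng(0)
months = np.arange(60)
logp = 0.2 * np.cos(2 * np.pi * months / 12 - 1.0) \
     + rng.normal(scale=0.05, size=60)                 # synthetic series

X = np.column_stack([np.ones(60),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
c, a, b = np.linalg.lstsq(X, logp, rcond=None)[0]
amplitude = np.hypot(a, b)
print(f"seasonal gap (peak to trough) = {2 * amplitude:.2f} log points")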
Influenza forecasting with Google Flu Trends.
Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E
2013-01-01
We developed a practical influenza forecast model based on real-time, geographically focused, and easy-to-access data, designed to provide individual medical centers with advanced warning of the expected number of influenza cases, thus allowing sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear models (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on average, predicts weekly influenza cases during 7 out-of-sample outbreaks to within 7 cases for 83% of estimates. Google Flu Trends data was the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advanced warning of future influenza cases.
Gao, Han; Li, Jingwen
2014-06-19
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
Gao, Han; Li, Jingwen
2014-01-01
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640
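A generic particle-filter track-before-detect skeleton showing the predict/weight/resample recursion these two records rely on; the SAR signal model, the azimuth/radial-velocity ambiguity handling, and the sub-area likelihood of the paper are all replaced by simplified stand-ins here:

import numpy as np

rng = np.random.default_rng(0)
N = 500                               # number of particles
dt = 1.0

# Particle state: [position, velocity]; initialize broadly over the scene.
particles = np.column_stack([rng.uniform(0, 100, N), rng.uniform(-2, 2, N)])
weights = np.full(N, 1.0 / N)

def likelihood(frame, pos):
    """Stand-in likelihood ratio: image intensity near the predicted position
    versus the background level (the paper evaluates this on a sub-area of
    the SAR image instead of the whole frame)."""
    idx = np.clip(pos.astype(int), 0, frame.size - 1)
    return np.exp(frame[idx] - frame.mean())

for frame in np.abs(rng.normal(size=(10, 100))):   # fake image sequence
    # Predict: constant-velocity motion plus process noise.
    particles[:, 0] += particles[:, 1] * dt + rng.normal(0, 0.5, N)
    particles[:, 1] += rng.normal(0, 0.1, N)
    # Update: weight by the likelihood ratio, then normalize.
    weights *= likelihood(frame, particles[:, 0])
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = weights @ particles        # posterior-mean position and velocity
print(estimate)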
NASA Technical Reports Server (NTRS)
Pongratz, M.
1972-01-01
Results from a Nike-Tomahawk sounding rocket flight launched from Fort Churchill are presented. The rocket was launched into a breakup aurora at magnetic local midnight on 21 March 1968. The rocket was instrumented to measure electrons with an electrostatic-analyzer electron spectrometer, which made 29 measurements in the energy interval 0.5 keV to 30 keV. Complete energy spectra were obtained at a rate of 10/sec. Pitch angle information is presented via three computed averages per rocket spin. The dumped electron average corresponds to averages over electrons moving nearly parallel to the B vector. The mirroring electron average corresponds to averages over electrons moving nearly perpendicular to the B vector. The average was also computed over the entire downward hemisphere (the precipitated electron average). The observations were obtained within a 10 km altitude range around 230 km altitude.
A 12-Year Analysis of Nonbattle Injury Among US Service Members Deployed to Iraq and Afghanistan.
Le, Tuan D; Gurney, Jennifer M; Nnamani, Nina S; Gross, Kirby R; Chung, Kevin K; Stockinger, Zsolt T; Nessen, Shawn C; Pusateri, Anthony E; Akers, Kevin S
2018-05-30
Nonbattle injury (NBI) among deployed US service members increases the burden on medical systems and results in high rates of attrition, affecting the available force. The possible causes and trends of NBI in the Iraq and Afghanistan wars have, to date, not been comprehensively described. To describe NBI among service members deployed to Iraq and Afghanistan, quantify absolute numbers of NBIs and proportion of NBIs within the Department of Defense Trauma Registry, and document the characteristics of this injury category. In this retrospective cohort study, data from the Department of Defense Trauma Registry on 29 958 service members injured in Iraq and Afghanistan from January 1, 2003, through December 31, 2014, were obtained. Injury incidence, patterns, and severity were characterized by battle injury and NBI. Trends in NBI were modeled using time series analysis with autoregressive integrated moving average and the weighted moving average method. Statistical analysis was performed from January 1, 2003, to December 31, 2014. Primary outcomes were proportion of NBIs and the changes in NBI over time. Among 29 958 casualties (battle injury and NBI) analyzed, 29 003 were in men and 955 were in women; the median age at injury was 24 years (interquartile range, 21-29 years). Nonbattle injury caused 34.1% of total casualties (n = 10 203) and 11.5% of all deaths (206 of 1788). Rates of NBI were higher among women than among men (63.2% [604 of 955] vs 33.1% [9599 of 29 003]; P < .001) and in Operation New Dawn (71.0% [298 of 420]) and Operation Iraqi Freedom (36.3% [6655 of 18 334]) compared with Operation Enduring Freedom (29.0% [3250 of 11 204]) (P < .001). A higher proportion of NBIs occurred in members of the Air Force (66.3% [539 of 810]) and Navy (48.3% [394 of 815]) than in members of the Army (34.7% [7680 of 22 154]) and Marine Corps (25.7% [1584 of 6169]) (P < .001). Leading mechanisms of NBI included falls (2178 [21.3%]), motor vehicle crashes (1921 [18.8%]), machinery or equipment accidents (1283 [12.6%]), blunt objects (1107 [10.8%]), gunshot wounds (728 [7.1%]), and sports (697 [6.8%]), causing predominantly blunt trauma (7080 [69.4%]). The trend in proportion of NBIs did not decrease over time, remaining at approximately 35% (by weighted moving average) after 2006 and approximately 39% by autoregressive integrated moving average. Assuming stable battlefield conditions, the autoregressive integrated moving average model estimated that the proportion of NBIs from 2015 to 2022 would be approximately 41.0% (95% CI, 37.8%-44.3%). In this study, approximately one-third of injuries during the Iraq and Afghanistan wars resulted from NBI, and the proportion of NBIs was steady for 12 years. Understanding the possible causes of NBI during military operations may be useful to target protective measures and safety interventions, thereby conserving fighting strength on the battlefield.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign through quantitative time series analysis methods. These methods include the sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time and the achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
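Of the listed methods, exponential smoothing is the simplest to show. A minimal sketch with invented accident-rate data:

def exponential_smoothing(x, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}."""
    s = [x[0]]
    for value in x[1:]:
        s.append(alpha * value + (1 - alpha) * s[-1])
    return s

# Hypothetical quarterly accident rates per 1000 workers:
rates = [4.1, 3.8, 3.9, 3.5, 3.2, 3.0]
print(exponential_smoothing(rates))   # smoothed trend toward zero accidents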
NASA Technical Reports Server (NTRS)
Vanlunteren, A.
1977-01-01
A previously described parameter estimation program was applied to a number of control tasks, each involving a human operator model consisting of more than one describing function. One of these experiments is treated in more detail. It consisted of a two-dimensional tracking task with identical controlled elements. The tracking errors were presented on one display as two vertically moving horizontal lines. Each loop had its own manipulator. The two forcing functions were mutually independent and each consisted of 9 sine waves. A human operator model was chosen consisting of 4 describing functions, thus taking into account possible linear cross couplings. From the Fourier coefficients of the relevant signals the model parameters were estimated after alignment, averaging over a number of runs, and decoupling. The results show that for the elements in the main loops the crossover model applies. A weak linear cross coupling existed with the same dynamics as the elements in the main loops but with a negative sign.
NASA Astrophysics Data System (ADS)
Østerås, Bjørn Helge; Skaane, Per; Gullien, Randi; Martinsen, Anne Catrine Trægde
2018-02-01
The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population-based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.
Østerås, Bjørn Helge; Skaane, Per; Gullien, Randi; Martinsen, Anne Catrine Trægde
2018-01-25
The main purpose was to compare average glandular dose (AGD) for same-compression digital mammography (DM) and digital breast tomosynthesis (DBT) acquisitions in a population-based screening program, with and without breast density stratification, as determined by automatically calculated breast density (Quantra™). Secondarily, to compare AGD estimates based on measured breast density, air kerma and half value layer (HVL) to DICOM metadata based estimates. AGD was estimated for 3819 women participating in the screening trial. All received craniocaudal and mediolateral oblique views of each breast with paired DM and DBT acquisitions. Exposure parameters were extracted from DICOM metadata. Air kerma and HVL were measured for all beam qualities used to acquire the mammograms. Volumetric breast density was estimated using Quantra™. AGD was estimated using the Dance model. AGD reported directly from the DICOM metadata was also assessed. Mean AGD was 1.74 and 2.10 mGy for DM and DBT, respectively. Mean DBT/DM AGD ratio was 1.24. For fatty breasts: mean AGD was 1.74 and 2.27 mGy for DM and DBT, respectively. For dense breasts: mean AGD was 1.73 and 1.79 mGy, for DM and DBT, respectively. For breasts of similar thickness, dense breasts had higher AGD for DM and similar AGD for DBT. The DBT/DM dose ratio was substantially lower for dense compared to fatty breasts (1.08 versus 1.33). The average c-factor was 1.16. Using previously published polynomials to estimate glandularity from thickness underestimated the c-factor by 5.9% on average. Mean AGD error between estimates based on measurements (air kerma and HVL) versus DICOM header data was 3.8%, but for one mammography unit as high as 7.9%. Mean error of using the AGD value reported in the DICOM header was 10.7 and 13.3%, respectively. Thus, measurement of breast density, radiation dose and beam quality can substantially affect AGD estimates.
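Both records above estimate AGD with the Dance model. For reference, its standard textbook form is

AGD = K * g * c * s

where K is the incident air kerma at the breast surface, g converts air kerma to glandular dose for a breast of 50% glandularity, c corrects for the actual glandularity (the c-factor averaging 1.16 above), and s accounts for the x-ray spectrum. For DBT an additional tomosynthesis factor is commonly applied; that detail is not spelled out in the abstracts.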
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative function. The resulting model is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals with the average value of failure at each interval, and then calculating the average failure time of the model based on each interval, the p value obtained from the test result is 0.3296.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouyang, L; Lee, H; Wang, J
2014-06-01
Purpose: To evaluate a moving-blocker-based approach in estimating and correcting megavoltage (MV) and kilovoltage (kV) scatter contamination in kV cone-beam computed tomography (CBCT) acquired during volumetric modulated arc therapy (VMAT). Methods: XML code was generated to enable concurrent CBCT acquisition and VMAT delivery in Varian TrueBeam developer mode. A physical attenuator (i.e., “blocker”) consisting of equally spaced lead strips (3.2mm strip width and 3.2mm gap in between) was mounted between the x-ray source and patient at a source-to-blocker distance of 232mm. The blocker was simulated to be moving back and forth along the gantry rotation axis during the CBCT acquisition. Both MV and kV scatter signals were estimated simultaneously from the blocked regions of the imaging panel, and interpolated into the un-blocked regions. Scatter-corrected CBCT was then reconstructed from un-blocked projections after scatter subtraction, using an iterative image reconstruction algorithm based on constrained optimization. Experimental studies were performed on a Catphan 600 phantom and an anthropomorphic pelvis phantom to demonstrate the feasibility of using a moving blocker for MV-kV scatter correction. Results: MV scatter greatly degrades the CBCT image quality by increasing the CT number inaccuracy and decreasing the image contrast, in addition to the shading artifacts caused by kV scatter. The artifacts were substantially reduced in the moving-blocker-corrected CBCT images in both Catphan and pelvis phantoms. Quantitatively, the CT number error in selected regions of interest was reduced from 377 in the kV-MV contaminated CBCT image to 38 for the Catphan phantom. Conclusions: The moving-blocker-based strategy can successfully correct MV and kV scatter simultaneously in CBCT projection data acquired with concurrent VMAT delivery. This work was supported in part by a grant from the Cancer Prevention and Research Institute of Texas (RP130109) and a grant from the American Cancer Society (RSG-13-326-01-CCE).
Neural net forecasting for geomagnetic activity
NASA Technical Reports Server (NTRS)
Hernandez, J. V.; Tajima, T.; Horton, W.
1993-01-01
We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).
Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.
Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat
2014-01-01
The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its world-wide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrences of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporation of any meteorological variable(s) as inputs did not improve the performance of any multivariable models, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. The variable cloud cover was also a significant covariate in two SARIMA models, but air temperature along with RH might be a predictor when moving average (MA) order at lag 1 month is considered.
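A sketch of the SARIMA-with-covariate setup the study describes, using statsmodels' SARIMAX with synthetic monthly outbreak counts and relative humidity as the exogenous input; the orders are illustrative, not those selected in the paper:

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 96                                  # 8 years of monthly outbreak counts
season = 1.5 * np.sin(2 * np.pi * np.arange(n) / 12)
humidity = 70 + 10 * np.cos(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 2, n)
outbreaks = np.maximum(0, 3 + season + 0.05 * humidity + rng.normal(0, 1, n))

# SARIMA with relative humidity as an exogenous covariate, seasonal period 12.
model = SARIMAX(outbreaks, exog=humidity, order=(1, 0, 1),
                seasonal_order=(1, 0, 0, 12))
result = model.fit(disp=False)
print(result.params)                    # includes the RH coefficient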
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trenberth, Kevin E.; Fasullo, John T.
The Atlantic Meridional Overturning Circulation plays a major role in moving heat and carbon around in the ocean. A new estimate of ocean heat transports for 2000 through 2013 throughout the Atlantic is derived. Top-of-atmosphere radiation is combined with atmospheric reanalyses to estimate surface heat fluxes and combined with vertically integrated ocean heat content to estimate ocean heat transport divergence as a residual. Atlantic peak northward ocean heat transports average 1.18 ± 0.13PW (1 sigma) at 15°N but vary considerably in latitude and time. Results agree well with observational estimates at 26.5°N from the RAPID array, but for 2004–2013 the meridional heat transport is 1.00 ± 0.11PW versus 1.23 ± 0.11PW for RAPID. In addition, these results have no hint of a trend, unlike the RAPID results. Finally, strong westerlies north of a meridian drive ocean currents and an ocean heat loss into the atmosphere that is exacerbated by a decrease in ocean heat transport northward.
Trenberth, Kevin E.; Fasullo, John T.
2017-02-18
The Atlantic Meridional Overturning Circulation plays a major role in moving heat and carbon around in the ocean. A new estimate of ocean heat transports for 2000 through 2013 throughout the Atlantic is derived. Top-of-atmosphere radiation is combined with atmospheric reanalyses to estimate surface heat fluxes and combined with vertically integrated ocean heat content to estimate ocean heat transport divergence as a residual. Atlantic peak northward ocean heat transports average 1.18 ± 0.13PW (1 sigma) at 15°N but vary considerably in latitude and time. Results agree well with observational estimates at 26.5°N from the RAPID array, but for 2004–2013 the meridional heat transport is 1.00 ± 0.11PW versus 1.23 ± 0.11PW for RAPID. In addition, these results have no hint of a trend, unlike the RAPID results. Finally, strong westerlies north of a meridian drive ocean currents and an ocean heat loss into the atmosphere that is exacerbated by a decrease in ocean heat transport northward.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To this end, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction.
Yu, Nannan; Wu, Lingling; Zou, Dexuan; Chen, Ying; Lu, Hanbing
2017-01-01
In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed from these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX, characterizing spontaneous electroencephalographic activity as an autoregressive model driven by white noise and modeling each component of the EP by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has a greater capability to track specific components of the EP complex, as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.
NASA Astrophysics Data System (ADS)
Witt, Thomas J.; Fletcher, N. E.
2010-10-01
We investigate some statistical properties of ac voltages from a white noise source measured with a digital lock-in amplifier equipped with finite impulse response output filters which introduce correlations between successive voltage values. The main goal of this work is to propose simple solutions to account for correlations when calculating the standard deviation of the mean (SDM) for a sequence of measurement data acquired using such an instrument. The problem is treated by time series analysis based on a moving average model of the filtering process. Theoretical expressions are derived for the power spectral density (PSD), the autocorrelation function, the equivalent noise bandwidth and the Allan variance; all are related to the SDM. At most three parameters suffice to specify any of the above quantities: the filter time constant, the time between successive measurements (both set by the lock-in operator) and the PSD of the white noise input, h0. Our white noise source is a resistor so that the PSD is easily calculated; there are no free parameters. Theoretical expressions are checked against their respective sample estimates and, with the exception of two of the bandwidth estimates, agreement to within 11% or better is found.
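A small numpy illustration of why the naive SDM misleads for filtered (correlated) data, together with a non-overlapping Allan deviation, one of the quantities related above; the 16-sample moving average is a crude stand-in for the lock-in's FIR output filter:

import numpy as np

rng = np.random.default_rng(0)
white = rng.normal(size=20_000)
# Crude stand-in for the lock-in's output filter: a 16-sample moving
# average, which correlates successive readings.
y = np.convolve(white, np.ones(16) / 16, mode="valid")

naive_sdm = y.std(ddof=1) / np.sqrt(len(y))   # ignores correlations

def allan_deviation(x, m):
    """Non-overlapping Allan deviation for an averaging length of m samples."""
    means = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

print(f"naive SDM: {naive_sdm:.5f}")
for m in (16, 64, 256):
    print(f"Allan deviation (m={m}): {allan_deviation(y, m):.5f}")

Because successive samples here are correlated over roughly 16 points, the naive SDM understates the uncertainty by about a factor of four; the Allan deviation at averaging lengths beyond the filter length gives a more honest scale.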
Evaluation of scaling invariance embedded in short time series.
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept, called correlation-dependent balanced estimation of diffusion entropy, is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records.
Evaluation of Scaling Invariance Embedded in Short Time Series
Pan, Xue; Hou, Lei; Stephen, Mutua; Yang, Huijie; Zhu, Chenping
2014-01-01
Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and consequent invalidation of currently used standard methods. In this paper a new concept, called correlation-dependent balanced estimation of diffusion entropy, is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with ignorable bias (≤0.03) and sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximate oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. It has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can precisely estimate the Shannon entropy from limited records. PMID:25549356
The AFIS tree growth model for updating annual forest inventories in Minnesota
Margaret R. Holdaway
2000-01-01
As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...
Code of Federal Regulations, 2013 CFR
2013-10-01
... ACL, as specified in paragraph (a)(1) of this section for Puerto Rico management area species or... ensure landings do not exceed the applicable ACL. If NMFS determines the ACL for a particular species or... relative to the applicable ACL based on a moving multi-year average of landings, as described in the FMP...
Code of Federal Regulations, 2014 CFR
2014-10-01
... ACL, as specified in paragraph (a)(1) of this section for Puerto Rico management area species or... ensure landings do not exceed the applicable ACL. If NMFS determines the ACL for a particular species or... relative to the applicable ACL based on a moving multi-year average of landings, as described in the FMP...
Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow
Layton, Oliver W.; Fajen, Brett R.
2016-01-01
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning; it is similar to the other models, except that it includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model's heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes in the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of heading perception. PMID:27341686
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.
2014-09-12
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
NASA Astrophysics Data System (ADS)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-09-01
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model. It helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when there is a lack of data and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
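A minimal statsmodels sketch of fitting the reported ARIMA(6,1,0) order and producing a 12-month forecast with a 95% interval, on synthetic data standing in for the Malaysian series:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic monthly waste-generation series standing in for the real data.
waste = 1.0 + 0.005 * np.arange(72) + rng.normal(0, 0.05, 72)

result = ARIMA(waste, order=(6, 1, 0)).fit()    # the order reported above
forecast = result.get_forecast(steps=12)
print(forecast.predicted_mean)                   # next 12 months
print(forecast.conf_int(alpha=0.05))             # 95% interval for residual checks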
An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter
NASA Astrophysics Data System (ADS)
Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu
2017-05-01
Harmonic detection technology plays an important role in the application of active power filters. The accuracy and real-time performance of harmonic detection are preconditions for ensuring the compensation performance of an Active Power Filter (APF). This paper proposes an improved instantaneous-reactive-power harmonic current detection algorithm. The algorithm uses an improved ip-iq algorithm combined with a moving average value filter. The proposed ip-iq algorithm removes the αβ and dq coordinate transformations, decreasing the cost of calculation, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by the moving average filter, detecting the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
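A simplified numpy illustration of the central idea, replacing the low-pass filter with a one-fundamental-cycle moving average to extract the active fundamental component by synchronous detection; this is a generic sketch, not the paper's exact improved ip-iq formulation (which also handles the reactive channel and PLL details):

import numpy as np

f, fs = 50.0, 10_000.0                      # fundamental and sampling rate
t = np.arange(0, 0.1, 1 / fs)
# Distorted load current: fundamental plus 5th and 7th harmonics.
i_load = (10 * np.sin(2 * np.pi * f * t)
          + 2 * np.sin(2 * np.pi * 5 * f * t)
          + 1 * np.sin(2 * np.pi * 7 * f * t))

# Synchronous detection: demodulate with the grid-synchronized sine, then take
# a one-cycle moving average (instead of a low-pass filter) to get the DC term.
ref = np.sin(2 * np.pi * f * t)
cycle = int(fs / f)                          # samples per fundamental cycle
dc = np.convolve(i_load * ref, np.ones(cycle) / cycle, mode="same")

i_fundamental = 2 * dc * ref                 # reconstructed active component
i_harmonic = i_load - i_fundamental          # reference for APF compensation
print(np.abs(i_harmonic[cycle:-cycle]).max())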
MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge. PMID:25822851
MARD—A moving average rose diagram application for the geosciences
NASA Astrophysics Data System (ADS)
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
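A compact version of the unweighted moving-average smoothing MARD applies to circular data: bin the azimuths, then replace each bin by the mean of all bins within the chosen aperture, wrapping around 360 degrees:

import numpy as np

def smoothed_rose(azimuths_deg, bin_width=10, aperture=30):
    """Unweighted moving-average smoothing of a circular frequency
    distribution: each bin becomes the mean frequency of all bins within
    +/- aperture/2 degrees, wrapping around 360."""
    edges = np.arange(0, 361, bin_width)
    counts, _ = np.histogram(np.mod(azimuths_deg, 360), bins=edges)
    half = aperture // (2 * bin_width)
    smoothed = np.array([
        np.mean(np.take(counts, range(i - half, i + half + 1), mode="wrap"))
        for i in range(len(counts))
    ])
    return edges[:-1], smoothed

# Example: a noisy NE-SW trending fracture set (synthetic azimuths).
rng = np.random.default_rng(0)
az = np.concatenate([rng.normal(45, 15, 200), rng.normal(225, 15, 200)])
print(smoothed_rose(az)[1])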
Emission inventory estimation of an intercity bus terminal.
Qiu, Zhaowen; Li, Xiaoxia; Hao, Yanzhao; Deng, Shunxi; Gao, H Oliver
2016-06-01
Intercity bus terminals are hotspots of air pollution due to the concentrated activity of diesel buses. In order to evaluate a bus terminal's impact on air quality, it is necessary to estimate the associated mobile emission inventory. Since vehicle operating conditions at a bus terminal vary significantly, conventional calculation of emissions based on average emission factors suffers a loss of accuracy. In this study, we examined a typical intercity bus terminal, the Southern City Bus Station of Xi'an, China, using a multi-scale emission model (US EPA's MOVES) to quantify the vehicle emission inventory. A representative operating cycle for buses within the station was constructed, and the emission inventory was then estimated using detailed inputs including vehicle ages, operating speeds, operating schedules, and operating mode distribution, as well as meteorological data (temperature and humidity). Five functional areas (bus yard, platforms, disembarking area, bus travel routes within the station, and bus entrance/exit routes) at the terminal were identified, and the bus operation cycle was established using the micro-trip cycle construction method. Results of our case study showed that switching from diesel fuel to compressed natural gas (CNG) could reduce PM2.5 and CO emissions by 85.64% and 6.21%, respectively, in the microenvironment of the bus terminal. When CNG is used, tailpipe exhaust PM2.5 emissions are reduced so much that they fall below brake-wear PM2.5. The estimated bus operating cycles can also offer researchers and policy makers important information for emission evaluation in the planning and design of intercity bus terminals of a similar scale.
Experimental Evaluation of UWB Indoor Positioning for Sport Postures
Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli
2018-01-01
Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
Gale, Sara L.; Noth, Elizabeth M.; Mann, Jennifer; Balmes, John; Hammond, S. Katharine; Tager, Ira B.
2014-01-01
Polycyclic aromatic hydrocarbons (PAHs) are found widely in the ambient air and result from combustion of various fuels and industrial processes. PAHs have been associated with adverse effects on human health, including cognitive development, childhood IQ, and respiratory health. The Fresno Asthmatic Children's Environment Study (FACES) enrolled 315 children ages 6-11 years with asthma in Fresno, CA and followed the cohort from 2000 to 2008. Subjects were evaluated for asthma symptoms in up to three 14-day panels per year. Detailed ambient pollutant concentrations were collected from a central site and outdoor pollutants were measured at 83 homes for at least one 5-day period. Measurements of particle-bound PAHs were used with land use regression models to estimate individual exposures to PAHs with 4-, 5- or 6-member rings (PAH456) and phenanthrene for the cohort (approximately 22,000 individual daily estimates). We used a cross-validation based algorithm for model fitting and a generalized estimating equation approach to account for repeated measures. Multiple lags and moving averages of PAH exposure were associated with increased wheeze for each of the three types of PAH exposure estimates. The odds ratios for asthmatics exposed to PAHs (ng/m3) ranged from 1.01 (95% CI, 1.00-1.02) to 1.10 (95% CI, 1.04-1.17). This trend for increased wheeze persisted among all PAHs measured. Phenanthrene was found to have a higher relative impact on wheeze. These data provide further evidence that PAHs contribute to asthma morbidity. PMID:22549720
Gale, Sara L; Noth, Elizabeth M; Mann, Jennifer; Balmes, John; Hammond, S Katharine; Tager, Ira B
2012-07-01
Polycyclic aromatic hydrocarbons (PAHs) are found widely in the ambient air and result from combustion of various fuels and industrial processes. PAHs have been associated with adverse effects on human health, including cognitive development, childhood IQ, and respiratory health. The Fresno Asthmatic Children's Environment Study enrolled 315 children aged 6-11 years with asthma in Fresno, CA and followed the cohort from 2000 to 2008. Subjects were evaluated for asthma symptoms in up to three 14-day panels per year. Detailed ambient pollutant concentrations were collected from a central site and outdoor pollutants were measured at 83 homes for at least one 5-day period. Measurements of particle-bound PAHs were used with land-use regression models to estimate individual exposures to PAHs with 4-, 5-, or 6-member rings (PAH456) and phenanthrene for the cohort (approximately 22,000 individual daily estimates). We used a cross-validation-based algorithm for model fitting and a generalized estimating equation approach to account for repeated measures. Multiple lags and moving averages of PAH exposure were associated with increased wheeze for each of the three types of PAH exposure estimates. The odds ratios for asthmatics exposed to PAHs (ng/m3) ranged from 1.01 (95% CI, 1.00-1.02) to 1.10 (95% CI, 1.04-1.17). This trend for increased wheeze persisted among all PAHs measured. Phenanthrene was found to have a higher relative impact on wheeze. These data provide further evidence that PAHs contribute to asthma morbidity.
Favato, Giampiero; Mariani, Paolo; Mills, Roger W.; Capone, Alessandro; Pelagatti, Matteo; Pieri, Vasco; Marcobelli, Alberico; Trotta, Maria G.; Zucchi, Alberto; Catapano, Alberico L.
2007-01-01
Background The primary objective of this study was to make the first step in the modelling of pharmaceutical demand in Italy, by deriving a weighted capitation model to account for demographic differences among general practices. The experimental model was called ASSET (Age/Sex Standardised Estimates of Treatment). Methods and Major Findings Individual prescription costs and demographic data referred to 3,175,691 Italian subjects and were collected directly from three Regional Health Authorities over the 12-month period between October 2004 and September 2005. The mean annual prescription cost per individual was similar for males (196.13 euro) and females (195.12 euro). After 65 years of age, the mean prescribing costs for males were significantly higher than females. On average, costs for a 75-year-old subject would be 12 times the costs for a 25–34 year-old subject if male, 8 times if female. Subjects over 65 years of age (22% of total population) accounted for 56% of total prescribing costs. The weightings explained approximately 90% of the evolution of total prescribing costs, in spite of the pricing and reimbursement turbulences affecting Italy in the 2000–2005 period. The ASSET weightings were able to explain only about 25% of the variation in prescribing costs among individuals. Conclusions If mainly idiosyncratic prescribing by general practitioners causes the unexplained variations, the introduction of capitation-based budgets would gradually move practices with high prescribing costs towards the national average. It is also possible, though, that the unexplained individual variation in prescribing costs is the result of differences in the clinical characteristics or socio-economic conditions of practice populations. If this is the case, capitation-based budgets may lead to unfair distribution of resources. The ASSET age/sex weightings should be used as a guide, not as the ultimate determinant, for an equitable allocation of prescribing resources to regional authorities and general practices. PMID:17611624
Favato, Giampiero; Mariani, Paolo; Mills, Roger W; Capone, Alessandro; Pelagatti, Matteo; Pieri, Vasco; Marcobelli, Alberico; Trotta, Maria G; Zucchi, Alberto; Catapano, Alberico L
2007-07-04
The primary objective of this study was to make the first step in the modelling of pharmaceutical demand in Italy, by deriving a weighted capitation model to account for demographic differences among general practices. The experimental model was called ASSET (Age/Sex Standardised Estimates of Treatment). Individual prescription costs and demographic data referred to 3,175,691 Italian subjects and were collected directly from three Regional Health Authorities over the 12-month period between October 2004 and September 2005. The mean annual prescription cost per individual was similar for males (196.13 euro) and females (195.12 euro). After 65 years of age, the mean prescribing costs for males were significantly higher than females. On average, costs for a 75-year-old subject would be 12 times the costs for a 25-34 year-old subject if male, 8 times if female. Subjects over 65 years of age (22% of total population) accounted for 56% of total prescribing costs. The weightings explained approximately 90% of the evolution of total prescribing costs, in spite of the pricing and reimbursement turbulences affecting Italy in the 2000-2005 period. The ASSET weightings were able to explain only about 25% of the variation in prescribing costs among individuals. If mainly idiosyncratic prescribing by general practitioners causes the unexplained variations, the introduction of capitation-based budgets would gradually move practices with high prescribing costs towards the national average. It is also possible, though, that the unexplained individual variation in prescribing costs is the result of differences in the clinical characteristics or socio-economic conditions of practice populations. If this is the case, capitation-based budgets may lead to unfair distribution of resources. The ASSET age/sex weightings should be used as a guide, not as the ultimate determinant, for an equitable allocation of prescribing resources to regional authorities and general practices.
In vivo validation of patellofemoral kinematics during overground gait and stair ascent.
Pitcairn, Samuel; Lesniak, Bryson; Anderst, William
2018-06-18
The patellofemoral (PF) joint is a common site for non-specific anterior knee pain. The pathophysiology of patellofemoral pain may be related to abnormal motion of the patella relative to the femur, leading to increased stress at the patellofemoral joint. Patellofemoral motion cannot be accurately measured using conventional motion capture. The aim of this study was to determine the accuracy of a biplane radiography system for measuring in vivo PF motion during walking and stair ascent. Four subjects had three 1.0 mm diameter tantalum beads implanted into the patella. Participants performed three trials each of over ground walking and stair ascent while biplane radiographs were collected at 100 Hz. Patella motion was tracked using radiostereophotogrammetric analysis (RSA) as a "gold standard", and compared to a volumetric CT model-based tracking algorithm that matched digitally reconstructed radiographs to the original biplane radiographs. The average RMS difference between the RSA and model-based tracking was 0.41 mm and 1.97° when there was no obstruction from the contralateral leg. These differences increased by 34% and 40%, respectively, when the patella was at least partially obstructed by the contralateral leg. The average RMS difference in patellofemoral joint space between tracking methods was 0.9 mm or less. Previous validations of biplane radiographic systems have estimated tracking accuracy by moving cadaveric knees through simulated motions. These validations were unable to replicate in vivo kinematics, including patella motion due to muscle activation, and failed to assess the imaging and tracking challenges related to contralateral limb obstruction. By replicating the muscle contraction, movement velocity, joint range of motion, and obstruction of the patella by the contralateral limb, the present study provides a realistic estimate of patellofemoral tracking accuracy for future in vivo studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Calibrating recruitment estimates for mourning doves from harvest age ratios
Miller, David A.; Otis, David L.
2010-01-01
We examined results from the first national-scale effort to estimate mourning dove (Zenaida macroura) age ratios and developed a simple, efficient, and generalizable methodology for calibrating estimates. Our method predicted age classes of unknown-age wings based on backward projection of molt distributions from fall harvest collections to preseason banding. We estimated 1) the proportion of late-molt individuals in each age class, and 2) the molt rates of juvenile and adult birds. Monte Carlo simulations demonstrated our estimator was minimally biased. We estimated model parameters using 96,811 wings collected from hunters and 42,189 birds banded during preseason from 68 collection blocks in 22 states during the 2005–2007 hunting seasons. We also used estimates to derive a correction factor, based on latitude and longitude of samples, which can be applied to future surveys. We estimated differential vulnerability of age classes to harvest using data from banded birds and applied that to harvest age ratios to estimate population age ratios. Average, uncorrected age ratio of known-age wings for states that allow hunting was 2.25 (SD 0.85) juveniles:adult, and average, corrected ratio was 1.91 (SD 0.68), as determined from harvest age ratios from an independent sample of 41,084 wings collected from random hunters in 2007 and 2008. We used an independent estimate of differential vulnerability to adjust corrected harvest age ratios and estimated the average population age ratio as 1.45 (SD 0.52), a direct measure of recruitment rates. Average annual recruitment rates were highest east of the Mississippi River and in the northwestern United States, with lower rates between. Our results demonstrate a robust methodology for calibrating recruitment estimates for mourning doves and represent the first large-scale estimates of recruitment for the species. Our methods can be used by managers to correct future harvest survey data to generate recruitment estimates for use in formulating harvest management strategies.
Detection and imaging of moving objects with SAR by a joint space-time-frequency processing
NASA Astrophysics Data System (ADS)
Barbarossa, Sergio; Farina, Alfonso
This paper proposes a joint space-time-frequency processing scheme for the detection and imaging of moving targets by Synthetic Aperture Radar (SAR). The method relies on the availability of an array antenna. The signals received by the array elements are combined in a space-time processor to cancel the clutter. They are then analyzed in the time-frequency domain by computing their Wigner-Ville Distribution (WVD), in order to estimate the instantaneous frequency to be used for the subsequent phase compensation necessary to produce a high-resolution image.
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data mandatory for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise-reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
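A hedged sketch of the core combination the abstract names, using SciPy's savgol_filter and CubicSpline on a 1-D scan line; the paper's weighted least-squares error step and SEM-specific details are not reproduced, and the window and order values are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

def denoise_scanline(y, window=11, polyorder=3, upsample=4):
    """Smooth a noisy scan line with a Savitzky-Golay filter, then fit a
    cubic spline to the smoothed samples for continuous interpolation."""
    smooth = savgol_filter(y, window_length=window, polyorder=polyorder)
    x = np.arange(len(y))
    spline = CubicSpline(x, smooth)
    x_fine = np.linspace(0, len(y) - 1, upsample * len(y))
    return x_fine, spline(x_fine)
```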
Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation
NASA Astrophysics Data System (ADS)
Ekin Aydin, Boran; Rutten, Martine
2016-04-01
Model predictive control (MPC) is a powerful control option that is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, water loss in open water systems through seepage, leakage and evaporation creates a mismatch between the model and the real system. This mismatch degrades the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona, and also applied the well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
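To make the MHE idea concrete, the sketch below estimates an unknown constant flow disturbance for a single reach modelled as a simple integrator; for this model the least-squares horizon estimate reduces to a mean residual. The model, names and signs are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def estimate_disturbance(h, q_in, q_out, area, dt):
    """Moving-horizon estimate of an unknown constant flow disturbance d
    for a canal reach modelled as an integrator:
        h[k+1] = h[k] + dt/area * (q_in[k] - q_out[k] + d).
    The least-squares fit over the horizon reduces, for this model, to
    the mean of the per-step residuals."""
    residuals = area / dt * np.diff(h) - (q_in[:-1] - q_out[:-1])
    return residuals.mean()

# An offset-free MPC would feed this d back into its prediction model so
# that the steady-state water-level offset is removed.
```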
Korenromp, Eline L; Mahiané, Guy; Rowley, Jane; Nagelkerke, Nico; Abu-Raddad, Laith; Ndowa, Francis; El-Kettani, Amina; El-Rhilani, Houssine; Mayaud, Philippe; Chico, R Matthew; Pretorius, Carel; Hecht, Kendall; Wi, Teodora
2017-12-01
To develop a tool for estimating national trends in adult prevalence of sexually transmitted infections by low- and middle-income countries, using standardised, routinely collected programme indicator data. The Spectrum-STI model fits time trends in the prevalence of active syphilis through logistic regression on prevalence data from antenatal clinic-based surveys, routine antenatal screening and general population surveys where available, weighting data by their national coverage and representativeness. Gonorrhoea prevalence was fitted as a moving average on population surveys (from the country, neighbouring countries and historic regional estimates), with trends informed additionally by urethral discharge case reports, where these were considered to have reasonably stable completeness. Prevalence data were adjusted for diagnostic test performance, high-risk populations not sampled, urban/rural and male/female prevalence ratios, using WHO's assumptions from latest global and regional-level estimations. Uncertainty intervals were obtained by bootstrap resampling. Estimated syphilis prevalence (in men and women) declined from 1.9% (95% CI 1.1% to 3.4%) in 2000 to 1.5% (1.3% to 1.8%) in 2016 in Zimbabwe, and from 1.5% (0.76% to 1.9%) to 0.55% (0.30% to 0.93%) in Morocco. At these time points, gonorrhoea estimates for women aged 15-49 years were 2.5% (95% CI 1.1% to 4.6%) and 3.8% (1.8% to 6.7%) in Zimbabwe; and 0.6% (0.3% to 1.1%) and 0.36% (0.1% to 1.0%) in Morocco, with male gonorrhoea prevalences 14% lower than female prevalence. This epidemiological framework facilitates data review, validation and strategic analysis, prioritisation of data collection needs and surveillance strengthening by national experts. We estimated ongoing syphilis declines in both Zimbabwe and Morocco. For gonorrhoea, time trends were less certain, lacking recent population-based surveys. Published by the BMJ Publishing Group Limited.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system under the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over the wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are derived, and an effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for CLD to improve system performance. Numerical simulations of average PER and SE are shown to be consistent with the theoretical analysis, and the developed CLD with the average PER constraint meets the target PER requirement and outperforms the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method noticeably increases the system SE and greatly reduces the impact of feedback delay.
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as inputs to the NMSPN model to calculate the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
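A minimal sketch of the framework's time-series step, assuming statsmodels is an acceptable stand-in: fit an ARMA model (ARIMA with d = 0) to a response-time series and forecast ahead. The order and the synthetic data are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for a component's historical response times (ms).
rng = np.random.default_rng(0)
response_times = 100 + rng.normal(0, 5, size=200).cumsum() * 0.01

# ARMA(1,1): with d=0 the ARIMA class reduces to a plain ARMA model.
model = ARIMA(response_times, order=(1, 0, 1)).fit()
forecast = model.forecast(steps=10)   # predicted future response times,
                                      # from which firing rates follow
```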
Time series trends of the safety effects of pavement resurfacing.
Park, Juneyoung; Abdel-Aty, Mohamed; Wang, Jung-Han
2017-04-01
This study evaluated the safety performance of pavement resurfacing projects on urban arterials in Florida using observational before-and-after approaches. The safety effects of pavement resurfacing were quantified as crash modification factors (CMFs) and estimated for different ranges of heavy vehicle traffic volume and time changes for different severity levels. In order to evaluate the variation of CMFs over time, crash modification functions (CMFunctions) were developed using nonlinear regression and time series models. The results showed that pavement resurfacing projects decrease crash frequency and are more effective at reducing severe crashes in general. Moreover, the results of the general relationship between the safety effects and time changes indicated that the CMFs increase over time after the resurfacing treatment. It was also found that pavement resurfacing projects on urban roadways with a higher heavy vehicle volume rate are more safety effective than those on roadways with a lower heavy vehicle volume rate. Based on the exploration and comparison of the developed CMFunctions, the seasonal autoregressive integrated moving average (SARIMA) and exponential functional forms of the nonlinear regression models can be utilized to identify the trend of CMFs over time. Copyright © 2017 Elsevier Ltd. All rights reserved.
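A hedged sketch of fitting a SARIMA model to a monthly CMF series with statsmodels; the orders shown are placeholders, not the ones selected in the study:

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_cmfunction(cmf_series):
    """Fit a seasonal ARIMA to a monthly CMF series so the post-treatment
    trend (CMFs drifting upward over time) can be extrapolated."""
    model = SARIMAX(cmf_series, order=(1, 1, 1),
                    seasonal_order=(1, 0, 1, 12))   # 12-month seasonality
    return model.fit(disp=False)
```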
A binary motor imagery tasks based brain-computer interface for two-dimensional movement control
NASA Astrophysics Data System (ADS)
Xia, Bin; Cao, Lei; Maysam, Oladazimi; Li, Jie; Xie, Hong; Su, Caixia; Birbaumer, Niels
2017-12-01
Objective. Two-dimensional movement control is a popular issue in brain-computer interface (BCI) research and has many applications in the real world. In this paper, we introduce a combined control strategy to a binary class-based BCI system that allows the user to move a cursor in a two-dimensional (2D) plane. Users focus on a single moving vector to control 2D movement instead of controlling vertical and horizontal movement separately. Approach. Five participants took part in a fixed-target experiment and random-target experiment to verify the effectiveness of the combination control strategy under the fixed and random routine conditions. Both experiments were performed in a virtual 2D dimensional environment and visual feedback was provided on the screen. Main results. The five participants achieved an average hit rate of 98.9% and 99.4% for the fixed-target experiment and the random-target experiment, respectively. Significance. The results demonstrate that participants could move the cursor in the 2D plane effectively. The proposed control strategy is based only on a basic two-motor imagery BCI, which enables more people to use it in real-life applications.
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor-pattern-noise-based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines their application. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. First, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy and the average energy gradient of the original grayscale image are then chosen as features, and non-overlapping sliding-window operations divide the image into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
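The sketch below computes the four block features the abstract lists, under commonly used definitions (the paper's exact formulas may differ); `gray` is the grayscale image and `noise` the extracted sensor pattern noise, both 2-D arrays of the same shape:

```python
import numpy as np

def block_features(gray, noise, block=64):
    """Per-block features: pattern-noise variance, SNR between the
    de-noised image (gray - noise) and the noise, histogram entropy,
    and average energy gradient. Definitions here are assumptions."""
    feats = []
    H, W = gray.shape
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            g = gray[r:r+block, c:c+block].astype(float)
            n = noise[r:r+block, c:c+block].astype(float)
            var_noise = n.var()
            snr = 10 * np.log10(((g - n) ** 2).mean() / max(var_noise, 1e-12))
            hist, _ = np.histogram(g, bins=256, range=(0, 255), density=True)
            entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
            gx, gy = np.gradient(g)                      # intensity gradients
            aeg = np.mean(np.sqrt(gx ** 2 + gy ** 2))    # avg energy gradient
            feats.append((var_noise, snr, entropy, aeg))
    return np.array(feats)
```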
A cost-benefit analysis of three older adult fall prevention interventions.
Carande-Kulis, Vilma; Stevens, Judy A; Florence, Curtis S; Beattie, Bonita L; Arias, Ileana
2015-02-01
One out of three persons aged 65 and older falls annually and 20% to 30% of falls result in injury. The purpose of this cost-benefit analysis was to identify community-based fall interventions that were feasible, effective, and provided a positive return on investment (ROI). A third-party payer perspective was used to determine the costs and benefits of three effective fall interventions. Intervention effectiveness was based on randomized controlled trial results. National data were used to estimate the average annual benefits from averting the direct medical costs of a fall. The net benefit and ROI were estimated for each of the interventions. For the Otago Exercise Program delivered to persons aged 65 and older, the net benefit was $121.85 per participant and the ROI was 36% for each dollar invested. For Otago delivered to persons aged 80 and older, the net benefit was $429.18 and the ROI was 127%. Tai chi: Moving for Better Balance had a net benefit of $529.86 and an ROI of 509% and Stepping On had a net benefit of $134.37 and an ROI of 64%. All three fall interventions provided positive net benefits. The ROIs showed that the benefits not only covered the implementation costs but also exceeded the expected direct program delivery costs. These results can help health care funders and other community organizations select appropriate and effective fall interventions that also can provide positive returns on investment. Published by Elsevier Ltd.
Population dynamics of the Concho water snake in rivers and reservoirs
Whiting, M.J.; Dixon, J.R.; Greene, B.D.; Mueller, J.M.; Thornton, O.W.; Hatfield, J.S.; Nichols, J.D.; Hines, J.E.
2008-01-01
The Concho Water Snake (Nerodia harteri paucimaculata) is confined to the Concho–Colorado River valley of central Texas, thereby occupying one of the smallest geographic ranges of any North American snake. In 1986, N. h. paucimaculata was designated as a federally threatened species, in large part because of reservoir projects that were perceived to adversely affect the amount of habitat available to the snake. During a ten-year period (1987–1996), we conducted capture–recapture field studies to assess dynamics of five subpopulations of snakes in both natural (river) and man-made (reservoir) habitats. Because of differential sampling of subpopulations, we present separate results for all five subpopulations combined (including large reservoirs) and three of the five subpopulations (excluding large reservoirs). We used multistate capture–recapture models to deal with stochastic transitions between pre-reproductive and reproductive size classes and to allow for the possibility of different survival and capture probabilities for the two classes. We also estimated both the finite rate of increase (λ) for a deterministic, stage-based, female-only matrix model using the average litter size, and the average rate of adult population change, λ̂, which describes changes in numbers of adult snakes, using a direct capture–recapture approach to estimation. Average annual adult survival was about 0.23 and similar for males and females. Average annual survival for subadults was about 0.14. The parameter estimates from the stage-based projection matrix analysis all yielded asymptotic values of λ < 1, suggesting populations that are not viable. However, the direct estimates of average adult λ for the three subpopulations excluding major reservoirs were λ̂ = 1.26, SE(λ̂) = 0.18 and λ̂ = 0.99, SE(λ̂) = 0.79, based on two different models. Thus, the direct estimation approach did not provide strong evidence of population declines of the riverine subpopulations, but the estimates are characterized by substantial uncertainty.
Concentrations and Potential Health Risks of Metals in Lip Products
Liu, Sa; Rojas-Cheatham, Ann
2013-01-01
Background: Metal content in lip products has been an issue of concern. Objectives: We measured lead and eight other metals in a convenience sample of 32 lip products used by young Asian women in Oakland, California, and assessed potential health risks related to estimated intakes of these metals. Methods: We analyzed lip products by inductively coupled plasma optical emission spectrometry and used previous estimates of lip product usage rates to determine daily oral intakes. We derived acceptable daily intakes (ADIs) based on information used to determine public health goals for exposure, and compared ADIs with estimated intakes to assess potential risks. Results: Most of the tested lip products contained high concentrations of titanium and aluminum. All examined products had detectable manganese. Lead was detected in 24 products (75%), with an average concentration of 0.36 ± 0.39 ppm, including one sample with 1.32 ppm. When used at the estimated average daily rate, estimated intakes were > 20% of ADIs derived for aluminum, cadmium, chromium, and manganese. In addition, average daily use of 10 products tested would result in chromium intake exceeding our estimated ADI for chromium. For high rates of product use (above the 95th percentile), the percentages of samples with estimated metal intakes exceeding ADIs were 3% for aluminum, 68% for chromium, and 22% for manganese. Estimated intakes of lead were < 20% of ADIs for average and high use. Conclusions: Cosmetics safety should be assessed not only by the presence of hazardous contents, but also by comparing estimated exposures with health-based standards. In addition to lead, metals such as aluminum, cadmium, chromium, and manganese require further investigation. PMID:23674482
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.
Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) CALIPSO-corrected BC vertical distribution. The globally averaged top of the atmosphere radiative flux perturbation of CC experiments is ~8–50% smaller compared to uncorrected (UC) BC experiments largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Kovilakam, Mahesh; Mahajan, Salil; Saravanan, R.; ...
2017-09-13
Here, we alleviate the bias in the tropospheric vertical distribution of black carbon aerosols (BC) in the Community Atmosphere Model (CAM4) using the Cloud-Aerosol and Infrared Pathfinder Satellite Observations (CALIPSO)-derived vertical profiles. A suite of sensitivity experiments are conducted with 1x, 5x, and 10x the present-day model estimated BC concentration climatology, with (corrected, CC) and without (uncorrected, UC) CALIPSO-corrected BC vertical distribution. The globally averaged top of the atmosphere radiative flux perturbation of CC experiments is ~8–50% smaller compared to uncorrected (UC) BC experiments largely due to an increase in low-level clouds. The global average surface temperature increases, the global average precipitation decreases, and the ITCZ moves northward with the increase in BC radiative forcing, irrespective of the vertical distribution of BC. Further, tropical expansion metrics for the poleward extent of the Northern Hemisphere Hadley cell (HC) indicate that simulated HC expansion is not sensitive to existing model biases in BC vertical distribution.
Armentrout, G.W.; Larson, L.R.
1984-01-01
Time-of-travel and dispersion measurements made during a dye study on November 7-8, 1978, are presented for a reach of the North Platte River from Casper, Wyo., to a bridge 2 miles downstream from the Dave Johnston Power Plant. Rhodamine WT dye was injected into the river at Casper, and the resultant dye cloud was traced by sampling as it moved downstream. Samples were taken in three equal-flow sections of the river's lateral transect at three sites, then analyzed in a fluorometer. The flow in the river was 940 cubic feet per second. The data consist of measured stream mileages and time, distance, and concentration graphs of the dye cloud. The peak concentration traveled through the reach in 24 hours, averaging 1.5 miles per hour; the leading edge took about 22 hours, averaging 1.7 miles per hour; and the trailing edge took 35 hours, averaging 1.0 mile per hour. Data from this study were compared with methods for estimating time of travel for a range of stream discharges.
Atmospheric mold spore counts in relation to meteorological parameters
NASA Astrophysics Data System (ADS)
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The models for Alternaria and Epicoccum incorporated only the annual seasonal cycle. Fungal spore counts can thus be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; such modeling can provide estimates of exposure to fungal aeroallergens.
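A minimal sketch of regression with ARMA errors, the PROC ARIMA analysis described above, using statsmodels' SARIMAX with exogenous regressors; the column names and ARMA order are assumptions:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def regress_counts_on_weather(spores, weather):
    """Regress daily spore counts on weather variables while modelling
    the residuals as an ARMA(1,1) process. `spores` is a pd.Series and
    `weather` a DataFrame with columns such as temperature, humidity
    and precipitation (names are illustrative)."""
    model = SARIMAX(spores, exog=weather, order=(1, 0, 1))
    return model.fit(disp=False)   # fitted coefficients give the
                                   # weather effects net of autocorrelation
```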
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population within a 45 min response time threshold, or minimizing the overall average response time to all persons, both in green-field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min when optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached the entire population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. The addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
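For intuition, the greedy heuristic below approximates the maximal covering location problem the study solves (the authors use exact optimization; this sketch and its inputs are illustrative):

```python
import numpy as np

def greedy_max_cover(cover_matrix, population, n_bases):
    """Greedy heuristic for maximal covering location.
    cover_matrix[i, j] is True if candidate base j reaches area i within
    the 45-minute threshold; population[i] is that area's population.
    Returns the chosen base indices and the covered population fraction."""
    covered = np.zeros(cover_matrix.shape[0], dtype=bool)
    chosen = []
    for _ in range(n_bases):
        # marginal population gain of each candidate base
        gains = ((cover_matrix & ~covered[:, None])
                 * population[:, None]).sum(axis=0)
        j = int(gains.argmax())
        chosen.append(j)
        covered |= cover_matrix[:, j]
    return chosen, population[covered].sum() / population.sum()
```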
Relation between ground water and surface water in Brandywine Creek basin, Pennsylvania
Olmsted, F.H.; Hely, A.G.
1962-01-01
The relation between ground water and surface water was studied in Brandywine Creek basin, an area of 287 square miles in the Piedmont physiographic province in southeastern Pennsylvania. Most of the basin is underlain by crystalline rocks that yield only small to moderate supplies of water to wells, but the creek has an unusually well-sustained base flow. Streamflow records for the Chadds Ford, Pa., gaging station were analyzed; base-flow recession curves and hydrographs of base flow were defined for the calendar years 1928-31 and 1952-53. Water budgets calculated for these two periods indicate that about two-thirds of the runoff of Brandywine Creek is base flow--a significantly higher proportion of base flow than in streams draining most other types of consolidated rocks in the region and almost as high as in streams in sandy parts of the Coastal Plain province in New Jersey and Delaware. Ground-water levels in 16 observation wells were compared with the base flow of the creek for 1952-53. The wells are assumed to provide a reasonably good sample of average fluctuations of the water table and its depth below the land surface. Three of the wells having the most suitable records were selected as index wells to use in a more detailed analysis. A direct, linear relation between the monthly average ground-water stage in the index wells and the base flow of the creek in winter months was found. The average ground-water discharge in the basin for 1952-53 was 489 cfs (316 mgd), of which slightly less than one-fourth was estimated to be loss by evapotranspiration. However, the estimated evapotranspiration from ground water, and consequently the estimated total ground-water discharge, may be somewhat high. The average gravity yield (short-term coefficient of storage) of the zone of water-table fluctuation was calculated by two methods. The first method, based on the ratio of the change in ground-water storage, as calculated from a winter base-flow recession curve, to the seasonal change in ground-water stage in the observation wells, gave values of about 7 percent (using 16 wells) and 7 1/2 percent (using 3 index wells). The second method, in which the change in ground-water storage is based on a hypothetical base-flow recession curve (derived from the observed linear relation between ground-water stage in the index wells and base flow), gave a value of about 10 1/2 percent. The most probable value of gravity yield is between 7 1/2 and 10 percent, but this estimate may require modification when more information on the average magnitude of water-table fluctuation and the sources of base flow of the creek becomes available. Rough estimates were made of the average coefficient of transmissibility of the rocks in the basin by use of the estimated total ground-water discharge for the period 1952-53, approximate values of length of discharge areas, and average water-table gradients adjacent to the discharge areas. The estimated average coefficient of transmissibility for 1952-53 is roughly 1,000 gpd per foot. The transmissibility is variable, decreasing with decreasing ground-water stage. The seeming inconsistency between the small to moderate ground-water yield to wells and the high yield to streams is explained in terms of the deep permeable soils, the relatively high gravity yield of the zone of water-table fluctuation, the steep water-table gradients toward the streams, the relatively low transmissibility of the rocks, and the rapid decreases in gravity yield below the lower limit of water-table fluctuation.
It is concluded that no simple relation exists between the amount of natural ground-water discharge in an area and the proportion of this discharge that can be diverted to wells.
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing. However, its application to badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicate that the proposed histogram-based method can estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
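A hedged sketch of per-pixel histogram-mode background estimation consistent with the abstract; the bin count and quantisation step are assumptions, not the authors' settings:

```python
import numpy as np

def histogram_background(frames, bins=32):
    """For each pixel, pick the most frequent intensity bin across the
    frame stack and use its centre as the background value."""
    stack = np.stack(frames).astype(np.uint8)        # (T, H, W) grayscale
    T, H, W = stack.shape
    step = 256 // bins
    q = (stack // step).reshape(T, -1)               # quantised intensities
    bg = np.empty(H * W, dtype=np.uint8)
    for i in range(H * W):
        counts = np.bincount(q[:, i], minlength=bins)
        bg[i] = counts.argmax() * step + step // 2   # bin centre
    return bg.reshape(H, W)

# Players can then be detected by thresholding |frame - background|.
```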
Isenberg, Sarina R; Lu, Chunhua; McQuade, John; Chan, Kelvin K W; Gill, Natasha; Cardamone, Michael; Torto, Deirdre; Langbaum, Terry; Razzak, Rab; Smith, Thomas J
2017-05-01
Palliative care inpatient units (PCUs) can improve symptoms, family perception of care, and lower per-diem costs compared with usual care. In March 2013, Johns Hopkins Medical Institutions (JHMI) added a PCU to the palliative care (PC) program. We studied the financial impact of the PC program on JHMI from March 2013 to March 2014. This study considered three components of the PC program: the PCU, PC consultations, and professional fees. Using 13 months of admissions data, the team calculated the per-day variable cost pre-PCU (i.e., in another hospital unit) and after transfer to the PCU. These figures were multiplied by the number of patients transferred to the PCU and by the average length of stay in the PCU. Consultation savings were estimated using established methods. Professional fees assumed a collection rate of 50%. The total positive financial impact of the PC program was $3,488,863.17. There were 153 transfers to the PCU, 60% with cancer, and an average length of stay of 5.11 days. The daily loss pre-transfer to the PCU of $1,797.67 was reduced to $1,345.34 in the PCU (-25%). The PCU saved JHMI $353,645.17 in variable costs, or $452.33 per transfer. Cost savings for PC consultations in the hospital, 60% with cancer, were estimated at $2,765,218, and $370,000 was collected in professional fees. The PCU and PC program had a favorable impact on JHMI while providing expert patient-centered care. As JHMI moves to an accountable care organization model, value-based patient-centered care and increased intensive care unit availability are desirable.
Spatial variability of steady-state infiltration into a two-layer soil system on burned hillslopes
Kinner, D.A.; Moody, J.A.
2010-01-01
Rainfall-runoff simulations were conducted to estimate the characteristics of the steady-state infiltration rate into 1-m² north- and south-facing hillslope plots burned by a wildfire in October 2003. Soil profiles in the plots consisted of a two-layer system composed of ash on top of sandy mineral soil. Multiple rainfall rates (18.4-51.2 mm h-1) were used during 14 short-duration (30 min) and 2 long-duration (2-4 h) simulations. Steady state was reached in 7-26 min. Observed spatially averaged steady-state infiltration rates ranged from 18.2 to 23.8 mm h-1 for north-facing and from 17.9 to 36.0 mm h-1 for south-facing plots. Three different theoretical spatial distribution models of the steady-state infiltration rate were fit to the measurements of rainfall rate and steady-state discharge to provide estimates of the spatial average (19.2-22.2 mm h-1) and the coefficient of variation (0.11-0.40) of infiltration rates, the overland-flow contributing area (74-90% of the plot area), and the infiltration threshold (19.0-26 mm h-1). Tensiometer measurements indicated a downward-moving pressure wave and suggest that infiltration-excess overland flow is the runoff process on these burned hillslopes with a two-layer system. Moreover, the results indicate that the ash layer is wettable, may restrict water flow into the underlying layer, and increases the infiltration threshold, whereas the underlying mineral soil, though coarser, limits the infiltration rate. These results on the spatial variability of steady-state infiltration can be used to develop physically based rainfall-runoff models for burned areas with a two-layer soil system. © 2010 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Meintz, Andrew Lee
This dissertation offers a description of the development of a fuel cell plug-in hybrid electric vehicle focusing on the propulsion architecture selection, propulsion system control, and high-level energy management. Two energy management techniques have been developed and implemented for real-time control of the vehicle. The first method is a heuristic method that relies on a short-term moving average of the vehicle power requirements. The second method utilizes an affine function of the short-term and long-term moving average vehicle power requirements. The development process of these methods has required the creation of a vehicle simulator capable of estimating the effect of changes to the energy management control techniques on the overall vehicle energy efficiency. Furthermore, the simulator has allowed for the refinement of the energy management methods and for the stability of the method to be analyzed prior to on-road testing. This simulator has been verified through on-road testing of a constructed prototype vehicle under both highway and city driving schedules for each energy management method. The results of the finalized vehicle control strategies are compared with the simulator predictions and an assessment of the effectiveness of both strategies is discussed. The methods have been evaluated for energy consumption in the form of both hydrogen fuel and stored electricity from grid charging.
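A minimal sketch of the first (heuristic) energy-management method as described: the fuel cell follows a short-term moving average of the vehicle power demand and the battery buffers the remainder. The window length and the 80 kW stack limit are invented for illustration.

```python
import numpy as np

def split_power(p_demand_history, window=30):
    """Heuristic power split for a fuel-cell plug-in hybrid: the fuel
    cell supplies the short-term moving average of demand (slow, smooth
    changes suit the stack) and the battery covers the fast residual."""
    p_avg = np.mean(p_demand_history[-window:])   # short-term moving average
    p_fuel_cell = np.clip(p_avg, 0.0, 80e3)       # assumed 80 kW stack limit
    p_battery = p_demand_history[-1] - p_fuel_cell
    return p_fuel_cell, p_battery
```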
Rich, David Q.; Mittleman, Murray A.; Link, Mark S.; Schwartz, Joel; Luttmann-Gibson, Heike; Catalano, Paul J.; Speizer, Frank E.; Gold, Diane R.; Dockery, Douglas W.
2006-01-01
Objectives: We reported previously that 24-hr moving average ambient air pollution concentrations were positively associated with ventricular arrhythmias detected by implantable cardioverter defibrillators (ICDs). ICDs also detect paroxysmal atrial fibrillation episodes (PAF) that result in rapid ventricular rates. In this same cohort of ICD patients, we assessed the association between ambient air pollution and episodes of PAF. Design: We performed a case–crossover study. Participants: Patients who lived in the Boston, Massachusetts, metropolitan area and who had ICDs implanted between June 1995 and December 1999 (n = 203) were followed until July 2002. Evaluations/Measurements: We used conditional logistic regression to explore the association between community air pollution and 91 electrophysiologist-confirmed episodes of PAF among 29 subjects. Results: We found a statistically significant positive association between episodes of PAF and increased ozone concentration (22 ppb) in the hour before the arrhythmia (odds ratio = 2.08; 95% confidence interval = 1.22, 3.54; p = 0.001). The risk estimate for a longer (24-hr) moving average was smaller, thus suggesting an immediate effect. Positive but not statistically significant risks were associated with fine particles, nitrogen dioxide, and black carbon. Conclusions: Increased ambient O3 pollution was associated with increased risk of episodes of rapid ventricular response due to PAF, thereby suggesting that community air pollution may be a precipitant of these events. PMID:16393668
Online tracking of instantaneous frequency and amplitude of dynamical system response
NASA Astrophysics Data System (ADS)
Frank Pai, P.
2010-05-01
This paper presents a sliding-window tracking (SWT) method for accurate tracking of the instantaneous frequency and amplitude of arbitrary dynamic response by processing only the three (or more) most recent data points. The Teager-Kaiser algorithm (TKA) is a well-known four-point method for online tracking of frequency and amplitude. Because finite differences are used in TKA, its accuracy is easily destroyed by measurement and/or signal-processing noise. Moreover, because TKA assumes the processed signal to be a pure harmonic, any moving average in the signal can destroy the accuracy of TKA. On the other hand, because SWT uses a constant and a pair of windowed regular harmonics to fit the data and estimate the instantaneous frequency and amplitude, the influence of any moving average is eliminated. Moreover, noise filtering is an implicit capability of SWT when more than three data points are used, and this capability increases with the number of processed data points. To compare the accuracy of SWT and TKA, the Hilbert-Huang transform is used to extract accurate time-varying frequencies and amplitudes by processing the whole data set without assuming the signal to be harmonic. Frequency and amplitude tracking of different amplitude- and frequency-modulated signals, vibrato in music, and nonlinear stationary and non-stationary dynamic signals is studied. Results show that SWT is more accurate, robust, and versatile than TKA for online tracking of frequency and amplitude.
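To make the SWT idea concrete: fit a constant plus one harmonic to the most recent samples by least squares; the constant term absorbs any moving average, which is why SWT is immune to it. The grid search over frequency below is a simplification of the paper's method; names and the grid are assumptions.

```python
import numpy as np

def swt_estimate(t, x, freqs):
    """Fit c + a*cos(2*pi*f*t) + b*sin(2*pi*f*t) to the window (t, x) by
    linear least squares for each candidate frequency f, keeping the best
    fit. Returns the estimated instantaneous frequency and amplitude."""
    best = (None, None, np.inf)
    for f in freqs:
        A = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        err = np.sum((A @ coef - x) ** 2)
        if err < best[2]:
            best = (f, np.hypot(coef[1], coef[2]), err)  # amplitude = |a+jb|
    return best[0], best[1]
```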
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we have proposed a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. It is a batch-type estimator based on wavelet filter banks that extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, which are acquired from numerical differentiation of the position sensor and numerical integration of the acceleration sensor, by feeding a fixed moving-horizon window into the wavelet filter. Because wavelet filters are used, the method can be implemented in a parallel procedure. In this way the numerical velocity is estimated without the high noise of differentiators or the drift bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems using Direct Velocity Feedback (DVF) methods. This method allows us to build velocity sensors with fewer mechanically moving parts, which makes it suitable for fast miniature structures. We have compared this method with Kalman and Butterworth filters with respect to stability and delay, and benchmarked them by long-time velocity integration to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
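As a rough stand-in for the wavelet filter banks, the sketch below fuses the two velocity resolutions with a first-order complementary filter: differentiated position supplies the low-frequency content and integrated acceleration the high-frequency content. The crossover time constant is an assumption, and this is not the paper's wavelet formulation.

```python
import numpy as np

def complementary_velocity(position, accel, dt, tau=0.1):
    """Complementary fusion of two velocity estimates:
    - v_diff (differentiated position): unbiased at low frequency, noisy;
    - v_int (integrated acceleration): smooth, but drifts at low frequency.
    The filter tracks v_int changes and corrects slowly towards v_diff."""
    alpha = tau / (tau + dt)
    v_diff = np.gradient(position, dt)
    v_int = np.cumsum(accel) * dt
    v = np.zeros_like(v_diff)
    for k in range(1, len(v)):
        v[k] = alpha * (v[k-1] + v_int[k] - v_int[k-1]) \
               + (1 - alpha) * v_diff[k]
    return v
```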
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction, and measuring the impact of interventions, approximations by the commonly used Gaussian methods are prone to inaccuracies when case counts are low. Statistical methods appropriate for count data are therefore required, especially during the "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious, observation-driven modelling of non-Gaussian, non-stationary, and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance, and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA model and a GARIMA model with deterministic seasonality were selected, based on different criteria. Posterior predictive distributions indicated that the negative-binomial models provided better predictions than Gaussian models, especially when counts were low, and the G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control efforts are intensified. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than Gaussian methods and may be more suitable when counts are low.
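The authors fit Bayesian negative-binomial GSARIMA models. As a rough frequentist stand-in for the observation-driven idea, one can fit a negative-binomial GLM whose log-mean combines a trend, seasonal harmonics, and a lagged log-count term; the toy data, dispersion value, and variable names below are illustrative only, not the paper's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy monthly counts with seasonality and a declining trend (illustrative).
months = np.arange(120)
mu = np.exp(2.5 - 0.02 * months + 0.8 * np.sin(2 * np.pi * months / 12))
cases = rng.negative_binomial(n=5, p=5 / (5 + mu))

df = pd.DataFrame({"cases": cases, "t": months})
df["sin12"] = np.sin(2 * np.pi * df["t"] / 12)   # annual harmonic pair
df["cos12"] = np.cos(2 * np.pi * df["t"] / 12)
df["lag_log"] = np.log1p(df["cases"].shift(1))   # observation-driven AR term
df = df.dropna()

X = sm.add_constant(df[["t", "sin12", "cos12", "lag_log"]])
fit = sm.GLM(df["cases"], X,
             family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(fit.summary().tables[1])
```

A full GSARIMA treatment would add differencing for non-stationarity and place priors on the parameters; the sketch only mirrors the observation-driven mean structure that makes such models suitable for low counts.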
DARK MATTER MASS FRACTION IN LENS GALAXIES: NEW ESTIMATES FROM MICROLENSING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiménez-Vicente, J.; Mediavilla, E.; Kochanek, C. S.
2015-02-01
We present a joint estimate of the stellar/dark matter mass fraction in lens galaxies and the average size of the accretion disk of lensed quasars, based on microlensing measurements of 27 quasar image pairs seen through 19 lens galaxies. The Bayesian estimate for the fraction of the surface mass density in the form of stars is $\alpha = 0.21 \pm 0.14$ near the Einstein radius of the lenses (∼1-2 effective radii). The estimate for the average accretion disk size is $R_{1/2} = 7.9^{+3.8}_{-2.6}\,\sqrt{M/(0.3\,M_\odot)}$ light-days. The fraction of mass in stars at these radii is significantly larger than previous estimates from microlensing studies that assumed quasars were point-like. The corresponding local dark matter fraction of 79% is in good agreement with other estimates based on strong lensing or kinematics. The size of the accretion disk inferred in the present study is slightly larger than previous estimates.
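A small numerical check of the quoted figures, using only the abstract's own values; the mean microlens masses M below are assumed choices for illustration.

```python
import math

alpha = 0.21                  # stellar fraction of the surface mass density
dm_fraction = 1.0 - alpha     # local dark matter fraction near ~1-2 R_eff
print(f"dark matter fraction ≈ {dm_fraction:.0%}")   # ≈ 79%

# Disk half-light radius scales with the assumed mean microlens mass M:
# R_1/2 = 7.9 * sqrt(M / 0.3 M_sun) light-days.
for M in (0.1, 0.3, 1.0):     # M in solar masses (illustrative values)
    r = 7.9 * math.sqrt(M / 0.3)
    print(f"M = {M:.1f} M_sun -> R_1/2 ≈ {r:.1f} light-days")
```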