Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo
2016-01-01
On urban arterials, travel time estimation is challenging, especially when it draws on multiple data sources. Fusing loop detector data and probe vehicle data to estimate travel time is particularly troublesome because the data can be uncertain, imprecise, and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first estimated separately from loop detector data and from probe vehicle data, and Bayesian fusion is then applied to fuse the two estimates. Next, iterative Bayesian estimation is proposed to improve on single-pass Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method, and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better under lighter traffic flows, when travel time variability is higher than in other periods. PMID:27362654
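[Editor's illustration] The mechanics described in this abstract can be sketched compactly if the two pre-estimates are treated as Gaussians, in which case Bayesian fusion reduces to precision weighting; the substitution and range-restriction strategies then iterate on the fused value. All variable names, bounds, and tolerances below are invented for illustration, not the authors' implementation.

```python
import numpy as np

def bayesian_fuse(t1, var1, t2, var2):
    """Precision-weighted (Gaussian) fusion of two travel-time estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * t1 + w2 * t2) / (w1 + w2), 1.0 / (w1 + w2)

def iterative_bayesian(t_loop, var_loop, t_probe, var_probe,
                       t_min, t_max, tol=0.01, max_iter=50):
    """Iterate the fusion, replacing the less accurate input with the
    current fused value, and keep the estimate in a plausible range."""
    for _ in range(max_iter):
        t_fused, var_fused = bayesian_fuse(t_loop, var_loop, t_probe, var_probe)
        t_fused = min(max(t_fused, t_min), t_max)   # range-restriction condition
        # substitution strategy: overwrite the higher-variance estimate
        if var_loop > var_probe:
            if abs(t_fused - t_loop) < tol:
                break
            t_loop, var_loop = t_fused, var_fused
        else:
            if abs(t_fused - t_probe) < tol:
                break
            t_probe, var_probe = t_fused, var_fused
    return t_fused

# e.g. loop detector says 95 s (var 60), probes say 110 s (var 25), bounds 60-180 s
print(iterative_bayesian(95.0, 60.0, 110.0, 25.0, 60.0, 180.0))
```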
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of the time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed to obtain the estimate of the time-frequency signature. Subsequently, a subspace method for DOA estimation based on the STFD matrix is derived. Simulation results demonstrate the validity of the proposed method. PMID:27382610
NASA Astrophysics Data System (ADS)
Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo
2016-07-01
The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must determine the time since death reliably. Reliability can only be established empirically, by statistical analysis of errors in field studies. Determining the time since death requires calculating measurable data back along a time-dependent curve to its starting point. Various methods are used to estimate the time since death. The current gold standard is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce its margin of error, a compound method was developed based on the electrical and mechanical excitability of skeletal muscle, the pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of the terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as 1H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
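[Editor's illustration] The two-exponential cooling model underlying the nomogram can be inverted numerically. The sketch below uses the published Henssge parameterization for standard cooling conditions with ambient temperatures up to roughly 23 °C; it is an illustration of the model's arithmetic, not a forensic tool, and omits the corrective factors the nomogram method applies in practice.

```python
import numpy as np
from scipy.optimize import brentq

def henssge_Q(t, mass_kg):
    """Standardized temperature Q(t) of the two-exponential cooling model
    (standard conditions, ambient <= ~23 C); t in hours post-mortem."""
    B = -1.2815 * mass_kg ** -0.625 + 0.0284
    return 1.25 * np.exp(B * t) - 0.25 * np.exp(5 * B * t)

def time_since_death(t_rectal, t_ambient, mass_kg):
    """Invert Q(t) = (T_rectal - T_ambient) / (37.2 - T_ambient) for t."""
    q = (t_rectal - t_ambient) / (37.2 - t_ambient)
    return brentq(lambda t: henssge_Q(t, mass_kg) - q, 1e-3, 100.0)

# e.g. rectal 29 C, ambient 18 C, 75 kg body -> roughly 13 hours
print(round(time_since_death(29.0, 18.0, 75.0), 1), "hours")
```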
Hardware design and implementation of fast DOA estimation method based on multicore DSP
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-10-01
In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The platform offers several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage, and high-speed data transmission, which together allow it to meet the constraints of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps, and the time consumed by each step is measured. Based on these timing statistics, we present a new parallel processing strategy that distributes the DOA estimation task across the cores of the platform. Experimental results demonstrate that the processing capability of the platform meets the constraints of real-time DOA estimation.
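[Editor's illustration] For background, the standard complex-valued MUSIC spectrum for a uniform linear array is sketched below; the paper's contribution is a cheaper real-valued variant and its parallelization across DSP cores, neither of which this sketch reproduces.

```python
import numpy as np

def music_spectrum(X, n_sources, scan_deg, d_over_lambda=0.5):
    """Standard MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex baseband snapshots."""
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvec[:, : n - n_sources]          # noise subspace
    p = []
    for theta in np.deg2rad(scan_deg):
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# peaks of music_spectrum(X, n_sources=2, scan_deg=np.arange(-90, 90, 0.5))
# give the DOA estimates
```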
On estimation of time-dependent attributable fraction from population-based case-control studies.
Zhao, Wei; Chen, Ying Qing; Hsu, Li
2017-09-01
Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer.
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
Time-frequency analysis represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined by simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the estimation bias caused by the non-stationary characteristics of the MT data.
Berke, Ethan M; Shi, Xun
2009-04-29
Travel time is an important metric of geographic access to health care. We compared strategies for estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to the nearest cancer centers by using: 1) geometric centroids of ZIP code polygons as origins; 2) population centroids as origins; 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code; and 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate the estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.
A new methodology for estimating nuclear casualties as a function of time.
Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A
2011-09-01
The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
NASA Astrophysics Data System (ADS)
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the time; instead, the Z values in n continuous years are assumed constant, and the Z values in different n-year windows are estimated using the age-based CPUE data within those years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
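[Editor's illustration] For context, the classical catch-curve estimate that this method extends with a moving time window fits a straight line to log CPUE over the fully recruited ages; Z is the negative slope. A minimal sketch with invented cohort data:

```python
import numpy as np

def catch_curve_Z(ages, cpue):
    """Classical catch curve: Z = -slope of ln(CPUE) vs age on the
    descending (fully recruited) limb of the curve."""
    ages, cpue = np.asarray(ages, float), np.asarray(cpue, float)
    peak = int(np.argmax(cpue))                 # start of descending limb
    slope, _ = np.polyfit(ages[peak:], np.log(cpue[peak:]), 1)
    return -slope

# illustrative cohort: full recruitment at age 2, then ~33% decline per year
ages = [1, 2, 3, 4, 5, 6, 7]
cpue = [30, 120, 80, 54, 36, 24, 16]
print(catch_curve_Z(ages, cpue))   # ~0.41 = -ln(2/3)
```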
Russian national time scale long-term stability
NASA Astrophysics Data System (ADS)
Alshina, A. P.; Gaigerov, B. A.; Koshelyaevsky, N. B.; Pushkin, S. B.
1994-05-01
The Institute of Metrology for Time and Space, NPO 'VNIIFTRI', generates the National Time Scale (NTS) of Russia, one of the most stable time scales in the world. Its striking feature is that it is based on a free ensemble of H-masers only. Over the last two years, estimates of NTS long-term stability based only on H-maser intercomparison data give a flicker floor of about (2 to 3) x 10^-15 for averaging times from 1 day to 1 month. Perhaps the most significant feature for a time laboratory is an extremely low possible frequency drift, which is too small to estimate reliably. Other estimates, free from possible correlations within the ensemble, are available from comparisons of the NTS against sufficiently stable time scales of outside laboratories. Comparisons of the NTS with the time scales of the secondary time and frequency standards at Golitzino and Irkutsk in Russia, and with NIST, PTB, and USNO via GLONASS and GPS time transfer links, give stability estimates close to those based on the H-maser intercomparisons.
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT-based spectrum estimation with Lomb-Scargle transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than N log(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to estimation performance comparable to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
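[Editor's illustration] The RFT itself is the authors' contribution and is not reproduced here, but the Lomb-Scargle baseline they compare against is available in SciPy. A minimal sketch of LST-based PSD estimation on nonuniformly spaced RR intervals, with the beat times simulated as an assumption:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# simulate ~0.1 Hz modulated RR intervals at irregular beat times
t, rr = [0.0], []
while t[-1] < 300.0:                      # five minutes of beats
    ibi = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t[-1]) \
          + 0.01 * rng.standard_normal()
    rr.append(ibi)
    t.append(t[-1] + ibi)
t, rr = np.array(t[:-1]), np.array(rr)

freqs = np.linspace(0.01, 0.5, 500)       # Hz, spans the LF and HF bands
psd = lombscargle(t, rr - rr.mean(), 2 * np.pi * freqs)
print(freqs[np.argmax(psd)])              # ~0.1 Hz modulation recovered
```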
Effects of Learned Episodic Event Structure on Prospective Duration Judgments
ERIC Educational Resources Information Center
Faber, Myrthe; Gennari, Silvia P.
2017-01-01
The field of psychology of time has typically distinguished between prospective timing and retrospective duration estimation: in prospective timing, participants attend to and encode time, whereas in retrospective estimation, estimates are based on the memory of what happened. Prior research on prospective timing has primarily focused on…
NASA Astrophysics Data System (ADS)
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, and related tasks for operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a modal parameter estimator for linear time-varying structural systems based on a vector time-dependent autoregressive model and least squares support vector machine, for the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator, replacing the time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and handling short data records. A laboratory experiment further validates the proposed estimator.
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view of sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
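[Editor's illustration] The block-match step can be illustrated with a minimal exhaustive-search sum-of-absolute-differences (SAD) matcher; block size and search radius are arbitrary choices, and the CS acquisition and recovery pipeline around it is not shown.

```python
import numpy as np

def block_match(ref, cur, block=8, radius=4):
    """Exhaustive SAD block matching: one motion vector (dy, dx) per block
    of `cur` relative to `ref`. Returns an (rows, cols, 2) integer array."""
    ref = ref.astype(float)
    H, W = cur.shape
    mv = np.zeros((H // block, W // block, 2), dtype=int)
    for bi, y in enumerate(range(0, H - block + 1, block)):
        for bj, x in enumerate(range(0, W - block + 1, block)):
            tgt = cur[y:y + block, x:x + block].astype(float)
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        sad = np.abs(ref[yy:yy + block, xx:xx + block] - tgt).sum()
                        if sad < best:
                            best, best_dxy = sad, (dy, dx)
            mv[bi, bj] = best_dxy
    return mv
```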
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
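[Editor's note] For reference, the classical two-subclass CIR estimator that these models generalize can be written as follows, where p_1 and p_2 are the proportions of subclass x in the population before and after the removal period, R_x is the number of subclass-x animals removed, R is the total number removed, and N_1 is the pre-removal population size. The notation here is the editor's, not the paper's.

```latex
\hat{N}_1 = \frac{R_x - p_2 R}{p_1 - p_2}
```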
NASA Technical Reports Server (NTRS)
Engelland, Shawn A.; Capps, Alan
2011-01-01
Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
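[Editor's illustration] A minimal sketch of one Sequential Monte Carlo (bootstrap particle filter) assimilation cycle, with a generic transition model and a Gaussian sensor likelihood standing in for the paper's graph-based agent-oriented model; every function below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

def assimilate(particles, step_model, sensor_data, likelihood):
    """One bootstrap-particle-filter cycle: predict, weight, resample.
    particles: (N, dim) occupancy states; likelihood(s, d) -> p(d | s)."""
    particles = np.array([step_model(p) for p in particles])       # predict
    w = np.array([likelihood(p, sensor_data) for p in particles])  # weight
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)     # resample
    return particles[idx]

# placeholder models: random-walk occupant counts, Gaussian sensor noise
step = lambda s: s + rng.integers(-1, 2, size=s.shape)
lik = lambda s, d: np.exp(-0.5 * ((s - d) ** 2).sum() / 4.0)

parts = rng.integers(0, 10, size=(500, 3))        # 500 particles, 3 zones
parts = assimilate(parts, step, np.array([4, 2, 7]), lik)
print(parts.mean(axis=0))                         # posterior-mean occupancy
```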
Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation
NASA Astrophysics Data System (ADS)
Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou
2018-06-01
Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than that of fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.
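[Editor's illustration] A minimal Yule-Walker AR spectral estimator of the general kind the abstract refers to; the Brillouin frequency shift would be read off as the location of the spectral peak. Model order and normalized sampling are placeholder choices, not the paper's settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_psd(x, order, nfreq=512):
    """Yule-Walker AR(p) power spectral density estimate (fs = 1)."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, "full")[len(x) - 1:] / len(x)   # autocorrelation
    a = solve_toeplitz(r[:order], r[1:order + 1])          # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                     # innovation variance
    f = np.linspace(0, 0.5, nfreq)
    z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
    return f, sigma2 / np.abs(1 - z @ a) ** 2

# f, psd = ar_psd(trace, order=8); f[np.argmax(psd)] estimates the
# (normalized) Brillouin frequency shift
```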
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of the Filippov solution, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression for the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results.
Takeuchi, Yoshinori; Shinozaki, Tomohiro; Matsuyama, Yutaka
2018-01-08
Despite the frequent use of self-controlled methods in pharmacoepidemiological studies, the factors that may bias the estimates from these methods have not been adequately compared in real-world settings. Here, we comparatively examined the impact of a time-varying confounder and its interactions with time-invariant confounders, time trends in exposures and events, restrictions, and misspecification of risk period durations on the estimators from three self-controlled methods. This study analyzed self-controlled case series (SCCS), case-crossover (CCO) design, and sequence symmetry analysis (SSA) using simulated and actual electronic medical records datasets. We evaluated the performance of the three self-controlled methods in simulated cohorts for the following scenarios: 1) time-invariant confounding with interactions between the confounders, 2) time-invariant and time-varying confounding without interactions, 3) time-invariant and time-varying confounding with interactions among the confounders, 4) time trends in exposures and events, 5) restricted follow-up time based on event occurrence, and 6) patient restriction based on event history. The sensitivity of the estimators to misspecified risk period durations was also evaluated. As a case study, we applied these methods to evaluate the risk of macrolides on liver injury using electronic medical records. In the simulation analysis, time-varying confounding produced bias in the SCCS and CCO design estimates, which aggravated in the presence of interactions between the time-invariant and time-varying confounders. The SCCS estimates were biased by time trends in both exposures and events. Erroneously short risk periods introduced bias to the CCO design estimate, whereas erroneously long risk periods introduced bias to the estimates of all three methods. Restricting the follow-up time led to severe bias in the SSA estimates. The SCCS estimates were sensitive to patient restriction. The case study showed that although macrolide use was significantly associated with increased liver injury occurrence in all methods, the value of the estimates varied. The estimations of the three self-controlled methods depended on various underlying assumptions, and the violation of these assumptions may cause non-negligible bias in the resulting estimates. Pharmacoepidemiologists should select the appropriate self-controlled method based on how well the relevant key assumptions are satisfied with respect to the available data.
Vandergoot, C.S.; Bur, M.T.; Powell, K.A.
2008-01-01
Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions.
Algorithms for Brownian first-passage-time estimation
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2009-09-01
A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
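[Editor's illustration] For contrast with the discrete-space algorithms considered here, a minimal Langevin-based (discrete time, continuous space) MFPT estimator of the kind the paper benchmarks against; the potential, step size, and trajectory count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def mfpt_langevin(force, x0, barrier, dt=1e-3, n_traj=500, kT=1.0):
    """Overdamped Langevin MFPT estimate: x += F(x) dt + sqrt(2 kT dt) N(0,1),
    with an absorbing boundary at `barrier`."""
    times = np.empty(n_traj)
    for k in range(n_traj):
        x, t = x0, 0.0
        while x < barrier:
            x += force(x) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
            t += dt
        times[k] = t
    return times.mean()

# linear potential U(x) = -0.5 x, i.e. constant drift toward the boundary;
# for constant drift v toward an absorbing level b, the exact MFPT is b/v = 2
print(mfpt_langevin(lambda x: 0.5, x0=0.0, barrier=1.0))
```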
ERIC Educational Resources Information Center
Kan, Man Yee
2008-01-01
This article compares stylised (questionnaire-based) estimates and diary-based estimates of housework time collected from the same respondents. Data come from the Home On-line Study (1999-2001), a British national household survey that contains both types of estimates (sample size = 632 men and 666 women). It shows that the gap between the two…
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
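[Editor's illustration] A minimal sketch of the iterative KNN idea: start from a rough fill, then repeatedly re-estimate each missing entry from the K most similar rows of the current completed matrix until the imputed values stabilize. The distance metric, K, and the initial column-mean fill are illustrative choices, not the published IKNNimpute settings.

```python
import numpy as np

def iknn_impute(X, k=5, max_iter=10, tol=1e-4):
    """Iterative KNN imputation of NaNs in a genes x samples matrix."""
    X = np.array(X, float)
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0, keepdims=True), X)  # initial fill
    for _ in range(max_iter):
        prev = filled.copy()
        for i in np.where(miss.any(axis=1))[0]:
            d = np.sqrt(((filled - filled[i]) ** 2).sum(axis=1))      # row distances
            d[i] = np.inf
            nn = np.argsort(d)[:k]                                    # k nearest rows
            filled[i, miss[i]] = filled[nn][:, miss[i]].mean(axis=0)  # re-estimate
        if np.abs(filled - prev).max() < tol:   # imputed values have stabilized
            break
    return filled
```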
Beckmann, Kerri; Duffy, Stephen W; Lynch, John; Hiller, Janet; Farshid, Gelareh; Roder, David
2015-09-01
To estimate over-diagnosis due to population-based mammography screening using a lead time adjustment approach, with lead time measures based on symptomatic cancers only. Women aged 40-84 in 1989-2009 in South Australia eligible for mammography screening. Numbers of observed and expected breast cancer cases were compared, after adjustment for lead time. Lead time effects were modelled using age-specific estimates of lead time (derived from interval cancer rates and predicted background incidence, using maximum likelihood methods) and screening sensitivity, projected background breast cancer incidence rates (in the absence of screening), and proportions screened, by age and calendar year. Lead time estimates were 12, 26, 43 and 53 months, for women aged 40-49, 50-59, 60-69 and 70-79 respectively. Background incidence rates were estimated to have increased by 0.9% and 1.2% per year for invasive and all breast cancer. Over-diagnosis among women aged 40-84 was estimated at 7.9% (0.1-12.0%) for invasive cases and 12.0% (5.7-15.4%) when including ductal carcinoma in-situ (DCIS). We estimated 8% over-diagnosis for invasive breast cancer and 12% inclusive of DCIS cancers due to mammography screening among women aged 40-84. These estimates may overstate the extent of over-diagnosis if the increasing prevalence of breast cancer risk factors has led to higher background incidence than projected.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
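[Editor's illustration] The Dempster-Shafer fusion step can be made concrete on a small discrete frame. Below, evidence from point and interval detectors is represented as mass functions over discretized travel-time states and combined with Dempster's rule; the three-state frame and all mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                     # mass on empty intersection
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# frame of discernment: travel time is "short", "medium", or "long"
S, M, L = frozenset("S"), frozenset("M"), frozenset("L")
point_evidence = {S: 0.5, M: 0.3, S | M | L: 0.2}   # e.g. from point detectors
path_evidence = {M: 0.6, L: 0.2, S | M | L: 0.2}    # e.g. from interval detectors
print(dempster_combine(point_evidence, path_evidence))
```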
Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher
2013-07-01
Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21 intellectually high-functioning children with ASD, and 21 age- and IQ-matched neurotypical comparison children. We found impaired time-based, but undiminished event-based, prospective memory among children with ASD. In the ASD group, time-based prospective memory performance was associated significantly with diminished theory of mind, but not with diminished cognitive flexibility. There was no evidence that time-estimation ability contributed to time-based prospective memory impairment in ASD.
An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat
Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.
2016-01-01
Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. Median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an Iot Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies can help avoid a crash or reduce its severity, such as Roll Stability Control (RSC) systems for commercial vehicles. In RSC, several critical variables, such as the sideslip or roll angle, can only be measured directly using expensive equipment that would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle values can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives of this research work are to integrate roll angle estimators based on linear and Unscented Kalman filters, to evaluate the precision of the results obtained, and to determine whether the hard real-time processing constraints are met when embedding this kind of estimator in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed composed of a van with two sets of low-cost kits was set up, the first including a Raspberry Pi 3 Model B and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the time needed to acquire the data and execute the Kalman-filter-based estimations fulfills hard real-time constraints.
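[Editor's illustration] A minimal sketch of a linear Kalman filter of the kind such estimators build on: a two-state model (roll angle, gyro bias) propagated with the roll-rate gyro and corrected with an accelerometer-derived roll angle. All noise covariances and the measurement model are illustrative, and the paper's Unscented variant for the nonlinear vehicle model is not reproduced.

```python
import numpy as np

def roll_kf(gyro_p, acc_roll, dt=0.01, q=(1e-4, 1e-6), r=0.05):
    """x = [roll_angle, gyro_bias]; predict with gyro, update with accel."""
    x, P = np.zeros(2), np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle integrates (rate - bias)
    Q, H = np.diag(q), np.array([[1.0, 0.0]])
    out = []
    for w, z in zip(gyro_p, acc_roll):
        # predict: integrate the bias-corrected roll rate
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # update with the accelerometer-derived roll angle measurement z
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# roll_kf(gyro_rate_samples, accel_roll_samples) -> filtered roll angle trace
```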
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do that, the score function must be estimated from samples of the observation signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on Gaussian-mixture-based kernel density estimation. Experimental results on speech-music separation, compared with a separating algorithm based on the minimum mean square error estimator, indicate that the presented algorithm achieves better performance with less processing time.
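[Editor's illustration] The natural gradient update at the heart of such separation algorithms is compact: W <- W + mu (I - phi(y) y^T) W. The sketch below uses a fixed tanh score surrogate in place of the paper's Gaussian-mixture kernel density estimate of the score function; the signals and mixing matrix are synthetic.

```python
import numpy as np

def natural_gradient_bss(X, lr=0.01, n_iter=200):
    """Natural-gradient ICA: W <- W + lr * (I - phi(Y) Y^T / T) W,
    with phi = tanh as a generic super-Gaussian score surrogate.
    X: (n_sources, n_samples) zero-mean mixtures."""
    n = X.shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        phi = np.tanh(Y)
        W += lr * (np.eye(n) - phi @ Y.T / X.shape[1]) @ W
    return W @ X, W

# demo with two super-Gaussian surrogate sources mixed by a known matrix
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
s = np.vstack([rng.laplace(size=t.size) * np.sin(2 * np.pi * 3 * t),
               rng.laplace(size=t.size)])
X = np.array([[0.8, 0.4], [0.3, 0.9]]) @ s
Y, W = natural_gradient_bss(X - X.mean(axis=1, keepdims=True))
```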
Strategies for Near Real Time Estimates of Precipitable Water Vapor from GPS Ground Receivers
NASA Technical Reports Server (NTRS)
Y., Bar-Sever; Runge, T.; Kroger, P.
1995-01-01
GPS-based estimates of precipitable water vapor (PWV) may be useful in numerical weather models to improve short-term weather predictions. To be effective in numerical weather prediction models, GPS PWV estimates must be produced with sufficient accuracy in near real time. Several estimation strategies for the near real time processing of GPS data are investigated.
Spectrum-based estimators of the bivariate Hurst exponent
NASA Astrophysics Data System (ADS)
Kristoufek, Ladislav
2014-12-01
We discuss two alternate spectrum-based estimators of the bivariate Hurst exponent in the power-law cross-correlations setting, the cross-periodogram and local X-Whittle estimators, as generalizations of their univariate counterparts. As the spectrum-based estimators are dependent on a part of the spectrum taken into consideration during estimation, a simulation study showing performance of the estimators under varying bandwidth parameter as well as correlation between processes and their specification is provided as well. These estimators are less biased than the already existent averaged periodogram estimator, which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
Compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control to render a reliable estimation of copy number with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control-integrated statistical models are presented. In the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second experimental design, standard curves are generated for both an internal reference gene and the transgene, and the copy number of the transgene is compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on the fluorescence data. Two different multiple regression models are proposed to analyze the data, based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods make real-time PCR-based transgene copy number estimation more reliable and precise, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
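[Editor's illustration] The comparative-Ct arithmetic at the core of the second design can be made concrete. Assuming perfect doubling per cycle (efficiency 2), which the paper's regression models generalize, the transgene copy number of a putative event relative to a known single-copy calibrator is estimated as below; all Ct values are invented.

```python
# delta-delta-Ct copy number estimate (amplification efficiency assumed = 2)
ct_transgene_sample, ct_reference_sample = 22.1, 20.0   # putative event
ct_transgene_calib, ct_reference_calib = 23.0, 20.1     # known 1-copy calibrator

d_ct_sample = ct_transgene_sample - ct_reference_sample   # 2.1 cycles
d_ct_calib = ct_transgene_calib - ct_reference_calib      # 2.9 cycles
ratio = 2 ** -(d_ct_sample - d_ct_calib)                  # 2^0.8 ~ 1.74
print(f"estimated copies relative to calibrator: {ratio:.2f} -> likely 2 copies")
```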
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
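[Editor's illustration] A minimal sketch of the cluster-bootstrap SE for Cox coefficients, using the lifelines package for each refit; the two-step variant would additionally resample individuals within each drawn cluster. Column names and the number of replicates B are illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col, B=500, seed=0):
    """SE of Cox coefficients from resampling whole clusters with replacement.
    df must contain 'time', 'event', covariate columns, and `cluster_col`."""
    rng = np.random.default_rng(seed)
    ids = df[cluster_col].unique()
    coefs = []
    for _ in range(B):
        draw = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat(
            [df[df[cluster_col] == c] for c in draw], ignore_index=True
        ).drop(columns=cluster_col)                 # id must not enter the model
        cph = CoxPHFitter().fit(boot, duration_col="time", event_col="event")
        coefs.append(cph.params_.values)
    return np.std(np.vstack(coefs), axis=0, ddof=1)
```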
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. We propose an influence-function-based robust estimation method. A Monte Carlo expectation maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used, and robust estimates are compared with classical maximum likelihood estimates.
Temporal validation for Landsat-based volume estimation model
Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan
2015-01-01
Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...
Influence of hypo- and hyperthermia on death time estimation - A simulation study.
Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G
2017-09-01
Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in cases of elevated or lowered body core temperatures before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time, the error diminishes and stagnates at a nearly constant level.
Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi
2011-08-01
Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into six domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify the general characteristics of the unit, the nursing activities, and activity times, and a stochastic frontier model was adopted to estimate true activity times. The average efficiency of the medical unit based on theoretical resource capacity was 77%, whereas the efficiency based on practical resource capacity was 96%; accordingly, the portion of non-value-added time was estimated at 23% and 4%, respectively. Total nursing activity costs were estimated at 109,860,977 won under traditional activity-based costing and 84,427,126 won under time-driven activity-based costing, a difference of 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation than traditional activity-based costing, and it is therefore recommended as a performance evaluation framework for nursing departments based on cost management.
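[Editor's illustration] The core TDABC arithmetic behind such figures is a two-step calculation: a capacity cost rate (total resource cost divided by practical capacity) multiplied by the time each activity consumes. The numbers below are invented to mirror the structure of the study, not its data.

```python
# time-driven activity-based costing: cost = capacity cost rate x activity time
total_cost_won = 84_000_000          # monthly nursing resource cost (illustrative)
practical_capacity_min = 168_000     # staffed minutes x ~80% practical capacity

rate = total_cost_won / practical_capacity_min      # won per minute (= 500 here)
activity_minutes = {"direct care": 90_000, "documentation": 40_000,
                    "unit management": 25_000}
costs = {a: m * rate for a, m in activity_minutes.items()}
unused = total_cost_won - sum(costs.values())       # cost of idle capacity
print(costs, unused)
```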
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimation models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
Kalman Filters for Time Delay of Arrival-Based Source Localization
NASA Astrophysics Data System (ADS)
Klee, Ulrich; Gehrig, Tobias; McDonough, John
2006-12-01
In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
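The heart of the proposed method is the correction step in which observed TDOAs directly update the position estimate. Below is a minimal sketch of such an extended Kalman filter update, assuming known microphone positions and a nominal sound speed; the function names, the noise covariance R, and the omission of a motion-model prediction step are simplifications, not the paper's exact formulation.

```python
import numpy as np

C = 343.0  # nominal speed of sound in air, m/s (assumption)

def tdoa_model(p, mics):
    """Predicted TDOA of each microphone relative to mic 0 for source p."""
    d = np.linalg.norm(mics - p, axis=1)
    return (d[1:] - d[0]) / C

def tdoa_jacobian(p, mics):
    """Derivative of the predicted TDOAs with respect to the position p."""
    u = (p - mics) / np.linalg.norm(mics - p, axis=1)[:, None]
    return (u[1:] - u[0]) / C

def ekf_update(p, P, z, mics, R):
    """One EKF correction: observed TDOAs z update position p and cov P."""
    H = tdoa_jacobian(p, mics)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    p = p + K @ (z - tdoa_model(p, mics))
    P = (np.eye(len(p)) - K @ H) @ P
    return p, P
```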
Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp
2011-08-18
Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
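For reference, DTW-S builds on the classic DTW recurrence, sketched below; the features that distinguish DTW-S (interpolated time points, per-gene significance via simulation, and free end points) are not reproduced here.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW: cumulative cost of the optimal monotone alignment."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]
```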
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained as the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of video frames effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
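The described sparsity estimate can be sketched roughly as follows, assuming the PyWavelets package; the wavelet, decomposition level, and energy threshold are illustrative placeholders rather than the paper's settings.

```python
import numpy as np
import pywt

def estimate_sparsity(frame, energy_threshold=0.99, wavelet="db4", level=2):
    """Fraction of 2D DWT coefficients needed to capture the given
    share of the total energy (the 'dominant coefficient' proportion)."""
    coeffs = pywt.wavedec2(frame, wavelet=wavelet, level=level)
    flat = np.concatenate([coeffs[0].ravel()] +
                          [d.ravel() for lvl in coeffs[1:] for d in lvl])
    energy = flat ** 2
    energy /= energy.sum()                    # energy normalization
    sorted_energy = np.sort(energy)[::-1]     # descending order
    k = np.searchsorted(np.cumsum(sorted_energy), energy_threshold) + 1
    return k / flat.size                      # sparsity ratio

# The number of compressive measurements per block could then be chosen
# proportional to this ratio rather than set empirically.
```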
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Dick, Galina; Heise, Stefan; Balidakis, Kyriakos; Schmidt, Torsten; Wickert, Jens
2017-04-01
Ground-based GNSS (Global Navigation Satellite Systems) have efficiently been used since the 1990s as a meteorological observing system. Recently scientists have used GNSS time series of precipitable water vapor (PWV) for climate research, although these series may not yet be sufficiently long. In this work, we compare the trend estimated from GNSS time series with that estimated from European Center for Medium-Range Weather Forecasts reanalysis (ERA-Interim) data and meteorological measurements. We aim at evaluating climate evolution in Central Europe by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: 1) estimated from ground-based GNSS observations using the method of precise point positioning, 2) inferred from ERA-Interim data, and 3) determined based on daily surface measurements of temperature and relative humidity. The other variables are available from surface meteorological stations or obtained from ERA-Interim. The PWV trend component estimated from GNSS data strongly correlates (>70%) with that estimated from the other data sets. The linear trend is estimated by straight-line fitting over 30 years of seasonally adjusted PWV time series obtained from the meteorological measurements. The results show a positive trend in the PWV time series, with an increase of 0.2-0.7 mm/decade and a mean standard deviation of 0.016 mm/decade. In this paper, we present the results at three GNSS stations. The temporal increase in PWV correlates with the temporal increase in temperature levels.
Bio-inspired vision based robot control using featureless estimations of time-to-contact.
Zhang, Haijie; Zhao, Jianguo
2017-01-31
Marvelous vision-based dynamic behaviors of insects and birds, such as perching, landing, and obstacle avoidance, have inspired scientists to propose the idea of time-to-contact, which is defined as the time for a moving observer to contact an object or surface if the current velocity is maintained. Since, with only a vision sensor, time-to-contact can be directly estimated from consecutive images, it is widely used by a variety of robots to fulfill tasks such as obstacle avoidance, docking, chasing, perching and landing. However, most existing methods to estimate the time-to-contact need to extract and track features during the control process, which is time-consuming and cannot be applied to robots with limited computation power. In this paper, we adopt a featureless estimation method, extend this method to more general settings with angular velocities, and improve the estimation results using Kalman filtering. Further, we design an error-based controller with a gain scheduling strategy to control the motion of mobile robots. Experiments for both estimation and control are conducted using a customized mobile robot platform with low-cost embedded systems. Onboard experimental results demonstrate the effectiveness of the proposed approach, with the robot being controlled to successfully dock in front of a vertical wall. The estimation and control methods presented in this paper can be applied to computation-constrained miniature robots for agile locomotion such as landing, docking, or navigation.
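For intuition, one classic featureless formulation (Horn's direct brightness-gradient method, for pure translation toward a frontal surface) estimates time-to-contact from image derivatives alone. The sketch below is that textbook special case, not the paper's extension to angular velocities or its Kalman filtering stage.

```python
import numpy as np

def time_to_contact(frame0, frame1, dt=1.0):
    """Featureless TTC for pure translation toward a frontal surface."""
    E0 = frame0.astype(float)
    Ey, Ex = np.gradient(E0)                 # spatial brightness derivatives
    Et = (frame1.astype(float) - E0) / dt    # temporal brightness derivative
    h, w = E0.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x - w / 2.0                          # coordinates relative to the
    y = y - h / 2.0                          # principal point (assumed center)
    G = x * Ex + y * Ey                      # radial gradient
    return -np.sum(G * G) / np.sum(G * Et)   # TTC in units of dt
```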
Statistical properties of Fourier-based time-lag estimates
NASA Astrophysics Data System (ADS)
Epitropakis, A.; Papadakis, I. E.
2016-06-01
Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e., close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e., based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e., whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e., binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳ 20 ks, the time-lag bias is ≲15% of its intrinsic value for the model cross-spectra and power-spectra considered in this work. This bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
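Recommendation c), averaging cross-periodograms over m data segments and converting the cross-spectrum phase to a lag at each frequency, can be sketched as follows; the sign convention and the segment count are assumptions to be matched to the analysis at hand.

```python
import numpy as np

def time_lag_spectrum(x, y, dt, m=20):
    """Average cross-periodograms over m segments; phase -> time lag."""
    n = min(len(x), len(y)) // m
    cross = 0.0
    for k in range(m):
        xs = x[k * n:(k + 1) * n]
        ys = y[k * n:(k + 1) * n]
        X = np.fft.rfft(xs - np.mean(xs))
        Y = np.fft.rfft(ys - np.mean(ys))
        cross = cross + np.conj(X) * Y       # summed cross-periodogram
    freqs = np.fft.rfftfreq(n, dt)
    lags = np.angle(cross[1:]) / (2 * np.pi * freqs[1:])
    return freqs[1:], lags
```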
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
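A toy sketch of autoregressive imputation conveys the core idea; ARLSimpute additionally pools information across similar genes and handles entirely missing columns, which this sketch does not, and the least-squares AR fit on the concatenated observed values is a simplification.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR(p) fit (regression form of Yule-Walker)."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                         for k in range(order)])
    return np.linalg.lstsq(X, x[order:], rcond=None)[0]

def ar_impute(series, order=3):
    """Replace NaNs in a 1-D float array with one-step AR forecasts.
    Assumes each missing point has `order` observed or already-imputed
    predecessors (NaNs are processed in time order)."""
    x = series.copy()
    coeffs = fit_ar(x[~np.isnan(x)], order)     # fit on observed values
    for t in np.flatnonzero(np.isnan(x)):
        x[t] = np.dot(coeffs, x[t - order:t][::-1])
    return x
```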
Method for determining waveguide temperature for acoustic transceiver used in a gas turbine engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeSilva, Upul P.; Claussen, Heiko; Ragunathan, Karthik
A method for determining waveguide temperature for at least one waveguide of a transceiver utilized for generating a temperature map. The transceiver generates an acoustic signal that travels through a measurement space in a hot gas flow path defined by a wall such as in a combustor. The method includes calculating a total time of flight for the acoustic signal and subtracting a waveguide travel time from the total time of flight to obtain a measurement space travel time. A temperature map is calculated based on the measurement space travel time. An estimated wall temperature is obtained from the temperature map. An estimated waveguide temperature is then calculated based on the estimated wall temperature, wherein the estimated waveguide temperature is determined without the use of a temperature sensing device.
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time-lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed to estimate signal delay with greater accuracy than the sample interval of the digitizer clock, with some success. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beamforming techniques, such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In the process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex-valued signals.
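The idea of refining an integer-lag correlation peak with phase information can be illustrated by a standard phase-slope refinement over the cross-spectrum (a related, widely used technique; the exact PBDEM zero-crossing interpolation is not reproduced here):

```python
import numpy as np

def estimate_delay(x, y, fit_band=0.25):
    """Integer-lag correlation peak, refined via cross-spectrum phase slope."""
    n = len(x)
    X, Y = np.fft.fft(x), np.fft.fft(y)
    cross = np.conj(X) * Y
    cc = np.fft.ifft(cross)                        # circular cross-correlation
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:
        lag -= n                                   # map to a signed integer lag
    f = np.fft.fftfreq(n)                          # frequency in cycles/sample
    band = (f > 0) & (f < fit_band)                # low-frequency fitting band
    resid = cross * np.exp(2j * np.pi * f * lag)   # remove the integer lag
    slope = np.polyfit(f[band], np.unwrap(np.angle(resid[band])), 1)[0]
    return lag - slope / (2 * np.pi)               # sub-sample delay estimate
```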
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Beyond Newton's law of cooling - estimation of time since death
NASA Astrophysics Data System (ADS)
Leinbach, Carl
2011-09-01
The estimate of the time since death and, thus, the time of death is strictly that, an estimate. However, the time of death can be an important piece of information in some coroner's cases, especially those that involve criminal or insurance investigations. It has been known almost from the beginning of time that bodies cool after the internal mechanisms such as circulation of the blood stop. A first attempt to link this phenomenon to the determination of the time of death used a crude linear relationship. Towards the end of the nineteenth century, Newton's law of cooling using body temperature data obtained by the coroner was used to make a more accurate estimate. While based on scientific principles and resulting in a better estimate, Newton's law does not really describe the cooling of a non-homogeneous human body. This article will discuss a more accurate model of the cooling process based on the theoretical work of Marshall and Hoare and the laboratory-based statistical work of Claus Henssge. Using DERIVE®6.10 and the statistical work of Henssge, the double exponential cooling formula developed by Marshall and Hoare will be explored. The end result is a tool that can be used in the field by coroner's scene investigators to determine a 95% confidence interval for the time since death and, thus, the time of death.
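For concreteness, the Marshall and Hoare double exponential with Henssge's commonly published parameterization (ambient temperature ≤ 23 °C, standard conditions, corrective factor 1) can be inverted numerically for the post-mortem interval; the constants below are the widely cited ones, and the nomogram's confidence-interval machinery discussed in the article is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

def henssge_q(t_hours, body_mass_kg):
    """Marshall-Hoare double exponential with Henssge's constants
    (ambient <= 23 C, standard conditions, corrective factor 1)."""
    B = -1.2815 * body_mass_kg ** (-0.625) + 0.0284   # per hour
    return 1.25 * np.exp(B * t_hours) - 0.25 * np.exp(5.0 * B * t_hours)

def time_since_death(t_rectal, t_ambient, body_mass_kg, t_start=37.2):
    """Invert Q(t) = (Tr - Ta) / (T0 - Ta) numerically; result in hours."""
    q_obs = (t_rectal - t_ambient) / (t_start - t_ambient)
    return brentq(lambda t: henssge_q(t, body_mass_kg) - q_obs, 0.01, 120.0)

# Example: time_since_death(30.0, 18.0, 70.0) gives roughly 11 hours.
```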
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Time cost of child rearing and its effect on women's uptake of free health checkups in Japan.
Anezaki, Hisataka; Hashimoto, Hideki
2018-05-01
Women of child-rearing age have the lowest uptake rates for health checkups in several developed countries. The time cost incurred by conflicting child-rearing roles may contribute to this gap in access to health checkups. We estimated the time cost of child rearing empirically, and analyzed its potential impact on uptake of free health checkups based on a sample of 1606 women with a spouse/partner from the dataset of a population-based survey conducted in the greater Tokyo metropolitan area in 2010. We used a selection model to estimate the counterfactual wage of non-working mothers, and estimated the number of children using a simultaneous equation model to account for the endogeneity between job participation and child rearing. The time cost of child rearing was obtained based on the estimated effects of women's wages and number of children on job participation. We estimated the time cost to mothers of rearing a child aged 0-3 years as 16.9 USD per hour, and the cost for a child aged 4-5 years as 15.0 USD per hour. Based on this estimation, the predicted uptake rate of women who did not have a child was 61.7%, while the predicted uptake rates for women with a child aged 0-3 and 4-5 were 54.2% and 58.6%, respectively. These results suggest that, although Japanese central/local governments provide free health checkup services, this policy does not fully compensate for the time cost of child rearing. It is strongly recommended that policies should be developed to address the time cost of child rearing, with the aim of closing the gender gap and securing universal access to preventive healthcare services in Japan. Copyright © 2018. Published by Elsevier Ltd.
Li, Zhenghan; Li, Xinyang
2018-04-30
Real-time transverse wind estimation contributes to predictive correction, which is used to compensate for the time delay error in the control systems of adaptive optics (AO) systems. Many methods that apply the Shack-Hartmann wave-front sensor to wind profile measurement have been proposed. One obvious problem is the lack of a fundamental benchmark against which to compare the various methods. In this work, we present the fundamental performance limits for transverse wind estimators from Shack-Hartmann wave-front sensor measurements using the Cramér-Rao lower bound (CRLB). The bound provides insight into the nature of transverse wind estimation, thereby suggesting how to design and improve the estimator in different application scenarios. We analyze the theoretical bound and find that factors such as slope measurement noise, wind velocity and the atmospheric coherence length r0 have an important influence on performance. We then introduce the non-iterative gradient-based transverse wind estimator. The source of the deterministic bias of gradient-based transverse wind estimators is analyzed for the first time. Finally, we derive the biased CRLB for gradient-based transverse wind estimators from Shack-Hartmann wave-front sensor measurements, and this bound can predict the performance of the estimator more accurately.
A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.
Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang
2015-11-13
Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a means to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or ranges between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the ranges to each neighboring vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel, and they show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
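The three steps can be sketched as follows; the group size and the mapping from skewness to threshold are stand-ins (the abstract does not specify the exact rule), so this is an illustration of the structure rather than the paper's algorithm.

```python
import numpy as np
from scipy.stats import skew

def toa_estimate(rx, preamble, group=4, k=0.6):
    """Cross-correlate with the short preamble, sum in groups, then pick
    the first group over a skewness-scaled dynamic threshold."""
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    n = len(corr) // group
    g = corr[:n * group].reshape(n, group).sum(axis=1)    # group sums
    s = skew(g)                                           # energy-profile skew
    thr = g.max() * (1.0 - k / (1.0 + np.exp(-s)))        # stand-in mapping
    return int(np.flatnonzero(g >= thr)[0]) * group       # sample offset
```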
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2018-05-01
The dynamics of railway vehicles are strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors such as vehicle speed, wear and adhesion level and, moreover, it is nonlinear. As a consequence, the modelling and the observation of this kind of phenomenon are complex tasks but, at the same time, they constitute a fundamental step for the estimation of the adhesion level or for vehicle condition monitoring. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces based on an estimator design model that takes into account the nonlinearities of the interaction by means of a fitting model designed to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.
Borque, Paloma; Luke, Edward; Kollias, Pavlos
2016-05-27
Coincident profiling observations from Doppler lidars and radars are used to estimate the turbulence energy dissipation rate (ε) using three different data sources: (i) Doppler radar velocity (DRV), (ii) Doppler lidar velocity (DLV), and (iii) Doppler radar spectrum width (DRW) measurements. Likewise, the agreement between the derived ε estimates is examined at the cloud base height of stratiform warm clouds. Collocated ε estimates based on power spectra analysis of DRV and DLV measurements show good agreement (correlation coefficients of 0.86 and 0.78 for the two cases analyzed here) during both drizzling and nondrizzling conditions. This suggests that unified (below and above cloud base) time-height estimates of ε in cloud-topped boundary layer conditions can be produced. This also suggests that the eddy dissipation rate can be estimated throughout the cloud layer without the constraint that clouds be nonprecipitating. Eddy dissipation rate estimates based on DRW measurements compare well with the estimates based on Doppler velocity, but their performance deteriorates as precipitation-size particles are introduced into the radar volume and broaden the DRW values. Based on this finding, a methodology to estimate the Doppler spectrum broadening due to the spread of the drop size distribution is presented. Furthermore, the uncertainties in ε introduced by signal-to-noise conditions, the estimation of the horizontal wind, the selection of the averaging time window, and the presence of precipitation are discussed in detail.
NASA Astrophysics Data System (ADS)
Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi
2018-02-01
A new feedback symbol timing recovery technique using timing estimation jointly with equalization is proposed for digital receivers with a sampling rate of two samples/symbol or higher. Unlike traditional methods, the clock recovery algorithm in this paper adopts an additional algorithm that distinguishes the phases of adjacent symbols, so as to accurately estimate the timing offset based on adjacent signals with the same phase. The addition of a module for eliminating phase modulation interference before timing estimation further reduces the variance, resulting in a smoothed timing estimate. The Mean Square Error (MSE) and Bit Error Rate (BER) of the resulting timing estimate are simulated, showing satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and Roll-off Factors (ROF) close to 0. In the back-to-back system, with ROF = 0, the maximum MSE obtained with the proposed approach reaches 0.0125. After 100-km fiber transmission, the BER decreases to 10^-3 with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performance improves.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Services (Based on NASA Escalation Estimate) Time: Project conceptualization (at least two years before... TDRSS Standard Services (Based on NASA Escalation Estimate) A Appendix A to Part 1215 Aeronautics and... the service requirements by NASA Headquarters, communications for the reimbursable development of a...
NASA Astrophysics Data System (ADS)
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use times of arrival (TOAs) and time differences of arrival (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers to the unknown locations of transmitters. Estimating transmitter locations using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
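As a concrete instance of the least-squares family the review covers, a Gauss-Newton fit of a transmitter position to TOA-derived ranges might look like the sketch below, assuming synchronized receivers and a known emission time; for acoustic applications the propagation speed would be the speed of sound instead.

```python
import numpy as np

def toa_least_squares(receivers, toas, c=299792458.0, iters=10):
    """Iterative (Gauss-Newton) least-squares fit of a transmitter
    position to measured times of arrival at known receiver positions."""
    p = receivers.mean(axis=0)            # initial guess: receiver centroid
    ranges = c * np.asarray(toas)         # TOA -> range (known emit time)
    for _ in range(iters):
        d = np.linalg.norm(receivers - p, axis=1)
        J = (p - receivers) / d[:, None]  # Jacobian of range w.r.t. position
        dp, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
        p = p + dp
    return p
```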
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time by the rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application and comparison of the Rao-Yu model and the dynamic model for estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this was reduced over time.
An evaluation of study design for estimating a time-of-day noise weighting
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.
Estimating Real-Time Zenith Tropospheric Delay over Africa Using IGS-RTS Products
NASA Astrophysics Data System (ADS)
Abdelazeem, M.
2017-12-01
Zenith Tropospheric Delay (ZTD) is a crucial parameter for atmospheric modeling, severe weather monitoring and forecasting applications. Currently, the International GNSS Service real-time service (IGS-RTS) products are used extensively in real-time atmospheric modeling applications. The objective of this study is to develop a real-time zenith tropospheric delay estimation model over Africa using the IGS-RTS products. The real-time ZTDs are estimated based on the real-time precise point positioning (PPP) solution. GNSS observations from a number of reference stations are processed over a period of 7 days. Then, the estimated real-time ZTDs are compared with their IGS tropospheric product counterparts. The findings indicate that the estimated real-time ZTDs have millimeter-level accuracy in comparison with the IGS counterparts.
NASA Astrophysics Data System (ADS)
Zapata, D.; Salazar, M.; Chaves, B.; Keller, M.; Hoogenboom, G.
2015-12-01
Thermal time models have been used to predict the development of many different species, including grapevine (Vitis vinifera L.). These models normally assume that there is a linear relationship between temperature and plant development. The goal of this study was to estimate the base temperature and duration in terms of thermal time for predicting veraison for four grapevine cultivars. Historical phenological data for four cultivars that were collected in the Pacific Northwest were used to develop the thermal time model. Base temperatures (Tb) of 0 and 10 °C and the best estimated Tb using three different methods were evaluated for predicting veraison in grapevine. Thermal time requirements for each individual cultivar were evaluated through analysis of variance, and means were compared using Fisher's test. The methods that were applied to estimate Tb for the development of wine grapes included the least standard deviation in heat units, the regression coefficient, and the development rate method. The estimated Tb varied among methods and cultivars. The development rate method provided the lowest Tb values for all cultivars. For the three methods, Chardonnay had the lowest Tb, ranging from 8.7 to 10.7 °C, while the highest Tb values were obtained for Riesling and Cabernet Sauvignon with 11.8 and 12.8 °C, respectively. Thermal time also differed among cultivars, whether the fixed or the estimated Tb was used. Predictions of the beginning of ripening with the estimated temperature resulted in the lowest variation in real days when compared with predictions using Tb = 0 or 10 °C, regardless of the method that was used to estimate Tb.
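Under the linear thermal time assumption, development is predicted by accumulating degree-days above the base temperature; a minimal sketch follows, where Tb and the thermal time target would come from per-cultivar fits such as those in the study.

```python
import numpy as np

def degree_days(tmin, tmax, t_base):
    """Daily thermal time above the base temperature (degree-days)."""
    tmean = (np.asarray(tmin, float) + np.asarray(tmax, float)) / 2.0
    return np.maximum(tmean - t_base, 0.0)   # days below Tb contribute 0

def day_of_veraison(tmin, tmax, t_base, thermal_target):
    """First day on which cumulative degree-days reach the target."""
    cum = np.cumsum(degree_days(tmin, tmax, t_base))
    return int(np.searchsorted(cum, thermal_target)) + 1   # 1-based day
```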
Adaptive tracking of a time-varying field with a quantum sensor
NASA Astrophysics Data System (ADS)
Bonato, Cristian; Berry, Dominic W.
2017-05-01
Sensors based on single spins can enable magnetic-field detection with very high sensitivity and spatial resolution. Previous work has concentrated on sensing of a constant magnetic field or a periodic signal. Here, we instead investigate the problem of estimating a field with nonperiodic variation described by a Wiener process. We propose and study, by numerical simulations, an adaptive tracking protocol based on Bayesian estimation. The tracking protocol updates the probability distribution for the magnetic field based on measurement outcomes and adapts the choice of sensing time and phase in real time. By taking the statistical properties of the signal into account, our protocol strongly reduces the required measurement time. This leads to a reduction of the error in the estimation of a time-varying signal by up to a factor of four compared with protocols that do not take this information into account.
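A grid-based version of such a protocol alternates a measurement update with a diffusion step that widens the field distribution according to the Wiener process. The sketch below assumes a 1-D grid over field values; the Ramsey-type likelihood in the closing comment is illustrative, not the paper's exact measurement model.

```python
import numpy as np

def diffuse(pdf, grid, sigma_w, dt):
    """Prediction step: Wiener-process drift widens the distribution."""
    kernel = np.exp(-0.5 * (grid - grid[len(grid) // 2]) ** 2
                    / (sigma_w ** 2 * dt))     # Gaussian, variance sigma_w^2*dt
    kernel /= kernel.sum()
    pdf = np.convolve(pdf, kernel, mode="same")
    return pdf / np.trapz(pdf, grid)

def update(pdf, grid, likelihood):
    """Measurement step: multiply by the outcome likelihood, renormalize."""
    pdf = pdf * likelihood(grid)
    return pdf / np.trapz(pdf, grid)

# Illustrative Ramsey-type likelihood for outcome u in {+1, -1}, sensing
# time tau and controlled phase phi (gamma is the coupling; assumption):
# likelihood = lambda b: 0.5 * (1 + u * np.cos(gamma * b * tau + phi))
```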
A time and frequency synchronization method for CO-OFDM based on CMA equalizers
NASA Astrophysics Data System (ADS)
Ren, Kaixuan; Li, Xiang; Huang, Tianye; Cheng, Zhuo; Chen, Bingwei; Wu, Xu; Fu, Songnian; Ping, Perry Shum
2018-06-01
In this paper, an efficient time and frequency synchronization method based on a new training symbol structure is proposed for polarization division multiplexing (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. Coarse timing synchronization is achieved by exploiting the correlation property of the first training symbol, and fine timing synchronization is accomplished by using the time-domain symmetric conjugate of the second training symbol. Furthermore, based on these training symbols, a constant modulus algorithm (CMA) is proposed for carrier frequency offset (CFO) estimation. Theoretical analysis and simulation results indicate that the algorithm is robust to poor optical signal-to-noise ratio (OSNR) and chromatic dispersion (CD). The frequency offset estimation range can reach [-(Nsc/2)ΔfN, +(Nsc/2)ΔfN] GHz with a mean normalized estimation error below 12 × 10^-3, even under OSNR conditions as low as 10 dB.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and that from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
76 FR 770 - Proposed Information Collection; Comment Request; Monthly Wholesale Trade Survey
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-06
... reduces the time and cost of preparing mailout packages that contain unique variable data, while improving... developing productivity measurements. Estimates produced from the MWTS are based on a probability sample and..., excluding manufacturers' sales branches and offices. Estimated Number of Respondents: 4,500. Estimated Time...
Estimating trends in atmospheric water vapor and temperature time series over Germany
NASA Astrophysics Data System (ADS)
Alshawaf, Fadwa; Balidakis, Kyriakos; Dick, Galina; Heise, Stefan; Wickert, Jens
2017-08-01
Ground-based GNSS (Global Navigation Satellite System) has efficiently been used since the 1990s as a meteorological observing system. Recently scientists have used GNSS time series of precipitable water vapor (PWV) for climate research. In this work, we compare the temporal trends estimated from GNSS time series with those estimated from European Center for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) data and meteorological measurements. We aim to evaluate climate evolution in Germany by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: (1) estimated from ground-based GNSS observations using the method of precise point positioning, (2) inferred from ERA-Interim reanalysis data, and (3) determined based on daily in situ measurements of temperature and relative humidity. The other relevant atmospheric parameters are available from surface measurements of meteorological stations or derived from ERA-Interim. The trends are estimated using two methods: the first applies least squares to deseasonalized time series and the second uses the Theil-Sen estimator. The trends estimated at 113 GNSS sites, with 10 to 19 years of temporal coverage, vary between -1.5 and 2.3 mm/decade with standard deviations below 0.25 mm/decade. These results were validated by estimating the trends from ERA-Interim data over the same time windows, which show similar values. The values of the trend depend on the length and the variations of the time series. Therefore, to give a mean value of the PWV trend over Germany, we estimated the trends using ERA-Interim data spanning from 1991 to 2016 (26 years) at 227 synoptic stations over Germany. The ERA-Interim data show positive PWV trends of 0.33 ± 0.06 mm/decade with standard errors below 0.03 mm/decade. The increment in PWV varies between 4.5 and 6.5% per degree Celsius rise in temperature, which is comparable to the theoretical rate from the Clausius-Clapeyron equation.
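The Theil-Sen estimator mentioned as the second trend method is the median of all pairwise slopes, which makes it robust to outliers; a minimal sketch (applied, as in the paper, after deseasonalizing the series):

```python
import numpy as np

def theil_sen(t, y):
    """Theil-Sen trend: median of all pairwise slopes, robust to outliers."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    i, j = np.triu_indices(len(t), k=1)          # all index pairs i < j
    slopes = (y[j] - y[i]) / (t[j] - t[i])
    slope = np.median(slopes)
    intercept = np.median(y - slope * t)
    return slope, intercept
```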
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
BME Estimation of Residential Exposure to Ambient PM10 and Ozone at Multiple Time Scales
Yu, Hwa-Lung; Chen, Jiu-Chiuan; Christakos, George; Jerrett, Michael
2009-01-01
Background Long-term human exposure to ambient pollutants can be an important contributing or etiologic factor of many chronic diseases. Spatiotemporal estimation (mapping) of long-term exposure at residential areas based on field observations recorded in the U.S. Environmental Protection Agency's Air Quality System often suffers from missing data issues due to the scarce monitoring network across space and the inconsistent recording periods at different monitors. Objective We developed and compared two upscaling methods: UM1 (data aggregation followed by exposure estimation) and UM2 (exposure estimation followed by data aggregation) for long-term PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) and ozone exposure estimation and applied them at multiple time scales to estimate PM and ozone exposures for the residential areas of the Health Effects of Air Pollution on Lupus (HEAPL) study. Method We used Bayesian maximum entropy (BME) analysis for the two upscaling methods. We performed spatiotemporal cross-validations at multiple time scales for UM1 and UM2 to assess the estimation accuracy across space and time. Results Compared with the kriging method, the integration of soft information by the BME method can effectively increase the estimation accuracy for both pollutants. The spatiotemporal distributions of estimation errors from UM1 and UM2 were similar. The cross-validation results indicated that UM2 is generally better than UM1 for exposure estimation at multiple time scales in terms of predictive accuracy and lack of bias. For yearly PM10 estimation, both approaches had comparable performance, but the implementation of UM1 is associated with a much lower computational burden. Conclusion The BME-based upscaling methods UM1 and UM2 can assimilate core and site-specific knowledge bases of different formats for long-term exposure estimation. This study shows that UM1 can perform reasonably well when the aggregation process does not alter the spatiotemporal structure of the original data set; otherwise, UM2 is preferable. PMID:19440491
Khor, Y H; Tolson, J; Churchward, T; Rochford, P; Worsnop, C
2015-08-01
Home polysomnography (PSG) is an alternative method for the diagnosis of obstructive sleep apnoea (OSA). Some type 3 and type 4 PSG devices do not monitor sleep and so rely on patients' estimation of total sleep time (TST). To compare patients' subjective sleep duration estimates with objective measures, we studied patients who underwent type 2 PSG for probable OSA. A prospective clinical audit was conducted of 536 consecutive patients of one of the authors between 2006 and 2013. A standard questionnaire was completed by the patients the morning after the home PSG to record the time of lights being turned off and the estimated times of sleep onset and offset. PSG was scored based on the guidelines of the American Academy of Sleep Medicine. Median estimated sleep latency (SL) was 20 min compared with 10 min for measured SL (P < 0.0001). There was also a significant difference between the estimated and measured sleep offset times (median difference = -1 min, P = 0.01). Estimated TST was significantly shorter than measured TST (median difference = -18.5 min, P = 0.002). No factors were identified that affected patients' accuracy of sleep perception. Only 2% of patients had a change in their diagnosis of OSA based on the calculated apnoea-hypopnoea index. Overall, estimated TST in patients with probable OSA was significantly shorter than measured TST, with significant individual variability. Collectively, inaccurate sleep time estimation did not result in a significant difference in the diagnosis of OSA. © 2015 Royal Australasian College of Physicians.
Measuring survival time: a probability-based approach useful in healthcare decision-making.
2011-01-01
In some clinical situations, the choice between treatment options takes into account their impact on patient survival time. Due to practical constraints (such as loss to follow-up), survival time is usually estimated using a probability calculation based on data obtained in clinical studies or trials. The two techniques most commonly used to estimate survival times are the Kaplan-Meier method and the actuarial method. Despite their limitations, they provide useful information when choosing between treatment options.
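The Kaplan-Meier method estimates the survival curve as a product over observed event times, which lets censored observations (such as loss to follow-up) contribute while they are under observation; a minimal sketch:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimator; events: 1 = event observed, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):      # distinct event times
        n_at_risk = np.sum(times >= t)           # still under observation
        d = np.sum((times == t) & (events == 1)) # events at time t
        surv *= 1.0 - d / n_at_risk              # product-limit step
        curve.append((t, surv))
    return curve

# Example: kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```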
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on the space-time interest points, and (3) estimating the crowd density based on multiple regression. In the experimental results, the efficiency and robustness of the proposed method are demonstrated using the PETS 2009 dataset.
Real-time state estimation in a flight simulator using fNIRS.
Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic
2015-01-01
Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time MACD-based state estimation algorithm dedicated to identifying the pilot's instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated mental state matched the pilot's real state significantly better than chance (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single-trial working memory load, achieved 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain-computer interface development.
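A MACD-type detector compares a fast and a slow exponential moving average of the signal and flags a state change when their difference crosses its own smoothed version. The span constants below are the classic defaults from the finance literature, used here as placeholders since the paper's fNIRS-specific settings are not given.

```python
import numpy as np

def ema(x, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def macd_states(signal, fast=12, slow=26, smooth=9):
    """Label each sample by the sign of MACD minus its trigger line."""
    macd = ema(signal, fast) - ema(signal, slow)
    trigger = ema(macd, smooth)
    return np.where(macd > trigger, "on-task", "not-on-task")
```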
A random utility based estimation framework for the household activity pattern problem.
DOT National Transportation Integrated Search
2016-06-01
This paper develops a random utility based estimation framework for the Household Activity Pattern Problem (HAPP). Based on the realization that outputs of complex activity-travel decisions form a continuous pattern in the space-time dimension, the es...
A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems
NASA Astrophysics Data System (ADS)
Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron
2017-12-01
This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is determined by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effect of the channel and integration period on the TOA estimation is evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
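As background for the abstract above, a plain energy-detection TOA estimator can be sketched as follows. This is a simplified stand-in: the paper's ELM-learned threshold is replaced by a simple noise-statistics rule, and all signal parameters are assumed:

import numpy as np

def toa_energy_detector(samples, fs, block, threshold_factor=3.0):
    """Minimal energy-detection TOA estimate: integrate energy over
    fixed blocks and report the first block exceeding a noise-based
    threshold. The ELM-learned threshold of the paper is replaced here
    by a simple noise-statistics rule (an assumption)."""
    n_blocks = len(samples) // block
    energy = np.array([np.sum(samples[i*block:(i+1)*block]**2)
                       for i in range(n_blocks)])
    noise = energy[:10]                      # assume first blocks are noise
    thr = noise.mean() + threshold_factor * noise.std()
    idx = np.argmax(energy > thr)            # first crossing (0 if none)
    return idx * block / fs                  # TOA in seconds

# toy signal: noise, then a decaying pulse arriving at 5 microseconds
fs = 2e9
t = np.arange(0, 10e-6, 1/fs)
sig = 0.05 * np.random.randn(t.size)
sig[t > 5e-6] += np.exp(-(t[t > 5e-6] - 5e-6) / 1e-7)
print(toa_energy_detector(sig, fs, block=64))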
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. In Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
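For readers unfamiliar with the Marshall and Hoare/Henssge model referenced above, the following sketch solves the commonly published Henssge double exponential formula for the post-mortem interval by bisection. The constants are those usually quoted for ambient temperatures below about 23 °C; this is an illustration only and should be verified against the original sources before any forensic use:

import numpy as np

def henssge_postmortem_interval(t_rectal, t_ambient, mass_kg, corr=1.0):
    """Solve Henssge's double exponential cooling model for the
    post-mortem interval (hours) by bisection. Uses the widely
    published constants for ambient temperatures below ~23 degC
    (37.2 degC initial rectal temperature); treat as a sketch only."""
    q_obs = (t_rectal - t_ambient) / (37.2 - t_ambient)
    b = -1.2815 * (corr * mass_kg) ** (-0.625) + 0.0284

    def q(t):  # model value of the temperature ratio at time t (hours)
        return 1.25 * np.exp(b * t) - 0.25 * np.exp(5 * b * t)

    lo, hi = 0.0, 100.0
    for _ in range(60):            # bisection: q(t) decreases with t
        mid = 0.5 * (lo + hi)
        if q(mid) > q_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(henssge_postmortem_interval(t_rectal=30.0, t_ambient=18.0, mass_kg=70))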
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
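The Bayesian map-building step described in the two entries above can be illustrated with a minimal grid sketch. The Gaussian detection likelihood below is an assumption standing in for the paper's fuzzy-inference likelihoods:

import numpy as np

# Illustrative Bayesian update of an odor-source probability grid from a
# single robot's binary detection events (a simplified stand-in for the
# fuzzy-inference likelihoods in the paper).
GRID = (50, 50)
prior = np.full(GRID, 1.0 / np.prod(GRID))    # uniform prior over cells

def likelihood(detected, robot_xy, sigma=6.0):
    """Assumed Gaussian-shaped detection likelihood around the robot."""
    yy, xx = np.mgrid[0:GRID[0], 0:GRID[1]]
    d2 = (xx - robot_xy[0])**2 + (yy - robot_xy[1])**2
    p_detect = np.exp(-d2 / (2 * sigma**2))   # source near robot -> detect
    return p_detect if detected else 1.0 - p_detect

def bayes_update(prior, detected, robot_xy):
    post = prior * likelihood(detected, robot_xy)
    return post / post.sum()

post = bayes_update(prior, detected=True, robot_xy=(10, 30))
post = bayes_update(post, detected=False, robot_xy=(40, 15))
print(np.unravel_index(post.argmax(), GRID))  # most probable source cell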
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of dynamical characteristics of thalamocortical cells, such as dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is applied to a conductance-based thalamocortical (TC) neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. Beyond estimating the hidden properties of the thalamus and exploring the mechanism of the Parkinsonian state, the proposed method can be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
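A generic textbook unscented Kalman filter step, the estimation core named in the abstract, can be sketched as follows; this is not the paper's FPGA implementation, and the functions f and h are left abstract:

import numpy as np

def ukf_step(x, P, z, f, h, Q, R, alpha=1e-3, beta=2.0, kappa=0.0):
    """One predict/update cycle of a standard unscented Kalman filter.
    f: state transition, h: measurement function. This is a generic
    textbook UKF, not the paper's FPGA-optimized implementation."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([x, x + S.T, x - S.T])          # 2n+1 sigma points
    wm = np.full(2*n + 1, 0.5 / (n + lam)); wm[0] = lam / (n + lam)
    wc = wm.copy(); wc[0] += 1 - alpha**2 + beta

    X = np.array([f(s) for s in sigma])               # predict
    x_pred = wm @ X
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(wc, X - x_pred))

    Z = np.array([h(s) for s in X])                   # update
    z_pred = wm @ Z
    Pzz = R + sum(w * np.outer(d, d) for w, d in zip(wc, Z - z_pred))
    Pxz = sum(w * np.outer(dx, dz)
              for w, dx, dz in zip(wc, X - x_pred, Z - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

In the paper's setting, f would advance the reduced-order TC neuron model by one sampling interval and h would return the measurable membrane potential; both are left abstract here.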
Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data
Kan, Zihan; Zhang, Xia
2018-01-01
The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles’ mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles’ activities in road networks. PMID:29561813
Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data.
Kan, Zihan; Tang, Luliang; Kwan, Mei-Po; Zhang, Xia
2018-03-21
The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles' mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles' activities in road networks.
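The MA/SA identification step shared by the two entries above can be sketched with a simple speed-threshold rule. The thresholds below are assumptions; the paper's segmentation (and its engine-on/engine-off split of SA) is richer:

import numpy as np

def segment_activities(times_s, speeds_ms, v_stop=0.5, min_stop_s=120):
    """Split a GPS trace into mobile (MA) and stationary (SA) segments.
    A run of samples slower than v_stop lasting at least min_stop_s is
    labeled SA, everything else MA. Thresholds are assumptions; the
    paper's exact rules (and engine-on/off split) are richer."""
    segments, start, cur = [], 0, speeds_ms[0] < v_stop
    for i in range(1, len(speeds_ms)):
        state = speeds_ms[i] < v_stop
        if state != cur:
            segments.append((times_s[start], times_s[i], 'SA' if cur else 'MA'))
            start, cur = i, state
    segments.append((times_s[start], times_s[-1], 'SA' if cur else 'MA'))
    # relabel short stationary runs as MA
    return [(t0, t1, lab) if lab == 'MA' or t1 - t0 >= min_stop_s
            else (t0, t1, 'MA') for t0, t1, lab in segments]

t = np.arange(0, 600, 10.0)
v = np.where((t > 200) & (t < 400), 0.1, 12.0)   # a 200 s stop mid-trip
for seg in segment_activities(t, v):
    print(seg)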
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, H. M. Abdul; Ukkusuri, Satish V.
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
Aziz, H. M. Abdul; Ukkusuri, Satish V.
2017-06-29
EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases because it ignores the speed variation at congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. This method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
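The HC-DTW idea in the two entries above, clustering second-by-second speed profiles with a dynamic-time-warping distance so that each cluster yields one representative link driving schedule, can be sketched as follows (the toy profiles, average linkage and two-cluster cut are assumptions):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Plain O(len(a)*len(b)) dynamic-time-warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i-1] - b[j-1])
            D[i, j] = cost + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[-1, -1]

# toy second-by-second speed profiles (m/s) from a signalized corridor
profiles = [np.r_[np.linspace(0, 15, 30), np.full(30, 15.0)],
            np.r_[np.full(20, 0.0), np.linspace(0, 14, 40)],
            np.r_[np.linspace(0, 15, 25), np.full(35, 14.5)],
            np.r_[np.full(25, 0.0), np.linspace(0, 13, 35)]]

n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(profiles[i], profiles[j])

Z = linkage(squareform(dist), method='average')   # hierarchical clustering
labels = fcluster(Z, t=2, criterion='maxclust')   # e.g. 2 driving schedules
print(labels)   # profiles sharing a label map to one representative LDS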
NASA Astrophysics Data System (ADS)
Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim
2017-03-01
The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by the QR decomposition. The proposed approach is numerically validated by three numerical cases, and is experimentally validated by a coupled moving-mass simply supported beam experimental case. The proposed approach is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.
Morales, Rafael; Rincón, Fernando; Gazzano, Julio Dondo; López, Juan Carlos
2014-01-01
Time derivative estimation of signals plays a very important role in several fields, such as signal processing and control engineering, to name a few. For that purpose, a non-asymptotic algebraic procedure for the approximate estimation of the system states is used in this work. The method is based on results from differential algebra and furnishes some general formulae for the time derivatives of a measurable signal in which two algebraic derivative estimators run simultaneously, but in an overlapping fashion. The algebraic derivative algorithm presented in this paper is computed online and in real-time, offering high robustness properties with regard to corrupting noises, versatility and ease of implementation. In addition, we introduce a novel architecture to accelerate this algebraic derivative estimator using reconfigurable logic. The core of the algorithm is implemented in an FPGA, improving the speed of the system and achieving real-time performance. Finally, this work proposes a low-cost platform for the integration of hardware in the loop in MATLAB. PMID:24859033
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure is used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
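The extended ML idea described above, jointly estimating model parameters and the noise amplitude from a batch of response data, can be sketched on a toy model. The damped oscillator, the L-BFGS-B optimizer and numerical gradients below are assumptions replacing the paper's FE model, interior-point algorithm and DDM sensitivities:

import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the FE model: predicted response time history as a
# function of physical parameters theta (here a damped oscillator).
t = np.linspace(0, 5, 500)
def model(theta):
    omega, zeta = theta
    return np.exp(-zeta * omega * t) * np.sin(omega * t)

true = np.array([6.0, 0.05])
y_meas = model(true) + 0.02 * np.random.randn(t.size)   # measured response

def neg_log_like(p):
    """Joint negative log-likelihood in the parameters and the noise
    standard deviation sigma (extended ML, as in the abstract)."""
    theta, log_sigma = p[:2], p[2]
    sigma = np.exp(log_sigma)              # keep sigma positive
    r = y_meas - model(theta)
    return 0.5 * np.sum(r**2) / sigma**2 + t.size * log_sigma

res = minimize(neg_log_like, x0=[5.0, 0.1, np.log(0.1)], method='L-BFGS-B')
print(res.x[:2], np.exp(res.x[2]))         # estimated parameters and sigma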
Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.
2013-01-01
A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, which quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information from several nonstationary biomedical signals. To evaluate this methodology, the respiratory rate is estimated from the photoplethysmographic (PPG) signal. Respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, the respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneously breathing subjects, among whom 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with a median error of {0.00; 0.98} mHz ({0.00; 0.31}%) and an interquartile range error of {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was considerably lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to those of Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission, we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress on including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent to the star, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise between the planets and speckles improves. Having established a purely focal plane based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
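The discrete-time Kalman filter at the heart of the estimation scheme above can be sketched generically. The three-element state (real and imaginary field parts plus an incoherent bias, as the abstract suggests) and the linearized probe matrix H are assumptions:

import numpy as np

def kalman_update(x, P, z, F, H, Q, R):
    """One cycle of a discrete-time Kalman filter: the idea of reusing
    past exposures enters through the prior (x, P); this generic form
    omits the coronagraph-specific model details."""
    x_pred = F @ x                      # propagate estimate through the
    P_pred = F @ P @ F.T + Q            # known control-induced transition
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# state: [Re(E), Im(E), incoherent bias]; measurements: probe intensities
F = np.eye(3)                           # field assumed static between probes
H = np.array([[2.0, 0.0, 1.0],          # assumed linearized probe equations
              [0.0, 2.0, 1.0]])
x, P = np.zeros(3), np.eye(3)
z = np.array([1.3e-8, 0.7e-8])          # one pair of probe images
x, P = kalman_update(x, P, z, F, H, np.eye(3) * 1e-20, np.eye(2) * 1e-18)
print(x)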
Trend Change Detection in NDVI Time Series: Effects of Inter-Annual Variability and Methodology
NASA Technical Reports Server (NTRS)
Forkel, Matthias; Carvalhais, Nuno; Verbesselt, Jan; Mahecha, Miguel D.; Neigh, Christopher S.R.; Reichstein, Markus
2013-01-01
Changing trends in ecosystem productivity can be quantified using satellite observations of Normalized Difference Vegetation Index (NDVI). However, the estimation of trends from NDVI time series differs substantially depending on analyzed satellite dataset, the corresponding spatiotemporal resolution, and the applied statistical method. Here we compare the performance of a wide range of trend estimation methods and demonstrate that performance decreases with increasing inter-annual variability in the NDVI time series. Trend slope estimates based on annual aggregated time series or based on a seasonal-trend model show better performances than methods that remove the seasonal cycle of the time series. A breakpoint detection analysis reveals that an overestimation of breakpoints in NDVI trends can result in wrong or even opposite trend estimates. Based on our results, we give practical recommendations for the application of trend methods on long-term NDVI time series. Particularly, we apply and compare different methods on NDVI time series in Alaska, where both greening and browning trends have been previously observed. Here, the multi-method uncertainty of NDVI trends is quantified through the application of the different trend estimation methods. Our results indicate that greening NDVI trends in Alaska are more spatially and temporally prevalent than browning trends. We also show that detected breakpoints in NDVI trends tend to coincide with large fires. Overall, our analyses demonstrate that seasonal trend methods need to be improved against inter-annual variability to quantify changing trends in ecosystem productivity with higher accuracy.
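The best-performing strategy identified above, trend estimation on annually aggregated series, is easy to sketch (the synthetic series below is an assumption):

import numpy as np

def annual_trend(ndvi_monthly, start_year):
    """Estimate an NDVI trend by first aggregating a monthly series to
    annual means and then fitting an ordinary least-squares slope, one
    of the strategies the comparison above favors. Returns NDVI/year."""
    n_years = len(ndvi_monthly) // 12
    annual = ndvi_monthly[:n_years*12].reshape(n_years, 12).mean(axis=1)
    years = start_year + np.arange(n_years)
    slope, intercept = np.polyfit(years, annual, 1)
    return slope, annual

rng = np.random.default_rng(0)
months = np.arange(360)                       # 30 years of monthly NDVI
seasonal = 0.25 * np.sin(2 * np.pi * months / 12)
series = 0.5 + 0.002 * (months / 12) + seasonal + 0.05 * rng.standard_normal(360)
slope, _ = annual_trend(series, 1982)
print(round(slope, 4))                        # ~0.002 NDVI/year (greening)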
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of impacts of congestion improvement schemes on travel time reliability is very significant for road authorities since travel time reliability represents operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying the Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
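The Monte-Carlo reliability computation described above can be sketched as follows. The distributions, the incident probability and the BPR-style delay function (standing in for the paper's shockwave analysis and refined speed-flow relationships) are all assumptions:

import numpy as np

rng = np.random.default_rng(1)
N = 10000                                   # Monte-Carlo scenarios
free_flow_tt = 10.0                         # minutes over the segment

demand = rng.normal(1800, 250, N)           # veh/h, assumed distribution
capacity = rng.normal(2000, 150, N)         # veh/h, weather effects folded in
capacity *= np.where(rng.random(N) < 0.05, 0.7, 1.0)   # random incidents

# BPR-style delay as a simple stand-in for the paper's shockwave analysis
ratio = demand / capacity
travel_time = free_flow_tt * (1 + 0.15 * ratio**4)

mean_tt = travel_time.mean()
tt95 = np.percentile(travel_time, 95)
buffer_time_index = (tt95 - mean_tt) / mean_tt
print(round(buffer_time_index, 3))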
NASA Technical Reports Server (NTRS)
Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.
1991-01-01
The results are presented of a study to identify potential real-time remote computational applications to support monitoring of HRV flight test experiments, along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and data bases together with real-time flight data to generate and present high-level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.
Pivette, M; Auvigne, V; Guérin, P; Mueller, J E
2017-04-01
The aim of this study was to describe a tool based on vaccine sales to estimate vaccination coverage against seasonal influenza in near real-time in the French population aged 65 and over. Vaccine sales data available on sale-day +1 came from a stratified sample of 3004 pharmacies in metropolitan France. Vaccination coverage rates were estimated between 2009 and 2014 and compared with those obtained based on vaccination refund data from the general health insurance scheme. The seasonal vaccination coverage estimates were highly correlated with those obtained from refund data. They were also slightly higher, which can be explained by the inclusion of non-reimbursed vaccines and the consideration of all individuals aged 65 and over. We have developed an online tool that provides estimates of daily vaccination coverage during each vaccination campaign. The developed tool provides a reliable and near real-time estimation of vaccination coverage among people aged 65 and over. It can be used to evaluate and adjust public health messages. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
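A greedy D-optimal selection of time points for a bi-exponential decay, the criterion named in the abstract, can be sketched as follows. The decay parameters, the 90-gate candidate set and the greedy search (rather than the authors' exact procedure) are assumptions:

import numpy as np

# Greedy D-optimal selection of time points for a bi-exponential decay
# I(t) = A*exp(-t/tau1) + (1-A)*exp(-t/tau2). Parameter values, the
# candidate gates and the greedy scheme are illustrative assumptions.
A, tau1, tau2 = 0.4, 0.5, 2.5                 # quenched fraction, lifetimes (ns)
candidates = np.linspace(0.05, 10.0, 90)      # a "complete" set of 90 gates

def jacobian_row(t):
    """Sensitivities of I(t) to (A, tau1, tau2) at time t."""
    dA = np.exp(-t / tau1) - np.exp(-t / tau2)
    dtau1 = A * t / tau1**2 * np.exp(-t / tau1)
    dtau2 = (1 - A) * t / tau2**2 * np.exp(-t / tau2)
    return np.array([dA, dtau1, dtau2])

# seed with three spread gates so the information matrix has full rank
chosen = [candidates[0], candidates[30], candidates[60]]
while len(chosen) < 10:                       # grow to a reduced 10-point set
    best, best_det = None, -1.0
    for t in candidates:
        if any(np.isclose(t, c) for c in chosen):
            continue
        J = np.array([jacobian_row(c) for c in chosen + [t]])
        det = np.linalg.det(J.T @ J)          # D-optimality criterion
        if det > best_det:
            best, best_det = t, det
    chosen.append(best)
print(np.round(sorted(chosen), 2))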
Aging persons' estimates of vehicular motion.
Schiff, W; Oldak, R; Shah, V
1992-12-01
Estimated arrival times of moving autos were examined in relation to viewer age, gender, motion trajectory, and velocity. Direct push-button judgments were compared with verbal estimates derived from velocity and distance, which were based on assumptions that perceivers compute arrival time from perceived distance and velocity. Experiment 1 showed that direct estimates of younger Ss were most accurate. Older women made the shortest (highly cautious) estimates of when cars would arrive. Verbal estimates were much lower than direct estimates, with little correlation between them. Experiment 2 extended target distances and velocities of targets, with the results replicating the main findings of Experiment 1. Judgment accuracy increased with target velocity, and verbal estimates were again poorer estimates of arrival time than direct ones, with different patterns of findings. Using verbal estimates to approximate judgments in traffic situations appears questionable.
Cumulus cloud model estimates of trace gas transports
NASA Technical Reports Server (NTRS)
Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.
1989-01-01
Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.
Real-Time State Estimation in a Flight Simulator Using fNIRS
Gateau, Thibault; Durantin, Gautier; Lancelot, Francois; Scannella, Sebastien; Dehais, Frederic
2015-01-01
Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time state estimation MACD-based algorithm dedicated to identifying the pilot’s instantaneous mental state (not-on-task vs. on-task). It does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and were asked to recall air traffic control instructions. We found that the estimated pilot’s mental state matched significantly better than chance with the pilot’s real state (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single trial working memory loads, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain computer interface development. PMID:25816347
NASA Astrophysics Data System (ADS)
Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.
2016-07-01
This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.
Estimating Development Cost of an Interactive Website Based Cancer Screening Promotion Program
Lairson, David R.; Chung, Tong Han; Smith, Lisa G.; Springston, Jeffrey K.; Champion, Victoria L.
2015-01-01
Objectives The aim of this study was to estimate the initial development costs for an innovative talk show format tailored intervention delivered via the interactive web, for increasing cancer screening in women 50 to 75 who were non-adherent to screening guidelines for colorectal cancer and/or breast cancer. Methods The cost of the intervention development was estimated from a societal perspective. Micro costing methods plus vendor contract costs were used to estimate cost. Staff logs were used to track personnel time. Non-personnel costs include all additional resources used to produce the intervention. Results Development cost of the interactive web based intervention was $0.39 million, of which 77% was direct cost. About 98% of the cost was incurred in personnel time cost, contract cost and overhead cost. Conclusions The new web-based disease prevention medium required substantial investment in health promotion and media specialist time. The development cost was primarily driven by the high level of human capital required. The cost of intervention development is important information for assessing and planning future public and private investments in web-based health promotion interventions. PMID:25749548
Cox, S.E.
2003-01-01
Estimates of residence time of ground water beneath Submarine Base Bangor and vicinity ranged from less than 50 to 4,550 years before present, based on analysis of the environmental tracers tritium, chlorofluorocarbons (CFCs), and carbon-14 (14C), in 33 ground-water samples collected from wells tapping the ground-water system. The concentrations of multiple environmental tracers tritium, CFCs, and 14C were used to classify ground water as modern (recharged after 1953), pre-modern (recharged prior to 1953), or indeterminate. Estimates of the residence time of pre-modern ground water were based on evaluation of 14C of dissolved inorganic carbon present in ground water using geochemical mass-transfer modeling to account for the interactions of the carbon in ground water with carbon of the aquifer sediments. Ground-water samples were obtained from two extensive aquifers and from permeable interbeds within the thick confining unit separating the sampled aquifers. Estimates of ground-water residence time for all ground-water samples from the shallow aquifer were less than 45 years and were classified as modern. Estimates of the residence time of ground water in the permeable interbeds within the confining unit ranged from modern to 4,200 years and varied spatially. Near the recharge area, residence times in the permeable interbeds typically were less than 800 years, whereas near the discharge area residence times were in excess of several thousand years. In the deeper aquifers, estimates of ground-water residence times typically were several thousand years but ranged from modern to 4,550 years. These estimates of ground-water residence time based on 14C were often larger than estimates of ground-water residence time developed by particle-tracking analysis using a ground-water flow model. There were large uncertainties, on the order of 1,000-2,000 years, in the estimates based on 14C. Modern ground-water tracers found in some samples from large-capacity production wells screened in the deeper aquifer may be the result of preferential ground-water pathways or induced downward flow caused by pumping stress. Spatial variations in water quality were used to develop a conceptual model of chemical evolution of ground water. Stable isotope ratios of deuterium and oxygen-18 in the 33 ground-water samples were similar, indicating similar climatic conditions and source of precipitation recharge for all of the sampled ground water. Oxidation of organic matter and mineral dissolution increased the concentrations of dissolved inorganic carbon and common ions in downgradient ground waters. However, the largest concentrations were not found near areas of ground-water discharge, but at intermediate locations where organic carbon concentrations were greatest. Dissolved methane, derived from microbial methanogenesis, was present in some ground waters. Methanogenesis resulted in substantial alteration of the carbon isotopic composition of ground water. The NETPATH geochemical model code was used to model mass-transfers of carbon affecting the 14C estimate of ground-water residence time. Carbon sources in ground water include dispersed particulate organic matter present in the confining unit separating the two aquifers and methane present in some ground water. Carbonate minerals were not observed in the lithologic material of the ground-water system but may be present, because they have been found in the bedrock of stream drainages that contribute sediment to the study area.
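The basic radiocarbon age relation underlying such residence-time estimates can be stated compactly; the geochemical mass-transfer corrections performed with NETPATH in the study above would adjust the initial activity before this step:

import numpy as np

def radiocarbon_age(a_measured, a_initial=100.0):
    """Uncorrected radiocarbon age in years from 14C activities given in
    percent modern carbon (pmc), using the 5,730-year half-life:
    t = (5730 / ln 2) * ln(A0 / A). Geochemical corrections of the kind
    the NETPATH modeling above performs would adjust A0 before this step."""
    return 5730.0 / np.log(2.0) * np.log(a_initial / a_measured)

print(round(radiocarbon_age(a_measured=60.0)))   # ~4,200 years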
Lech, Karolina; Liu, Fan; Ackermann, Katrin; Revell, Victoria L; Lao, Oscar; Skene, Debra J; Kayser, Manfred
2016-03-01
Determining the time a biological trace was left at a scene of crime reflects a crucial aspect of forensic investigations as - if possible - it would permit testing the sample donor's alibi directly from the trace evidence, helping to link (or not) the DNA-identified sample donor with the crime event. However, reliable and robust methodology is lacking thus far. In this study, we assessed the suitability of mRNA for the purpose of estimating blood deposition time, and its added value relative to melatonin and cortisol, two circadian hormones we previously introduced for this purpose. By analysing 21 candidate mRNA markers in blood samples from 12 individuals collected around the clock at 2h intervals for 36h under real-life, controlled conditions, we identified 11 mRNAs with statistically significant expression rhythms. We then used these 11 significantly rhythmic mRNA markers, with and without melatonin and cortisol also analysed in these samples, to establish statistical models for predicting day/night time categories. We found that although in general mRNA-based estimation of time categories was less accurate than hormone-based estimation, the use of three mRNA markers HSPA1B, MKNK2 and PER3 together with melatonin and cortisol generally enhanced the time prediction accuracy relative to the use of the two hormones alone. Our data best support a model that by using these five molecular biomarkers estimates three time categories, i.e. night/early morning, morning/noon, and afternoon/evening with prediction accuracies expressed as AUC values of 0.88, 0.88, and 0.95, respectively. For the first time, we demonstrate the value of mRNA for blood deposition timing and introduce a statistical model for estimating day/night time categories based on molecular biomarkers, which shall be further validated with additional samples in the future. Moreover, our work provides new leads for molecular approaches on time of death estimation using the significantly rhythmic mRNA markers established here. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Using Empirical Mode Decomposition to process Marine Magnetotelluric Data
NASA Astrophysics Data System (ADS)
Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.
2014-12-01
Magnetotelluric (MT) data always exhibit nonstationarities due to variations of source mechanisms causing MT variations on different time and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data through motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests, the reliability of impedance estimates from the EMD-based method is assessed against the synthetic responses of a 1D layered model. To examine how estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates by the EMD-based method are generally more stable than those by simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software (BIRRP), which incorporates additional sophisticated robust estimation to deal with noise issues. The results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates are extended to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies. The new processing methodology represents an important step towards deriving a better-resolved Earth model to greater depth underneath the seafloor.
Estimation and enhancement of real-time software reliability through mutation analysis
NASA Technical Reports Server (NTRS)
Geist, Robert; Offutt, A. J.; Harris, Frederick C., Jr.
1992-01-01
A simulation-based technique for obtaining numerical estimates of the reliability of N-version, real-time software is presented. An extended stochastic Petri net is employed to represent the synchronization structure of N versions of the software, where dependencies among versions are modeled through correlated sampling of module execution times. Test results utilizing specifications for NASA's planetary lander control software indicate that mutation-based testing could hold greater potential for enhancing reliability than the desirable but perhaps unachievable goal of independence among N versions.
Wireless data collection system for real-time arterial travel time estimates.
DOT National Transportation Integrated Search
2011-03-01
This project pursued several objectives conducive to the implementation and testing of a Bluetooth (BT) based system to collect travel time data, including the deployment of a BT-based travel time data collection system to perform comprehensive testi...
Defining Tsunami Magnitude as Measure of Potential Impact
NASA Astrophysics Data System (ADS)
Titov, V. V.; Tang, L.
2016-12-01
The goal of tsunami forecast, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, difficulties of estimating tsunami energy from available tsunami measurements at coastal sea-level stations have carried significant uncertainties, and such estimation has been virtually impossible in real time, before a tsunami impacts coastlines. The slow process of tsunami magnitude estimation, including collection of the vast amount of available coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. Uncertainties of estimates made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since real-time seismic data is available for real-time processing and an ample amount of seismic data is available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake and generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful for tsunami warning as a quick estimate of tsunami impact and for post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present application of the magnitude analysis for several historical events for inter-comparison with existing methods.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. These proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and the processing times are 0.139 and 0.124 sec/frame, respectively.
Time providing care outside visits in a home-based primary care program.
Pedowitz, Elizabeth J; Ornstein, Katherine A; Farber, Jeffrey; DeCherrie, Linda V
2014-06-01
To assess how much time physicians in a large home-based primary care (HBPC) program spend providing care outside of home visits. Unreimbursed time and patient- and provider-related factors that may contribute to that time were considered. Mount Sinai Visiting Doctors (MSVD) providers filled out research forms for every interaction involving care provision outside of home visits over a 3-week period. Data collected included the length, mode, and nature of each interaction and with whom it took place. MSVD is an academic home-visit program in Manhattan, New York. All primary care physicians (PCPs) in MSVD (n = 14) agreed to participate. Time data were analyzed using a comprehensive estimate and conservative estimates to quantify unbillable time. Data on 1,151 interactions for 537 patients were collected. An average of 8.2 h/wk was spent providing non-home-visit care for a full-time provider. Using the most conservative estimates, 3.6 h/wk was estimated to be unreimbursed per full-time provider. No significant differences in interaction times were found between patients with and without dementia, new and established patients, and primary-panel and covered patients. Home-based primary care providers spend substantial time providing care outside home visits, much of which goes unrecognized in the current reimbursement system. These findings may help guide practice development and creation of new payment systems for HBPC and similar models of care. © 2014, Copyright the Authors Journal compilation © 2014, The American Geriatrics Society.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
ERIC Educational Resources Information Center
Huang, Tracy; Loft, Shayne; Humphreys, Michael S.
2014-01-01
"Time-based prospective memory" (PM) refers to performing intended actions at a future time. Participants with time-based PM tasks can be slower to perform ongoing tasks (costs) than participants without PM tasks because internal control is required to maintain the PM intention or to make prospective-timing estimates. However, external…
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction is often affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of the image effectively, and provides a practical basis for the selection of the number of compressive measurements. The results also show that, since the selection of the number of measurements is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
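The sparsity estimation procedure described above can be sketched directly; the energy threshold value and the toy block are assumptions:

import numpy as np
from scipy.fft import dctn

def estimate_sparsity(image_block, energy_threshold=0.99):
    """Estimate block sparsity as the fraction of 2D-DCT coefficients
    needed to capture a given share of the total energy, mirroring the
    procedure in the abstract (threshold value is an assumption)."""
    coeffs = dctn(image_block, norm='ortho')
    energy = np.sort(coeffs.ravel()**2)[::-1]       # descending energies
    cum = np.cumsum(energy) / energy.sum()          # normalized energy
    k = np.searchsorted(cum, energy_threshold) + 1  # dominant coefficients
    return k / coeffs.size                          # sparsity ratio

block = np.outer(np.sin(np.linspace(0, np.pi, 32)),
                 np.cos(np.linspace(0, np.pi, 32)))  # smooth, compressible
print(estimate_sparsity(block))                      # small ratio expected

A small ratio would justify fewer compressive measurements for such a block.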
Estimation of toll road users value of time
DOT National Transportation Integrated Search
2008-02-01
This research examines a new methodology for prospectively estimating the willingness of travelers to use a toll road by combining travel time saved with the income of the prospective customer base. The purpose of the research is to facilitate networ...
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
A new approach to estimate time-to-cure from cancer registries data.
Boussari, Olayidé; Romain, Gaëlle; Remontet, Laurent; Bossard, Nadine; Mounier, Morgane; Bouvier, Anne-Marie; Binquet, Christine; Colonna, Marc; Jooste, Valérie
2018-04-01
Cure models have been adapted to the net survival context to provide important indicators from population-based cancer data, such as the cure fraction and the time-to-cure. However, existing methods for computing time-to-cure suffer from some limitations. Cure models in the net survival framework are briefly overviewed, and a new definition of time-to-cure is introduced as the time TTC at which P(t), the estimated covariate-specific probability of being cured at a given time t after diagnosis, reaches 0.95. We applied flexible parametric cure models to data of four cancer sites provided by the French network of cancer registries (FRANCIM). Estimates of the time-to-cure by TTC and by two existing methods were then derived and compared. Cure fractions and probabilities P(t) were also computed. Depending on the age group, TTC ranged from 8 to 10 years for colorectal and pancreatic cancer and was nearly 12 years for breast cancer. In thyroid cancer patients under 55 years at diagnosis, TTC was strikingly 0: the probability of being cured was >0.95 just after diagnosis. This is an interesting result regarding the health insurance premiums of these patients. The estimated values of time-to-cure from the three approaches were close for colorectal cancer only. We propose a new approach, based on the estimated covariate-specific probability of being cured, to estimate time-to-cure. Compared to two existing methods, the new approach seems more intuitive and natural and less sensitive to the survival time distribution.
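As a minimal illustration of this definition, the sketch below finds the first time at which an estimated cure probability curve reaches 0.95; the P(t) curve here is invented, whereas in the paper it comes from a flexible parametric cure model.

```python
import numpy as np

t = np.linspace(0, 15, 151)             # years since diagnosis
p_cure = 1 - np.exp(-0.35 * t)          # hypothetical covariate-specific P(t)

crossed = p_cure >= 0.95                # TTC = first t with P(t) >= 0.95
ttc = t[np.argmax(crossed)] if crossed.any() else np.inf
print(f"time-to-cure: {ttc:.1f} years")
```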
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
Joint estimation of 2D-DOA and frequency based on space-time matrix and conformal array.
Wan, Liang-Tian; Liu, Lu-Tao; Si, Wei-Jian; Tian, Zuo-Xi
2013-01-01
Each element in a conformal array has a different pattern, which degrades the performance of conventional high-resolution direction-of-arrival (DOA) algorithms. In this paper, a joint frequency and two-dimensional DOA (2D-DOA) estimation algorithm for conformal arrays is proposed. The delay correlation function is used to suppress noise. Both spatial and temporal sampling are utilized to construct the spatial-time matrix. Frequency and 2D-DOA estimation are accomplished based on parallel factor (PARAFAC) analysis without spectral peak searching or parameter pairing. The proposed algorithm needs only four guiding elements with precise positions to estimate frequency and 2D-DOA; other instrumental elements can be arranged flexibly on the surface of the carrier. Simulation results demonstrate the effectiveness of the proposed algorithm.
New instantaneous frequency estimation method based on the use of image processing techniques
NASA Astrophysics Data System (ADS)
Borda, Monica; Nafornita, Ioan; Isar, Alexandru
2003-05-01
The aim of this paper is to present a new method for estimating the instantaneous frequency of a frequency-modulated signal corrupted by additive noise. The method fuses two theories: time-frequency representations and mathematical morphology. Any time-frequency representation of a useful signal is concentrated around its instantaneous frequency law and diffuses the noise that perturbs the useful signal across the time-frequency plane. In this paper a new time-frequency representation, useful for estimating the instantaneous frequency, is proposed. It is the product of two other time-frequency representations: the Wigner-Ville representation and one obtained by filtering the Gabor representation of the signal with a hard-thresholding filter. From the image of this new time-frequency representation, the instantaneous frequency of the useful signal can be extracted with mathematical morphology operators: conversion to binary form, dilation, and the skeleton. Simulations show that the proposed method outperforms other estimation methods, such as those based on adaptive notch filters.
Perez, S. Ivan; Tejedor, Marcelo F.; Novo, Nelson M.; Aristide, Leandro
2013-01-01
The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27–31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21–29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade. PMID:23826358
Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling
2014-01-01
Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems. PMID:24587089
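The study's model is a discrete event simulation; a toy rendition, assuming Poisson daily admissions with a sinusoidal seasonal component, is sketched below with the five demand factors named in the abstract as parameters (all values invented, not from the study).

```python
import numpy as np

rng = np.random.default_rng(1)
admissions_per_year = 1000      # annual pneumonia admission rate
hypox_prev = 0.15               # hypoxaemia prevalence
seasonality = 0.5               # relative amplitude of the seasonal peak
stay_days = 4                   # treatment duration
flow_lpm = 1.0                  # oxygen flow rate, litres per minute

days = np.arange(365)
daily_rate = admissions_per_year / 365 * (1 + seasonality * np.sin(2 * np.pi * days / 365))
new_patients = rng.binomial(rng.poisson(daily_rate), hypox_prev)  # hypoxaemic admissions

load = np.zeros(365 + stay_days)            # concurrent patients on oxygen
for d, n in enumerate(new_patients):
    load[d:d + stay_days] += n
load = load[:365]

litres_per_day = load * flow_lpm * 60 * 24
print("mean load:", load.mean().round(2), "peak load:", load.max())
print("share of days above the annual mean:", (load > load.mean()).mean().round(2))
```

Comparing the peak and mean of `load` reproduces, in miniature, the paper's point that average-based planning misses temporal variability.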
The effects of sample size on population genomic analyses--implications for the tests of neutrality.
Subramanian, Sankar
2016-02-20
One of the fundamental measures of molecular genetic variation is Watterson's estimator (θ), which is based on the number of segregating sites. The estimation of θ is unbiased only under neutrality and constant population growth, and it is well known that the estimate is biased when these assumptions are violated. However, the effects of sample size in modulating the bias were not well appreciated. We examined this issue in detail based on large-scale exome data and robust simulations. Our investigation revealed that sample size appreciably influences θ estimation, and this effect was much higher for constrained genomic regions than for neutral regions. For instance, θ estimated for synonymous sites using 512 human exomes was 1.9 times higher than that obtained using 16 exomes, whereas this difference was 2.5 times for the nonsynonymous sites of the same data. We observed a positive correlation between the rate of increase in θ estimates (with respect to the sample size) and the magnitude of selection pressure. For example, θ estimated for the nonsynonymous sites of highly constrained genes (dN/dS < 0.1) using 512 exomes was 3.6 times higher than that estimated using 16 exomes; in contrast, this difference was only 2 times for the less constrained genes (dN/dS > 0.9). The results of this study reveal the extent of underestimation owing to small sample sizes and thus emphasize the importance of sample size in estimating a number of population genomic parameters. Our results have serious implications for neutrality tests such as Tajima's D, Fu and Li's D, and those based on the McDonald-Kreitman test: the Neutrality Index and the fraction of adaptive substitutions. For instance, use of 16 exomes produced a 2.4 times higher proportion of adaptive substitutions than that obtained using 512 exomes (24% vs. 10%).
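For reference, Watterson's estimator is θ_W = S / a_n with a_n = Σ_{i=1}^{n-1} 1/i, where S is the number of segregating sites and n the sample size. The sketch below computes it for two sample sizes; the segregating-site counts are invented for illustration, not the study's data.

```python
import numpy as np

def watterson_theta(num_segregating, n):
    """Watterson's estimator: S divided by the harmonic number a_n."""
    a_n = np.sum(1.0 / np.arange(1, n))     # sum of 1/i for i = 1 .. n-1
    return num_segregating / a_n

for n, s in [(16, 120), (512, 310)]:        # hypothetical exome counts
    print(f"n={n:4d}  S={s:4d}  theta_W={watterson_theta(s, n):.1f}")
```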
Breen, Michael S.; Long, Thomas C.; Schultz, Bradley D.; Crooks, James; Breen, Miyuki; Langstaff, John E.; Isaacs, Kristin K.; Tan, Yu-Mei; Williams, Ronald W.; Cao, Ye; Geller, Andrew M.; Devlin, Robert B.; Batterman, Stuart A.; Buckley, Timothy J.
2014-01-01
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time–location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies. PMID:24619294
Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings
Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan
2017-01-01
Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed a better decoding performance than conventional linear estimators. In addition, processing time delay satisfied real-time constraints. These results demonstrated that the proposed method could be applicable for providing real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
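The MAV feature itself is straightforward to compute; a sketch with an assumed window and step length (both are illustrative choices, not the paper's settings):

```python
import numpy as np

def mav(signal, window=100, step=20):
    """Mean absolute value of a 1-D multiunit trace in overlapping windows."""
    starts = np.arange(0, len(signal) - window + 1, step)
    return np.array([np.abs(signal[s:s + window]).mean() for s in starts])

rng = np.random.default_rng(2)
trace = rng.standard_normal(5000)           # stand-in for a recorded DRG trace
print(mav(trace)[:5])                       # feature sequence fed to the decoder
```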
Nonlinear Directed Interactions Between HRV and EEG Activity in Children With TLE.
Schiecke, Karin; Pester, Britta; Piper, Diana; Benninger, Franz; Feucht, Martha; Leistritz, Lutz; Witte, Herbert
2016-12-01
Epileptic seizure activity influences the autonomic nervous system (ANS) in different ways. Heart rate variability (HRV) is used as an indicator for alterations of the ANS. It has been shown that linear, nondirected interactions between HRV and EEG activity occur before, during, and after epileptic seizures. Accordingly, investigating directed nonlinear interactions is a logical step toward, e.g., deeper insight into the development of seizure onsets. Convergent cross mapping (CCM) investigates nonlinear, directed interactions between time series by using nonlinear state space reconstruction. CCM is applied to simulated and clinically relevant data, i.e., interactions between HRV and specific EEG components of children with temporal lobe epilepsy (TLE). In addition, time-variant multivariate autoregressive model (AR)-based estimation of partial directed coherence (PDC) was performed for the same data. The influence of estimation parameters and the time-varying behavior of CCM estimation could be demonstrated by means of simulated data. AR-based estimation of PDC failed for our clinical data. Time-varying, interval-based application of CCM to these data revealed directed interactions between HRV and delta-related EEG activity; interactions between HRV and alpha-related EEG activity were visible but less pronounced. EEG components mainly drive HRV, and the interaction pattern and directionality clearly change with seizure onset. Statistically relevant interactions were quantified by bootstrapping and a surrogate data approach. In contrast to AR-based estimation of PDC, CCM was able to reveal time courses and frequency-selective views of nonlinear interactions, furthering the understanding of complex interactions between the epileptic network and the ANS in children with TLE.
Is there a single best estimator? Selection of home range estimators using area-under-the-curve
Walter, W. David; Onorato, Dave P.; Fischer, Justin W.
2015-01-01
Comparisons of the fit of home range contours with the collected locations suggest that VHF technology is not as accurate as GPS technology for estimating home range size in large mammals. Home range estimators based on GPS data performed better than those based on VHF data regardless of the estimator used. Furthermore, estimators that incorporate a temporal component (third-generation estimators) appeared to be the most reliable, whether kernel-based or Brownian bridge-based algorithms were used, and in comparison to first- and second-generation estimators. We define third-generation home range estimators as any estimator that incorporates time, space, animal-specific parameters, and habitat. Such estimators would include movement-based kernel density, Brownian bridge movement models, and dynamic Brownian bridge movement models, among others that have yet to be evaluated.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency relative to using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, thereby achieving time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms.
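The full RB particle filter with a DPM delay model is beyond a short sketch, but the linear clock substructure it exploits can be illustrated on its own: a Kalman filter tracking offset and skew from noisy offset observations. All noise levels and the true clock parameters below are assumptions.

```python
import numpy as np

dt = 1.0                                    # synchronization interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])       # offset grows by skew * dt each step
H = np.array([[1.0, 0.0]])                  # only the offset is observed
Q = np.diag([1e-6, 1e-9])                   # process noise covariance
R = np.array([[1e-4]])                      # observation noise covariance

x = np.zeros(2)                             # state: [offset, skew]
P = np.eye(2)

rng = np.random.default_rng(3)
true_offset, true_skew = 0.01, 5e-5
for k in range(50):
    z = true_offset + true_skew * k * dt + rng.normal(0, 1e-2)
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)     # update
    P = (np.eye(2) - K @ H) @ P

print("estimated [offset, skew]:", x)
```

In the paper's scheme, this Kalman step runs inside each particle while the particle filter and DPM handle the non-Gaussian delay component.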
Comparison of estimators for rolling samples using Forest Inventory and Analysis data
Devin S. Johnson; Michael S. Williams; Raymond L. Czaplewski
2003-01-01
The performance of three classes of weighted average estimators is studied for an annual inventory design similar to the Forest Inventory and Analysis program of the United States. The first class is based on an ARIMA(0,1,1) time series model. The equal weight, simple moving average is a member of this class. The second class is based on an ARIMA(0,2,2) time series...
Elting, L S; Rubenstein, E B; Rolston, K; Cantor, S B; Martin, C G; Kurtin, D; Rodriguez, S; Lam, T; Kanesan, K; Bodey, G
2000-11-01
To determine whether antibiotic regimens with similar rates of response differ significantly in the speed of response and to estimate the impact of this difference on the cost of febrile neutropenia. The time point of clinical response was defined by comparing the sensitivity, specificity, and predictive values of alternative objective and subjective definitions. Data from 488 episodes of febrile neutropenia, treated with either of two commonly used antibiotics (coded A or B) during six clinical trials, were pooled to compare the median time to clinical response, days of antibiotic therapy and hospitalization, and estimated costs. Response rates were similar; however, the median time to clinical response was significantly shorter with A-based regimens (5 days) compared with B-based regimens (7 days; P =.003). After 72 hours of therapy, 33% of patients who received A but only 18% of those who received B had responded (P =.01). These differences resulted in fewer days of antibiotic therapy and hospitalization with A-based regimens (7 and 9 days) compared with B-based regimens (9 and 12 days, respectively; P <.04) and in significantly lower estimated median costs ($8,491 v $11,133 per episode; P =.03). Early discharge at the time of clinical response should reduce the median cost from $10,752 to $8,162 (P <.001). Despite virtually identical rates of response, time to clinical response and estimated cost of care varied significantly among regimens. An early discharge strategy based on our definition of the time point of clinical response may further reduce the cost of treating non-low-risk patients with febrile neutropenia.
Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust
NASA Astrophysics Data System (ADS)
Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.
2014-12-01
We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using an NIF approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. In addition, we are developing methodologies to combine our real-time solutions with those from Scripps Institute of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
Goff, M L; Win, B H
1997-11-01
The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.
A Coalescent-Based Estimator of Admixture From DNA Sequences
Wang, Jinliang
2006-01-01
A variety of estimators have been developed to use genetic marker information in inferring the admixture proportions (parental contributions) of a hybrid population. The majority of these estimators used allele frequency data, ignored molecular information that is available in markers such as microsatellites and DNA sequences, and assumed that mutations are absent since the admixture event. As a result, these estimators may fail to deliver an estimate or give rather poor estimates when admixture is ancient and thus mutations are not negligible. A previous molecular estimator based its inference of admixture proportions on the average coalescent times between pairs of genes taken from within and between populations. In this article I propose an estimator that considers the entire genealogy of all of the sampled genes and infers admixture proportions from the numbers of segregating sites in DNA sequence samples. By considering the genealogy of all sequences rather than pairs of sequences, this new estimator also allows the joint estimation of other interesting parameters in the admixture model, such as admixture time, divergence time, population size, and mutation rate. Comparative analyses of simulated data indicate that the new coalescent estimator generally yields better estimates of admixture proportions than the previous molecular estimator, especially when the parental populations are not highly differentiated. It also gives reasonably accurate estimates of other admixture parameters. A human mtDNA sequence data set was analyzed to demonstrate the method, and the analysis results are discussed and compared with those from previous studies. PMID:16624918
Choquette, Stéphane; Hamel, Mathieu; Boissy, Patrick
2008-01-01
Background It has been suggested that there is a dose-response relationship between the amount of therapy and functional recovery in post-acute rehabilitation care. To this day, only the total time of therapy has been investigated as a potential determinant of this dose-response relationship because of methodological and measurement challenges. The primary objective of this study was to compare time and motion measures during real life physical therapy with estimates of active time (i.e. the time during which a patient is active physically) obtained with a wireless body area network (WBAN) of 3D accelerometer modules positioned at the hip, wrist and ankle. The secondary objective was to assess the differences in estimates of active time when using a single accelerometer module positioned at the hip. Methods Five patients (77.4 ± 5.2 y) with 4 different admission diagnoses (stroke, lower limb fracture, amputation and immobilization syndrome) were recruited in a post-acute rehabilitation center and observed during their physical therapy sessions throughout their stay. Active time was recorded by a trained observer using a continuous time and motion analysis program running on a Tablet-PC. Two WBAN configurations were used: 1) three accelerometer modules located at the hip, wrist and ankle (M3) and 2) one accelerometer located at the hip (M1). Acceleration signals from the WBANs were synchronized with the observations. Estimates of active time were computed based on the temporal density of the acceleration signals. Results A total of 62 physical therapy sessions were observed. Strong associations were found between WBANs estimates of active time and time and motion measures of active time. For the combined sessions, the intraclass correlation coefficient (ICC) was 0.93 (P ≤ 0.001) for M3 and 0.79 (P ≤ 0.001) for M1. The mean percentage of differences between observation measures and estimates from the WBAN of active time was -8.7% ± 2.0% using data from M3 and -16.4% ± 10.4% using data from M1. Conclusion WBANs estimates of active time compare favorably with results from observation-based time and motion measures. While the investigation on the association between active time and outcomes of rehabilitation needs to be studied in a larger scale study, the use of an accelerometer-based WBAN to measure active time is a promising approach that offers a better overall precision than methods relying on work sampling. Depending on the accuracy needed, the use of a single accelerometer module positioned on the hip may still be an interesting alternative to using multiple modules. PMID:18764954
Journal: A Review of Some Tracer-Test Design Equations for ...
The necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-
NASA Astrophysics Data System (ADS)
Swann, A. L. S.; Koven, C.; Lombardozzi, D.; Bonan, G. B.
2017-12-01
Evapotranspiration (ET) is a critical term in the surface energy budget as well as the water cycle. There are few direct measurements of ET, so its magnitude and variability are poorly constrained at large spatial scales. Estimates of the annual cycle of ET over the Amazon are critical because they influence predictions of the seasonal cycle of carbon fluxes, as well as atmospheric dynamics and circulation. We estimate ET for the Amazon basin using a water budget approach, by differencing rainfall, discharge, and time-varying storage from the Gravity Recovery and Climate Experiment (GRACE). We find that the climatological annual cycle of ET over the Amazon basin upstream of Óbidos shows suppression of ET during the wet season and higher ET during the dry season, consistent with flux tower-based observations in seasonally dry forests. We also find a statistically significant trend in ET of -1.46 mm/yr over the period 2002-2015. Our direct estimate of the seasonal cycle of ET is largely consistent with previous indirect estimates, including energy budget-based approaches, an up-scaled station-based estimate, and land surface model estimates, but suggests that suppression of ET during the wet season is underestimated by existing products. We further quantify possible contributors to the phasing of the seasonal cycle and the downward time trend using land surface models.
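The underlying water-budget identity, ET = P − Q − dS/dt, is simple to apply once basin-average series are in hand; a sketch on invented monthly values (all in mm/month), with the storage series standing in for the GRACE anomaly:

```python
import numpy as np

# hypothetical basin-average monthly series, mm/month
precip = np.array([300, 280, 310, 250, 180, 120, 90, 80, 110, 170, 230, 280.])
runoff = np.array([120, 130, 140, 150, 140, 120, 100, 80, 70, 70, 80, 100.])
storage = np.cumsum([40, 20, 30, -20, -60, -80, -90, -70, -40, 10, 50, 60.])

dS = np.gradient(storage)               # month-to-month storage change
et = precip - runoff - dS               # water-budget residual = ET
print("monthly ET (mm):", et.round(0))
print("annual-mean ET (mm/month):", et.mean().round(1))
```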
Multifractals embedded in short time series: An unbiased estimation of probability moment
NASA Astrophysics Data System (ADS)
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method, called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it provides an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant performance advantages over the other methods: the factorial-moment-based estimation correctly evaluates scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation, while the estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Beyond the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order has various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform
Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong
2018-01-01
Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameters estimation based on adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. Using the PPS-ASTFT estimator, both one-dimensional and multi-dimensional searches and error propagation problems, which widely exist in PPSs field, are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by S-transform (ST), which can preserve information on signal phase and provide a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by the principal component analysis (PCA), which is robust to the noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm. PMID:29438317
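A much-simplified cousin of this idea can be sketched with a fixed-window STFT: estimate the instantaneous frequency (IF) from the per-frame spectral peak, then fit a polynomial to recover the phase-law coefficients. The adaptive window selection, PCA-based IFG, and refinement steps of the paper are omitted; the signal parameters are invented.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t**2))   # quadratic-phase PPS (chirp)

f, tau, Z = stft(x, fs=fs, nperseg=128)
if_est = f[np.abs(Z).argmax(axis=0)]            # peak frequency in each frame

coef = np.polyfit(tau, if_est, 1)               # IF of this PPS is linear in t
print("estimated IF slope, intercept:", coef)   # true values: 200 Hz/s, 50 Hz
```

The frequency-bin quantization visible here is exactly the kind of error the paper's adaptive window and refinement strategy are designed to reduce.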
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from acoustic backscattered data and the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets; accurate estimation of these time delays would therefore help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency, and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and is thus computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detecting multiple specular components in acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow water data are provided which show the promise of this new scheme for target detection in a heavily cluttered environment.
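As an illustration of the frequency-domain subspace step, the sketch below builds a sample covariance matrix from sliding snapshots of a noisy two-tone signal, splits signal and noise subspaces by eigendecomposition, and searches a MUSIC pseudospectrum. The model order and covariance size are assumptions, and the signal is synthetic.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
n, m, p = 512, 32, 2                    # samples, covariance size, # sinusoids
t = np.arange(n)
x = np.exp(2j * np.pi * 0.12 * t) + 0.7 * np.exp(2j * np.pi * 0.27 * t)
x += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

snaps = np.stack([x[i:i + m] for i in range(n - m + 1)])   # sliding snapshots
R = snaps.conj().T @ snaps / len(snaps)                    # sample covariance

w, v = np.linalg.eigh(R)                # eigenvalues in ascending order
En = v[:, : m - p]                      # noise subspace (smallest eigenvalues)

freqs = np.linspace(0, 0.5, 2000)
steer = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))
pseudo = 1.0 / np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0)

peaks, _ = find_peaks(pseudo)
best = np.sort(freqs[peaks[np.argsort(pseudo[peaks])[-p:]]])
print("estimated frequencies:", best.round(3))  # expect ~0.12 and ~0.27
```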
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term 'survival analysis' has been used in a broad sense to describe a collection of statistical procedures for data analysis where the outcome variable of interest is the time until an event occurs, and the time to failure of a specific experimental unit might be right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model via PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results indicated that the parametric Cox model was superior in terms of the estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to the estimation of the survival function.
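A minimal sketch of the midpoint-imputation step, paired with a simple exponential MLE as a stand-in for the paper's Cox-based fit (the censoring intervals below are invented for illustration):

```python
import numpy as np

# (left, right) censoring intervals; right=None means right-censored at left
intervals = [(2.0, 4.0), (1.0, 3.0), (5.0, None), (0.5, 2.5), (3.0, 6.0)]

times, events = [], []
for left, right in intervals:
    if right is None:                       # right-censored observation
        times.append(left); events.append(0)
    else:                                   # interval-censored -> use midpoint
        times.append((left + right) / 2); events.append(1)

times, events = np.array(times), np.array(events)
lam = events.sum() / times.sum()            # exponential MLE on imputed data
surv = lambda t: np.exp(-lam * t)           # estimated survival function
print(f"lambda = {lam:.3f}, S(3) = {surv(3.0):.3f}")
```

The other schemes in the paper differ only in how the imputed time is drawn from each interval (left or right endpoint, random draw, mean, or median).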
A revised timescale for human evolution based on ancient mitochondrial genomes
Johnson, Philip L.F.; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G.; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2016-01-01
Summary Background Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Result Here we use mitochondrial genome sequences from 10 securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) of less than 62,000-95,000 years ago. Conclusion Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population split times, they can provide valid upper bounds; our results exclude most of the older dates for African and non-African split times recently suggested by de novo mutation rate estimates in the nuclear genome. PMID:23523248
Application of wavelet-based multi-model Kalman filters to real-time flood forecasting
NASA Astrophysics Data System (ADS)
Chou, Chien-Ming; Wang, Ru-Yih
2004-04-01
This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF), in which the Kalman filter has been substituted for a WKF. The WMKF then obtains M estimated state vectors. Next, the M state-estimates, each of which is weighted by its possibility that is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
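A moment-based sketch of the same setting (BPSK matched-filter outputs y_k = ±A + n_k): the signal amplitude is estimated from E|y| and the noise power from the residual second moment. This first-moment shortcut only approximates the joint ML estimate, and only at high SNR; the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
A, sigma, n = 1.0, 0.5, 100_000
y = A * rng.choice([-1, 1], n) + sigma * rng.standard_normal(n)

a_hat = np.abs(y).mean()                    # ~A when the SNR is high
noise_var = (y**2).mean() - a_hat**2        # residual power attributed to noise
snr_db = 10 * np.log10(a_hat**2 / noise_var)
print(f"true SNR {10*np.log10(A**2/sigma**2):.2f} dB, estimate {snr_db:.2f} dB")
```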
Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
Austin, Peter C
2018-01-01
Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g. pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Amongst the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
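A sketch of the weighting variant for a continuous exposure: fit a normal exposure model, form stabilized weights from the ratio of marginal to conditional (GPS) densities, and run a weighted outcome regression. A linear outcome stands in for the survival model of the paper, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
x = rng.standard_normal((n, 2))                         # confounders
dose = x @ np.array([0.8, -0.5]) + rng.standard_normal(n)
y = 1.5 * dose + x @ np.array([1.0, 1.0]) + rng.standard_normal(n)

# stage 1: normal exposure model -> GPS density at the observed dose
X1 = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X1, dose, rcond=None)[0]
resid = dose - X1 @ beta
s2 = resid.var()
gps = np.exp(-resid**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# stabilized weights: marginal dose density over the conditional (GPS) density
mu, v = dose.mean(), dose.var()
marginal = np.exp(-(dose - mu)**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
w = marginal / gps

# stage 2: weighted outcome regression (marginal structural model)
X2 = np.column_stack([np.ones(n), dose])
sw = np.sqrt(w)[:, None]
coef = np.linalg.lstsq(X2 * sw, y * sw.ravel(), rcond=None)[0]
print("weighted dose effect:", round(coef[1], 2))       # close to the true 1.5
```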
NASA Astrophysics Data System (ADS)
Song, X. P.; Potapov, P.; Adusei, B.; King, L.; Khan, A.; Krylov, A.; Di Bella, C. M.; Pickens, A. H.; Stehman, S. V.; Hansen, M.
2016-12-01
Reliable and timely information on agricultural production is essential for ensuring world food security. Freely available medium-resolution satellite data (e.g. Landsat, Sentinel) offer the possibility of improved global agriculture monitoring. Here we develop and test a method for estimating in-season crop acreage using a probability sample of field visits and producing wall-to-wall crop type maps at national scales. The method is first illustrated for soybean cultivated area in the US for 2015. A stratified, two-stage cluster sampling design was used to collect field data to estimate national soybean area. The field-based estimate employed historical soybean extent maps from the U.S. Department of Agriculture (USDA) Cropland Data Layer to delineate and stratify U.S. soybean growing regions. The estimated 2015 U.S. soybean cultivated area based on the field sample was 341,000 km2 with a standard error of 23,000 km2. This result is 1.0% lower than USDA's 2015 June survey estimate and 1.9% higher than USDA's 2016 January estimate. Our area estimate was derived in early September, about 2 months ahead of harvest. To map soybean cover, the Landsat image archive for the year 2015 growing season was processed using an active learning approach. Overall accuracy of the soybean map was 84%. The field-based sample estimated area was then used to calibrate the map such that the soybean acreage of the map derived through pixel counting matched the sample-based area estimate. The strength of the sample-based area estimation lies in the stratified design that takes advantage of the spatially explicit cropland layers to construct the strata. The success of the mapping was built upon an automated system which transforms Landsat images into standardized time-series metrics. The developed method produces reliable and timely information on soybean area in a cost-effective way and could be implemented in an operational mode. The approach has also been applied for other crops in other regions, such as winter wheat in Pakistan, soybean in Argentina and soybean in the entire South America. Similar levels of accuracy and timeliness were achieved as in the US.
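The design-based arithmetic behind such a sample estimate is the textbook stratified formula for a total and its standard error; a sketch with invented strata (the real design adds a second, cluster stage):

```python
import numpy as np

# per stratum: N_h population units and the sampled values y (km2 per unit)
strata = {
    "core":     (5000, np.array([60.0, 55.0, 70.0, 65.0, 58.0])),
    "marginal": (8000, np.array([5.0, 8.0, 0.0, 12.0, 6.0])),
}

total, var = 0.0, 0.0
for N_h, y in strata.values():
    n_h = len(y)
    total += N_h * y.mean()                             # expansion estimate
    var += N_h**2 * (1 - n_h / N_h) * y.var(ddof=1) / n_h   # with FPC

print(f"estimated total: {total:,.0f} km2, SE: {np.sqrt(var):,.0f} km2")
```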
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
Method of Enhancing On-Board State Estimation Using Communication Signals
NASA Technical Reports Server (NTRS)
Anzalone, Evan J. (Inventor); Chuang, Jason C. H. (Inventor)
2015-01-01
A method of enhancing on-board state estimation for a spacecraft utilizes a network of assets to include planetary-based assets and space-based assets. Communication signals transmitted from each of the assets into space are defined by a common protocol. Data is embedded in each communication signal transmitted by the assets. The data includes a time-of-transmission for a corresponding one of the communication signals and a position of a corresponding one of the assets at the time-of-transmission. A spacecraft is equipped to receive the communication signals, has a clock synchronized to the space-wide time reference frame, and has a processor programmed to generate state estimates of the spacecraft. Using its processor, the spacecraft determines a one-dimensional range from itself to at least one of the assets and then updates its state estimates using each one-dimensional range.
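The core one-way ranging computation implied here is simple: the embedded data supply the asset's time-of-transmission, and the spacecraft's synchronized clock supplies the time-of-receipt, so each signal yields a one-dimensional range. A minimal sketch:

```python
C = 299_792_458.0                           # speed of light, m/s

def one_way_range(t_transmit, t_receive):
    """Range (m) from synchronized transmit/receive timestamps."""
    return C * (t_receive - t_transmit)

# e.g. a signal in flight for ~0.12 s corresponds to ~36,000 km
print(one_way_range(100.00, 100.12) / 1e3, "km")
```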
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.
Yao, Yu; Zhao, Junhui; Wu, Lenan
2018-05-29
A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, and to minimize the mean square error (MSE) of TIR estimation. The waveform design approach is based upon continual learning of the target features at the receiver. Under the multiple-antenna scenario, a dynamic feedback control loop is established to monitor, in real time, changes in the target features extracted from received signals, and the transmitter adapts its transmitted waveform to suit the time-varying environment. Finally, the simulation results show that, as compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm is able to improve the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have made time-series gene expression data widely available, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. Focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods, and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements increase substantially as the search space grows. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach, which augments computationally fast methods with local search as a subsequent refinement procedure, can substantially increase the quality of their parameter estimates to a level on par with the best solutions obtained from the population-based methods while maintaining high computational speed. This suggests that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space very large.
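The hybrid strategy is easy to sketch: a cheap global search supplies a rough estimate that a local optimizer then refines. The toy one-equation "gene circuit" below (a Hill-repression ODE with synthetic data) is an assumption for illustration only, not one of the models benchmarked in the paper.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution, minimize

t = np.linspace(0, 10, 25)

def model(theta):
    k_syn, k_deg, K = theta
    def rhs(x, _t):
        return k_syn / (1.0 + (x / K) ** 2) - k_deg * x   # Hill repression
    return odeint(rhs, 0.1, t).ravel()

rng = np.random.default_rng(0)
data = model([2.0, 0.5, 1.0]) + rng.normal(0, 0.05, t.size)

sse = lambda theta: np.sum((model(theta) - data) ** 2)
bounds = [(0.1, 5.0), (0.1, 2.0), (0.1, 5.0)]

# Stage 1: coarse global search (few iterations, cheap).
rough = differential_evolution(sse, bounds, maxiter=20, seed=1).x
# Stage 2: local refinement starting from the rough solution.
refined = minimize(sse, rough, method="Nelder-Mead").x
print("rough:", rough, "refined:", refined)
```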
Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.
1997-01-01
Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.
Inertial sensor-based smoother for gait analysis.
Suh, Young Soo
2014-12-17
An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimates than filter-based algorithms because it uses the entire sensor data record rather than only the data available up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Experiments show that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) while the foot is in the air is 0.0807 m2 for the Kalman filter versus 0.0020 m2 for the proposed smoother.
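A stripped-down version of the smoothing step might look like the sketch below: a quadratic objective trading fidelity to the filter output against a smoothness prior, solved through a sparse banded system. The 1-D setting, the second-difference prior, and the weight are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n = 500
kf_estimate = np.cumsum(np.random.default_rng(0).normal(0, 0.01, n))  # stand-in

# Second-difference operator D encodes the smoothness prior.
D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
lam = 50.0  # smoothness weight (assumed)

# Minimize ||x - kf_estimate||^2 + lam * ||D x||^2
# => (I + lam * D^T D) x = kf_estimate, a sparse banded solve.
A = sparse.eye(n) + lam * (D.T @ D)
smoothed = spsolve(A.tocsc(), kf_estimate)
```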
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NOx prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the post-processing model for prediction of thermal NOx. The model uses a decoupled analysis to estimate the equilibrium level (NOx)e, which is the constant-rate limit. This value is used to estimate the flame NOx and, in turn, to predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NOx production rate plot by estimating the time to reach equilibrium through a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NOx level.
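As a rough illustration of the rate step, the sketch below integrates a Zeldovich-limited NO production rate over a nodal residence time. The Arrhenius constant is the commonly cited value for the O + N2 -> NO + N step; the temperature, concentrations, and equilibrium cap are placeholders, not values from the model above.

```python
import numpy as np

def no_rate(T, O, N2):
    """d[NO]/dt ~ 2 k1 [O][N2] (mol/cm^3/s), extended Zeldovich first step."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)   # cm^3/mol/s (commonly cited value)
    return 2.0 * k1 * O * N2

T, O, N2 = 2100.0, 1e-10, 7e-6          # K, mol/cm^3 (placeholder node state)
no_eq = 4.0e-9                           # equilibrium [NO] cap (placeholder)

dt, t_res, no = 1e-4, 0.05, 0.0
for _ in range(int(t_res / dt)):         # integrate over nodal residence time
    no = min(no + no_rate(T, O, N2) * dt, no_eq)
print(f"[NO] after {t_res*1e3:.0f} ms residence: {no:.2e} mol/cm^3")
```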
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model-data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF both with a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency possibly related to vapor pressure effects on canopy conductance.
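The core EnKF update, including the cross-covariance that lets an unobserved variable such as leaf area move when NEE is measured, can be sketched in a few lines; the two-variable state and all numbers below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ens = 100
# Ensemble of model states: rows are members, columns are (NEE, leaf_area).
X = rng.normal([-2.0, 1.0], [0.5, 0.2], size=(n_ens, 2))

y_obs, r_var = -2.6, 0.3**2          # NEE measurement and its error variance
H = np.array([1.0, 0.0])             # we observe NEE only

Xm = X.mean(axis=0)
A = X - Xm                           # ensemble anomalies
P_hh = (A @ H) @ (A @ H) / (n_ens - 1) + r_var
P_xh = A.T @ (A @ H) / (n_ens - 1)   # cross-covariance: state vs. observed
K = P_xh / P_hh                      # Kalman gain (2-vector)

# Perturbed-observation update: leaf area moves too, via the covariance.
y_pert = y_obs + rng.normal(0, np.sqrt(r_var), n_ens)
X_upd = X + np.outer(y_pert - X @ H, K)
print("prior mean:", Xm, "posterior mean:", X_upd.mean(axis=0))
```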
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include: (1) a Wittrick-Williams based root-solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
How does spatial and temporal resolution of vegetation index impact crop yield estimation?
USDA-ARS's Scientific Manuscript database
Timely and accurate estimation of crop yield before harvest is critical for food markets and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light use efficiency model to estimate crop yield. Vegetation index (VI) ...
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
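One building block of Kalman-style multicamera fusion is the covariance-weighted (minimum-variance) combination of per-camera estimates, sketched below for a translation-only pose with assumed covariances; the paper's full filter is more elaborate.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Minimum-variance combination of two independent estimates."""
    W1, W2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(W1 + W2)
    return P @ (W1 @ x1 + W2 @ x2), P

x_cam1 = np.array([0.52, 0.10, 1.98])       # pose (translation) from camera 1
P_cam1 = np.diag([0.01, 0.01, 0.04])
x_cam2 = np.array([0.49, 0.12, 2.05])       # camera 2, noisier in x/y
P_cam2 = np.diag([0.04, 0.04, 0.02])

x_fused, P_fused = fuse(x_cam1, P_cam1, x_cam2, P_cam2)
print("fused pose:", x_fused)
```

Each axis of the fused pose is pulled toward whichever camera is more certain about that axis, which is why the scheme degrades gracefully when one sensor is occluded or noisy.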
Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation
NASA Astrophysics Data System (ADS)
Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.
2016-12-01
We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
Sanford, Ward E.; Nelms, David L.; Pope, Jason P.; Selnick, David L.
2015-01-01
Mean long-term hydrologic budget components, such as recharge and base flow, are often difficult to estimate because they can vary substantially in space and time. Mean long-term fluxes were calculated in this study for precipitation, surface runoff, infiltration, total evapotranspiration (ET), riparian ET, recharge, base flow (or groundwater discharge) and net total outflow using long-term estimates of mean ET and precipitation and the assumption that the relative change in storage over that 30-year period is small compared to the total ET or precipitation. Fluxes of these components were first estimated on a number of real-time-gaged watersheds across Virginia. Specific conductance was used to distinguish and separate surface runoff from base flow. Specific-conductance (SC) data were collected every 15 minutes at 75 real-time gages for approximately 18 months between March 2007 and August 2008. Precipitation was estimated for 1971-2000 using PRISM climate data. Precipitation and temperature from the PRISM data were used to develop a regression-based relation to estimate total ET. The proportion of watershed precipitation that becomes surface runoff was related to physiographic province and rock type in a runoff regression equation. A new approach to estimate riparian ET using seasonal SC data gave results consistent with those from other methods. Component flux estimates from the watersheds were transferred to flux estimates for counties and independent cities using the ET and runoff regression equations. Only 48 of the 75 watersheds yielded sufficient data, and data from these 48 were used in the final runoff regression equation. Final results for the study are presented as component flux estimates for all counties and independent cities in Virginia. The method has the potential to be applied in many other states in the U.S. or in other regions or countries of the world where climate and stream flow data are plentiful.
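The specific-conductance separation rests on a two-component mass balance: base flow carries a high-SC end-member and surface runoff a low one. A minimal sketch, with invented end-member values:

```python
import numpy as np

q = np.array([10.0, 14.0, 35.0, 28.0, 16.0, 11.0])     # streamflow, m^3/s
sc = np.array([250., 230., 120., 150., 205., 240.])    # measured SC, uS/cm
sc_bf, sc_ro = 260.0, 40.0                             # end-members (assumed)

# Mass balance: Q*SC = Qbf*SCbf + Qro*SCro  =>  base flow fraction:
frac_bf = np.clip((sc - sc_ro) / (sc_bf - sc_ro), 0, 1)
q_baseflow = frac_bf * q
q_runoff = q - q_baseflow
print("base flow fraction of total flow:",
      round(q_baseflow.sum() / q.sum(), 2))
```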
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, where the estimates are based on data acquired during overlapping or disjoint intervals. This relative error covariance provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Adaptive and Personalized Plasma Insulin Concentration Estimation for Artificial Pancreas Systems.
Hajizadeh, Iman; Rashid, Mudassir; Samadi, Sediqeh; Feng, Jianyuan; Sevil, Mert; Hobbs, Nicole; Lazaro, Caterina; Maloney, Zacharie; Brandt, Rachel; Yu, Xia; Turksoy, Kamuran; Littlejohn, Elizabeth; Cengiz, Eda; Cinar, Ali
2018-05-01
The artificial pancreas (AP) system, a technology that automatically administers exogenous insulin in people with type 1 diabetes mellitus (T1DM) to regulate their blood glucose concentrations, necessitates the estimation of the amount of active insulin already present in the body to avoid overdosing. An adaptive and personalized plasma insulin concentration (PIC) estimator is designed in this work to accurately quantify the insulin present in the bloodstream. The proposed PIC estimation approach incorporates Hovorka's glucose-insulin model with the unscented Kalman filtering algorithm. Methods for the personalized initialization of the time-varying model parameters to individual patients are developed for improved estimator convergence. Data from 20 three-day-long closed-loop clinical experiments involving subjects with T1DM are used to evaluate the proposed PIC estimation approach. The proposed methods are applied to the clinical data containing significant disturbances, such as unannounced meals and exercise, and the results demonstrate accurate real-time estimation of the PIC, with root mean square errors of 7.15 and 9.25 mU/L for the optimization-based fitted parameters and the partial least squares regression-based testing parameters, respectively. The accurate real-time estimation of PIC will benefit AP systems by preventing overdelivery of insulin when significant insulin is present in the bloodstream.
Ding, Xiaorong; Zhang, Yuanting; Tsang, Hon Ki
2016-02-01
Continuous blood pressure (BP) measurement without a cuff is advantageous for the early detection and prevention of hypertension. The pulse transit time (PTT) method has proven to be promising for continuous cuffless BP measurement. However, the problem of accuracy is one of the most challenging aspects before the large-scale clinical application of this method. Since PTT-based BP estimation relies primarily on the relationship between PTT and BP under certain assumptions, estimation accuracy will be affected by cardiovascular disorders that impair this relationship and by the calibration frequency, which may violate these assumptions. This study sought to examine the impact of heart disease and the calibration interval on the accuracy of PTT-based BP estimation. The accuracy of a PTT-BP algorithm was investigated in 37 healthy subjects and 48 patients with heart disease at different calibration intervals, namely 15 min, 2 weeks, and 1 month after initial calibration. The results showed that the overall accuracy of systolic BP estimation was significantly lower in subjects with heart disease than in healthy subjects, but diastolic BP estimation was more accurate in patients than in healthy subjects. The accuracy of systolic and diastolic BP estimation becomes less reliable with longer calibration intervals. These findings demonstrate that both heart disease and the calibration interval can influence the accuracy of PTT-based BP estimation and should be taken into consideration to improve estimation accuracy.
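A commonly used per-subject calibration form for PTT-based BP estimation is BP = a·ln(PTT) + b, fitted at calibration time; it is shown below only to make the calibration-interval discussion concrete, with invented calibration pairs (the study's exact algorithm is not given in the abstract).

```python
import numpy as np

# Calibration pairs (PTT in ms, systolic BP in mmHg) - illustrative values.
ptt_cal = np.array([180.0, 200.0, 220.0, 250.0])
sbp_cal = np.array([135.0, 126.0, 119.0, 110.0])

a, b = np.polyfit(np.log(ptt_cal), sbp_cal, 1)   # fit SBP = a*ln(PTT) + b

def estimate_sbp(ptt_ms):
    return a * np.log(ptt_ms) + b

print(f"SBP at PTT=210 ms: {estimate_sbp(210.0):.1f} mmHg")
```

Because a and b drift with vascular state, the fit degrades as the time since calibration grows, which is the effect the study quantifies.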
NASA Astrophysics Data System (ADS)
Nagol, J. R.; Chung, C.; Dempewolf, J.; Maurice, S.; Mbungu, W.; Tumbo, S.
2015-12-01
Timely mapping and monitoring of crops like maize, an important food security crop in Tanzania, can facilitate timely response by government and non-government organizations to food shortage or surplus conditions. Small UAVs can play an important role in linking spaceborne remote sensing data and ground-based measurements to improve the calibration and validation of satellite-based estimates of in-season crop metrics. In Tanzania, much of the growing season is often obscured by clouds. UAV data, if collected within a stratified statistical sampling framework, can also be used directly, in lieu of spaceborne data, to infer mid-season yield estimates at regional scales. Here we present an object-based approach to estimate crop metrics such as crop type, area, and height using multi-temporal UAV imagery. The methods were tested at three 1 km2 plots in the Kilosa, Njombe, and Same districts of Tanzania. At these sites, both ground-based and UAV-based data were collected at a monthly time-step during the 2015 growing season. A senseFly eBee drone with RGB and NIR-R-G cameras was used to collect the imagery. Crop type classification accuracies above 85% were easily achieved.
A diagnostic model to estimate winds and small-scale drag from Mars Observer PMIRR data
NASA Technical Reports Server (NTRS)
Barnes, J. R.
1993-01-01
Theoretical and modeling studies indicate that small-scale drag due to breaking gravity waves is likely to be of considerable importance for the circulation in the middle atmospheric region (approximately 40-100 km altitude) on Mars. Recent earth-based spectroscopic observations have provided evidence for the existence of circulation features, in particular, a warm winter polar region, associated with gravity wave drag. Since the Mars Observer PMIRR experiment will obtain temperature profiles extending from the surface up to about 80 km altitude, it will be extensively sampling middle atmospheric regions in which gravity wave drag may play a dominant role. Estimating the drag then becomes crucial to the estimation of the atmospheric winds from the PMIRR-observed temperatures. An iterative diagnostic model based upon one previously developed and tested with earth satellite temperature data will be applied to the PMIRR measurements to produce estimates of the small-scale zonal drag and three-dimensional wind fields in the Mars middle atmosphere. This model is based on the primitive equations, and can allow for time dependence (the time tendencies used may be based upon those computed in a Fast Fourier Mapping procedure). The small-scale zonal drag is estimated as the residual in the zonal momentum equation, the horizontal winds having first been estimated from the meridional momentum equation and the continuity equation. The scheme estimates the vertical motions from the thermodynamic equation, and thus needs estimates of the diabatic heating based upon the observed temperatures. The latter will be generated using a radiative model. It is hoped that the diagnostic scheme will be able to produce good estimates of the zonal gravity wave drag in the Mars middle atmosphere, estimates that can then be used in other diagnostic or assimilation efforts, as well as more theoretical studies.
Accuracy and Precision of USNO GPS Carrier-Phase Time Transfer
2010-01-01
... values. Comparison measures used include estimates obtained from two-way satellite time/frequency transfer (TWSTFT) and GPS-based estimates obtained ... the IGS are used as a benchmark in the computation. Frequency values have a few times 10^-15 fractional frequency uncertainty. TWSTFT values confirm ... obtained from two-way satellite time/frequency transfer (TWSTFT), BIPM Circular T, and the International GNSS Service (IGS). At present, it is known that ...
Lee, Chi Hyun; Luo, Xianghua; Huang, Chiung-Yu; DeFor, Todd E; Brunstein, Claudio G; Weisdorf, Daniel J
2016-06-01
Infection is one of the most common complications after hematopoietic cell transplantation. Many patients experience infectious complications repeatedly after transplant. Existing statistical methods for recurrent gap time data typically assume that patients are enrolled due to the occurrence of an event of interest, and subsequently experience recurrent events of the same type; moreover, for one-sample estimation, the gap times between consecutive events are usually assumed to be identically distributed. Applying these methods to analyze the post-transplant infection data will inevitably lead to incorrect inferential results because the time from transplant to the first infection has a different biological meaning than the gap times between consecutive recurrent infections. Some unbiased yet inefficient methods include univariate survival analysis methods based on data from the first infection or bivariate serial event data methods based on the first and second infections. In this article, we propose a nonparametric estimator of the joint distribution of time from transplant to the first infection and the gap times between consecutive infections. The proposed estimator takes into account the potentially different distributions of the two types of gap times and better uses the recurrent infection data. Asymptotic properties of the proposed estimators are established.
Cycle-time equations for five small tractors operating in low-volume small-diameter hardwood stands
Chris B. LeDoux; Neil K. Huyler
1992-01-01
Prediction equations for estimating cycle time were developed for five small tractors studied under various silvicultural treatments and operating conditions. The tractors studied included the Pasquali 933, a Holder A60F, a Forest Ant Forwarder (Skogsman), a Massey-Ferguson, and a Sam4 Minitarus. Skidding costs were estimated based on the cycle-time equations. Using...
Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements
NASA Astrophysics Data System (ADS)
Jakub, Thomas D.
Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution dictated by the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution, and more timely estimates. In this work, the use of AIS to compute ocean currents is demonstrated. A corresponding error and sensitivity analysis is performed to help identify the conditions under which errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high-frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long, compared to the 12- to 24-hour tracks normally used. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements, when combined with the ship drift technique, can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
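The underlying calculation is simple: the current is the residual between the ship's observed displacement and its dead-reckoned motion through the water over the report interval. A sketch with invented AIS reports, using a local flat x/y frame:

```python
import numpy as np

def drift_current(p0, p1, heading_deg, stw_ms, dt_s):
    """Estimate the ocean current vector (m/s) between two AIS reports."""
    observed = (np.asarray(p1) - np.asarray(p0)) / dt_s
    theta = np.radians(heading_deg)
    through_water = stw_ms * np.array([np.sin(theta), np.cos(theta)])
    return observed - through_water        # residual motion = current

# Two AIS reports 60 s apart; heading 045 deg, 5 m/s speed through water.
current = drift_current([0.0, 0.0], [250.0, 230.0], 45.0, 5.0, 60.0)
print("current (east, north) m/s:", np.round(current, 2))
```

Short report intervals shrink the displacement being differenced, which is why the error analysis above ties current accuracy so closely to position and time accuracy.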
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is derived. This approach transforms the MIMO-OFDM channel estimation problem into a set of simpler single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and achieves nearly optimal performance.
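The per-antenna building block that such a pilot design reduces the problem to is ordinary LS estimation at pilot tones plus interpolation, sketched below for one OFDM symbol with an invented channel and comb pilot layout.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sc = 64                                   # subcarriers
h_time = rng.normal(size=4) + 1j * rng.normal(size=4)
H_true = np.fft.fft(h_time, n_sc)           # true frequency-domain channel

pilots = np.arange(0, n_sc, 8)              # comb-type pilot positions
x_p = np.ones(pilots.size)                  # known pilot symbols
noise = 0.05 * (rng.normal(size=pilots.size) + 1j * rng.normal(size=pilots.size))
y_p = H_true[pilots] * x_p + noise          # received pilots

H_ls = y_p / x_p                            # LS estimate at pilot tones
# Interpolate real/imag parts to all subcarriers.
k = np.arange(n_sc)
H_hat = np.interp(k, pilots, H_ls.real) + 1j * np.interp(k, pilots, H_ls.imag)
print("mean squared error:", np.mean(np.abs(H_hat - H_true) ** 2))
```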
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery
2016-10-01
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm estimates the orientation of a specific known 3D object based on the object's 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere; the viewpoints are distributed following the geosphere principle. The gathered training image set is used to calculate descriptors, which are used in the estimation stage of the algorithm. The estimation stage focuses on matching an observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
Estimating Agricultural Nitrous Oxide Emissions
USDA-ARS's Scientific Manuscript database
Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...
A new, long-term daily satellite-based rainfall dataset for operational monitoring in Africa
NASA Astrophysics Data System (ADS)
Maidment, Ross I.; Grimes, David; Black, Emily; Tarnavsky, Elena; Young, Matthew; Greatrex, Helen; Allan, Richard P.; Stein, Thorwald; Nkonde, Edson; Senkunda, Samuel; Alcántara, Edgar Misael Uribe
2017-05-01
Rainfall information is essential for many applications in developing countries, and yet continually updated information at fine temporal and spatial scales is lacking. In Africa, rainfall monitoring is particularly important given the close relationship between climate and livelihoods. To address this information gap, this paper describes two versions (v2.0 and v3.0) of the TAMSAT daily rainfall dataset based on high-resolution thermal-infrared observations, available from 1983 to the present. The datasets are based on the disaggregation of 10-day (v2.0) and 5-day (v3.0) total TAMSAT rainfall estimates to a daily time-step using daily cold cloud duration. This approach provides temporally consistent historic and near-real-time daily rainfall information for all of Africa. The estimates have been evaluated using ground-based observations from five countries with contrasting rainfall climates (Mozambique, Niger, Nigeria, Uganda, and Zambia) and compared to other satellite-based rainfall estimates. The results indicate that both versions of the TAMSAT daily estimates reliably detect rainy days but have less skill in capturing rainfall amount; these results are comparable to the other datasets.
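The disaggregation step is a proportional split: each pentad (or dekad) total is spread across days in proportion to daily cold cloud duration. A minimal sketch with invented numbers:

```python
import numpy as np

pentad_total_mm = 42.0
daily_ccd_hours = np.array([0.0, 6.5, 12.0, 3.0, 0.5])   # per-day CCD

weights = (daily_ccd_hours / daily_ccd_hours.sum()
           if daily_ccd_hours.sum() > 0
           else np.full(5, 0.2))            # no cloud signal: spread evenly
daily_rain_mm = pentad_total_mm * weights
print("daily rainfall (mm):", np.round(daily_rain_mm, 1))
```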
Li, Zhan; Guiraud, David; Andreu, David; Benoussaad, Mourad; Fattal, Charles; Hayashibe, Mitsuhiro
2016-06-22
Functional electrical stimulation (FES) is a neuroprosthetic technique for restoring lost motor function of spinal cord injured (SCI) patients and motor-impaired subjects by delivering short electrical pulses to their paralyzed muscles or motor nerves. FES induces action potentials in the muscles or nerves, so that muscle activity is characterized by the synchronous recruitment of motor units; the resulting compound electromyography (EMG) signal is called the M-wave. The recorded evoked EMG (eEMG) can be employed to predict the resultant joint torque, and modeling of FES-induced joint torque based on eEMG is an essential step toward providing the necessary prediction of the expected muscle response before achieving accurate joint torque control by FES. Previous work on FES-induced torque tracking was mainly based on offline analysis. However, for personalized clinical rehabilitation applications, real-time FES systems are essential, given the subject-specific muscle responses to electrical stimulation. This paper proposes a wireless portable stimulator used for estimating and predicting joint torque based on real-time processing of eEMG. A Kalman filter and a recurrent neural network (RNN) are embedded into the real-time FES system for identification and estimation. Prediction results on 3 able-bodied subjects and 3 SCI patients demonstrate promising performance. As estimators, both the Kalman filter and the RNN show clinically feasible results for estimation and prediction of joint torque from eEMG signals alone; moreover, the RNN requires less computation. The proposed real-time FES system establishes a platform for estimating and assessing the mechanical output, the electromyographic recordings, and associated models. It will contribute to opening a new modality for personalized portable neuroprosthetic control toward consolidated personal healthcare for motor-impaired patients.
Estimating equations estimates of trends
Link, W.A.; Sauer, J.R.
1994-01-01
The North American Breeding Bird Survey monitors changes in bird populations through time using annual counts at fixed survey sites. The usual method of estimating trends has been to use the logarithm of the counts in a regression analysis. It is contended that this procedure is reasonably satisfactory for more abundant species, but produces biased estimates for less abundant species. An alternative estimation procedure based on estimating equations is presented.
A Preliminary Examination of the Second Generation CMORPH Real-time Production
NASA Astrophysics Data System (ADS)
Joyce, R.; Xie, P.; Wu, S.
2017-12-01
The second generation CMORPH (CMORPH2) has started test real-time production of 30-minute precipitation estimates on a 0.05° lat/lon grid over the entire globe, from pole to pole. The CMORPH2 is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) and LEO platforms, and precipitation simulations from the NCEP operational global forecast system (GFS). Inputs from the various sources are first inter-calibrated to ensure quantitative consistencies in representing precipitation events of different intensities through PDF calibration against a common reference standard. The inter-calibrated PMW retrievals and IR-based precipitation estimates are then propagated from their respective observation times to the target analysis time along the motion vectors of the precipitating clouds. Motion vectors are first derived separately from the satellite IR based precipitation estimates and the GFS precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. The propagated PMW and IR based precipitation estimates are finally integrated into a single field of global precipitation through the Kalman Filter framework. A set of procedures has been established to examine the performance of the CMORPH2 real-time production. CMORPH2 satellite precipitation estimates are compared against the CPC daily gauge analysis, Stage IV radar precipitation over the CONUS, and numerical model forecasts to discover potential shortcomings and quantify improvements against the first generation CMORPH. Special attention has been focused on the CMORPH2 behavior over high-latitude areas beyond the coverage of the first generation CMORPH. Detailed results will be reported at the AGU.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rucci, A.; Vasco, D.W.; Novali, F.
2010-04-01
Deformation in the overburden proves useful in deducing spatial and temporal changes in the volume of a producing reservoir. Based upon these changes we estimate diffusive travel times associated with the transient flow due to production and then, as the solution of a linear inverse problem, the effective permeability of the reservoir. An advantage of an approach based upon travel times, as opposed to one based upon the amplitude of surface deformation, is that it is much less sensitive to the exact geomechanical properties of the reservoir and overburden. Inequalities constrain the inversion, under the assumption that fluid production only results in pore volume decreases within the reservoir. We apply the formulation to satellite-based estimates of deformation in the material overlying a thin gas production zone at the Krechba field in Algeria. The peak displacement after three years of gas production is approximately 0.5 cm, overlying the eastern margin of the anticlinal structure defining the gas field. Using data from 15 irregularly-spaced images of range change, we calculate the diffusive travel times associated with the startup of a gas production well. The inequality constraints are incorporated into the estimates of model parameter resolution and covariance, improving the resolution by roughly 30 to 40%.
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. In this kind of inference, uncertainty in the times at which the measurements were taken is often neglected, but especially in applications from the life sciences, this kind of error can considerably influence the estimation results. As an example, in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is beginning to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by themselves not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduces mixed effects into the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative maximum likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimates. In the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of the estimated parameters compared with the standard particle filter, but in addition, the MTU algorithm proves to be less prone to degeneracy than the standard particle filter. PMID:23331521
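The MTU idea can be sketched with a toy bootstrap particle filter in which every particle also samples the measurement time; the random-walk model, noise levels, and data below are invented, and the published algorithm adds adaptive stepsizes and further machinery.

```python
import numpy as np

rng = np.random.default_rng(7)
n_part, q_var, r_var, t_sd = 500, 0.05, 0.2, 0.3

def propagate(x, dt):
    """Random-walk dynamics: state variance grows with elapsed time."""
    return x + rng.normal(0, np.sqrt(q_var * dt), x.shape)

particles = rng.normal(0.0, 1.0, n_part)
t_particles = np.zeros(n_part)               # each particle's current time

nominal_times = [1.0, 2.0, 3.0, 4.0]
observations = [0.3, 0.5, 1.1, 0.9]          # synthetic measurements

for t_nom, y in zip(nominal_times, observations):
    # Sample an actual measurement time per particle (the MTU assumption),
    # never moving backwards in time.
    t_meas = np.maximum(t_particles, rng.normal(t_nom, t_sd, n_part))
    particles = propagate(particles, t_meas - t_particles)
    t_particles = t_meas
    # Weight by measurement likelihood and resample (bootstrap step).
    w = np.exp(-0.5 * (y - particles) ** 2 / r_var)
    w /= w.sum()
    idx = rng.choice(n_part, n_part, p=w)
    particles, t_particles = particles[idx], t_particles[idx]
    print(f"t~{t_nom:.1f}: state estimate {particles.mean():+.2f}")
```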
Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.
Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre
2018-03-01
We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty.
Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L
2011-09-15
Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.
Borghese, Michael M; Janssen, Ian
2018-03-22
Children participate in four main types of physical activity: organized sport, active travel, outdoor active play, and curriculum-based physical activity. The objective of this study was to develop a valid approach that can be used to concurrently measure time spent in each of these types of physical activity. Two samples (sample 1: n = 50; sample 2: n = 83) of children aged 10-13 wore an accelerometer and a GPS watch continuously over 7 days. They also completed a log where they recorded the start and end times of organized sport sessions. Sample 1 also completed an outdoor time log where they recorded the times they went outdoors and a description of the outdoor activity. Sample 2 also completed a curriculum log where they recorded times they participated in physical activity (e.g., physical education) during class time. We describe the development of a measurement approach that can be used to concurrently assess the time children spend participating in specific types of physical activity. The approach uses a combination of data from accelerometers, GPS, and activity logs and relies on merging and then processing these data using several manual (e.g., data checks and cleaning) and automated (e.g., algorithms) procedures. In the new measurement approach time spent in organized sport is estimated using the activity log. Time spent in active travel is estimated using an existing algorithm that uses GPS data. Time spent in outdoor active play is estimated using an algorithm (with a sensitivity and specificity of 85%) that was developed using data collected in sample 1 and which uses all of the data sources. Time spent in curriculum-based physical activity is estimated using an algorithm (with a sensitivity of 78% and specificity of 92%) that was developed using data collected in sample 2 and which uses accelerometer data collected during class time. There was evidence of excellent intra- and inter-rater reliability of the estimates for all of these types of physical activity when the manual steps were duplicated. This novel measurement approach can be used to estimate the time that children participate in different types of physical activity.
Adamski, Alys; Bertolli, Jeanne; Castañeda-Orjuela, Carlos; Devine, Owen J; Johansson, Michael A; Duarte, Maritza Adegnis Gonzalez; Farr, Sherry L; Tinker, Sarah C; Reyes, Marcela Maria Mercado; Tong, Van T; Garcia, Oscar Eduardo Pacheco; Valencia, Diana; Ortiz, Diego Alberto Cuellar; Honein, Margaret A; Jamieson, Denise J; Martínez, Martha Lucía Ospina; Gilboa, Suzanne M
2018-06-01
Colombia experienced a Zika virus (ZIKV) outbreak in 2015-2016. To assist with planning for medical and supportive services for infants affected by prenatal ZIKV infection, we used a model to estimate the number of pregnant women infected with ZIKV and the number of infants with congenital microcephaly from August 2015 to August 2017. We used nationally reported cases of symptomatic ZIKV disease among pregnant women and information from the literature on the percent of asymptomatic infections to estimate the number of pregnant women with ZIKV infection occurring August 2015-December 2016. We then estimated the number of infants with congenital microcephaly expected to occur August 2015-August 2017. To compare to the observed counts of infants with congenital microcephaly due to all causes reported through the national birth defects surveillance system, the model was time limited to produce estimates for February-November 2016. We estimated 1140-2160 (interquartile range [IQR]) infants with congenital microcephaly in Colombia, during August 2015-August 2017, whereas 340-540 infants with congenital microcephaly would be expected in the absence of ZIKV. Based on the time limited version of the model, for February-November 2016, we estimated 650-1410 infants with congenital microcephaly in Colombia. The 95% uncertainty interval for the latter estimate encompasses the 476 infants with congenital microcephaly reported during that approximate time frame based on national birth defects surveillance. Based on modeled estimates, ZIKV infection during pregnancy in Colombia could lead to 3-4 times as many infants with congenital microcephaly in 2015-2017 as would have been expected in the absence of the ZIKV outbreak.
Treatment of dissociative disorders and reported changes in inpatient and outpatient cost estimates.
Myrick, Amie C; Webermann, Aliya R; Langeland, Willemien; Putnam, Frank W; Brand, Bethany L
2017-01-01
Background: Interpersonal trauma and trauma-related disorders cost society billions of dollars each year. Because of chronic and severe trauma histories, dissociative disorder (DD) patients spend many years in the mental health system, yet there is limited knowledge about the economic burden associated with DDs. Objective: The current study sought to determine how receiving specialized treatment would relate to estimated costs of inpatient and outpatient mental health services. Method: Patients' and individual therapists' reports of inpatient hospitalization days and outpatient treatment sessions were converted into US dollars. DD patients and their clinicians reported on use of inpatient and outpatient services four times over 30 months as part of a larger, naturalistic, international DD treatment study. The baseline sample included 292 clinicians and 280 patients; at the 30-month follow-up, 135 clinicians and 111 patients. Missing data were replaced in analyses to maintain adequate statistical power. The substantial attrition rate (>50%) should be considered in interpreting findings. Results: Longitudinal and cross-sectional analyses of cost estimates based on patient reported inpatient hospitalization significantly decreased over time. Longitudinal cost estimates based on clinician-reported outpatient services also significantly decreased over time. Cross-sectional cost estimates based on patient and clinician reported inpatient hospitalization were significantly lower for patients in later stages of treatment compared to those struggling with safety and stabilization. Cross-sectional cost estimates based on clinician-reported outpatient services were significantly lower for patients in later stages of treatment compared to those in early stages. Conclusions: This pattern of longitudinal and cross-sectional reductions in inpatient and outpatient costs, as reported by both patients and therapists, suggests that DD treatment may be associated with reduced inpatient and outpatient costs over time. Although these preliminary results show decreased mental health care utilization and associated estimated costs, it is not clear whether it was treatment that caused these important changes.
NASA Astrophysics Data System (ADS)
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations in which the equations are used should have characteristics comparable to the locations for which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters from historical data of the location itself, without the need to adapt or reuse empirical formulas derived for other locations. The proposal uses only one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using the resulting timing data, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
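A rough sketch of the alignment-then-fit idea in the abstract above: align derivative series of the upstream and downstream records with plain DTW (standing in for the full DDTW + PIP pipeline), read off a lag, and fit a polynomial estimator. All data values, the median-lag shortcut, and function names are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def derivative(series):
    # Keogh-style derivative used by derivative DTW (DDTW).
    d = np.empty(len(series))
    d[1:-1] = ((series[1:-1] - series[:-2]) +
               (series[2:] - series[:-2]) / 2.0) / 2.0
    d[0], d[-1] = d[1], d[-2]
    return d

def dtw_path(a, b):
    # Plain O(n*m) dynamic time warping on 1-D sequences.
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = (a[i - 1] - b[j - 1]) ** 2 + min(
                cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                       # backtrack the best path
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def travel_time_samples(upstream, downstream):
    # Median index offset along the derivative-DTW path as a crude lag.
    path = dtw_path(derivative(upstream), derivative(downstream))
    return np.median([j - i for i, j in path])

up = np.sin(np.linspace(0.0, 6.0, 120))                  # upstream stage
down = np.concatenate([np.full(10, up[0]), up[:-10]])    # ~10-sample delay
lag = travel_time_samples(up, down)                      # ~10

# "PolyWaTT-like" final step: fit travel time vs. stage with a polynomial
# (stage/lag pairs here are invented; they would come from many events).
levels = np.array([1.0, 2.0, 3.0, 4.0])
lags = np.array([30.0, 24.0, 20.0, 18.0])
coeffs = np.polyfit(levels, lags, deg=2)
print(lag, np.polyval(coeffs, 2.5))
```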
Method and apparatus for measurement of orientation in an anisotropic medium
Gilmore, Robert Snee; Kline, Ronald Alan; Deaton, Jr., John Broddus
1999-01-01
A method and apparatus are provided for simultaneously measuring the anisotropic orientation and the thickness of an article. The apparatus comprises a transducer assembly which propagates longitudinal and transverse waves through the article and which receives reflections of the waves. A processor is provided to measure respective transit times of the longitudinal and shear waves propagated through the article and to calculate respective predicted transit times of the longitudinal and shear waves based on an estimated thickness, an estimated anisotropic orientation, and an elasticity of the article. The processor adjusts the estimated thickness and the estimated anisotropic orientation to reduce the difference between the measured transit times and the respective predicted transit times of the longitudinal and shear waves.
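A minimal numerical sketch of the inversion idea described above: adjust the estimated thickness and orientation until predicted pulse-echo transit times match the measurements. The cosine velocity model, parameter values, and function names are invented assumptions for illustration, not the apparatus's actual elasticity model:

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_transit_times(params, v_L0=6300.0, v_S0=3100.0, anis=0.05):
    # Hypothetical forward model: pulse-echo transit time is 2*d/v, with a
    # simple cosine variation of wave speed versus orientation angle theta.
    d, theta = params
    v_L = v_L0 * (1.0 + anis * np.cos(2.0 * theta))   # longitudinal speed
    v_S = v_S0 * (1.0 - anis * np.cos(2.0 * theta))   # shear speed
    return np.array([2.0 * d / v_L, 2.0 * d / v_S])

def residuals(params, measured):
    return predicted_transit_times(params) - measured

measured = np.array([3.2e-6, 6.6e-6])      # measured L and S times (s)
fit = least_squares(residuals, x0=[0.01, 0.1],
                    bounds=([1e-4, -np.pi / 2], [0.1, np.pi / 2]),
                    args=(measured,))
thickness, orientation = fit.x             # jointly estimated quantities
```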
The problem of estimating recent genetic connectivity in a changing world.
Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen
2017-02-01
Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration (movement of individuals among populations) rate. We focused on how the timing and magnitude of the connectivity change affected estimates of migration, using a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods overestimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the requirements of human resources (HR) of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). Evaluative research was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR were estimated by using the evidence-based adjusted service target approach for health workforce planning; then, comparisons between existing and estimated HRs were made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times), and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence
Claggett, B.; Lagakos, S.W.; Wang, R.
2011-01-01
Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904
Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.
Claggett, B; Lagakos, S W; Wang, R
2012-03-01
Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The International Biometric Society.
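For context, both versions of this abstract build on the classic two-test "snapshot" incidence estimator; the augmented and AF designs add follow-up corrections on top of it. The sketch below shows only that baseline formula, with invented counts:

```python
# Classic two-test snapshot incidence estimator (not the AF estimator):
# I ~ R / (N_neg * mu), where R subjects test HIV-positive on the
# sensitive test but still negative on the less-sensitive test, N_neg
# subjects are HIV-negative, and mu is the mean window period in years.
R = 15          # "recent infection" count in the cross-sectional survey
N_neg = 4000    # HIV-negative subjects in the survey
mu = 0.5        # mean window period (years)

incidence = R / (N_neg * mu)                 # infections per person-year
print(f"{100 * incidence:.2f} per 100 person-years")   # 0.75
```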
Archfield, Stacey A.; Steeves, Peter A.; Guthrie, John D.; Ries, Kernell G.
2013-01-01
Streamflow information is critical for addressing any number of hydrologic problems. Often, streamflow information is needed at locations that are ungauged and, therefore, have no observations on which to base water management decisions. Furthermore, there has been increasing need for daily streamflow time series to manage rivers for both human and ecological functions. To facilitate negotiation between human and ecological demands for water, this paper presents the first publicly available, map-based, regional software tool to estimate historical, unregulated, daily streamflow time series (streamflow not affected by human alteration such as dams or water withdrawals) at any user-selected ungauged river location. The map interface allows users to locate and click on a river location, which then links to a spreadsheet-based program that computes estimates of daily streamflow for the river location selected. For a demonstration region in the northeast United States, daily streamflow was, in general, shown to be reliably estimated by the software tool. Estimating the highest and lowest streamflows that occurred in the demonstration region over the period from 1960 through 2004 also was accomplished but with more difficulty and limitations. The software tool provides a general framework that can be applied to other regions for which daily streamflow estimates are needed.
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of every other month (bimonthly) routine sampling and storm event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yield. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times and total nitrogen and dissolved nitrite plus nitrate were greater than 10 to 100 times the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with use of regression models to estimate nutrient loads. Annual loads of phosphorus and nitrogen estimated using regression techniques could provide similar results to estimates using integration techniques, with much less investment.
A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis
Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.
2015-01-01
The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
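As a point of reference for the comparison made above, here is a minimal sketch of the commonly used Harrell's c-index, which counts only informative (usable) pairs; the data values are invented, and higher marker values are assumed to indicate higher risk:

```python
import numpy as np

def harrell_c(time, event, marker):
    """Harrell's c-index: the fraction of usable pairs in which the
    subject with shorter survival has the higher marker value. Only
    pairs whose earlier time is an observed event are usable."""
    concordant, usable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # informative pair
                usable += 1
                if marker[i] > marker[j]:
                    concordant += 1.0
                elif marker[i] == marker[j]:
                    concordant += 0.5                  # ties count half
    return concordant / usable

time = np.array([5.0, 10.0, 12.0, 3.0])
event = np.array([1, 0, 1, 1])        # 1 = death observed, 0 = censored
marker = np.array([2.1, 0.5, 0.9, 3.0])
print(harrell_c(time, event, marker))
```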
Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo
2014-01-01
We previously estimated the timing when ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attackers' centre of mass (CoM) and the contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing observed defender's cue timing, model-estimated cue timing using the inverted pendulum model (IPM cue timing) and cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for position and velocity of the CoM and the contact foot at the time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and the defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
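A minimal sketch of the inverted-pendulum reasoning described above, using the linearized "extrapolated centre of mass" rule (CoM position plus velocity divided by the pendulum's natural frequency, compared against the contact foot). This is a simplification under an assumed leg length, not the authors' exact simulation:

```python
import numpy as np

def falls_direction(x_com, v_com, x_foot, leg_length=0.9, g=9.81):
    """Linearized inverted-pendulum test of which way the body falls.
    If the extrapolated CoM, x_com + v_com/omega0 with omega0 =
    sqrt(g/L), lies outside the contact foot, the pendulum falls
    toward that side and the defender can commit to a direction."""
    omega0 = np.sqrt(g / leg_length)
    x_ext = x_com + v_com / omega0
    return "left" if x_ext < x_foot else "right"

# Lateral positions (m) in some body-fixed frame; velocity in m/s.
print(falls_direction(x_com=0.02, v_com=-0.40, x_foot=0.0))  # "left"
```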
1991-03-01
Estimation of arrival times along an ocean acoustic ray path is an important component of ocean acoustic tomography. A straightforward method of arrival time estimation, based on locating the maximum value of an interpolated arrival, was used with limited success for analysis of data from the December 1988 Monterey Bay Tomography Experiment. Close examination of the data revealed multiple...
NASA Astrophysics Data System (ADS)
Karbalaee, Negar; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan
2017-04-01
This study explores using Passive Microwave (PMW) rainfall estimation for spatial and temporal adjustment of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS). The PERSIANN-CCS algorithm collects information from infrared images to estimate rainfall. PERSIANN-CCS is one of the algorithms used in the Integrated Multisatellite Retrievals for GPM (Global Precipitation Measurement) estimation for time periods when PMW rainfall estimates are limited or unavailable. Continued improvement of PERSIANN-CCS will support Integrated Multisatellite Retrievals for GPM for current as well as retrospective estimations of global precipitation. This study takes advantage of the high spatial and temporal resolution of GEO-based PERSIANN-CCS estimation and the more accurate but less frequently sampled PMW estimation. The Probability Matching Method (PMM) was used to adjust the rainfall distribution of GEO-based PERSIANN-CCS toward that of PMW rainfall estimation. The results show that a significant improvement of global PERSIANN-CCS rainfall estimation is obtained.
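The Probability Matching Method step can be read as quantile mapping: replace each PERSIANN-CCS rain value with the PMW value at the same cumulative probability, so the adjusted field inherits the PMW distribution while keeping the CCS rank order. A minimal sketch with invented rain rates:

```python
import numpy as np

def probability_match(ccs_rain, pmw_rain):
    """Map each CCS value to the PMW value at the same cumulative
    probability (quantile mapping)."""
    ccs_sorted = np.sort(ccs_rain)
    pmw_quantiles = np.quantile(pmw_rain, np.linspace(0, 1, ccs_sorted.size))
    return np.interp(ccs_rain, ccs_sorted, pmw_quantiles)

ccs = np.array([0.0, 1.2, 3.5, 7.0, 2.0])   # mm/h, GEO-based estimates
pmw = np.array([0.0, 0.8, 2.5, 9.0, 1.5])   # mm/h, PMW estimates
adjusted = probability_match(ccs, pmw)       # CCS values, PMW distribution
```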
The Cost of Penicillin Allergy Evaluation.
Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P
2017-09-22
Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. We sought to estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test, $84) and a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not result in a changed estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with changing to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and including clinician documentation time ($288). In least/most-costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
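A minimal arithmetic sketch of the TDABC calculation described above. The consumables and space figures are taken from the abstract; the personnel steps, minutes, and per-minute rates are invented for illustration:

```python
# TDABC: episode cost = sum over process steps of minutes x capacity cost
# rate per minute, plus direct consumables and space.
steps = [                        # (personnel, minutes, $ per minute)
    ("intake nurse",   10, 0.80),
    ("allergist",      30, 2.50),
    ("observation RN", 60, 0.15),
]
personnel = sum(minutes * rate for _, minutes, rate in steps)
consumables, space = 119.0, 3.0   # figures reported in the abstract
total = personnel + consumables + space
print(round(total, 2))            # in the ballpark of the $220 base case
```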
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google Maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode-based distances, consistent with previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.
2011-01-01
Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational method in terms of excess rainfall (the excess rational method). Both the unit hydrograph method and excess rational method are shown to provide similar estimates of peak and time of peak streamflow. The results from the two methods can be combined by using arithmetic means. A nomograph is provided that shows the respective relations between the arithmetic-mean peak and time of peak streamflow to drainage areas ranging from 10 to 640 acres. The nomograph also shows the respective relations for selected BDF ranging from undeveloped to fully developed conditions. The nomograph represents the peak streamflow for 1 inch of excess rainfall based on drainage area and BDF; the peak streamflow for design storms from the nomograph can be multiplied by the excess rainfall to estimate peak streamflow. Time of peak streamflow is readily obtained from the nomograph. Therefore, given excess rainfall values derived from watershed-loss models, which are beyond the scope of this report, the nomograph represents a method for estimating peak and time of peak streamflow for applicable watersheds in the Houston metropolitan area. Lastly, analysis of the relative influence of BDF on peak streamflow is provided, and the results indicate a 0.04 log10(cubic feet per second) change of peak streamflow per positive unit of change in BDF. This relative change can be used to adjust peak streamflow from the method or other hydrologic methods for a given BDF to other BDF values; example computations are provided.
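One common dimensionless form of a gamma unit hydrograph is q(t) = qp * [(t/Tp) * exp(1 - t/Tp)]^K, which peaks at qp exactly when t = Tp, with K controlling how sharply the hydrograph rises and recedes. The sketch below evaluates this generic form for invented parameter values; it is not necessarily the exact parameterization fitted in the report:

```python
import numpy as np

def gamma_unit_hydrograph(t, qp, tp, k):
    """Dimensionless gamma unit hydrograph:
    q(t) = qp * [(t/tp) * exp(1 - t/tp)]**k, peaking at q = qp, t = tp."""
    tau = t / tp
    return qp * (tau * np.exp(1.0 - tau)) ** k

t = np.linspace(0.01, 10.0, 200)                       # hours
q = gamma_unit_hydrograph(t, qp=120.0, tp=1.5, k=3.0)  # cfs per inch
print(t[np.argmax(q)])                                 # ~1.5, time to peak
```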
ERIC Educational Resources Information Center
Williams, David; Boucher, Jill; Lind, Sophie; Jarrold, Christopher
2013-01-01
Prospective memory (remembering to carry out an action in the future) has been studied relatively little in ASD. We explored time-based (carry out an action at a pre-specified time) and event-based (carry out an action upon the occurrence of a pre-specified event) prospective memory, as well as possible cognitive correlates, among 21…
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault-tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of an FTC system based on estimated fault parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.
Byun, Yeun-Sub; Jeong, Rag-Gyo; Kang, Seok-Won
2015-11-13
The real-time recognition of absolute (or relative) position and orientation on a network of roads is a core technology for fully automated or driving-assisted vehicles. This paper presents an empirical investigation of the design, implementation, and evaluation of a self-positioning system based on a magnetic marker reference sensing method for an autonomous vehicle. Specifically, the estimation accuracy of the magnetic sensing ruler (MSR) in the up-to-date estimation of the actual position was successfully enhanced by compensating for time delays in signal processing when detecting the vertical magnetic field (VMF) in an array of signals. In this study, the signal processing scheme was developed to minimize the effects of the distortion of measured signals when estimating the relative positional information based on magnetic signals obtained using the MSR. In other words, the center point in a 2D magnetic field contour plot corresponding to the actual position of magnetic markers was estimated by tracking the errors between pre-defined reference models and measured magnetic signals. The algorithm proposed in this study was validated by experimental measurements using a test vehicle on a pilot network of roads. From the results, the positioning error was found to be less than 0.04 m on average in an operational test.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time-domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a third-degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
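A minimal sketch of the regularization idea for the linear (FIR) special case: a ridge-type estimate with a first-order stable-spline/TC ("tuned-correlated") prior that encodes smooth, exponentially decaying responses. The kernel choice and hyperparameter values are assumptions; the paper's Volterra extension builds analogous multidimensional priors per kernel:

```python
import numpy as np

def regularized_fir(u, y, n_taps, sigma2=0.1, c=1.0, lam=0.9):
    """Regularized least squares with TC prior P[i, j] = c * lam**max(i, j):
    theta = (Phi' Phi + sigma2 * inv(P))^-1 Phi' y."""
    N = len(y)
    Phi = np.zeros((N, n_taps))          # Toeplitz-style lagged inputs
    for k in range(n_taps):
        Phi[k:, k] = u[:N - k]
    i = np.arange(n_taps)
    P = c * lam ** np.maximum.outer(i, i)
    A = Phi.T @ Phi + sigma2 * np.linalg.inv(P)
    return np.linalg.solve(A, Phi.T @ y)

rng = np.random.default_rng(0)
u = rng.standard_normal(400)
g_true = 0.8 ** np.arange(20)            # a decaying true impulse response
y = np.convolve(u, g_true)[:400] + 0.1 * rng.standard_normal(400)
g_hat = regularized_fir(u, y, n_taps=20)  # smooth, decaying estimate
```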
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online, real-time system is presented for detecting a partial broken rotor bar (BRB) in inverter-fed squirrel-cage induction motors under light-load conditions. With minor modifications, this system can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7-based embedded target ported through Simulink Real-Time. Evaluation of threshold and detectability of faults with different conditions of load and fault severity are carried out with empirical cumulative distribution functions.
Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.
Dai, Chenyun; Zheng, Yang; Hu, Xiaogang
2018-01-01
Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor, based on the decoded descending neural drive through population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during the stroke rehabilitation.
Optimal estimation of diffusion coefficients from single-particle trajectories
NASA Astrophysics Data System (ADS)
Vestergaard, Christian L.; Blainey, Paul C.; Flyvbjerg, Henrik
2014-02-01
How does one optimally determine the diffusion coefficient of a diffusing particle from a single-time-lapse recorded trajectory of the particle? We answer this question with an explicit, unbiased, and practically optimal covariance-based estimator (CVE). This estimator is regression-free and is far superior to commonly used methods based on measured mean squared displacements. In experimentally relevant parameter ranges, it also outperforms the analytically intractable and computationally more demanding maximum likelihood estimator (MLE). For the case of diffusion on a flexible and fluctuating substrate, the CVE is biased by substrate motion. However, given some long time series and a substrate under some tension, an extended MLE can separate particle diffusion on the substrate from substrate motion in the laboratory frame. This provides benchmarks that allow removal of bias caused by substrate fluctuations in CVE. The resulting unbiased CVE is optimal also for short time series on a fluctuating substrate. We have applied our estimators to human 8-oxoguanine DNA glycolase proteins diffusing on flow-stretched DNA, a fluctuating substrate, and found that diffusion coefficients are severely overestimated if substrate fluctuations are not accounted for.
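The CVE itself is a two-term moment formula and fits in a few lines: D = <dx_n^2>/(2*dt) + <dx_n*dx_{n+1}>/dt, where the negative-lag covariance term cancels the bias that localization noise introduces into naive MSD-based fits. The sketch below simulates pure diffusion with localization noise and recovers D; all parameter values are invented:

```python
import numpy as np

def cve_diffusion(x, dt):
    """Covariance-based estimator (CVE) for a freely diffusing particle
    sampled at interval dt, robust to static localization noise."""
    dx = np.diff(x)
    return dx.dot(dx) / len(dx) / (2.0 * dt) + np.mean(dx[:-1] * dx[1:]) / dt

rng = np.random.default_rng(1)
D_true, dt, sigma_loc = 0.5, 0.01, 0.05
path = rng.normal(0.0, np.sqrt(2 * D_true * dt), 10_000).cumsum()
x = path + rng.normal(0.0, sigma_loc, 10_000)   # add localization noise
print(cve_diffusion(x, dt))                      # close to D_true = 0.5
```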
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
Mortality estimation from carcass searches using the R-package carcass: a tutorial
Korner-Nievergelt, Fränzi; Behr, Oliver; Brinkmann, Robert; Etterson, Matthew A.; Huso, Manuela M. P.; Dalthorp, Daniel; Korner-Nievergelt, Pius; Roth, Tobias; Niermann, Ivo
2015-01-01
This article is a tutorial for the R-package carcass. It starts with a short overview of common methods used to estimate mortality based on carcass searches. Then, it guides step by step through a simple example. First, the proportion of animals that fall into the search area is estimated. Second, carcass persistence time is estimated based on experimental data. Third, searcher efficiency is estimated. Fourth, these three estimated parameters are combined to obtain the probability that an animal killed is found by an observer. Finally, this probability is used together with the observed number of carcasses found to obtain an estimate for the total number of killed animals together with a credible interval.
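The chain of estimated probabilities described above multiplies into a single detection probability, and a Horvitz-Thompson-style correction divides the carcass count by it. A much-simplified sketch with invented values (the package itself handles repeated searches, persistence across search intervals, and credible intervals):

```python
# An animal is counted only if it falls in the searched area, persists
# until a search, and is then found by the searcher.
p_area = 0.60       # proportion of carcasses falling in the search area
p_persist = 0.70    # probability a carcass persists until a search
p_search = 0.80     # searcher efficiency (probability of detection)

p = p_area * p_persist * p_search     # overall detection probability
carcasses_found = 12
estimated_killed = carcasses_found / p
print(round(estimated_killed, 1))      # ~35.7 animals killed
```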
Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.
Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E
2017-04-04
The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.
Meng, Yu; Li, Gang; Gao, Yaozong; Lin, Weili; Shen, Dinggang
2016-11-01
Longitudinal neuroimaging analysis of the dynamic brain development in infants has received increasing attention recently. Many studies expect a complete longitudinal dataset in order to accurately chart the brain developmental trajectories. However, in practice, a large portion of subjects in longitudinal studies often have missing data at certain time points, due to various reasons such as the absence of scan or poor image quality. To make better use of these incomplete longitudinal data, in this paper, we propose a novel machine learning-based method to estimate the subject-specific, vertex-wise cortical morphological attributes at the missing time points in longitudinal infant studies. Specifically, we develop a customized regression forest, named dynamically assembled regression forest (DARF), as the core regression tool. DARF ensures the spatial smoothness of the estimated maps for vertex-wise cortical morphological attributes and also greatly reduces the computational cost. By employing a pairwise estimation followed by a joint refinement, our method is able to fully exploit the available information from both subjects with complete scans and subjects with missing scans for estimation of the missing cortical attribute maps. The proposed method has been applied to estimating the dynamic cortical thickness maps at missing time points in an incomplete longitudinal infant dataset, which includes 31 healthy infant subjects, each having up to five time points in the first postnatal year. The experimental results indicate that our proposed framework can accurately estimate the subject-specific vertex-wise cortical thickness maps at missing time points, with the average error less than 0.23 mm. Hum Brain Mapp 37:4129-4147, 2016. © 2016 Wiley Periodicals, Inc.
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
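A simplified sketch of the calibration-then-estimation workflow with a 7-parameter LOADEST-style model, fitted here by ordinary least squares on synthetic, uncensored data (LOADEST itself uses AMLE/MLE/LAD, which also handle censoring); Duan's smearing factor stands in for the retransformation bias correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: streamflow Q (cfs), decimal time T (years),
# and constituent load L (kg/d). Values are illustrative only.
T = 2001.0 + np.linspace(0.0, 3.0, 60)
Q = np.exp(1.0 + 0.8 * rng.standard_normal(60))
L = 0.5 * Q**1.3 * np.exp(0.2 * rng.standard_normal(60))

def design(Q, T, lnQ0, T0):
    """LOADEST-style basis: centered log streamflow and its square,
    seasonal sine/cosine terms, centered decimal time and its square."""
    lnQ = np.log(Q) - lnQ0
    Tc = T - T0
    return np.column_stack([np.ones_like(Tc), lnQ, lnQ**2,
                            np.sin(2 * np.pi * T), np.cos(2 * np.pi * T),
                            Tc, Tc**2])

lnQ0, T0 = np.log(Q).mean(), T.mean()           # centering constants
X = design(Q, T, lnQ0, T0)
beta, *_ = np.linalg.lstsq(X, np.log(L), rcond=None)   # calibration

# Estimation step with Duan's smearing factor for back-transform bias.
smear = np.exp(np.log(L) - X @ beta).mean()
load_estimates = smear * np.exp(design(Q, T, lnQ0, T0) @ beta)
```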
Austin, Peter C; Schuster, Tibor
2016-10-01
Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
Progress on the CWU READI Analysis Center
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.
2015-12-01
Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman-filter-based, on-line stream editor that produces independent estimations of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on the RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations that are operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with those from Scripps Institution of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
Short term evaluation of harvesting systems for ecosystem management
Michael D. Erickson; Penn Peters; Curt Hassler
1995-01-01
Continuous time/motion studies have traditionally been the basis for productivity estimates of timber harvesting systems. The detailed data from such studies permits the researcher or analyst to develop mathematical relationships based on stand, system, and stem attributes for describing machine cycle times. The resulting equation(s) allow the analyst to estimate...
USDA-ARS's Scientific Manuscript database
A time-scale-free approach was developed for estimation of water fluxes at the boundaries of a monitored soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...
IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.
Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho
2016-02-05
Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
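A common way to estimate a fractional (sub-sample) TOA, as the abstract describes, is to locate the cross-correlation peak and refine it by fitting a parabola through the peak and its two neighbours. The sketch below demonstrates that general idea on a synthetic narrowband waveform; the signal parameters are invented and this is not the paper's exact algorithm:

```python
import numpy as np

def fractional_toa(rx, ref, fs):
    """Coarse TOA from the cross-correlation peak, refined to a fraction
    of a sample by parabolic interpolation through the peak and its
    two neighbours."""
    corr = np.correlate(rx, ref, mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # vertex offset, samples
    lag = k - (len(ref) - 1) + delta
    return lag / fs

fs = 2e6                       # 2 MHz sampling, ZigBee-like bandwidth
ref = np.sin(2 * np.pi * 250e3 * np.arange(64) / fs)   # reference burst
rx = np.interp(np.arange(256) - 23.4, np.arange(64), ref,
               left=0.0, right=0.0)                    # 23.4-sample delay
print(fractional_toa(rx, ref, fs) * fs)                # ~23.4 samples
```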
Soares, André E R; Schrago, Carlos G
2015-01-07
Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference but that its impact was significant only if the rates deviated from those derived for the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision is more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. Several studies have shown that data assimilation can reduce parameter uncertainty by assimilating soil moisture observations. However, both these observations and the model forcings carry measurement errors. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a particle smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
2016-12-06
direction and speed based on cost minimization and best estimated time of arrival (ETA). Sometimes, ships are forced to travel ...the allowable time to complete the travel. Another important aspect, addressed in the case study, is to investigate the optimal routing of aged...
The relation of radar to cloud area-time integrals and implications for rain measurements from space
NASA Technical Reports Server (NTRS)
Atlas, David; Bell, Thomas L.
1992-01-01
The relationships between satellite-based and radar-measured area-time integrals (ATI) for convective storms are determined, and both are shown to depend on the climatological conditional mean rain rate and the ratio of the measured cloud area to the actual rain area of the storms. The GOES precipitation index of Arkin (1986) for convective storms, an area-time integral for satellite cloud areas, is shown to be related to the ATI for radar-observed rain areas. The quality of GPI-based rainfall estimates depends on how well the cloud area is related to the rain area and the size of the sampling domain. It is also noted that the use of a GOES cloud ATI in conjunction with the radar area-time integral will improve the accuracy of rainfall estimates and allow such estimates to be made in much smaller space-time domains than the 1-month and 5-deg boxes anticipated for the Tropical Rainfall Measuring Mission.
Influence of mobile phone traffic on base station exposure of the general public.
Joseph, Wout; Verloock, Leen
2010-11-01
The influence of mobile phone traffic on the temporal variation of radiofrequency exposure from base stations is compared over 7 d at five different sites using Erlang data (which represent average mobile phone traffic intensity over a period of time). The time periods of high exposure and high traffic during a day are compared and good agreement is obtained. The minimal required measurement periods to obtain accurate estimates for maximal and average long-period exposure (7 d) are determined. It is shown that these periods may be very long, indicating the necessity of new methodologies to estimate maximal and average exposure from short-period measurement data. Therefore, a new method to calculate the fields at a time instant from fields at another time instant using normalized Erlang values is proposed. This enables the estimation of maximal and average exposure during a week from short-period measurements using only Erlang data and avoids the necessity of long measurement times.
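A minimal sketch of the proposed scaling idea: if base-station transmit power is taken to scale with traffic load, the E-field at one time instant can be projected from a measurement at another instant using normalized Erlang values, with the field scaling as the square root of power. The proportionality model and all numbers are illustrative assumptions, not the paper's validated formula:

```python
import numpy as np

def field_at(E_t1, erlang_t1, erlang_t2):
    # Assumes radiated power proportional to traffic, so E ~ sqrt(power).
    return E_t1 * np.sqrt(erlang_t2 / erlang_t1)

E_measured = 0.9    # V/m, short-period measurement at time t1
print(field_at(E_measured, erlang_t1=12.0, erlang_t2=30.0))  # ~1.42 V/m
```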
Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution
Park, Yeonseok; Choi, Anthony
2017-01-01
The asymmetric structure around the receiver imposes a direction-specific time delay on incoming sound propagation. This paper designs a monaural sound localization system based on a reflective structure around the microphone. Reflective plates are placed to produce direction-wise time delays, which are naturally applied to the sound source through convolution. The received signal is then separated to estimate the dominant time delay by using homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response. Once the dominant delay is accurately estimated, the time-delay model computes the corresponding reflection, and hence the source location. Because of structural limitations, the localization process performs the estimation in two stages, range and angle. A software toolchain spanning propagation physics and algorithm simulation was used to realize the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the study-range data from the isotropic signal is properly detected by the response value, and 87.5% of the specific-direction data from the study-range signal is properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
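A minimal cepstrum-domain sketch of how a reflection delay of this kind can be read off: a reflection x(t) = s(t) + a·s(t − d) adds a peak near quefrency d in the real cepstrum. This is a simplified stand-in for the paper's full homomorphic-deconvolution pipeline, and all names and parameters are hypothetical:

```python
import numpy as np

def dominant_delay_cepstrum(x, fs, min_delay_s=1e-4):
    """Estimate the dominant reflection delay from the real cepstrum."""
    spectrum = np.fft.rfft(x)
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    start = int(min_delay_s * fs)             # skip the low-quefrency source part
    q = start + np.argmax(cepstrum[start:len(cepstrum) // 2])
    return q / fs                             # delay in seconds

fs = 48000
s = np.random.default_rng(1).standard_normal(4096)
d = int(0.0015 * fs)                          # simulate a 1.5 ms reflection
x = s.copy(); x[d:] += 0.6 * s[:-d]
print(dominant_delay_cepstrum(x, fs))         # ~0.0015 s
```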
NASA Astrophysics Data System (ADS)
Hiramatsu, K.; Matsui, T.; Ito, A.; Miyakita, T.; Osada, Y.; Yamamoto, T.
2004-10-01
Aircraft noise measurements were recorded at residential areas in the vicinity of Kadena Air Base, Okinawa, in 1968 and 1972, at the time of the Vietnam war. The estimated equivalent continuous A-weighted sound pressure level LAeq for 24 h was 85 dB. The time history of sound level during 24 h was estimated from the measurement conducted in 1968, and the sound level was converted into the spectrum level at the centre frequency of the critical band of temporary threshold shift (TTS) using the results of spectrum analysis of the noise of aircraft operated at the airfield. With the information on spectrum level and its time history, TTS was calculated as a function of time and level change. The permanent threshold shift was also calculated by means of Robinson's method and ISO's method. The results indicate that the noise exposure around Kadena Air Base was hazardous to hearing and is likely to have caused hearing loss in people living in its vicinity.
Space Station Furnace Facility. Volume 3: Program cost estimate
NASA Technical Reports Server (NTRS)
1992-01-01
The approach used to estimate costs for the Space Station Furnace Facility (SSFF) is based on a computer program developed internally at Teledyne Brown Engineering (TBE). The program produces time-phased estimates of cost elements for each hardware component, based on experience with similar components. Engineering estimates of the degree of similarity or difference between the current project and the historical data are then used to adjust the computer-produced cost estimate and to fit it to the current project Work Breakdown Structure (WBS). The SSFF concept as presented at the Requirements Definition Review (RDR) was used as the base configuration for the cost estimate. The program incorporates data on costs of previous projects and the allocation of those costs to the components of one of three time-phased, generic WBSs. Input consists of a list of similar components for which cost data exist, the number of interfaces with their type and complexity, identification of the extent to which previous designs are applicable, and programmatic data concerning schedules and miscellaneous items (travel, off-site assignments). Output is program cost in labor hours and material dollars, for each component, broken down by generic WBS task and program schedule phase.
NASA Astrophysics Data System (ADS)
Liu, Xiliang; Lu, Feng; Zhang, Hengcai; Qiu, Peiyuan
2013-06-01
It is a pressing task to reliably estimate real-time travel times on road networks in big cities, and floating car data have been widely used to reflect real traffic conditions. Currently, floating car data are mainly used to estimate real-time traffic conditions on road segments, and have done little for turn delay estimation. However, turn delays at road intersections contribute significantly to the overall travel time on road networks in modern cities. In this paper, we present a technical framework to calculate turn delays on road networks with floating car data. First, the original floating car data, collected with GPS-equipped taxis, were cleaned and matched to a street map with a distributed system based on Hadoop and MongoDB. Secondly, the refined trajectory data set was distributed among 96 time intervals (from 0:00 to 23:59). All of the intersections where the trajectories passed were connected with the trajectory segments and constituted one experimental sample, while the intersections on arterial streets were specially selected to form another experimental sample. Thirdly, a principal-curve-based algorithm was presented to estimate the turn delays at the given intersections. The proposed algorithm not only statistically fits the real traffic conditions, but is also insensitive to data sparseness and missing-data problems, which are currently almost inevitable with widely used floating car data collection technology. We used floating car data collected in Beijing from March to June 2011, containing more than 2.6 million trajectories generated by about 20,000 GPS-equipped taxicabs and accounting for about 600 GB in data volume. The results show that the principal-curve-based algorithm outperforms traditional methods, such as mean- and median-based approaches, with higher estimation accuracy (an RMSE improvement of about 10%-15%), while also reflecting the changing trend of traffic congestion. With the estimated travel delays at intersections, we analyzed the spatio-temporal distribution of turn delays in three time scenarios (0:00-0:15, 8:15-8:30 and 12:00-12:15). The analysis indicates that, in a single trip in Beijing, on average 60% of the travel time on the road network is spent at intersections, and the situation is even worse in the daytime. Although the 400 main intersections make up only 2.7% of all intersections, they account for about 18% of travel time.
Why are You Late?: Investigating the Role of Time Management in Time-Based Prospective Memory
Waldum, Emily R; McDaniel, Mark A.
2016-01-01
Time-based prospective memory (TBPM) tasks are those that are to be performed at a specific future time. In contrast to typical laboratory TBPM tasks (e.g., 'hit the "z" key every 5 minutes'), many real-world TBPM tasks require more complex time-management processes. For instance, to attend an appointment on time, one must estimate the duration of the drive to the appointment and then use this estimate to create and execute a secondary TBPM intention (e.g., "I need to start driving by 1:30 to make my 2:00 appointment on time"). Under- and overestimates of drive time can lead to inefficient TBPM performance, with the former leading to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults, and further to determine how such biases, along with additional time-management components including planning and plan fidelity, influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies and, as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. PMID:27336325
Global distribution of carbon turnover times in terrestrial ecosystems
NASA Astrophysics Data System (ADS)
Carvalhais, Nuno; Forkel, Matthias; Khomik, Myroslava; Bellarby, Jessica; Jung, Martin; Migliavacca, Mirco; Mu, Mingquan; Saatchi, Sassan; Santoro, Maurizio; Thurner, Martin; Weber, Ulrich; Ahrens, Bernhard; Beer, Christian; Cescatti, Alessandro; Randerson, James T.; Reichstein, Markus
2015-04-01
The response of the carbon cycle in terrestrial ecosystems to climate variability remains one of the largest uncertainties affecting future projections of climate change. This feedback between the terrestrial carbon cycle and climate is partly determined by the response of carbon uptake and by changes in the residence time of carbon in land ecosystems, which depend on climate, soil, and vegetation type. Thus, it is of foremost importance to quantify the turnover times of carbon in terrestrial ecosystems and their spatial co-variability with climate. Here, we develop a global, spatially explicit and observation-based assessment of whole-ecosystem carbon turnover times (τ) to investigate their co-variation with climate at the global scale. Assuming a balance between uptake (gross primary production, GPP) and emission fluxes, τ can be defined as the ratio between the total stock (C_total) and the output or input fluxes (GPP). The estimation of vegetation stocks (C_veg) relies on new remote-sensing-based estimates from Saatchi et al. (2011) and Thurner et al. (2014), while soil carbon stocks (C_soil) are estimated based on state-of-the-art global (Harmonized World Soil Database) and regional (Northern Circumpolar Soil Carbon Database) datasets. The uptake flux estimates are based on global observation-based fields of GPP (Jung et al., 2011). Globally, we find an overall mean carbon turnover time of 23 (+7/−4) years (95% confidence interval). Strong spatial variability is also observed, from shorter residence times in equatorial regions to longer periods at latitudes north of 75°N (mean τ of 15 and 255 years, respectively). The observed latitudinal pattern reflects a clear dependence on temperature, with τ increasing from the equator to the poles, which is consistent with our current understanding of temperature controls on ecosystem dynamics. However, long turnover times are also observed in semi-arid and forest-herbaceous transition regions. Furthermore, based on a local correlation analysis, our results reveal a similarly strong association between τ and precipitation. A further analysis of carbon turnover times as simulated by state-of-the-art coupled climate carbon-cycle models from the CMIP5 experiments reveals wide variations between models and a tendency to underestimate the global τ by 36%. The latitudinal patterns correlate significantly with the observation-based patterns. However, the models show stronger associations between τ and temperature than the observation-based estimates. In general, the stronger relationship between τ and precipitation is not reproduced, and the modeled turnover times are significantly faster in many semi-arid regions. Ultimately, these results suggest a strong role of the hydrological cycle in carbon cycle-climate interactions, which is not currently reproduced by Earth system models.
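The core definition is simple: under the assumed steady state, τ = C_total / GPP = (C_veg + C_soil) / GPP. A toy numerical sketch with invented values, roughly mimicking the equator-to-pole contrast reported above:

```python
import numpy as np

# Whole-ecosystem turnover time tau = (C_veg + C_soil) / GPP, assuming
# stocks in balance with uptake. Values below are illustrative only,
# e.g. tropical forest -> temperate -> high-latitude tundra.
c_veg  = np.array([12.0,  4.0,  1.5])   # kgC m^-2
c_soil = np.array([10.0, 12.0, 40.0])   # kgC m^-2
gpp    = np.array([ 3.0,  1.2,  0.25])  # kgC m^-2 yr^-1

tau = (c_veg + c_soil) / gpp            # years
print(tau)                              # ~[7.3, 13.3, 166.]
```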
Network structure and travel time perception.
Parthasarathi, Pavithra; Levinson, David; Hochmair, Hartwig
2013-01-01
The purpose of this research is to test the systematic variation in the perception of travel time among travelers and relate the variation to the underlying street network structure. Travel survey data from the Twin Cities metropolitan area (which includes the cities of Minneapolis and St. Paul) is used for the analysis. Travelers are classified into two groups based on the ratio of perceived and estimated commute travel time. The measures of network structure are estimated using the street network along the identified commute route. T-test comparisons are conducted to identify statistically significant differences in estimated network measures between the two traveler groups. The combined effect of these estimated network measures on travel time is then analyzed using regression models. The results from the t-test and regression analyses confirm the influence of the underlying network structure on the perception of travel time.
Kim, Chang-Sei; Carek, Andrew M.; Mukkamala, Ramakrishna; Inan, Omer T.; Hahn, Jin-Oh
2015-01-01
Goal: We tested the hypothesis that the ballistocardiogram (BCG) waveform could yield a viable proximal timing reference for measuring pulse transit time (PTT). Methods: From fifteen healthy volunteers, we measured PTT as the time interval between the BCG and a non-invasively measured finger blood pressure (BP) waveform. To evaluate the efficacy of the BCG-based PTT in estimating BP, we likewise measured pulse arrival time (PAT) using the electrocardiogram (ECG) as proximal timing reference and compared their correlations to BP. Results: BCG-based PTT was correlated with BP reasonably well: the mean correlation coefficient (r) was 0.62 for diastolic (DP), 0.65 for mean (MP) and 0.66 for systolic (SP) pressures when the intersecting tangent method was used as distal timing reference. Comparing four distal timing references (intersecting tangent, maximum second derivative, diastolic minimum and systolic maximum), PTT exhibited the best correlation with BP when the systolic maximum method was used (mean r value was 0.66 for DP, 0.67 for MP and 0.70 for SP). PTT was more strongly correlated with DP than PAT regardless of the distal timing reference: mean r value was 0.62 versus 0.51 (p=0.07) for intersecting tangent, 0.54 versus 0.49 (p=0.17) for maximum second derivative, 0.58 versus 0.52 (p=0.37) for diastolic minimum, and 0.66 versus 0.60 (p=0.10) for systolic maximum methods. The difference between PTT and PAT in estimating DP was significant (p=0.01) when the r values associated with all the distal timing references were compared altogether. However, PAT appeared to outperform PTT in estimating SP (p=0.31 when the r values associated with all the distal timing references were compared altogether). Conclusion: We conclude that BCG is an adequate proximal timing reference in deriving PTT, and that BCG-based PTT may be superior to ECG-based PAT in estimating DP. Significance: PTT with BCG as proximal timing reference has the potential to enable convenient and ubiquitous cuffless BP monitoring. PMID:26054058
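Of the four distal timing references compared, the intersecting tangent method is the most involved: the pulse foot is taken where the tangent at the maximum upstroke slope crosses the horizontal line through the preceding diastolic minimum. A minimal sketch of that reference (hypothetical helper, not the authors' code):

```python
import numpy as np

def intersecting_tangent_foot(p, fs):
    """Foot of one BP pulse: intersection of the tangent at max dP/dt with
    the level of the preceding diastolic minimum; returns seconds from the
    start of the beat."""
    dp = np.gradient(p) * fs                # derivative, pressure units per s
    i_max = np.argmax(dp)                   # point of steepest upstroke
    i_min = np.argmin(p[:i_max + 1])        # preceding diastolic minimum
    # tangent: p(t) = p[i_max] + dp[i_max]*(t - t_max); solve for p = p[i_min]
    return i_max / fs + (p[i_min] - p[i_max]) / dp[i_max]

# PTT would then be t_foot(BP beat) minus the BCG proximal reference time.
```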
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the changes in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account the intake and growth changes of the animal.
Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis.
Liu, Jeng-Cheng; Cheng, Yuang-Tung; Hung, Hsien-Sen
2018-01-19
Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is based on micro-electro-mechanical systems (MEMS) technology and thus has the attractive features of small size, high sensitivity and low cost, making it suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed, and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a simple, nearly linear one via time-frequency distribution (TFD) theory and is verified with the HHT. Theoretical discussion of the resolution issue is also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results verify the effectiveness of the proposed method.
Real-Time Rotational Activity Detection in Atrial Fibrillation
Ríos-Muñoz, Gonzalo R.; Arenal, Ángel; Artés-Rodríguez, Antonio
2018-01-01
Rotational activations, or spiral waves, are one of the proposed mechanisms for atrial fibrillation (AF) maintenance. We present a system for assessing the presence of rotational activity from intracardiac electrograms (EGMs). Our system is able to operate in real-time with multi-electrode catheters of different topologies in contact with the atrial wall, and it is based on new local activation time (LAT) estimation and rotational activity detection methods. The EGM LAT estimation method is based on the identification of the highest sustained negative slope of unipolar signals. The method is implemented as a linear filter whose output is interpolated on a regular grid to match any catheter topology. Its operation is illustrated on selected signals and compared to the classical Hilbert-Transform-based phase analysis. After the estimation of the LAT on the regular grid, the detection of rotational activity in the atrium is done by a novel method based on the optical flow of the wavefront dynamics, and a rotation pattern match. The methods have been validated using in silico and real AF signals. PMID:29593566
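A minimal sketch of the LAT idea described above, locating the most sustained negative slope rather than a single-sample minimum of dV/dt. The windowed-mean realization here is an assumption for illustration; the paper implements the criterion as a linear filter whose output is then interpolated onto a regular grid:

```python
import numpy as np

def estimate_lat(egm, fs, win_ms=5.0):
    """Local activation time of a unipolar EGM: time of the steepest
    *average* negative slope over a short window (seconds from start)."""
    dv = np.diff(egm) * fs                           # dV/dt per sample
    w = max(1, int(win_ms * 1e-3 * fs))
    sustained = np.convolve(dv, np.ones(w) / w, mode="valid")
    i = np.argmin(sustained)                         # most negative mean slope
    return (i + w / 2) / fs                          # window centre
```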
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies of interactions between brain regions estimate effective connectivity, usually based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without detailed knowledge of the underlying system. At any time instant, past samples of the data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, a "causality coefficient" measure is defined based on the network structure, the connection weights and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called CREANN (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method is robust with respect to the noise level of the data. Furthermore, the estimates are not significantly influenced by the model order (the considered time lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can reveal changes in information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate causal relationships among brain signals.
Blumberg, Linda J.; Garrett, Bowen; Holahan, John
2016-01-01
Time lags in receiving data from long-standing, large federal surveys complicate real-time estimation of the coverage effects of full Affordable Care Act (ACA) implementation. Fast-turnaround household surveys fill some of the void in data on recent changes to insurance coverage, but they lack the historical data that allow analysts to account for trends that predate the ACA, economic fluctuations, and earlier public program expansions when predicting how many people would be uninsured without comprehensive health care reform. Using data from the Current Population Survey (CPS) from 2000 to 2012 and the Health Reform Monitoring Survey (HRMS) data for 2013 and 2015, this article develops an approach to estimate the number of people who would be uninsured in the absence of the ACA and isolates the change in coverage as of March 2015 that can be attributed to the ACA. We produce counterfactual forecasts of the number of uninsured absent the ACA for 9 age-income groups and compare these estimates with 2015 estimates based on HRMS relative coverage changes applied to CPS-based population estimates. As of March 2015, we find the ACA has reduced the number of uninsured adults by 18.1 million compared with the number who would have been uninsured at that time had the law not been implemented. That decline represents a 46% reduction in the number of nonelderly adults without insurance. The approach developed here can be applied to other federal data and timely surveys to provide a range of estimates of the overall effects of reform. PMID:27076474
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the arrival-time densities through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
Analysis and Augmentation of Timing Advance-Based Geolocation in LTE Cellular Networks
2016-12-01
Naval Postgraduate School, Monterey, California. Dissertation by John D. Roth.
HIITE: HIV-1 incidence and infection time estimator.
Park, Sung Yong; Love, Tanzy M T; Kapoor, Shivankur; Lee, Ha Youn
2018-06-15
Around 2.1 million new HIV-1 infections were reported in 2015, a reminder that the HIV-1 epidemic remains a significant global health challenge. Precise incidence assessment strengthens epidemic monitoring efforts and guides strategy optimization for prevention programs. Estimating the onset time of HIV-1 infection can facilitate optimal clinical management and identify key populations largely responsible for epidemic spread, and thereby help infer HIV-1 transmission chains. Our goal is to develop a genomic assay estimating incidence and infection time in a single cross-sectional survey setting. We created a web-based platform, the HIV-1 incidence and infection time estimator (HIITE), which processes envelope gene sequences using hierarchical clustering algorithms and reports the stage of infection, along with time since infection for incident cases. HIITE's performance was evaluated using envelope gene sequences from 585 incident and 305 chronic specimens collected from global cohorts, including HIV-1 vaccine trial participants. HIITE precisely identified chronically infected individuals as chronic with an error of less than 1% and correctly classified 94% of recently infected individuals as incident. Using a mixed-effect model, an incident specimen's time since infection was estimated from its single-lineage diversity, showing 14% prediction error for time since infection. HIITE is the first algorithm to inform these two key metrics from a sample sequenced at a single time point. HIITE has the capacity for assessing not only population-level epidemic spread but also individual-level transmission events from a single survey, advancing HIV prevention and intervention programs. Web-based HIITE and its source code are available at http://www.hayounlee.org/software.html. Supplementary data are available at Bioinformatics online.
Xing, Yan; Chang, George J; Hu, Chung-Yuan; Askew, Robert L; Ross, Merrick I; Gershenwald, Jeffrey E; Lee, Jeffrey E; Mansfield, Paul F; Lucci, Anthony; Cormier, Janice N
2010-05-01
Conditional survival (CS) has emerged as a clinically relevant measure of prognosis for cancer survivors. The objective of this analysis was to provide melanoma-specific CS estimates to help clinicians promote more informed patient decision making. Patients with melanoma and at least 5 years of follow-up were identified from the Surveillance Epidemiology and End Results registry (1988-2000). By using the methods of Kaplan and Meier, stage-specific, 5-year CS estimates were independently calculated for survivors for each year after diagnosis. Stage-specific multivariate Cox regression models including baseline survivor functions were used to calculate adjusted melanoma-specific CS for different subgroups of patients further stratified by age, gender, race, marital status, anatomic tumor location, and tumor histology. Five-year CS estimates for patients with stage I disease remained constant at 97% annually, while for patients with stages II, III, and IV disease, 5-year CS estimates from time 0 (diagnosis) to 5 years improved from 72% to 86%, 51% to 87%, and 19% to 84%, respectively. Multivariate CS analysis revealed that differences in stages II through IV CS based on age, gender, and race decreased over time. Five-year melanoma-specific CS estimates improve dramatically over time for survivors with advanced stages of disease. These prognostic data are critical to patients for both treatment and nontreatment related life decisions. (c) 2010 American Cancer Society.
Bender, R W; Cook, D E; Combs, D K
2016-07-01
Ruminal digestion of neutral detergent fiber (NDF) is affected in part by the proportion of NDF that is indigestible (iNDF) and by the rate at which the potentially digestible NDF (pdNDF) is digested. Indigestible NDF in forages is commonly determined as the NDF residue remaining after long-term in situ or in vitro incubations. The rate of pdNDF digestion can be determined by measuring the degradation of NDF in ruminal in vitro or in situ incubations at multiple time points, and fitting the change in residual pdNDF over time with log-transformed linear first-order or nonlinear mathematical treatments. The estimate of indigestible fiber is important because it sets the pool size of potentially digestible fiber, which in turn affects the estimate of the proportion of potentially digestible fiber remaining in the time-series analysis. Our objective was to compare estimates of iNDF based on in vitro (IV) and in situ (IS) measurements at 2 fermentation end points (120 and 288 h). Further objectives were to compare the subsequent rate, lag, and estimated total-tract NDF digestibility (TTNDFD) when iNDF from each method was used with a 7-time-point in vitro incubation of NDF to model fiber digestion. Thirteen corn silage samples were dried and ground through a 1-mm screen in a Wiley mill. A 2×2 factorial trial was conducted to determine the effect of incubation time and method of iNDF analysis on iNDF concentration; the 2 factors were method of iNDF analysis (IS vs. IV) and incubation time (120 vs. 288 h). Four sample replicates were used, and approximately 0.5 g/sample was weighed into each Ankom F 0285 bag (Ankom Technology, Macedon, NY; pore size = 25 µm) for all techniques. The IV-120 technique gave a higher estimate of iNDF (37.8% of NDF) than the IS-120 (32.1% of NDF), IV-288 (31.2% of NDF), or IS-288 (25.7% of NDF) techniques. Each of the estimates of iNDF was then used to calculate the rate of degradation of pdNDF from a 7-time-point in vitro incubation. When the IV-120 NDF residue was used, the subsequent rates of pdNDF digestion were fastest (2.8% h⁻¹) but the estimate of lag was longest (10.3 h), compared with when iNDF was based on the IS-120 or IV-288 NDF residues (rates of 2.3% h⁻¹ and 2.4% h⁻¹; lag times of 9.7 and 9.8 h, respectively). The rate of pdNDF degradation was slowest (2.1% h⁻¹) when the IS-288 NDF residue was used as the estimate of iNDF. The estimate of lag based on IS-288 (9.4 h) was similar to the lag estimates calculated when IS-120 or IV-288 was used as the estimate of iNDF. The TTNDFD estimates did not differ between treatments (35.5%), however, because differences in the estimated pools of iNDF resulted in subsequent changes in rates and lag times of fiber digestion that tended to cancel out. Estimates of fiber digestion kinetic parameters and TTNDFD were similar when fit to either the linear or the nonlinear fiber degradation model. All techniques also yielded estimates of iNDF that were higher than iNDF predicted from the commonly used ratio of 2.4 × lignin. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
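The kinetic calculation behind these comparisons can be sketched in a few lines: subtract the chosen iNDF estimate from each NDF residue to obtain pdNDF, then take the rate k as the negative slope of ln(pdNDF) versus time beyond the lag (the log-transformed linear first-order treatment). Values below are synthetic, not the study's data:

```python
import numpy as np

def pdndf_rate(t_h, ndf_resid, indf, lag_h):
    """k = negative slope of ln(pdNDF) on time for t > lag, where
    pdNDF(t) = NDF_residue(t) - iNDF (units: % of NDF, time in hours)."""
    mask = t_h > lag_h
    coef = np.polyfit(t_h[mask], np.log(ndf_resid[mask] - indf), 1)
    return -coef[0]                                  # fraction per hour

t = np.array([12, 24, 48, 72, 96, 120.])
resid = 30 + 70 * np.exp(-0.025 * (t - 9))           # synthetic residue curve
print(pdndf_rate(t, resid, indf=30, lag_h=9))        # ~0.025 h^-1
```

The sketch also makes the study's main interaction visible: raising the assumed iNDF shrinks the pdNDF pool and steepens the log-linear slope, so the fitted k increases.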
Real-time estimation of ionospheric delay using GPS measurements
NASA Astrophysics Data System (ADS)
Lin, Lao-Sheng
1997-12-01
When radio waves such as the GPS signals propagate through the ionosphere, they experience an extra time delay. The ionospheric delay can be eliminated (to the first order) through a linear combination of L1 and L2 observations from dual-frequency GPS receivers. Taking advantage of this dispersive principle, one or more dual-frequency GPS receivers can be used to determine a model of the ionospheric delay across a region of interest and, if implemented in real-time, can support single-frequency GPS positioning and navigation applications. The research objectives of this thesis were: (1) to develop algorithms to obtain accurate absolute Total Electron Content (TEC) estimates from dual-frequency GPS observables, and (2) to develop an algorithm to improve the accuracy of real-time ionosphere modelling. In order to fulfil these objectives, four algorithms have been proposed in this thesis. A 'multi-day multipath template technique' is proposed to mitigate the pseudo-range multipath effects at static GPS reference stations. This technique is based on the assumption that the multipath disturbance at a static station will be constant if the physical environment remains unchanged from day to day. The multipath template, either single-day or multi-day, can be generated from the previous days' GPS data. A 'real-time failure detection and repair algorithm' is proposed to detect and repair the GPS carrier phase 'failures', such as the occurrence of cycle slips. The proposed algorithm uses two procedures: (1) application of a statistical test on the state difference estimated from robust and conventional Kalman filters in order to detect and identify the carrier phase failure, and (2) application of a Kalman filter algorithm to repair the 'identified carrier phase failure'. An 'L1/L2 differential delay estimation algorithm' is proposed to estimate GPS satellite transmitter and receiver L1/L2 differential delays. This algorithm, based on the single-site modelling technique, is able to estimate the sum of the satellite and receiver L1/L2 differential delay for each tracked GPS satellite. A 'UNSW grid-based algorithm' is proposed to improve the accuracy of real-time ionosphere modelling. The proposed algorithm is similar to the conventional grid-based algorithm. However, two modifications were made to the algorithm: (1) an 'exponential function' is adopted as the weighting function, and (2) the 'grid-based ionosphere model' estimated from the previous day is used to predict the ionospheric delay ratios between the grid point and reference points. (Abstract shortened by UMI.)
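The dispersive principle mentioned above reduces, to first order, to a geometry-free combination of the two pseudoranges. A sketch of the standard formula (ignoring the satellite/receiver differential code biases that the thesis estimates explicitly; numbers invented):

```python
# First-order ionospheric group delay on pseudorange i: I_i = 40.3 * TEC / f_i**2
# (meters, with f in Hz and TEC in electrons/m^2), so the geometry-free
# combination gives TEC = (P2 - P1) / (40.3 * (1/f2**2 - 1/f1**2)).
F1, F2 = 1575.42e6, 1227.60e6           # GPS L1/L2 carrier frequencies, Hz

def slant_tec_tecu(p1_m, p2_m):
    tec = (p2_m - p1_m) / (40.3 * (1.0 / F2**2 - 1.0 / F1**2))
    return tec / 1e16                    # 1 TECU = 1e16 electrons/m^2

print(slant_tec_tecu(22e6, 22e6 + 8.1))  # 8.1 m differential delay ~ 77 TECU
```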
Time difference of arrival estimation of microseismic signals based on alpha-stable distribution
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming
2018-05-01
Microseismic signals are generally considered to follow a Gaussian distribution. Comparing the dynamic characteristics of the sample variance and the symmetry of microseismic signals with those of signals that follow an α-stable distribution reveals that microseismic signals have obvious pulse characteristics and that the probability density curve of the microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow a symmetric α-stable distribution is proposed. Under this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing fractional low-order statistics, and a new method of time difference of arrival (TDOA) estimation for microseismic signals based on fractional low-order covariance (FLOC) is then proposed. Applying this method to the TDOA estimation of Ricker-wavelet simulation signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of a symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross-correlation (GCC) method, which is based on the assumption of a Gaussian distribution.
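A compact sketch of TDOA estimation with fractional low-order covariance, using the standard fractional power transform z^⟨p⟩ = |z|^p·sign(z) with exponents a + b chosen below the stable index α; the function and default exponents are hypothetical illustration, not the paper's implementation:

```python
import numpy as np

def floc_tdoa(x, y, fs, max_lag, a=0.5, b=0.5):
    """TDOA via FLOC, R(m) = E[ x(t)^<a> * y(t+m)^<b> ], suited to
    impulsive alpha-stable noise where ordinary covariance may not exist."""
    fl = lambda z, p: np.sign(z) * np.abs(z) ** p
    xa, yb = fl(np.asarray(x, float), a), fl(np.asarray(y, float), b)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.mean(xa[max(0, -m):len(xa) - max(0, m)] *
                          yb[max(0, m):len(yb) - max(0, -m)]) for m in lags])
    return lags[np.argmax(np.abs(r))] / fs   # delay in seconds
```

Setting a = b = 1 recovers the ordinary cross-correlation used by GCC, which makes the Gaussian-assumption comparison in the abstract concrete.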
Least-dependent-component analysis based on mutual information
NASA Astrophysics Data System (ADS)
Stögbauer, Harald; Kraskov, Alexander; Astakhov, Sergey A.; Grassberger, Peter
2004-12-01
We propose to use precise estimators of mutual information (MI) to find the least dependent components in a linearly mixed signal. On the one hand, this seems to lead to better blind source separation than with any other presently available algorithm. On the other hand, it has the advantage, compared to other implementations of “independent” component analysis (ICA), some of which are based on crude approximations for MI, that the numerical values of the MI can be used for (i) estimating residual dependencies between the output components; (ii) estimating the reliability of the output by comparing the pairwise MIs with those of remixed components; and (iii) clustering the output according to the residual interdependencies. For the MI estimator, we use a recently proposed k-nearest-neighbor-based algorithm. For time sequences, we combine this with delay embedding, in order to take into account nontrivial time correlations. After several tests with artificial data, we apply the resulting MILCA (mutual-information-based least dependent component analysis) algorithm to a real-world dataset, the ECG of a pregnant woman.
Shi, Fanrong; Tuo, Xianguo; Yang, Simon X.; Li, Huailiang; Shi, Rui
2017-01-01
Wireless sensor networks (WSNs) have been widely used to collect valuable information in Structural Health Monitoring (SHM) of bridges, using various sensors, such as temperature, vibration and strain sensors. Since multiple sensors are distributed on the bridge, accurate time synchronization is very important for multi-sensor data fusion and information processing. Based on the shape of the bridge, a spanning tree is employed in this paper to build linear-topology WSNs and achieve time synchronization. Two-way time message exchange (TTME) and maximum likelihood estimation (MLE) are employed for clock offset estimation. Multiple TTMEs are proposed to obtain a subset of TTME observations. A time-out restriction and a retry mechanism are employed to avoid the estimation errors caused by continuous clock offset and software latencies. The simulation results show that the proposed algorithm can avoid the estimation errors caused by clock drift and minimize the estimation error due to large random delay jitter. The proposed algorithm is an accurate and low-complexity time synchronization algorithm for bridge health monitoring. PMID:28471418
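The per-exchange arithmetic behind TTME is the classic two-way offset formula. A sketch combining multiple exchanges with a time-out restriction, as the abstract describes; the averaging step assumes Gaussian delay jitter and is our simplification, not necessarily the paper's exact estimator:

```python
import numpy as np

def clock_offset(ttme_rows, timeout=None):
    """Clock offset between two nodes from repeated two-way exchanges.
    Each row is (T1, T2, T3, T4): request send/receive and reply
    send/receive timestamps. Per exchange,
        offset_k = ((T2 - T1) - (T4 - T3)) / 2.
    Exchanges whose round-trip delay exceeds `timeout` are discarded;
    averaging the survivors is the ML estimate under Gaussian jitter."""
    t1, t2, t3, t4 = np.asarray(ttme_rows, dtype=float).T
    rtt = (t2 - t1) + (t4 - t3)              # round trip minus turnaround
    offsets = ((t2 - t1) - (t4 - t3)) / 2.0
    keep = np.ones_like(rtt, bool) if timeout is None else rtt <= timeout
    return float(np.mean(offsets[keep]))
```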
Pearson correlation estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
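The key idea, a weighted mean of products of centred observations with Gaussian weights on the mismatch between each pair's time separation and the desired lag, fits in a few lines. A minimal sketch (names and the default bandwidth are hypothetical):

```python
import numpy as np

def gauss_kernel_corr(tx, x, ty, y, lag=0.0, h=1.0):
    """Kernel-based Pearson correlation for irregularly sampled series:
    pair (i, j) is weighted by how close ty[j] - tx[i] is to `lag`,
    with Gaussian kernel bandwidth h (same units as the time axes)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None] - lag           # pairwise separation error
    w = np.exp(-0.5 * (dt / h) ** 2)               # Gaussian kernel weights
    return float(np.sum(w * xs[:, None] * ys[None, :]) / np.sum(w))
```

With regular, aligned sampling and h much smaller than the sampling interval, the weights concentrate on matched pairs and the estimate reduces to the ordinary Pearson correlation.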
NASA Astrophysics Data System (ADS)
Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis
2017-08-01
The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
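The estimator under study is easy to state: compute the time-averaged MSD at several lag times and fit a straight line at log-log scale. A minimal sketch (for FBM the fitted slope estimates 2H; the lag cutoff is an arbitrary choice for the sketch):

```python
import numpy as np

def tamsd_exponent(x, dt, max_lag=None):
    """Anomalous scaling exponent from one trajectory: slope of the
    time-averaged MSD versus lag time on the log-log scale."""
    n = len(x)
    max_lag = max_lag or max(2, n // 10)            # keep to short lags
    lags = np.arange(1, max_lag + 1)
    tamsd = np.array([np.mean((x[m:] - x[:-m]) ** 2) for m in lags])
    slope, _ = np.polyfit(np.log(lags * dt), np.log(tamsd), 1)
    return slope                                    # ~2H for FBM
```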
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Wei, Ying; Zeng, Xiangye; Lu, Jia; Zhang, Shuangxi; Wang, Mengjun
2018-03-01
A joint timing and frequency synchronization method is proposed for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems in this paper. The timing offset (TO), the integer frequency offset (FO) and the fractional FO can be estimated with only one training symbol, which consists of two linear frequency modulation (LFM) signals with opposite chirp rates. By detecting the peaks of the LFM signals after the Radon-Wigner transform (RWT), the TO and the integer FO can be estimated at the same time; moreover, the fractional FO can be acquired through the self-correlation characteristic of the same training symbol. Simulation results show that the proposed method gives a more accurate TO estimation than existing methods, especially under poor OSNR conditions; for the FO estimation, both the fractional and the integer FO can be estimated from the proposed training symbol with no extra overhead, and a more accurate estimation with a large FO estimation range of [−5 GHz, 5 GHz] can be achieved.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, TV
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
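The zero-crossing principle itself is compact: for a constant-amplitude sinusoid, successive zero-crossings are half a period apart. A sketch of the raw estimate that such methods build on; the paper's adaptive-window MMSE machinery is not reproduced here:

```python
import numpy as np

def zero_crossing_if(x, fs):
    """Raw zero-crossing IF estimate: each interval between successive
    (linearly interpolated) zero-crossings yields f = 1 / (2 * half_period).
    Returns estimate time stamps and IF samples."""
    s = np.where(x[:-1] * x[1:] < 0)[0]        # index just before each crossing
    tz = (s + x[s] / (x[s] - x[s + 1])) / fs   # sub-sample crossing times, s
    f = 1.0 / (2.0 * np.diff(tz))              # one IF sample per half-period
    t_mid = 0.5 * (tz[:-1] + tz[1:])
    return t_mid, f
```

Polynomial IF estimation then amounts to fitting these crossing times over a window, which is where the adaptive window-length selection of the paper enters.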
Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D
2018-05-01
Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPR) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated using either a direct method (given the difference in serial LSMs and time elapsed) or an indirect method given a single LSM and the estimated duration of infection and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified with 35 groups allowing for indirect and 8 for direct estimation of LSPR. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV were positively and age at assessment, negatively associated with LSPRs. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis
2009-02-01
Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
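A minimal sketch of the bootstrap variant for a relative level change, using residual resampling around a segmented OLS fit. Unlike the published analysis, this toy ignores autocorrelated errors, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def segmented_fit(t, y, t0):
    """OLS segmented regression: baseline level/trend plus a level change
    and trend change at the interruption time t0."""
    after = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, after, after * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

def relative_level_change_ci(t, y, t0, n_boot=2000, alpha=0.05):
    """Bootstrap CI for the relative level change at t0: level shift divided
    by the counterfactual (pre-trend) prediction at t0."""
    X, beta = segmented_fit(t, y, t0)
    fitted = X @ beta
    resid = y - fitted
    est = beta[2] / (beta[0] + beta[1] * t0)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        yb = fitted + rng.choice(resid, size=len(y), replace=True)
        _, bb = segmented_fit(t, yb, t0)
        boot[i] = bb[2] / (bb[0] + bb[1] * t0)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return est, (lo, hi)
```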
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
Time prediction of failure of a type of lamps by using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted mean failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time model used as the basis is the exponential distribution, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The model obtained is then used to predict the mean failure time for this type of lamp. The data are grouped into several intervals, the mean failure value is computed for each interval, and the mean failure time of the model is calculated on each interval; the p-value obtained from the test result is 0.3296.
NASA Astrophysics Data System (ADS)
Garcin, Matthieu
2017-10-01
Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary in time. This justifies modelling dynamics by multifractional Brownian motions, which are consistent with time-dependent Hurst exponents. We improve the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. The method is very general and not restricted to the Hurst framework alone. It is globally more accurate and simpler than other existing non-parametric estimation techniques. Moreover, in the field of Hurst exponents, it makes it possible to make forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is higher than 0.5, which indicates long memory, the accuracy is higher.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. The form of the system transfer function matrix elements is assumed to be known a priori. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
Comparison of methods for estimating the attributable risk in the context of survival analysis.
Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M
2017-01-23
The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up for a sample size of 1,000 especially. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests to use the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
Baygin, Mehmet; Karakose, Mehmet
2013-01-01
Nowadays, the increasing use of group elevator control systems owing to increasing building heights makes the development of high-performance algorithms necessary in terms of time and energy saving. Although there are many studies on this topic in the literature, they are still not effective enough because they are not able to evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach is studied for the dynamic control of group elevator systems. The method is mainly based on estimating the optimal dispatching route by optimizing all calls with genetic, immune system and DNA computing algorithms, and the result is evaluated with a fuzzy system. The system is dynamic with respect to the state of the calls and the choice of the most appropriate algorithm, and it also works adaptively with respect to parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was carried out in real time. The experimental results comparatively demonstrate the effects of the method. The dynamic and adaptive control approach developed in this study achieves significant progress in the time and energy efficiency of group elevator control systems compared with traditional methods. PMID:23935433
Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.
Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan
2016-09-24
This paper proposes a time-frequency algorithm based on the short-time fractional order Fourier transform (STFRFT) for the identification of targets with complicated movements. The algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computational load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar) or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.
The use of models for estimating emissions from products beyond the timeframe of an emissions test is a means of managing the time and expenses associated with product emissions certification. This paper presents a discussion of (1) the impact of uncertainty in test chamber emiss...
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation, one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtaining this information are far from optimal, as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from illumination from opposite directions and in different polarizations. By applying model-based decomposition of the coherency matrix, only the odd-bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris collected at a temporary debris management site in the tsunami-affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of the derived pile heights allows for a voxel-based estimation of debris volumes with an RMSE of 1099 m³. Advantages of the proposed method are fast computation time and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information such as DEMs, topographic maps or GCPs.
NASA Astrophysics Data System (ADS)
Brown, Robert Douglas
Several components of a system for quantitative application of climatic statistics to landscape planning and design (CLIMACS) have been developed. One component model (MICROSIM) estimated the microclimate at the top of a remote crop using physically-based models and inputs of weather station data. Temperatures at the top of unstressed, uniform crops on flat terrain within 1600 m of a recording weather station were estimated to within 1.0 °C 96% of the time for a corn crop and 92% of the time for a soybean crop. Crop-top winds were estimated to within 0.4 m/s 92% of the time for corn and 100% of the time for soybean. This is of sufficient accuracy for application in landscape planning and design models. A physically-based model (COMFA) was developed for the determination of outdoor human thermal comfort from microclimate inputs. Estimated versus measured comfort levels in a wide range of environments agreed with a correlation coefficient of r = 0.91. Using these components, the CLIMACS concept has been applied to a typical planning example. Microclimate data were generated from weather station information using MICROSIM, then input to COMFA and to a house energy consumption model called HOTCAN to derive quantitative climatic justification for design decisions.
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
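As a hedged illustration of the underlying idea (not the authors' GLRT derivation), the sketch below detects the first threshold crossing of a noisy pulse and refines the TOA by averaging repeated measurements; the pulse shape, threshold, noise level and sampling rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2e9                                  # 2 GS/s sampling (hypothetical)
t = np.arange(0, 200e-9, 1 / fs)
true_toa = 80e-9

def received(sigma=0.1):
    """Gaussian monocycle arriving at true_toa, plus white noise."""
    tau = (t - true_toa) / 2e-9
    return 2.0 * tau * np.exp(-tau ** 2) + rng.normal(0, sigma, t.size)

def toa_threshold(x, thr=0.5):
    """TOA of the first threshold crossing."""
    idx = np.flatnonzero(np.abs(x) > thr)
    return t[idx[0]] if idx.size else np.nan

# Statistical processing: combine many noisy threshold-crossing TOAs
estimates = np.array([toa_threshold(received()) for _ in range(200)])
bias_ns = (np.nanmean(estimates) - true_toa) * 1e9
print(f"mean TOA error: {bias_ns:.2f} ns")
```

The averaging step stands in for the statistical processing the abstract describes; the GLRT itself additionally models the joint dependence on threshold, noise, amplitude and arrival time.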
White, Edward W; Lumley, Thomas; Goodreau, Steven M; Goldbaum, Gary; Hawes, Stephen E
2010-12-01
To produce valid seroincidence estimates, the serological testing algorithm for recent HIV seroconversion (STARHS) assumes independence between infection and testing, which may be absent in clinical data. STARHS estimates are generally greater than cohort-based estimates of incidence from observable person-time and diagnosis dates. The authors constructed a series of partial stochastic models to examine whether testing motivated by suspicion of infection could bias STARHS. One thousand Monte Carlo simulations of 10,000 men who have sex with men were generated using parameters for HIV incidence and testing frequency from a clinical testing population in Seattle. In one set of simulations, infection and testing dates were independent. In another set, some intertest intervals were abbreviated to reflect the distribution of intervals between suspected HIV exposure and testing in a group of Seattle men who have sex with men recently diagnosed with HIV. Both estimation methods were applied to the simulated datasets, and the resulting cohort-based and STARHS incidence estimates were compared with previously calculated, empirical cohort-based and STARHS seroincidence estimates from the clinical testing population. Under simulated independence between infection and testing, cohort-based and STARHS incidence estimates resembled cohort estimates from the clinical dataset. Under simulated motivated testing, cohort-based estimates remained unchanged, but STARHS estimates were inflated, similar to empirical STARHS estimates. Varying motivation parameters appreciably affected STARHS incidence estimates, but not cohort-based estimates. Cohort-based incidence estimates are robust against dependence between testing and acquisition of infection, whereas STARHS incidence estimates are not.
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from surface electromyography (SEMG) signals. The SEMG signals are acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, integrated EMG (IEMG) and zero crossing (ZC), are extracted from the SEMG signal. The relationships between these time-domain parameters and elbow angular displacement and elbow angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow. A nonlinear autoregressive with exogenous inputs (NARX) structure based multilayer perceptron neural network (MLPNN) model is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm and estimates the elbow joint angle and elbow angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average regression coefficient obtained is 0.9641 for elbow angular displacement prediction and 0.9347 for elbow angular velocity prediction. The NARX-structure-based MLPNN model can thus be used for the estimation of elbow angular displacement and angular velocity with good accuracy.
Turner, Alan H.; Pritchard, Adam C.; Matzke, Nicholas J.
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a ‘smoothed’ timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches. PMID:28187191
A new, long-term daily satellite-based rainfall dataset for operational monitoring in Africa
Maidment, Ross I.; Grimes, David; Black, Emily; Tarnavsky, Elena; Young, Matthew; Greatrex, Helen; Allan, Richard P.; Stein, Thorwald; Nkonde, Edson; Senkunda, Samuel; Alcántara, Edgar Misael Uribe
2017-01-01
Rainfall information is essential for many applications in developing countries, and yet continually updated information at fine temporal and spatial scales is lacking. In Africa, rainfall monitoring is particularly important given the close relationship between climate and livelihoods. To address this information gap, this paper describes two versions (v2.0 and v3.0) of the TAMSAT daily rainfall dataset based on high-resolution thermal-infrared observations, available from 1983 to the present. The datasets are based on the disaggregation of 10-day (v2.0) and 5-day (v3.0) total TAMSAT rainfall estimates to a daily time step using daily cold cloud duration. This approach provides temporally consistent historic and near-real-time daily rainfall information for all of Africa. The estimates have been evaluated using ground-based observations from five countries with contrasting rainfall climates (Mozambique, Niger, Nigeria, Uganda, and Zambia) and compared to other satellite-based rainfall estimates. The results indicate that both versions of the TAMSAT daily estimates reliably detect rainy days but have less skill in capturing rainfall amount, results that are comparable to the other datasets. PMID:28534868
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
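To make the modeling idea concrete, here is a small hedged sketch (not the authors' procedure, and the fault-count data are invented): fitting a classic NHPP software reliability model, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), to fault-detection times by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cumulative fault-detection times (testing days)
times = np.array([2., 5., 9., 14., 20., 28., 39., 55., 80., 120.])
T = 130.0  # end of the observation period

def neg_log_lik(params):
    a, b = np.exp(params)              # reparameterize so a, b > 0
    lam = a * b * np.exp(-b * times)   # NHPP intensity at the event times
    m_T = a * (1.0 - np.exp(-b * T))   # expected cumulative faults by T
    return -(np.sum(np.log(lam)) - m_T)

res = minimize(neg_log_lik, x0=np.log([10.0, 0.01]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.1f} total faults, b = {b_hat:.4f} /day")
print(f"estimated remaining faults: {a_hat * np.exp(-b_hat * T):.1f}")
```

The paper's equilibrium-distribution models replace the exponential fault-detection time distribution used here; the likelihood structure is analogous.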
Indirect rotor position sensing in real time for brushless permanent magnet motor drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ertugrul, N.; Acarnley, P.P.
1998-07-01
This paper describes a modern solution to real-time rotor position estimation for brushless permanent magnet (PM) motor drives. The position estimation scheme, based on flux linkage and line-current estimation, is implemented in real time using the abc reference frame, and it is tested dynamically. The position estimation model of the test motor, the development of the hardware, and the basic operation of the digital signal processor (DSP) are discussed. The overall position estimation strategy is accomplished with a fast DSP (TMS320C30). The method is a shaft-position sensorless method that is applicable to a wide range of excitation types in brushless PM motors without any restriction on the motor model and the current excitation. Both rectangular and sinewave-excited brushless PM motor drives are examined, and results are given to demonstrate the effectiveness of the method with dynamic loads in a closed estimated-position loop.
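A minimal sketch of the flux-linkage idea behind such estimators (assuming a two-phase αβ model with known resistance and a no-load case; all waveforms and parameters below are hypothetical, not the paper's abc-frame DSP implementation): integrate v - Ri to obtain the flux linkage and take its angle as the rotor position estimate.

```python
import numpy as np

dt, R = 1e-4, 0.5                       # sample time [s], resistance [ohm]
t = np.arange(0, 0.1, dt)
omega = 2 * np.pi * 50                  # 50 Hz electrical speed
theta_true = omega * t
psi_m = 0.1                             # PM flux linkage [Wb]

# Hypothetical measured alpha/beta voltages and currents (no-load case)
v_a = -psi_m * omega * np.sin(theta_true)
v_b = psi_m * omega * np.cos(theta_true)
i_a = np.zeros_like(t)
i_b = np.zeros_like(t)

# Flux linkage: psi = integral of (v - R*i) dt, plus the initial condition
psi_a = np.cumsum(v_a - R * i_a) * dt + psi_m
psi_b = np.cumsum(v_b - R * i_b) * dt

theta_est = np.unwrap(np.arctan2(psi_b, psi_a))
print("max position error [rad]:", np.abs(theta_est - theta_true).max())
```

In practice the open-loop integration drifts with offset errors, which is part of why DSP implementations add compensation around this core idea.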
Estimating survival rates with time series of standing age‐structure data
Udevitz, Mark S.; Gogan, Peter J.
2012-01-01
It has long been recognized that age‐structure data contain useful information for assessing the status and dynamics of wildlife populations. For example, age‐specific survival rates can be estimated with just a single sample from the age distribution of a stable, stationary population. For a population that is not stable, age‐specific survival rates can be estimated using techniques such as inverse methods that combine time series of age‐structure data with other demographic data. However, estimation of survival rates using these methods typically requires numerical optimization, a relatively long time series of data, and smoothing or other constraints to provide useful estimates. We developed general models for possibly unstable populations that combine time series of age‐structure data with other demographic data to provide explicit maximum likelihood estimators of age‐specific survival rates with as few as two years of data. As an example, we applied these methods to estimate survival rates for female bison (Bison bison) in Yellowstone National Park, USA. This approach provides a simple tool for monitoring survival rates based on age‐structure data.
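A hedged illustration of the simplest case behind this approach (made-up counts, and not the authors' full maximum likelihood machinery): with standing age-structure counts in two consecutive years, the survival rate of age class x can be estimated as the ratio of animals aged x+1 in year t+1 to animals aged x in year t.

```python
import numpy as np

# Hypothetical standing age-structure counts in two consecutive years
ages = np.arange(6)
n_t = np.array([120, 95, 80, 61, 43, 25], dtype=float)   # year t
n_t1 = np.array([131, 98, 81, 66, 48, 30], dtype=float)  # year t+1

# Ratio estimator: s_x = n_{x+1, t+1} / n_{x, t}
s = n_t1[1:] / n_t[:-1]
for x, sx in zip(ages[:-1], s):
    print(f"survival age {x} -> {x + 1}: {sx:.2f}")
```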
Head movement compensation in real-time magnetoencephalographic recordings.
Little, Graham; Boe, Shaun; Bardouille, Timothy
2014-01-01
Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current source time course estimates in real-time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps:
• Data acquisition
• Head position estimation
• Source localization
• Real-time source estimation
This work explains the technical details and validates each of these steps.
Linear estimation of coherent structures in wall-bounded turbulence at Re_τ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully developed turbulent channel flow at Re_τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as the measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although its performance is reduced.
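For readers unfamiliar with the estimation setup, here is a generic discrete-time Kalman filter sketch (a toy two-state linear system, not the paper's Navier-Stokes-based model): measurements of one state are used to estimate a state that is never measured, mirroring the wall-measurement / off-wall-estimation arrangement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear system: x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k
A = np.array([[0.99, 0.10], [-0.10, 0.95]])
C = np.array([[1.0, 0.0]])                  # measure only the first state
Q, Rn = 0.01 * np.eye(2), np.array([[0.05]])

x, xh, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
err = []
for _ in range(500):
    x = A @ x + rng.multivariate_normal([0.0, 0.0], Q)
    y = C @ x + rng.normal(0.0, np.sqrt(Rn[0, 0]), 1)
    xh, P = A @ xh, A @ P @ A.T + Q                    # predict
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rn)      # Kalman gain
    xh = xh + K @ (y - C @ xh)                         # update
    P = (np.eye(2) - K @ C) @ P
    err.append(x[1] - xh[1])   # error in the state we never measure
print("RMS error of the unmeasured state:", np.sqrt(np.mean(np.square(err))))
```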
NASA Astrophysics Data System (ADS)
Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.
2016-07-01
We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wavefront sensor over extended landscapes. The proposed algorithm has low complexity in comparison with classical correlation methods, which is a big advantage for on-board use at high instrument data rates and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened software and hardware. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
NASA Astrophysics Data System (ADS)
Caprio, M.; Cua, G. B.; Wiemer, S.; Fischer, M.; Heaton, T. H.; CISN EEW Team
2011-12-01
The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of three EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system being tested in real-time in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network (SCSN) since July 2008, and at the Northern California Seismic Network (NCSN) since February 2009. With the aim of improving the convergence of real-time VS magnitude estimates to network magnitudes, we evaluate various empirical and Vs30-based approaches to accounting for site amplification. Empirical station corrections for SCSN stations are derived from M>3.0 events from 2005 through 2009. We evaluate the performance of the various approaches using an independent 2010 dataset. In addition, we analyze real-time VS performance from 2008 to the present to quantify the time and spatial dependence of VS uncertainty estimates. We also summarize real-time VS performance for significant 2011 events in California. Improved magnitude and uncertainty estimates potentially increase the utility of EEW information for end-users, particularly those intending to automate damage-mitigating actions based on real-time information.
Robust Modal Filtering and Control of the X-56A Model with Simulated Fiber Optic Sensor Failures
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2016-01-01
The X-56A aircraft is a remotely-piloted aircraft with flutter modes intentionally designed into the flight envelope. The X-56A program must demonstrate flight control while suppressing all unstable modes. A previous X-56A model study demonstrated a distributed-sensing-based active shape and active flutter suppression controller. The controller relies on an estimator which is sensitive to bias. This estimator is improved herein, and a real-time robust estimator is derived and demonstrated on 1530 fiber optic sensors. It is shown in simulation that the estimator can simultaneously reject 230 worst-case fiber optic sensor failures automatically. These sensor failures include locations with high leverage (or importance). To reduce the impact of leverage outliers, concentration based on a Mahalanobis trim criterion is introduced. A redescending M-estimator with Tukey bisquare weights is used to improve location and dispersion estimates within each concentration step in the presence of asymmetry (or leverage). A dynamic simulation is used to compare the concentrated robust estimator to a state-of-the-art real-time robust multivariate estimator. The estimators support a previously-derived mu-optimal shape controller. It is found that during the failure scenario, the concentrated modal estimator keeps the system stable.
Regularity of a renewal process estimated from binary data.
Rice, John D; Strawderman, Robert L; Johnson, Brent A
2017-10-09
Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
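To illustrate the quantities involved (a simulation sketch under assumed parameters, not the authors' estimators): for a gamma renewal process the CV of interevent times is 1/sqrt(shape), and binary window data record only whether each observation window contains at least one event.

```python
import numpy as np

rng = np.random.default_rng(2)

def binary_windows(shape, scale, horizon, window):
    """One subject's gamma renewal process -> 0/1 per observation window."""
    events, t = [], 0.0
    while True:
        t += rng.gamma(shape, scale)
        if t >= horizon:
            break
        events.append(t)
    edges = np.arange(0.0, horizon + window, window)
    counts, _ = np.histogram(events, bins=edges)
    return (counts > 0).astype(int)

shape, scale = 4.0, 7.0              # mean interevent time = 28 (hypothetical)
print("true CV = 1/sqrt(shape) =", 1 / np.sqrt(shape))

data = np.array([binary_windows(shape, scale, horizon=180, window=30)
                 for _ in range(200)])
print("fraction of 30-day windows with >= 1 event:", data.mean())
```

The estimation problem the paper solves runs in the opposite direction: recovering shape (and hence the CV) from exactly this kind of 0/1 window data.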
A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.
Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye
2017-11-01
Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was lower by a certain degree one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
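As a hedged sketch of the regression step (the feature set and data below are made up; the study's 14 features and genetic-algorithm selection are not reproduced here), one can regress beat-to-beat BP on PTT-derived features with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 500
# Hypothetical beat-to-beat features: PTT [s] and heart rate [bpm]
ptt = rng.normal(0.25, 0.03, n)
hr = rng.normal(70.0, 10.0, n)
sbp = 120.0 - 200.0 * (ptt - 0.25) + 0.2 * (hr - 70.0) + rng.normal(0, 3, n)

X = np.column_stack([ptt, hr])
lin = LinearRegression().fit(X[:400], sbp[:400])
svr = SVR(kernel="rbf", C=10.0).fit(X[:400], sbp[:400])

for name, model in (("linear", lin), ("SVR", svr)):
    err = model.predict(X[400:]) - sbp[400:]
    print(f"{name}: test error {err.mean():.2f} +/- {err.std():.2f} mmHg")
```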
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological momentary assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, as well as information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design and at the event times. A method is proposed for fitting a mixed Poisson point process model for the impact of partially observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig
2014-01-01
Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
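A minimal sketch of the correction model described here (hypothetical coefficients and simulated data; the actual study used ArcGIS network times plus weather, lights-and-sirens, daylight and rush-hour covariates): regress the actual transport time on the street-network estimate and indicator covariates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 2000
network_min = rng.uniform(3, 25, n)     # street-network estimate [min]
lights = rng.integers(0, 2, n)          # lights-and-sirens indicator
rush = rng.integers(0, 2, n)            # rush-hour indicator

# Hypothetical truth: the network estimate systematically underestimates
actual = 1.5 + 1.2 * network_min - 3.0 * lights + 2.0 * rush \
         + rng.normal(0, 2, n)

X = np.column_stack([network_min, lights, rush])
model = LinearRegression().fit(X, actual)
pred = model.predict(X)
print("coefficients:", np.round(model.coef_, 2))
print("fraction within 5 min:", np.mean(np.abs(pred - actual) < 5))
```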
Why are you late? Investigating the role of time management in time-based prospective memory.
Waldum, Emily R; McDaniel, Mark A
2016-08-01
Time-based prospective memory (TBPM) tasks are those that are to be performed at a specific future time. Contrary to typical laboratory TBPM tasks (e.g., hit the Z key every 5 min), many real-world TBPM tasks require more complex time-management processes. For instance, to attend an appointment on time, one must estimate the duration of the drive to the appointment and then use this estimate to create and execute a secondary TBPM intention (e.g., "I need to start driving by 1:30 to make my 2:00 appointment on time"). Under- and overestimates of drive time can lead to inefficient TBPM performance, with the former leading to missed appointments and the latter to long stints in the waiting room. Despite the common occurrence of complex TBPM tasks in everyday life, to date, no studies have investigated how components of time management, including time estimation, affect behavior in such complex TBPM tasks. Therefore, the current study aimed to investigate timing biases in both older and younger adults and, further, to determine how such biases, along with additional time-management components including planning and plan fidelity, influence complex TBPM performance. Results suggest for the first time that younger and older adults do not always utilize similar timing strategies and, as a result, can produce differential timing biases under the exact same environmental conditions. These timing biases, in turn, play a vital role in how efficiently both younger and older adults perform a later TBPM task that requires them to utilize their earlier time estimate. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. They effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The time-varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman-filtering) theory.
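A hedged illustration of deconvolution in this setting (a regularized frequency-domain inverse with an assumed first-order tube response; this is not the NASA Kalman-filter algorithms themselves, and all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000.0
t = np.arange(0, 2, 1 / fs)

# True upstream pressure: two tones (hypothetical)
p_in = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# Assumed tube dynamics: first-order low-pass with fc = 10 Hz
f = np.fft.rfftfreq(t.size, 1 / fs)
H = 1.0 / (1.0 + 1j * f / 10.0)
p_out = np.fft.irfft(np.fft.rfft(p_in) * H, t.size)
p_out += rng.normal(0, 0.02, t.size)            # measurement noise

# Regularized (Wiener-like) inverse to limit noise amplification
lam = 1e-2
G = np.conj(H) / (np.abs(H) ** 2 + lam)
p_est = np.fft.irfft(np.fft.rfft(p_out) * G, t.size)
print("RMS reconstruction error:", np.sqrt(np.mean((p_est - p_in) ** 2)))
```

The regularization constant plays the role that the noise model plays in the minimum-covariance formulation: a naive inverse 1/H would amplify high-frequency noise without bound.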
Clock Synchronization Through Time-Variant Underwater Acoustic Channels
2012-09-01
stage, we analyze a series of chirp responses to identify the least time-varying multipath present in the channel between the two nodes. Based on the... based on the detected arrivals and determines the most stable one based on the correlation coefficient of a model fit to the time-of-arrival estimates... short periods of time. Nevertheless, signal fluctuations can occur due to transceiver motion or inherent changes within the propagation medium
Network Structure and Travel Time Perception
Parthasarathi, Pavithra; Levinson, David; Hochmair, Hartwig
2013-01-01
The purpose of this research is to test the systematic variation in the perception of travel time among travelers and relate the variation to the underlying street network structure. Travel survey data from the Twin Cities metropolitan area (which includes the cities of Minneapolis and St. Paul) is used for the analysis. Travelers are classified into two groups based on the ratio of perceived and estimated commute travel time. The measures of network structure are estimated using the street network along the identified commute route. T-test comparisons are conducted to identify statistically significant differences in estimated network measures between the two traveler groups. The combined effect of these estimated network measures on travel time is then analyzed using regression models. The results from the t-test and regression analyses confirm the influence of the underlying network structure on the perception of travel time. PMID:24204932
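A small sketch of the comparison described (with illustrative numbers; the study's network measures came from the Twin Cities street network): a two-sample t-test on a network measure between travelers who over- and under-perceive their commute time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical circuity (route length / straight-line distance) per group
circuity_over = rng.normal(1.35, 0.15, 120)    # perceived > estimated time
circuity_under = rng.normal(1.28, 0.15, 140)   # perceived <= estimated time

t_stat, p_val = stats.ttest_ind(circuity_over, circuity_under,
                                equal_var=False)   # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```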
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed into independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods for estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as to more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small-sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Spatio-temporal population estimates for risk management
NASA Astrophysics Data System (ADS)
Cockings, Samantha; Martin, David; Smith, Alan; Martin, Rebecca
2013-04-01
Accurate estimation of population at risk from hazards and effective emergency management of events require not just appropriate spatio-temporal modelling of hazards but also of population. While much recent effort has been focused on improving the modelling and predictions of hazards (both natural and anthropogenic), there has been little parallel advance in the measurement or modelling of population statistics. Different hazard types occur over diverse temporal cycles, are of varying duration and differ significantly in their spatial extent. Even events of the same hazard type, such as flood events, vary markedly in their spatial and temporal characteristics. Conceptually and pragmatically then, population estimates should also be available for similarly varying spatio-temporal scales. Routine population statistics derived from traditional censuses or surveys are usually static representations in both space and time, recording people at their place of usual residence on census/survey night and presenting data for administratively defined areas. Such representations effectively fix the scale of population estimates in both space and time, which is unhelpful for meaningful risk management. Over recent years, the Pop24/7 programme of research, based at the University of Southampton (UK), has developed a framework for spatio-temporal modelling of population, based on gridded population surfaces. Based on a data model which is fully flexible in terms of space and time, the framework allows population estimates to be produced for any time slice relevant to the data contained in the model. It is based around a set of origin and destination centroids, which have capacities, spatial extents and catchment areas, all of which can vary temporally, such as by time of day, day of week, season. A background layer, containing information on features such as transport networks and landuse, provides information on the likelihood of people being in certain places at specific times. Unusual patterns associated with special events can also be modelled and the framework is fully volume preserving. Outputs from the model are gridded population surfaces for the specified time slice, either for total population or by sub-groups (e.g. age). Software to implement the models (SurfaceBuilder247) has been developed and pre-processed layers for typical time slices for England and Wales in 2001 and 2006 are available for UK academic purposes. The outputs and modelling framework from the Pop24/7 programme provide significant opportunities for risk management applications. For estimates of mid- to long-term cumulative population exposure to hazards, such as in flood risk mapping, populations can be produced for numerous time slices and integrated with flood models. For applications in emergency response/ management, time-specific population models can be used as seeds for agent-based models or other response/behaviour models. Estimates for sub-groups of the population also permit exploration of vulnerability through space and time. This paper outlines the requirements for effective spatio-temporal population models for risk management. It then describes the Pop24/7 framework and illustrates its potential for risk management through presentation of examples from natural and anthropogenic hazard applications. The paper concludes by highlighting key challenges for future research in this area.
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
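A hedged sketch of the core localization step (a Gauss-Newton least-squares solve for source position and emission time from arrival times at known receivers under a constant sound speed; the geometry is made up, and the paper's Bayesian priors and uncertainty scaling are omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
c = 1480.0                      # assumed sound speed [m/s]

# Known hydrophone positions [x, y, depth] and a hidden source (all made up)
recv = np.array([[0, 0, 10], [800, 0, 60], [0, 700, 15],
                 [600, 650, 55], [400, -300, 35]], dtype=float)
src, t0 = np.array([350.0, 280.0, 30.0]), 1.2   # position, emission time

toa = t0 + np.linalg.norm(recv - src, axis=1) / c
toa += rng.normal(0, 1e-4, len(recv))           # arrival-time noise

# Gauss-Newton on the unknowns (x, y, z, t0)
x = np.array([100.0, 100.0, 20.0, 0.0])
for _ in range(25):
    d = np.linalg.norm(recv - x[:3], axis=1)
    resid = toa - (x[3] + d / c)
    J = np.hstack([(x[:3] - recv) / (c * d[:, None]),
                   np.ones((len(recv), 1))])
    dx, *_ = np.linalg.lstsq(J, resid, rcond=None)
    x += dx
print("estimated source:", np.round(x[:3], 1), " t0:", round(x[3], 4))
```

The full method additionally treats receiver positions, synchronization times and the environment as uncertain unknowns, which this sketch holds fixed.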
Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S
2018-06-01
In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.
Tracking the time-varying cortical connectivity patterns by adaptive multivariate estimators.
Astolfi, L; Cincotti, F; Mattia, D; De Vico Fallani, F; Tocci, A; Colosimo, A; Salinari, S; Marciani, M G; Hesse, W; Witte, H; Ursino, M; Zavaglia, M; Babiloni, F
2008-03-01
The directed transfer function (DTF) and the partial directed coherence (PDC) are frequency-domain estimators that are able to describe interactions between cortical areas in terms of the concept of Granger causality. However, the classical estimation of these methods is based on multivariate autoregressive (MVAR) modelling of time series, which requires stationarity of the signals. In this way, transient pathways of information transfer remain hidden. The objective of this study is to test a time-varying multivariate method for the estimation of rapidly changing connectivity relationships between cortical areas of the human brain, based on DTF/PDC and on the use of adaptive MVAR (AMVAR) modelling, and to apply it to a set of real high-resolution EEG data. This approach allows the observation of rapidly changing influences between the cortical areas during the execution of a task. The simulation results indicated that time-varying DTF and PDC are able to estimate correctly the imposed connectivity patterns under reasonable operative conditions of signal-to-noise ratio (SNR) and number of trials. An SNR of five and a number of trials of at least 20 provide good accuracy in the estimation. After testing the method in the simulation study, we provide an application to the cortical estimations obtained from high-resolution EEG data recorded from a group of healthy subjects during a combined foot-lips movement and present the time-varying connectivity patterns resulting from the application of both DTF and PDC. Two different cortical networks were detected with the proposed methods, one constant across the task and the other evolving during the preparation of the joint movement.
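A hedged sketch of the adaptive-modelling ingredient (recursive least squares with a forgetting factor tracking a time-varying bivariate AR(1) coefficient matrix on simulated signals; computing the DTF/PDC spectra from the fitted coefficients is omitted, and all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(8)
T = 2000

# Simulate y_t = A_t y_{t-1} + noise; coupling 1 -> 2 switches on at t = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    a21 = 0.0 if t < 1000 else 0.6
    A = np.array([[0.5, 0.0], [a21, 0.5]])
    y[t] = A @ y[t - 1] + rng.normal(0, 0.1, 2)

# Recursive least squares with forgetting factor to track A_t
lam = 0.99
P = 1e3 * np.eye(2)
A_hat = np.zeros((2, 2))
track = []
for t in range(1, T):
    x = y[t - 1]                          # regressor
    k = P @ x / (lam + x @ P @ x)         # gain vector
    A_hat += np.outer(y[t] - A_hat @ x, k)
    P = (P - np.outer(k, x @ P)) / lam
    track.append(A_hat[1, 0])
print("coupling estimate before switch:", np.mean(track[800:999]))
print("coupling estimate after switch: ", np.mean(track[1200:]))
```

Granger-causal measures such as DTF and PDC are then functions of the (here time-varying) AR coefficients evaluated in the frequency domain.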
Lee, Joonnyong; Sohn, JangJay; Park, Jonghyun; Yang, SeungMan; Lee, Saram; Kim, Hee Chan
2018-06-18
Non-invasive continuous blood pressure monitors are of great interest to the medical community due to their value in hypertension management. Recently, studies have shown the potential of pulse pressure as a therapeutic target for hypertension, but not enough attention has been given to non-invasive continuous monitoring of pulse pressure. Although accurate pulse pressure estimation can be of direct value to hypertension management and indirectly to the estimation of systolic blood pressure, as it is the sum of pulse pressure and diastolic blood pressure, only a few inadequate methods of pulse pressure estimation have been proposed. We present a novel, non-invasive blood pressure and pulse pressure estimation method based on pulse transit time and pre-ejection period. Pre-ejection period and pulse transit time were measured non-invasively using electrocardiogram, seismocardiogram, and photoplethysmogram measured from the torso. The proposed method used the 2-element Windkessel model to model pulse pressure with the ratio of stroke volume, approximated by pre-ejection period, and arterial compliance, estimated by pulse transit time. Diastolic blood pressure was estimated using pulse transit time, and systolic blood pressure was estimated as the sum of the two estimates. The estimation method was verified in 11 subjects in two separate conditions with induced cardiovascular response and the results were compared against a reference measurement and values obtained from a previously proposed method. The proposed method yielded high agreement with the reference (pulse pressure correlation with reference R ≥ 0.927, diastolic blood pressure correlation with reference R ≥ 0.854, systolic blood pressure correlation with reference R ≥ 0.914) and high estimation accuracy in pulse pressure (mean root-mean-squared error ≤ 3.46 mmHg) and blood pressure (mean root-mean-squared error ≤ 6.31 mmHg for diastolic blood pressure and ≤ 8.41 mmHg for systolic blood pressure) over a wide range of hemodynamic changes. The proposed pulse pressure estimation method provides accurate estimates in situations with and without significant changes in stroke volume. The proposed method improves upon the currently available systolic blood pressure estimation methods by providing accurate pulse pressure estimates.
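A minimal sketch of the modelling idea (all calibration constants below are hypothetical; the actual study fits subject-specific coefficients to reference measurements): treat pulse pressure as stroke volume over arterial compliance, with PEP standing in for stroke volume and PTT for compliance, then add an estimated DBP to obtain SBP.

```python
def estimate_bp(pep_ms, ptt_ms, k_pp=7.2e5, k_dbp=1.4e4, c_dbp=5.0):
    """All calibration constants here are hypothetical.

    PP  ~ k_pp / (PEP * PTT)  : stroke-volume proxy ~ 1/PEP,
                                arterial-compliance proxy ~ PTT
    DBP ~ k_dbp / PTT + c_dbp : simple PTT-based DBP surrogate
    SBP = DBP + PP
    """
    pp = k_pp / (pep_ms * ptt_ms)
    dbp = k_dbp / ptt_ms + c_dbp
    return pp, dbp, dbp + pp

pp, dbp, sbp = estimate_bp(pep_ms=80.0, ptt_ms=200.0)
print(f"PP = {pp:.0f}, DBP = {dbp:.0f}, SBP = {sbp:.0f} mmHg")
```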
Hughes, Richard E; Nelson, Nancy A
2009-05-01
A mathematical model was developed for estimating the net present value (NPV) of the cash flow resulting from an investment in an intervention to prevent occupational low back pain (LBP). It combines biomechanics, epidemiology, and finance to give an integrated tool for a firm to use to estimate the investment worthiness of an intervention based on a biomechanical analysis of working postures and hand loads. The model can be used by an ergonomist to estimate the investment worthiness of a proposed intervention. The analysis would begin with a biomechanical evaluation of the current job design and post-intervention job. Economic factors such as hourly labor cost, overhead, workers' compensation costs of LBP claims, and discount rate are combined with the biomechanical analysis to estimate the investment worthiness of the proposed intervention. While this model is limited to low back pain, the simulation framework could be applied to other musculoskeletal disorders. The model uses Monte Carlo simulation to compute the statistical distribution of NPV, and it uses a discrete event simulation paradigm based on four states: (1) working and no history of lost time due to LBP, (2) working and history of lost time due to LBP, (3) lost time due to LBP, and (4) leave job. Probabilities of transitions are based on an extensive review of the epidemiologic review of the low back pain literature. An example is presented.
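A hedged toy version of such a simulation (transition probabilities, costs and discount rate are invented for illustration; the published model draws its probabilities from the LBP epidemiology literature): a Monte Carlo march through the four worker states, accumulating discounted claim costs with and without the intervention.

```python
import numpy as np

rng = np.random.default_rng(9)
# States: 0 working/no LBP history, 1 working/history, 2 lost time, 3 left job

def mean_discounted_cost(p_injury, years=10, claim=12000.0, rate=0.05,
                         n_workers=5000):
    """Toy 4-state Monte Carlo; all probabilities/costs are invented."""
    total = 0.0
    for _ in range(n_workers):
        state, cost = 0, 0.0
        for yr in range(years):
            if state == 3:
                break
            if state in (0, 1) and rng.random() < p_injury:
                state = 2
                cost += claim / (1 + rate) ** yr
            elif state == 2:
                state = 1 if rng.random() < 0.8 else 3  # return or leave
        total += cost
    return total / n_workers

baseline = mean_discounted_cost(p_injury=0.05)
improved = mean_discounted_cost(p_injury=0.03)   # after the intervention
npv = (baseline - improved) - 300.0              # minus per-worker cost
print(f"expected NPV per worker: ${npv:.0f}")
```

Running the whole simulation many times, as the paper does, yields a statistical distribution of NPV rather than a single point estimate.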
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
Lin, Chi-Yueh; Wang, Hsiao-Chuan
2011-07-01
The voice onset time (VOT) of a stop consonant is the interval between its burst onset and voicing onset. Among a variety of research topics on VOT, one that has been studied for years is how VOTs are efficiently measured. Manual annotation is a feasible way, but it becomes a time-consuming task when the corpus size is large. This paper proposes an automatic VOT estimation method based on an onset detection algorithm. At first, a forced alignment is applied to identify the locations of stop consonants. Then a random forest based onset detector searches each stop segment for its burst and voicing onsets to estimate a VOT. The proposed onset detection can detect the onsets in an efficient and accurate manner with only a small amount of training data. The evaluation data extracted from the TIMIT corpus were 2344 words with a word-initial stop. The experimental results showed that 83.4% of the estimations deviate less than 10 ms from their manually labeled values, and 96.5% of the estimations deviate by less than 20 ms. Some factors that influence the proposed estimation method, such as place of articulation, voicing of a stop consonant, and quality of succeeding vowel, were also investigated. © 2011 Acoustical Society of America
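A hedged sketch of a much simpler onset-based VOT measurement on a synthetic stop-vowel token (an energy jump for the burst and a low zero-crossing-rate criterion for voicing; this is not the paper's random-forest detector, and all thresholds are arbitrary):

```python
import numpy as np

def vot_ms(x, fs, frame=0.005):
    """Crude VOT: burst onset = first broadband energy jump; voicing
    onset = first later frame with high energy but a low zero-crossing
    rate (i.e., periodic low-frequency content)."""
    n = int(frame * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)

    burst = np.argmax(energy > 10 * energy[:5].mean())
    voiced = (energy > 1e-3) & (zcr < 0.1)
    voice = burst + 1 + np.argmax(voiced[burst + 1:])
    return (voice - burst) * frame * 1000.0

# Synthetic /ta/: 50 ms silence, 10 ms burst, 40 ms aspiration, then vowel
fs = 16000
rng = np.random.default_rng(10)
sil = rng.normal(0, 1e-4, int(0.05 * fs))
burst = rng.normal(0, 0.5, int(0.01 * fs))
asp = rng.normal(0, 1e-3, int(0.04 * fs))
tv = np.arange(int(0.2 * fs)) / fs
vowel = 0.8 * np.sin(2 * np.pi * 100 * tv)
x = np.concatenate([sil, burst, asp, vowel])
print("estimated VOT:", vot_ms(x, fs), "ms")   # expect about 50 ms
```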
NASA Astrophysics Data System (ADS)
Cheek, Kim A.
2017-08-01
Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.
A Role for Memory in Prospective Timing informs Timing in Prospective Memory
Waldum, Emily R; Sahakyan, Lili
2014-01-01
Time-based prospective memory (TBPM) tasks require the estimation of time in passing – known as prospective timing. Prospective timing is said to depend on an attentionally-driven internal clock mechanism, and is thought to be unaffected by memory for interval information (for reviews see, Block, Hancock, & Zakay, 2010; Block & Zakay, 1997). A prospective timing task that required a verbal estimate following the entire interval (Experiment 1) and a TBPM task that required production of a target response during the interval (Experiment 2) were used to test an alternative view that episodic memory does influence prospective timing. In both experiments, participants performed an ongoing lexical decision task of fixed duration while a varying number of songs were played in the background. Experiment 1 results revealed that verbal time estimates became longer the more songs participants remembered from the interval, suggesting that memory for interval information influences prospective time estimates. In Experiment 2, participants who were asked to perform the TBPM task without the aid of an external clock made their target responses earlier as the number of songs increased, indicating that prospective estimates of elapsed time increased as more songs were experienced. For participants who had access to a clock, changes in clock-checking coincided with the occurrence of song boundaries, indicating that participants used both song information and clock information to estimate time. Finally, ongoing task performance and verbal reports in both experiments further substantiate a role for episodic memory in prospective timing. PMID:22984950
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. To provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Second, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters, to ensure filter stability and reduce convergence time. Third, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
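A compact sketch of one EKF step for a first-order equivalent circuit model of the kind described (all parameter values and the linear OCV curve are illustrative assumptions, not the paper's identified values):

    import numpy as np

    # Hypothetical Thevenin parameters (in practice from RLS/off-line identification)
    R0, R1, C1, Q = 0.01, 0.015, 2400.0, 2.0 * 3600.0   # ohm, ohm, F, coulomb
    dt = 1.0
    a = np.exp(-dt / (R1 * C1))

    def ocv(soc):   # assumed linear open-circuit-voltage curve
        return 3.2 + 0.9 * soc

    def docv(soc):  # its derivative, used in the EKF Jacobian
        return 0.9

    def ekf_step(x, P_cov, i_k, v_meas, Qn=np.diag([1e-7, 1e-5]), Rn=1e-3):
        """One predict/update cycle; state x = [soc, v1] (polarization voltage)."""
        x_pred = np.array([x[0] - dt * i_k / Q,
                           a * x[1] + R1 * (1.0 - a) * i_k])
        F = np.array([[1.0, 0.0], [0.0, a]])
        P_pred = F @ P_cov @ F.T + Qn
        # measurement model: v = ocv(soc) - v1 - R0 * i
        H = np.array([docv(x_pred[0]), -1.0])
        y = v_meas - (ocv(x_pred[0]) - x_pred[1] - R0 * i_k)   # innovation
        S = H @ P_pred @ H + Rn
        K = P_pred @ H / S                                     # Kalman gain
        return x_pred + K * y, P_pred - np.outer(K, H @ P_pred)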
Online frequency estimation with applications to engine and generator sets
NASA Astrophysics Data System (ADS)
Manngård, Mikael; Böling, Jari M.
2017-07-01
Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper aims at presenting computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is presented, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
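A minimal sketch of the Goertzel recursion that such a filter bank is built from; re-evaluating it on a sliding window tracks one frequency bin per filter (window length and bin choice are the user's assumptions):

    import math

    def goertzel_power(x, k):
        """Squared magnitude of DFT bin k of the window x (length N),
        computed with the standard two-state Goertzel recursion."""
        n = len(x)
        w = 2.0 * math.pi * k / n
        c = 2.0 * math.cos(w)
        s1 = s2 = 0.0
        for sample in x:
            s1, s2 = sample + c * s1 - s2, s1
        return s1 * s1 + s2 * s2 - c * s1 * s2

    # One Goertzel filter per monitored component (e.g. a shaft-rotation
    # harmonic) applied to the most recent window gives the filter bank.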
Super-Resolution Algorithm in Cumulative Virtual Blanking
NASA Astrophysics Data System (ADS)
Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.
2008-11-01
The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Station hearability is solved using Cumulative Virtual Blanking. A simple simulator is presented using DS-SS signals. The results show that the MUSIC algorithm improves the time delay estimation in both cases, whether or not Cumulative Virtual Blanking is carried out.
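A sketch of the core MUSIC step for time delay estimation, assuming frequency-domain channel snapshots whose covariance R has already been formed (names and shapes are illustrative, not the paper's implementation):

    import numpy as np

    def music_delay_spectrum(R, freqs, taus, n_paths):
        """MUSIC pseudospectrum over candidate delays taus.
        R: covariance of frequency-domain snapshots; freqs: bin frequencies."""
        w, v = np.linalg.eigh(R)                # eigenvalues ascending
        En = v[:, :len(freqs) - n_paths]        # noise subspace
        p = []
        for tau in taus:
            a = np.exp(-2j * np.pi * freqs * tau)   # delay steering vector
            p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(p)                      # peaks at the path delays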
Distributed estimation for adaptive sensor selection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Hassan Hamid, Matasm M.
2014-05-01
Wireless sensor networks (WSNs) are usually deployed for monitoring systems, with detection and estimation tasks distributed across the sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
NASA Astrophysics Data System (ADS)
Paiva, Rodrigo C. D.; Durand, Michael T.; Hossain, Faisal
2015-01-01
Recent efforts have sought to estimate river discharge and other surface water-related quantities using spaceborne sensors, with better spatial coverage but worse temporal sampling as compared with in situ measurements. The Surface Water and Ocean Topography (SWOT) mission will provide river discharge estimates globally from space. However, questions on how to optimally use the spatially distributed but asynchronous satellite observations to generate continuous fields still exist. This paper presents a statistical model, River Kriging (RK), for estimating discharge time series in a river network in the context of the SWOT mission. RK uses discharge estimates at different locations and times to produce a continuous field using spatiotemporal kriging. A key component of RK is the space-time river discharge covariance, which was derived analytically from the diffusive wave approximation of the Saint-Venant equations. The RK covariance also accounts for the loss of correlation at confluences. The model performed well in a case study on the Ganges-Brahmaputra-Meghna (GBM) River system in Bangladesh using synthetic SWOT observations. The correlation model reproduced empirically derived values. RK (R2=0.83) outperformed other kriging-based methods (R2=0.80), as well as a simple time series linear interpolation (R2=0.72). RK was used to combine discharge from SWOT and in situ observations, improving estimates when the latter are included (R2=0.91). The proposed statistical concepts may eventually provide a feasible framework to estimate continuous discharge time series across a river network based on SWOT data, other altimetry missions, and/or in situ data.
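A compact sketch of the ordinary-kriging step RK builds on; the paper's space-time covariance comes from the diffusive wave approximation, whereas here cov is any user-supplied positive-definite function:

    import numpy as np

    def kriging_estimate(obs, target, cov):
        """Ordinary kriging of discharge at a space-time point.
        obs: list of ((s, t), q) pairs; cov(p1, p2) is the space-time
        covariance between two (location, time) points."""
        pts, q = [p for p, _ in obs], np.array([v for _, v in obs])
        n = len(pts)
        A = np.ones((n + 1, n + 1))      # last row/col: unbiasedness constraint
        A[n, n] = 0.0
        for i in range(n):
            for j in range(n):
                A[i, j] = cov(pts[i], pts[j])
        b = np.append([cov(p, target) for p in pts], 1.0)
        lam = np.linalg.solve(A, b)[:n]  # kriging weights
        return lam @ q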
Jin, Haofei; Yonezawa, Takahiro; Zhong, Yang; Kishino, Hirohisa; Hasegawa, Masami
2017-03-17
The giant rhinoceros beetles (Dynastini, Scarabaeidae, Coleoptera) are distributed in tropical and temperate regions in Asia, America and Africa. Recent molecular phylogenetic studies have revealed that the giant rhinoceros beetles can be divided into three clades representing Asia, America and Africa. Although a correlation between their evolution and the continental drift during the Pangean breakup was suggested, there is no accurate divergence time estimation among the three clades based on molecular data. Moreover, there is a long chronological gap between the timing of the Pangean breakup (Cretaceous: 110-148 Ma) and the emergence of the oldest fossil record (Oligocene: 33 Ma). In this study, we estimated their divergence times based on molecular data, using several combinations of fossil calibration sets, and obtained robust estimates. The inter-continental divergence events among the clades were estimated to have occurred about 99 Ma (Asian clade and others) and 78 Ma (American clade and African clade), both of which are after the Pangean breakup. These estimates suggest that their inter-continental divergences occurred by overseas sweepstakes dispersal, rather than by vicariance of populations caused by the Pangean breakup.
NASA Astrophysics Data System (ADS)
Teng, W. L.; Shannon, H. D.
2011-12-01
The USDA World Agricultural Outlook Board (WAOB) is responsible for monitoring weather and climate impacts on domestic and foreign crop development. One of WAOB's primary goals is to determine the net cumulative effect of weather and climate anomalies on final crop yields. To this end, a broad array of information is consulted, including maps, charts, and time series of recent weather, climate, and crop observations; numerical output from weather and crop models; and reports from the press, USDA attachés, and foreign governments. The resulting agricultural weather assessments are published in the Weekly Weather and Crop Bulletin, to keep farmers, policy makers, and commercial agricultural interests informed of weather and climate impacts on agriculture. Because both the amount and timing of precipitation significantly impact crop yields, WAOB often uses precipitation time series to identify growing seasons with similar weather patterns and help estimate crop yields for the current growing season, based on observed yields in analog years. Although, historically, these analog years are identified through visual inspection, the qualitative nature of this methodology sometimes precludes the definitive identification of the best analog year. One goal of this study is to introduce a more rigorous, statistical approach for identifying analog years. This approach is based on a modified coefficient of determination, termed the analog index (AI). The derivation of AI will be described. Another goal of this study is to compare the performance of AI for time series derived from surface-based observations vs. satellite-based measurements (NASA TRMM and other data). Five study areas and six growing seasons of data were analyzed (2003-2007 as potential analog years and 2008 as the target year). Results thus far show that, for all five areas, crop yield estimates derived from satellite-based precipitation data are closer to measured yields than are estimates derived from surface-based precipitation measurements. Work is continuing to include satellite-based surface soil moisture data and model-assimilated root zone soil moisture. This study is part of a larger effort to improve WAOB estimates by integrating NASA remote sensing observations and research results into WAOB's decision-making environment.
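The abstract does not give the exact form of the analog index; a plausible R²-style sketch, offered purely to illustrate the idea of scoring candidate seasons against the target season:

    import numpy as np

    def analog_index(target, candidate):
        """Hypothetical analog index: a coefficient of determination of a
        candidate year's precipitation time series against the target
        year's (the paper's exact modification may differ)."""
        target = np.asarray(target, dtype=float)
        candidate = np.asarray(candidate, dtype=float)
        ss_res = np.sum((target - candidate) ** 2)
        ss_tot = np.sum((target - target.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    # The analog year is the candidate season maximizing the index.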
Refinement of the timing-based estimator of pulsar magnetic fields
NASA Astrophysics Data System (ADS)
Biryukov, Anton; Astashenok, Artyom; Beskin, Gregory
2017-04-01
Numerical simulations of realistic non-vacuum magnetospheres of isolated neutron stars have shown that pulsar spin-down luminosities depend weakly on the magnetic obliquity α. In particular, L ∝ B²(1 + sin²α), where B is the magnetic field strength at the star surface. Being the most accurate expression to date, this result provides the opportunity to estimate B for a given radio pulsar with quite a high accuracy. In the current work, we present a refinement of the classical 'magneto-dipolar' formula for pulsar magnetic fields, B_md = (3.2 × 10^19 G) √(P Ṗ), where P is the neutron star spin period. The new, robust timing-based estimator is introduced as log B = log B_md + ΔB(M, α), where the correction ΔB depends on the equation of state (EOS) of dense matter, the individual pulsar obliquity α and the mass M. Adopting state-of-the-art statistics for M and α, we calculate the distributions of ΔB for a representative subset of 22 EOSs that do not contradict observations. It has been found that ΔB is distributed nearly normally, with the average in the range -0.5 to -0.25 dex and standard deviation σ[ΔB] ≈ 0.06 to 0.09 dex, depending on the adopted EOS. The latter quantity represents a formal uncertainty of the corrected estimation of log B, because ΔB is weakly correlated with log B_md. At the same time, if it is assumed that every considered EOS has the same chance of occurring in nature, then another, more generalized, estimator B* ≈ 3B_md/7 can be introduced, providing an unbiased value of the pulsar surface magnetic field with ~30 per cent uncertainty at 68 per cent confidence. Finally, we discuss the possible impact of pulsar timing irregularities on the timing-based estimation of B and review the astrophysical applications of the obtained results.
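The two estimators quoted above reduce to a few lines of Python (the ΔB below is a representative mid-range value from the quoted -0.5 to -0.25 dex interval, not an EOS-specific one):

    import math

    def b_magnetodipolar(p, pdot):
        """Classical estimator B_md = 3.2e19 G * sqrt(P * Pdot)."""
        return 3.2e19 * math.sqrt(p * pdot)

    def b_corrected(p, pdot, delta_b=-0.37):
        """Refined estimator log B = log B_md + ΔB(M, α); note that
        10**(-0.37) ≈ 3/7, matching the generalized estimator B* ≈ 3*B_md/7."""
        return 10.0 ** (math.log10(b_magnetodipolar(p, pdot)) + delta_b)

    # e.g. a Crab-like pulsar: b_corrected(0.033, 4.2e-13) ≈ 1.6e12 G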
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick
1987-01-01
The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar-estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using regression analysis to establish the ATI versus rain volume relation; this relation is then employed to compute rain volume from the satellite-derived ATI. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
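A minimal sketch of the ATI computation and regression step (the coefficients below are placeholders; the paper fits them to the 1981 radar data):

    def rain_volume_from_ati(echo_areas, dt_hours, a=3.68, b=1.01):
        """Hypothetical ATI regression V = a * ATI**b.
        echo_areas: cluster echo area (km^2) at each satellite time step;
        dt_hours: time step length. ATI accumulates area over the storm life."""
        ati = sum(area * dt_hours for area in echo_areas)   # km^2 * h
        return a * ati ** b                                 # rain volume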
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach which implements a parametric spectral estimation method in real time using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher order filters, increasing the spectrum resolution, and opening greater scope for using more complex methods.
Indigenous Mobility and School Attendance in Remote Australia: Cause or Effect?
ERIC Educational Resources Information Center
Taylor, John
2012-01-01
Despite claims of a negative impact on Indigenous school attendance due to mobility, no attempt has been made to estimate the number of school-age Indigenous children away from a home base at any one time. This paper uses census data to derive such estimates for the first time. It finds that Indigenous children are mostly sedentary within their…
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right) or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount) and, if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has, for example, shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
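A generic ABC rejection sketch of the kind used to sidestep an intractable signature likelihood; prior_draw, simulate and signature are user-supplied stand-ins, and the paper derives its distance and tolerance from the streamflow likelihood rather than choosing them ad hoc:

    import numpy as np

    def abc_rejection(prior_draw, simulate, signature, obs_sig, eps, n=100000):
        """ABC rejection sampler: keep parameter draws whose simulated
        signature (e.g. an FDC) lies within eps of the observed one."""
        kept = []
        for _ in range(n):
            theta = prior_draw()
            sig = signature(simulate(theta))     # run model, compute signature
            if np.linalg.norm(sig - obs_sig) < eps:
                kept.append(theta)
        return np.array(kept)                    # approximate posterior sample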
Austen, Emily J.; Weis, Arthur E.
2016-01-01
Our understanding of selection through male fitness is limited by the resource demands and indirect nature of the best available genetic techniques. Applying complementary, independent approaches to this problem can help clarify evolution through male function. We applied three methods to estimate selection on flowering time through male fitness in experimental populations of the annual plant Brassica rapa: (i) an analysis of mating opportunity based on flower production schedules, (ii) genetic paternity analysis, and (iii) a novel approach based on principles of experimental evolution. Selection differentials estimated by the first method disagreed with those estimated by the other two, indicating that mating opportunity was not the principal driver of selection on flowering time. The genetic and experimental evolution methods exhibited striking agreement overall, but a slight discrepancy between the two suggested that negative environmental covariance between age at flowering and male fitness may have contributed to phenotypic selection. Together, the three methods enriched our understanding of selection on flowering time, from mating opportunity to phenotypic selection to evolutionary response. The novel experimental evolution method may provide a means of examining selection through male fitness when genetic paternity analysis is not possible. PMID:26911957
Inference on periodicity of circadian time series.
Costa, Maria J; Finkenstädt, Bärbel; Roche, Véronique; Lévi, Francis; Gould, Peter D; Foreman, Julia; Halliday, Karen; Hall, Anthony; Rand, David A
2013-09-01
Estimation of the period length of time-course data from cyclical biological processes, such as those driven by the circadian pacemaker, is crucial for inferring the properties of the biological clock found in many living organisms. We propose a methodology for period estimation based on spectrum resampling (SR) techniques. Simulation studies show that SR is superior and more robust to non-sinusoidal and noisy cycles than a currently used routine based on Fourier approximations. In addition, a simple fit to the oscillations using linear least squares is available, together with a non-parametric test for detecting changes in period length which allows for period estimates with different variances, as frequently encountered in practice. The proposed methods are motivated by and applied to various data examples from chronobiology.
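A sketch of periodogram-based period estimation with a residual-bootstrap uncertainty interval; this is a simple stand-in for, not a reproduction of, the published spectrum-resampling procedure (t and x are assumed to be equally spaced numpy arrays):

    import numpy as np

    def period_estimate(x, dt):
        """Dominant period from the periodogram peak."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=dt)
        k = 1 + np.argmax(spec[1:])          # skip the zero-frequency bin
        return 1.0 / freqs[k]

    def period_interval(t, x, n_boot=500, seed=0):
        """95% interval via residual bootstrap around a sinusoidal fit."""
        rng = np.random.default_rng(seed)
        dt = t[1] - t[0]
        p0 = period_estimate(x, dt)
        X = np.column_stack([np.sin(2 * np.pi * t / p0),
                             np.cos(2 * np.pi * t / p0),
                             np.ones_like(t)])
        beta, *_ = np.linalg.lstsq(X, x, rcond=None)
        fit = X @ beta
        resid = x - fit
        boot = [period_estimate(fit + rng.choice(resid, len(resid)), dt)
                for _ in range(n_boot)]
        return np.percentile(boot, [2.5, 97.5])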
NASA Astrophysics Data System (ADS)
Yan, Xiangwu; Deng, Haoran; Wang, Ling; Guo, Qi
2017-12-01
It is essential to accurately estimate the state of charge (SOC) and state of health (SOH) of each monomer battery in an electric vehicle's li-ion power battery in order to extend battery life. Based on the Thevenin equivalent circuit battery model, this paper uses an adaptive unscented Kalman filter (AUKF) to estimate the inner ohmic resistance and the state of charge in real time; the state of health is then estimated in real time from the functional relationship between inner ohmic resistance and state of health. Battery charge and discharge experiments were performed under two different conditions to verify the feasibility and accuracy of this method.
Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J
2009-10-01
This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on an image sensor architecture. The sensor was designed targeting applications in chiral analysis for microchemistry systems. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided promising performance for real-time polarization measurements. An estimation scheme using 180 pixels in 1° steps provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.
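With polarizers spanning many angles, the incident polarization angle can be read off the first Fourier coefficient of the Malus-law response; a sketch under the assumption of uniform angle coverage (as with 180 pixels in 1° steps), not the paper's actual estimator:

    import numpy as np

    def incident_polarization_angle(intensities, polarizer_angles_rad):
        """Each pixel measures I = A + B*cos(2*(theta - phi)) (Malus's law),
        so phi falls out of the first Fourier coefficient in 2*theta."""
        th = 2.0 * np.asarray(polarizer_angles_rad, dtype=float)
        I = np.asarray(intensities, dtype=float)
        c = np.sum(I * np.cos(th))
        s = np.sum(I * np.sin(th))
        return 0.5 * np.arctan2(s, c)    # radians, defined modulo pi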
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
Berg, Eric; Roncali, Emilie; Hutchcroft, Will; Qi, Jinyi; Cherry, Simon R.
2016-01-01
In a scintillation detector, the light generated in the scintillator by a gamma interaction is converted to photoelectrons by a photodetector and produces a time-dependent waveform, the shape of which depends on the scintillator properties and the photodetector response. Several depth-of-interaction (DOI) encoding strategies have been developed that manipulate the scintillator’s temporal response along the crystal length and therefore require pulse shape discrimination techniques to differentiate waveform shapes. In this work, we demonstrate how maximum likelihood (ML) estimation methods can be applied to pulse shape discrimination to better estimate deposited energy, DOI and interaction time (for time-of-flight (TOF) PET) of a gamma ray in a scintillation detector. We developed likelihood models based on either the estimated detection times of individual photoelectrons or the number of photoelectrons in discrete time bins, and applied to two phosphor-coated crystals (LFS and LYSO) used in a previously developed TOF-DOI detector concept. Compared with conventional analytical methods, ML pulse shape discrimination improved DOI encoding by 27% for both crystals. Using the ML DOI estimate, we were able to counter depth-dependent changes in light collection inherent to long scintillator crystals and recover the energy resolution measured with fixed depth irradiation (~11.5% for both crystals). Lastly, we demonstrated how the Richardson-Lucy algorithm, an iterative, ML-based deconvolution technique, can be applied to the digitized waveforms to deconvolve the photodetector’s single photoelectron response and produce waveforms with a faster rising edge. After deconvolution and applying DOI and time-walk corrections, we demonstrated a 13% improvement in coincidence timing resolution (from 290 to 254 ps) with the LFS crystal and an 8% improvement (323 to 297 ps) with the LYSO crystal. PMID:27295658
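A sketch of the time-binned variant of the likelihood model described above: a binned Poisson log-likelihood compared across depth-dependent waveform templates (the templates are assumed to come from calibration; this is an illustration, not the authors' code):

    import numpy as np

    def ml_doi(counts, templates):
        """Pick the depth of interaction whose expected waveform maximizes
        the binned Poisson log-likelihood of the observed counts.
        counts: photoelectrons per time bin; templates[d]: expected counts
        per bin at depth d (all entries assumed > 0)."""
        counts = np.asarray(counts, dtype=float)
        best_d, best_ll = None, -np.inf
        for d, lam in templates.items():
            lam = np.asarray(lam, dtype=float)
            ll = np.sum(counts * np.log(lam) - lam)   # Poisson, factorial dropped
            if ll > best_ll:
                best_d, best_ll = d, ll
        return best_d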
Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas
NASA Astrophysics Data System (ADS)
Harabi, F.; Akkar, S.; Gharsallah, A.
2016-07-01
Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation can be applied to the collected data, which allows an important reduction in both computational cost and processing time, as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation using only a few linear operations. The DoA estimation algorithms based on this new approach exhibit good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be reached, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The conducted simulations testify to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.
Analyzing Double Delays at Newark Liberty International Airport
NASA Technical Reports Server (NTRS)
Evans, Antony D.; Lee, Paul
2016-01-01
When weather or congestion impacts the National Airspace System, multiple different Traffic Management Initiatives can be implemented, sometimes with unintended consequences. One particular inefficiency that is commonly identified is in the interaction between Ground Delay Programs (GDPs) and time-based metering of internal departures, or TMA scheduling. Internal departures under TMA scheduling can take large GDP delays, followed by large TMA scheduling delays, because they cannot be easily fitted into the overhead stream. In this paper we examine the causes of these double delays through an analysis of arrival operations at Newark Liberty International Airport (EWR) from June to August 2010. Depending on how the double delay is defined, between 0.3 percent and 0.8 percent of arrivals at EWR experienced double delays in this period. However, this represents between 21 percent and 62 percent of all internal departures in GDP and TMA scheduling. A deep dive into the data reveals that two causes of high internal departure scheduling delays are upstream flights making up time between their estimated departure clearance times (EDCTs) and entry into time-based metering, which undermines the sequencing and spacing underlying the flight EDCTs, and high demand on TMA, when TMA airborne metering delays are high. Data mining methods, currently including logistic regression, support vector machines and K-nearest neighbors, are used to predict the occurrence of double delays and high internal departure scheduling delays, with accuracies up to 0.68. So far, key indicators of double delay and high internal departure scheduling delay are TMA virtual runway queue size and the degree to which estimated runway demand based on TMA estimated times of arrival has changed relative to the estimated runway demand based on EDCTs. However, more analysis is needed to confirm this.
Estimating the Counterfactual: How Many Uninsured Adults Would There Be Today Without the ACA?
Blumberg, Linda J; Garrett, Bowen; Holahan, John
2016-01-01
Time lags in receiving data from long-standing, large federal surveys complicate real-time estimation of the coverage effects of full Affordable Care Act (ACA) implementation. Fast-turnaround household surveys fill some of the void in data on recent changes to insurance coverage, but they lack the historical data that allow analysts to account for trends that predate the ACA, economic fluctuations, and earlier public program expansions when predicting how many people would be uninsured without comprehensive health care reform. Using data from the Current Population Survey (CPS) from 2000 to 2012 and the Health Reform Monitoring Survey (HRMS) data for 2013 and 2015, this article develops an approach to estimate the number of people who would be uninsured in the absence of the ACA and isolates the change in coverage as of March 2015 that can be attributed to the ACA. We produce counterfactual forecasts of the number of uninsured absent the ACA for 9 age-income groups and compare these estimates with 2015 estimates based on HRMS relative coverage changes applied to CPS-based population estimates. As of March 2015, we find the ACA has reduced the number of uninsured adults by 18.1 million compared with the number who would have been uninsured at that time had the law not been implemented. That decline represents a 46% reduction in the number of nonelderly adults without insurance. The approach developed here can be applied to other federal data and timely surveys to provide a range of estimates of the overall effects of reform. © The Author(s) 2016.
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterman, Gordon; Keating, Kristina; Binley, Andrew
Here, we estimate parameters from the Katz and Thompson permeability model using laboratory complex electrical conductivity (CC) and nuclear magnetic resonance (NMR) data to build permeability models parameterized with geophysical measurements. We use the Katz and Thompson model based on the characteristic hydraulic length scale, determined from mercury injection capillary pressure estimates of pore throat size, and the intrinsic formation factor, determined from multisalinity conductivity measurements, for this purpose. Two new permeability models are tested, one based on CC data and another that incorporates CC and NMR data. From measurements made on forty-five sandstone cores collected from fifteen different formations, we evaluate how well the CC relaxation time and the NMR transverse relaxation times compare to the characteristic hydraulic length scale and how well the formation factor estimated from CC parameters compares to the intrinsic formation factor. We find: (1) the NMR transverse relaxation time models the characteristic hydraulic length scale more accurately than the CC relaxation time (R2 of 0.69 and 0.33 and normalized root mean square errors (NRMSE) of 0.16 and 0.21, respectively); (2) the CC-estimated formation factor is well correlated with the intrinsic formation factor (NRMSE = 0.23). We demonstrate that permeability estimates from the joint NMR-CC model (NRMSE = 0.13) compare favorably to estimates from the Katz and Thompson model (NRMSE = 0.074). Lastly, this model advances the capability of the Katz and Thompson model by employing parameters measurable in the field, giving it the potential to estimate permeability from geophysical measurements more accurately than is currently possible.
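The underlying Katz and Thompson relation is compact enough to sketch directly (the 1/226 constant is the commonly used value; in the joint model the inputs would come from the NMR and CC estimates rather than mercury injection):

    def katz_thompson_permeability(l_c_um, formation_factor):
        """Katz-Thompson: k = l_c**2 / (226 * F).
        l_c_um: characteristic hydraulic length scale in micrometers;
        formation_factor: intrinsic formation factor F."""
        l_c_m = l_c_um * 1e-6
        k_m2 = l_c_m ** 2 / (226.0 * formation_factor)
        return k_m2 / 9.869e-13        # convert m^2 to darcys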
Estimation of a Historic Mercury Load Function for Lake Michigan using Dated Sediment Cores
Box cores collected between 1994 and 1996 were used to estimate historic mercury loads to Lake Michigan. Based on a kriging spatial interpolation of 54 Pb-210 dated cores, 228 metric tons of mercury are stored in the lake’s sediments (excluding Green Bay). To estimate the time ...
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
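A minimal sketch of the simple moving average estimator over annual panels; equal weights give the basic interpretation, and alternative weights correspond to the weighted variants (weights here are the user's assumption):

    def moving_average_estimate(panel_means, weights=None):
        """Moving average over the n most recent annual panel estimates."""
        n = len(panel_means)
        w = weights or [1.0 / n] * n
        return sum(wi * yi for wi, yi in zip(w, panel_means))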
Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield
Robert B. Thomas
1986-01-01
SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes, whose estimates are biased and whose variance cannot be estimated. Sampling probabilities are based on a sediment rating function, which promotes greater sampling intensity during periods of high...
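A sketch of the variable-probability idea behind SALT: Poisson-type selection with probability proportional to the rating-curve predicted yield, kept unbiased by inverse-probability weighting (details of the actual scheme differ; this is an illustration):

    import random

    def salt_select(predicted, n_expected):
        """Include unit i with probability proportional to its rating-curve
        predicted sediment yield, targeting n_expected samples on average."""
        total = sum(predicted)
        probs = [min(1.0, n_expected * y / total) for y in predicted]
        chosen = [i for i, p in enumerate(probs) if random.random() < p]
        return chosen, probs

    def salt_yield(measured, chosen, probs):
        """Horvitz-Thompson style unbiased total from the sampled units."""
        return sum(measured[i] / probs[i] for i in chosen)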
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers calculation efficiency, which is not appropriate for high-frequency real-time GNSS clock estimation, such as 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize high-frequency updating of multi-GNSS real-time clocks, and a rigorous comparison and analysis under the same conditions are performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis shows a real-time augmentation message SISRE of about 4-7 cm for GPS, 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can effectively accelerate convergence by about 60% and improve positioning accuracy by about 30%, obtaining an average RMS of 4 cm in the horizontal and 6 cm in the vertical; additionally, RT-SPP in the prototype system can achieve a positioning accuracy of about 1 m RMS in the horizontal and 1.5-2 m in the vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
DOT National Transportation Integrated Search
2008-08-01
ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...
Population Estimation in Singapore Based on Remote Sensing and Open Data
NASA Astrophysics Data System (ADS)
Guo, H.; Cao, K.; Wang, P.
2017-09-01
Population estimation statistics are widely used in government, commercial and educational sectors for a variety of purposes. With the growing emphasis on real-time and detailed population information, data users nowadays have switched from traditional census data to more technology-based data sources such as LiDAR point clouds and high-resolution satellite imagery. Nevertheless, such data are costly and periodically unavailable. In this paper, the authors use West Coast District, Singapore as a case study to investigate the applicability and effectiveness of using satellite imagery from Google Earth for building footprint extraction and population estimation. At the same time, volunteered geographic information (VGI) is utilized as ancillary data for building footprint extraction; open data such as OpenStreetMap (OSM) can be employed to enhance the extraction process. In view of the challenges in building shadow extraction, this paper discusses several methods, including buffer, mask and shape index, to improve accuracy. It also illustrates population estimation methods based on building height and number-of-floor estimates. The results show that the accuracy of the housing unit method for population estimation can reach 92.5%, which is remarkably accurate. This paper thus provides insights into techniques for building extraction and fine-scale population estimation, which will benefit users such as urban planners in policymaking and the urban planning of Singapore.
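The housing unit method mentioned above reduces to a one-line calculation; a sketch with placeholder rates (actual values would come from published Singapore statistics):

    def housing_unit_population(n_units, occupancy_rate, persons_per_household):
        """Housing unit method: population = occupied units x household size.
        n_units would come from extracted building footprints and estimated
        floor counts; the two rates are illustrative assumptions."""
        return n_units * occupancy_rate * persons_per_household

    # e.g. 12,000 extracted units, 95% occupancy, 3.3 persons per household:
    # housing_unit_population(12000, 0.95, 3.3) ≈ 37,620 residents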
An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory
Komashie, Alexander; Mousavi, Ali; Clarkson, P. John; Young, Terry
2015-01-01
This paper investigates the connection between patient satisfaction, waiting time, staff satisfaction, and service time. It uses a variety of models to enable improvement against experiential and operational health service goals. Patient satisfaction levels are estimated using a model based on waiting (waiting times). Staff satisfaction levels are estimated using a model based on the time spent with patients (service time). An integrated model of patient and staff satisfaction, the effective satisfaction level model, is then proposed (using queuing theory). This links patient satisfaction, waiting time, staff satisfaction, and service time, connecting two important concepts, namely, experience and efficiency in care delivery and leading to a more holistic approach in designing and managing health services. The proposed model will enable healthcare systems analysts to objectively and directly relate elements of service quality to capacity planning. Moreover, as an instrument used jointly by healthcare commissioners and providers, it affords the prospect of better resource allocation. PMID:27170899
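A sketch of how queuing quantities could feed such a model; the satisfaction functions below are invented placeholders for illustration, since the abstract does not give the paper's functional forms:

    import math

    def mm1_wait(lam, mu):
        """Mean M/M/1 queue waiting time (requires arrival rate lam < service rate mu)."""
        rho = lam / mu
        return rho / (mu - lam)

    def effective_satisfaction(lam, mu, alpha=0.1, beta=5.0):
        """Hypothetical effective satisfaction level: patient satisfaction
        falls with expected wait; staff satisfaction rises with the time
        available per patient (1/mu)."""
        patient = math.exp(-alpha * mm1_wait(lam, mu))
        staff = 1.0 - math.exp(-beta / mu)
        return patient * staff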
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang
2017-10-01
Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach, based on N-PROSAIL model parameter settings at different growth stages, was constructed for estimating canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with measured CND, with a determination coefficient (R2) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m-2. Estimation of one sample took only 6 ms on a test machine with a quad-core Intel(R) Core(TM) i5-2430 CPU @ 2.40 GHz. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
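A minimal sketch of the lookup-table inversion step underlying such an approach (nearest-neighbour matching; array names and the least-squares cost are assumptions, with one table per growth stage giving the Multi-LUT scheme):

    import numpy as np

    def lut_invert(reflectance, lut_spectra, lut_cnd):
        """Return the canopy N density whose simulated (e.g. N-PROSAIL)
        spectrum best matches the measured reflectance.
        lut_spectra: (n_entries, n_bands); lut_cnd: (n_entries,)."""
        cost = np.sum((lut_spectra - reflectance) ** 2, axis=1)
        return lut_cnd[np.argmin(cost)]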
X-Ray Detection and Processing Models for Spacecraft Navigation and Timing
NASA Technical Reports Server (NTRS)
Sheikh, Suneel; Hanson, John
2013-01-01
The current primary method of deepspace navigation is the NASA Deep Space Network (DSN). High-performance navigation is achieved using Delta Differential One-Way Range techniques that utilize simultaneous observations from multiple DSN sites, and incorporate observations of quasars near the line-of-sight to a spacecraft in order to improve the range and angle measurement accuracies. Over the past four decades, x-ray astronomers have identified a number of x-ray pulsars with pulsed emissions having stabilities comparable to atomic clocks. The x-ray pulsar-based navigation and time determination (XNAV) system uses phase measurements from these sources to establish autonomously the position of the detector, and thus the spacecraft, relative to a known reference frame, much as the Global Positioning System (GPS) uses phase measurements from radio signals from several satellites to establish the position of the user relative to an Earth-centered fixed frame of reference. While a GPS receiver uses an antenna to detect the radio signals, XNAV uses a detector array to capture the individual x-ray photons from the x-ray pulsars. The navigation solution relies on detailed x-ray source models, signal processing, navigation and timing algorithms, and analytical tools that form the basis of an autonomous XNAV system. Through previous XNAV development efforts, some techniques have been established to utilize a pulsar pulse time-of-arrival (TOA) measurement to correct a position estimate. One well-studied approach, based upon Kalman filter methods, optimally adjusts a dynamic orbit propagation solution based upon the offset in measured and predicted pulse TOA. In this delta position estimator scheme, previously estimated values of spacecraft position and velocity are utilized from an onboard orbit propagator. Using these estimated values, the detected arrival times at the spacecraft of pulses from a pulsar are compared to the predicted arrival times defined by the pulsar's pulse timing model. A discrepancy provides an estimate of the spacecraft position offset, since an error in position will relate to the measured time offset of a pulse along the line of sight to the pulsar. XNAV researchers have been developing additional enhanced approaches to process the photon TOAs to arrive at an estimate of spacecraft position, including those using maximum-likelihood estimation, digital phase locked loops, and "single photon processing" schemes that utilize all available time data associated with each photon. Using pulsars from separate, non-coplanar locations provides range and range-rate measurements in each pulsar's direction. Combining these different pulsar measurements solves for offsets in position and velocity in three dimensions, and provides accurate overall navigation for deep space vehicles.
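The delta-position idea reduces to geometry: a pulse arriving dt earlier or later than the onboard propagator predicts implies a range offset c*dt along that pulsar's line of sight, and three or more non-coplanar pulsars fix the full 3D offset. A minimal sketch:

    import numpy as np

    C = 299792458.0  # speed of light, m/s

    def delta_position(unit_vectors, toa_residuals):
        """Solve n_i . dx = c * dt_i in the least-squares sense.
        unit_vectors: one row per pulsar line-of-sight (n, 3);
        toa_residuals: measured minus predicted pulse TOA, seconds (n,)."""
        A = np.asarray(unit_vectors, dtype=float)
        b = C * np.asarray(toa_residuals, dtype=float)
        dx, *_ = np.linalg.lstsq(A, b, rcond=None)
        return dx   # 3D position offset, meters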
Pronk, Anjoeka; Stewart, Patricia A.; Coble, Joseph B.; Katki, Hormuzd A.; Wheeler, David C.; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T.; Friesen, Melissa C.
2012-01-01
Objectives: Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to the questionnaire responses to assess diesel exhaust exposure in the New England Bladder Cancer Study, a population-based case-control study. Methods: 2,631 participants reported 14,983 jobs; 2,749 jobs were administered questionnaires (‘modules’) with diesel-relevant questions. We applied decision rules to assign exposure metrics based solely on the occupational history responses (OH estimates) and based on the module responses (module estimates); we combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed one at a time to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module, and one-by-one review estimates. Results: The proportion of exposed jobs was 20–25% for all jobs, depending on approach, and 54–60% for jobs with diesel-relevant modules. The OH/module and one-by-one review had moderately high agreement for all jobs (κw=0.68–0.81) and for jobs with diesel-relevant modules (κw=0.62–0.78) for the probability, intensity, and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. Conclusions: The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies. PMID:22843440
Efficient mental workload estimation using task-independent EEG features.
Roy, R N; Charbonnier, S; Campagne, A; Bonnet, S
2016-04-01
Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man's vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
Efficient mental workload estimation using task-independent EEG features
NASA Astrophysics Data System (ADS)
Roy, R. N.; Charbonnier, S.; Campagne, A.; Bonnet, S.
2016-04-01
Objective. Mental workload is frequently estimated by EEG-based mental state monitoring systems. Usually, these systems use spectral markers and event-related potentials (ERPs). To our knowledge, no study has directly compared their performance for mental workload assessment, nor evaluated the stability in time of these markers and of the performance of the associated mental workload estimators. This study proposes a comparison of two processing chains, one based on the power in five frequency bands, and one based on ERPs, both including a spatial filtering step (respectively CSP and CCA), an FLDA classification and a 10-fold cross-validation. Approach. To get closer to a real life implementation, spectral markers were extracted from a short window (i.e. towards reactive systems) that did not include any motor activity and the analyzed ERPs were elicited by a task-independent probe that required a reflex-like answer (i.e. close to the ones required by dead man’s vigilance devices). The data were acquired from 20 participants who performed a Sternberg memory task for 90 min (i.e. 2/6 digits to memorize) inside which a simple detection task was inserted. The results were compared both when the testing was performed at the beginning and end of the session. Main results. Both chains performed significantly better than random; however the one based on the spectral markers had a low performance (60%) and was not stable in time. Conversely, the ERP-based chain gave very high results (91%) and was stable in time. Significance. This study demonstrates that an efficient and stable in time workload estimation can be achieved using task-independent spatially filtered ERPs elicited in a minimally intrusive manner.
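The authors' exact chain uses CSP/CCA spatial filtering before FLDA; as a rough, non-authoritative stand-in, here is a five-band log band-power + LDA + 10-fold cross-validation sketch on synthetic epochs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 250                                     # sampling rate, Hz
epochs = rng.standard_normal((200, 8, fs))   # trials x channels x samples
labels = rng.integers(0, 2, 200)             # low vs high workload (synthetic)

bands = [(1, 4), (4, 8), (8, 12), (12, 20), (20, 30)]  # five bands, Hz

def band_powers(epoch):
    """Log power per channel in each frequency band (FFT periodogram)."""
    spec = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epoch.shape[-1], 1 / fs)
    feats = [np.log(spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in bands]
    return np.concatenate(feats)

X = np.array([band_powers(e) for e in epochs])
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=10)
print(f"10-fold accuracy: {acc.mean():.2f}")   # ~0.5 on random data
```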
A revised timescale for human evolution based on ancient mitochondrial genomes.
Fu, Qiaomei; Mittnik, Alissa; Johnson, Philip L F; Bos, Kirsten; Lari, Martina; Bollongino, Ruth; Sun, Chengkai; Giemsch, Liane; Schmitz, Ralf; Burger, Joachim; Ronchitelli, Anna Maria; Martini, Fabio; Cremonesi, Renata G; Svoboda, Jiří; Bauer, Peter; Caramelli, David; Castellano, Sergi; Reich, David; Pääbo, Svante; Krause, Johannes
2013-04-08
Recent analyses of de novo DNA mutations in modern humans have suggested a nuclear substitution rate that is approximately half that of previous estimates based on fossil calibration. This result has led to suggestions that major events in human evolution occurred far earlier than previously thought. Here, we use mitochondrial genome sequences from ten securely dated ancient modern humans spanning 40,000 years as calibration points for the mitochondrial clock, thus yielding a direct estimate of the mitochondrial substitution rate. Our clock yields mitochondrial divergence times that are in agreement with earlier estimates based on calibration points derived from either fossils or archaeological material. In particular, our results imply a separation of non-Africans from the most closely related sub-Saharan African mitochondrial DNAs (haplogroup L3) that occurred less than 62-95 kya. Though single loci like mitochondrial DNA (mtDNA) can only provide biased estimates of population divergence times, they can provide valid upper bounds. Our results exclude most of the older dates for African and non-African population divergences recently suggested by de novo mutation rate estimates in the nuclear genome. Copyright © 2013 Elsevier Ltd. All rights reserved.
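The paper's tip-date calibration uses full phylogenetic machinery; a much simpler root-to-tip regression conveys the idea that securely dated ancient genomes calibrate the substitution rate directly. All numbers below are invented for illustration:

```python
import numpy as np

# Tip ages (years before present) of dated ancient mtDNAs and their
# root-to-tip divergence (substitutions per site); values are made up.
age_bp = np.array([0, 4500, 7000, 14000, 19000, 25000, 31000, 40000])
div = np.array([6.10e-3, 6.00e-3, 5.93e-3, 5.75e-3, 5.63e-3,
                5.48e-3, 5.33e-3, 5.11e-3])

# Older tips have accumulated less divergence since the root, so the slope
# of divergence on age is minus the substitution rate (per site per year).
slope, intercept = np.polyfit(age_bp, div, 1)
rate = -slope
print(f"rate ~ {rate:.2e} subs/site/year")

# A divergence-time estimate for a split separated by 1.6e-3 subs/site:
print(f"t ~ {1.6e-3 / rate:,.0f} years")
```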
Wasza, Jakob; Bauer, Sebastian; Hornegger, Joachim
2012-01-01
Over the last years, range imaging (RI) techniques have been proposed for patient positioning and respiration analysis in motion compensation. Yet, current RI-based approaches for patient positioning employ rigid-body transformations, thus neglecting free-form deformations induced by respiratory motion. Furthermore, RI-based respiration analysis relies on non-rigid registration techniques with run-times of several seconds. In this paper we propose a real-time framework based on RI to perform respiratory motion compensated positioning and non-rigid surface deformation estimation in a joint manner. The core of our method is a set of pre-procedurally obtained 4-D shape priors that drive the intra-procedural alignment of the patient to the reference state, simultaneously yielding a rigid-body table transformation and a free-form deformation accounting for respiratory motion. We show that our method outperforms conventional alignment strategies by a factor of 3.0 and 2.3 in the rotation and translation accuracy, respectively. Using a GPU-based implementation, we achieve run-times of 40 ms.
[Winter wheat area estimation with MODIS-NDVI time series based on parcel].
Li, Le; Zhang, Jin-shui; Zhu, Wen-quan; Hu, Tan-gao; Hou, Dong
2011-05-01
Several attributes of MODIS (moderate resolution imaging spectrometer) data, especially the short revisit interval and global coverage, provide an extremely efficient way to map cropland and monitor its seasonal change. However, the reliability of the resulting measurements is challenged by the limited spatial resolution. Parcel data carry clear geo-location and boundary information for cropland, and within a parcel the spectral differences and the complexity of mixed pixels are weak. Together, these make area estimation based on parcels more advantageous than estimation based on pixels. In the present study, winter wheat area estimation based on MODIS-NDVI time series was performed with the support of cultivated-land parcels in Tongzhou, Beijing. To extract the regional winter wheat acreage, multiple regression methods were used to fit a stable regression relationship between the MODIS-NDVI time series and TM samples within parcels. In this way, the consistency between the extraction results from MODIS and TM reaches 96% when the samples account for 15% of the whole area. The results show that parcel data can effectively reduce the recognition errors in MODIS-NDVI multi-temporal data caused by low spatial resolution. Therefore, by combining moderate- and low-resolution data, winter wheat area estimation becomes feasible in large regions that lack complete medium-resolution coverage or whose images are contaminated by clouds. The study also provides preliminary groundwork for estimating the area of other crops.
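A minimal sketch of the regression step described above, assuming hypothetical per-parcel NDVI time series and TM-derived wheat fractions for a ~15% training subset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins: per-parcel MODIS-NDVI time series (10 dates) and the
# winter-wheat area fraction digitized from TM for a 15% training subset.
ndvi = rng.uniform(0.1, 0.9, size=(500, 10))            # parcels x dates
true_w = np.clip(ndvi[:, 3] - ndvi[:, 7] + rng.normal(0, 0.05, 500), 0, 1)

train = rng.choice(500, size=75, replace=False)         # ~15% TM-sampled parcels
model = LinearRegression().fit(ndvi[train], true_w[train])

frac = np.clip(model.predict(ndvi), 0, 1)               # wheat fraction, all parcels
parcel_area_ha = rng.uniform(5, 50, 500)
print(f"estimated wheat area: {np.sum(frac * parcel_area_ha):,.0f} ha")
```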
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered as random in both models, and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%) the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
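A quick numpy simulation in the spirit of the study (all values invented): lots share a nominal degradation rate but draw random intercepts and slopes, and per-lot fits are compared with a pooled fit:

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(0, 25, 3, dtype=float)     # real-time pull points

# Lot-to-lot variability: random value at time zero and random slope.
n_lots, mu0, rate = 6, 100.0, -0.40           # % label claim, %/month
lots = [(mu0 + rng.normal(0, 1.0),            # intercept per lot
         rate * (1 + rng.normal(0, 0.10)))    # slope CV ~ 10% (> 8%)
        for _ in range(n_lots)]

y = np.array([[b0 + b1 * t + rng.normal(0, 0.3) for t in months]
              for b0, b1 in lots])

# Per-lot vs pooled degradation-rate estimates: the pooled value hides
# the spread across lots when slope variability is large.
per_lot = [np.polyfit(months, yi, 1)[0] for yi in y]
pooled = np.polyfit(np.tile(months, n_lots), y.ravel(), 1)[0]
print([f"{s:.3f}" for s in per_lot], f"pooled={pooled:.3f}")
```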
[WebSurvCa: web-based estimation of death and survival probabilities in a cohort].
Clèries, Ramon; Ameijide, Alberto; Buxó, Maria; Vilardell, Mireia; Martínez, José Miguel; Alarcón, Francisco; Cordero, David; Díez-Villanueva, Ana; Yasui, Yutaka; Marcos-Gragera, Rafael; Vilardell, Maria Loreto; Carulla, Marià; Galceran, Jaume; Izquierdo, Ángel; Moreno, Víctor; Borràs, Josep M
2018-01-19
Relative survival has been used as a measure of the temporal evolution of the excess risk of death of a cohort of patients diagnosed with cancer, taking into account the mortality of a reference population. Once the excess risk of death has been estimated, three probabilities can be computed at time T: 1) the crude probability of death associated with the cause of initial diagnosis (disease under study), 2) the crude probability of death associated with other causes, and 3) the probability of absolute survival in the cohort at time T. This paper presents the WebSurvCa application (https://shiny.snpstats.net/WebSurvCa/), whereby hospital-based and population-based cancer registries and registries of other diseases can estimate such probabilities in their cohorts by selecting the mortality of the relevant region (reference population). Copyright © 2017 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.
Karimzadeh, Iman; Khalili, Hossein
2016-06-06
Serum cystatin C (Cys C) has a number of advantages over serum creatinine in the evaluation of kidney function. Apart from the Cys C level itself, several formulas have also been introduced in different clinical settings for the estimation of glomerular filtration rate (GFR) based upon the serum Cys C level. The aim of the present study was to compare a serum Cys C-based equation with the Cockcroft-Gault serum creatinine-based formula, both used in the calculation of GFR, in patients receiving amphotericin B. Fifty-four adult patients with no history of acute or chronic kidney injury, planned to receive conventional amphotericin B for an anticipated duration of at least 1 week for any indication, were recruited. At three time points during amphotericin B treatment, including days 0, 7, and 14, serum cystatin C as well as creatinine levels were measured. GFR at the above time points was estimated by both creatinine-based (Cockcroft-Gault) and serum Cys C-based equations. There was significant correlation between creatinine-based and Cys C-based GFR values at days 0 (R = 0.606, P = 0.001) and 7 (R = 0.714, P < 0.001). In contrast to GFR estimated by the Cockcroft-Gault equation, the mean (95% confidence interval) Cys C-based GFR values at the different studied time points were comparable within as well as between patients with and without amphotericin B nephrotoxicity. Our results suggested that the Gentian Cys C-based GFR equation correlated significantly with the Cockcroft-Gault formula at least in the early period of treatment with amphotericin B. Graphical abstract: Comparison between a serum creatinine- and a cystatin C-based glomerular filtration rate equation in patients receiving amphotericin B.
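The Cockcroft-Gault formula used as the comparator above is standard; a small helper (note it estimates creatinine clearance in mL/min rather than true GFR):

```python
def cockcroft_gault(age_years, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate, mL/min."""
    crcl = (140 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

print(f"{cockcroft_gault(60, 70, 1.0):.0f} mL/min")        # ~78
print(f"{cockcroft_gault(60, 70, 1.0, female=True):.0f}")  # ~66
```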
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
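A simplified stand-in for the two-phase random-sampling idea (not the authors' exact algorithm): resample subjects, perturb the sparse paired concentrations, and bootstrap the AUC ratio. Data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # one subject per time point

# One plasma and one tissue concentration per subject (sparse, paired).
plasma = np.array([12.0, 9.5, 6.1, 3.2, 1.1])
tissue = np.array([30.1, 26.0, 17.5, 9.8, 3.0])

def auc(t, c):
    """Linear trapezoidal area under the curve."""
    return 0.5 * np.sum((c[1:] + c[:-1]) * np.diff(t))

ratios = []
while len(ratios) < 2000:
    idx = np.sort(rng.integers(0, len(times), len(times)))
    if idx[0] == idx[-1]:
        continue                               # need two distinct time points
    p = plasma[idx] * (1 + rng.normal(0, 0.1, len(idx)))
    s = tissue[idx] * (1 + rng.normal(0, 0.1, len(idx)))
    ratios.append(auc(times[idx], s) / auc(times[idx], p))

lo, hi = np.percentile(ratios, [2.5, 97.5])
print(f"T/P ratio ~ {np.median(ratios):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```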
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.
2006-01-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at the present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: The TMPA provides reasonable performance at monthly scales, although it is shown to have precipitation rate dependent low bias due to lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.
Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.
Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin
2018-04-25
Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical system (MEMS) microphone array and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space are determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
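Delay-and-sum beamforming of the kind used for firing-position estimation can be sketched in a few lines; a far-field 2D toy with a hypothetical four-microphone array:

```python
import numpy as np

fs = 48_000                                   # sample rate, Hz
c = 343.0                                     # speed of sound, m/s
mics = np.array([[0.0, 0.0], [0.1, 0.0],
                 [0.0, 0.1], [0.1, 0.1]])     # hypothetical 2D array, metres

def steer_delays(angle_deg):
    """Far-field per-mic delay (s) for a plane wave from angle_deg."""
    a = np.deg2rad(angle_deg)
    return mics @ np.array([np.cos(a), np.sin(a)]) / c

def delay_and_sum_doa(signals):
    """Scan candidate angles; the steering that maximizes energy wins."""
    best_ang, best_pow = None, -np.inf
    for ang in range(0, 360, 2):
        shifts = np.round(steer_delays(ang) * fs).astype(int)
        summed = sum(np.roll(s, -d) for s, d in zip(signals, shifts))
        p = np.sum(summed ** 2)
        if p > best_pow:
            best_ang, best_pow = ang, p
    return best_ang

pulse = np.zeros(1024); pulse[100] = 1.0      # synthetic launch impulse
sig = [np.roll(pulse, int(round(d * fs))) for d in steer_delays(60)]
print(delay_and_sum_doa(sig))                 # ~60 degrees
```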
Garcia, Jordan A; Mistry, Bipin; Hardy, Stephen; Fracchia, Mary Shannon; Hersh, Cheryl; Wentland, Carissa; Vadakekalam, Joseph; Kaplan, Robert; Hartnick, Christopher J
2017-09-01
Providing high-value healthcare to patients is increasingly becoming an objective for providers, including those at multidisciplinary aerodigestive centers. Measuring value has two components: 1) identify relevant health outcomes and 2) determine relevant treatment costs. Via their inherent structure, multidisciplinary care units consolidate care for complex patients. However, their potential impact on decreasing healthcare costs is less clear. The goal of this study was to estimate the potential cost savings of treating patients with laryngeal clefts at multidisciplinary aerodigestive centers. Retrospective chart review. Time-driven activity-based costing was used to estimate the cost of care for patients with laryngeal cleft seen between 2008 and 2013 at the Massachusetts Eye and Ear Infirmary Pediatric Aerodigestive Center. Retrospective chart review was performed to identify clinic utilization by patients as well as patient diet outcomes after treatment. Patients were stratified into neurologically complex and neurologically noncomplex groups. The cost of care for patients requiring surgical intervention was five and three times the cost of care for patients not requiring surgery for neurologically noncomplex and complex patients, respectively. Following treatment, 50% and 55% of complex and noncomplex patients returned to normal diet, whereas 83% and 87% of patients experienced improved diets, respectively. Additionally, multidisciplinary team-based care for children with laryngeal clefts potentially achieves 20% to 40% cost savings. These findings demonstrate how time-driven activity-based costing can be used to estimate and compare patient costs in multidisciplinary aerodigestive centers. 2c. Laryngoscope, 127:2152-2158, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
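Time-driven activity-based costing itself is simple arithmetic: each process step contributes (minutes used) x (capacity cost rate of the resource). A toy roll-up with invented numbers:

```python
# Hypothetical TDABC roll-up: cost of a care episode is the sum over process
# steps of (minutes used) x (capacity cost rate of the resource, $/min).
steps = [  # (resource, minutes, rate $/min) -- illustrative values only
    ("clinic visit, ENT", 30, 6.0),
    ("swallow evaluation", 45, 3.5),
    ("OR time (cleft repair)", 90, 40.0),
    ("post-op nursing", 120, 1.2),
]
total = sum(minutes * rate for _, minutes, rate in steps)
print(f"episode cost estimate: ${total:,.0f}")
```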
NASA Astrophysics Data System (ADS)
Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.
2015-05-01
The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable for the control requirements, which means the contact friction characteristics between the tire and the road should be recognized before intervention to ensure the safety of the driver and passengers from drifting and losing control. In addition, the estimation method should be stable and feasible for complex maneuvering operations to guarantee the control performance as well. A signal fusion method combining the available signals to estimate the road friction is suggested in this paper, built on separate estimates for braking, driving and steering conditions. From the driver inputs and the vehicle and tire states available from sensors, the maneuvering condition may be recognized; the certainty factors of the friction estimates for the three conditions mentioned above may be obtained correspondingly, and the comprehensive road friction may then be calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy the control demands.
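The fusion step amounts to a certainty-weighted average of the per-condition friction estimates; a minimal sketch (weights and values illustrative):

```python
import numpy as np

def fuse_friction(estimates, certainty):
    """Certainty-weighted fusion of per-condition friction estimates.

    estimates : friction coefficients from braking / driving / steering
    certainty : certainty factors in [0, 1] for each estimate
    """
    w = np.asarray(certainty, dtype=float)
    if w.sum() == 0:
        raise ValueError("no condition currently informative")
    return float(np.average(estimates, weights=w))

# Mostly braking, light steering: the braking estimate dominates.
print(fuse_friction([0.82, 0.30, 0.75], [0.9, 0.0, 0.2]))  # ~0.81
```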
Challan, Mohsen B
2016-06-01
The present study aims to estimate the residence time of groundwater based on bomb-produced (36)Cl. (36)Cl/Cl ratios in the water samples are determined by inductively coupled plasma mass spectrometry and liquid scintillation counting. (36)Cl/Cl ratios in the groundwater were estimated to be 1.0-2.0 × 10(-12). Estimates of residence time were obtained by comparing the measured bomb-derived (36)Cl concentrations in groundwater with the background reference. Dating based on the (36)Cl bomb pulse may be more reliable and sensitive for groundwater recharged before 1975, back as far as the mid-1950s. The above-background (36)Cl concentration was deduced from the background-corrected Dye-3 ice core data from the Arctic, according to the estimated total (36)Cl resources. A residence time of 7.81 × 10(4) y is obtained from the extrapolated groundwater flow velocity. The (36)Cl concentration in the groundwater does not reflect the input of bomb-pulse (36)Cl, indicating that the water was recharged before 1950.
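For pre-bomb water, a decay-based residence time follows from the measured-to-initial ratio; a sketch assuming a (36)Cl half-life of about 3.01 × 10(5) years (input ratios invented):

```python
import math

T_HALF_CL36 = 3.01e5  # years, half-life of (36)Cl

def residence_time(ratio_measured, ratio_initial):
    """Decay age: t = (t_half / ln 2) * ln(R0 / R)."""
    return T_HALF_CL36 / math.log(2) * math.log(ratio_initial / ratio_measured)

# e.g. measured 36Cl/Cl of 1.2e-12 against an assumed initial 1.4e-12
print(f"{residence_time(1.2e-12, 1.4e-12):,.0f} years")  # ~67,000
```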
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one of the important strategies is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of the past dynamics of input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, SOC estimation results and predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT can address range anxiety issues.
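Stripped of the model-based machinery, the RDT arithmetic is charge above cutoff divided by the predicted discharge current; a coarse sketch with hypothetical cell values:

```python
def remaining_dischargeable_time(soc, soc_cutoff, capacity_ah, i_pred_a):
    """Hours until cutoff, given an SOC estimate and predicted mean current.

    A coarse stand-in for the paper's model-based RDT prediction: charge
    remaining above cutoff divided by the predicted discharge current.
    """
    if i_pred_a <= 0:
        raise ValueError("expected a positive discharge current")
    return (soc - soc_cutoff) * capacity_ah / i_pred_a

# 2.6 Ah cell at 80% SOC, 10% cutoff, predicted 1.3 A draw -> 1.4 h
print(f"{remaining_dischargeable_time(0.80, 0.10, 2.6, 1.3):.2f} h")
```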
FPGA-based architecture for motion recovering in real-time
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar
2002-03-01
A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for motion estimation in real time based on FPGA technology. The technique used for motion estimation is optical flow, due to its accuracy and the density of its velocity estimates, though other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results are presented, and the real-time performance is discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
COMDYN: Software to study the dynamics of animal communities using a capture-recapture approach
Hines, J.E.; Boulinier, T.; Nichols, J.D.; Sauer, J.R.; Pollock, K.H.
1999-01-01
COMDYN is a set of programs developed for estimation of parameters associated with community dynamics using count data from two locations or time periods. It is Internet-based, allowing remote users either to input their own data, or to use data from the North American Breeding Bird Survey for analysis. COMDYN allows probability of detection to vary among species and among locations and time periods. The basic estimator for species richness underlying all estimators is the jackknife estimator proposed by Burnham and Overton. Estimators are presented for quantities associated with temporal change in species richness, including rate of change in species richness over time, local extinction probability, local species turnover and number of local colonizing species. Estimators are also presented for quantities associated with spatial variation in species richness, including relative richness at two locations and proportion of species present in one location that are also present at a second location. Application of the estimators to species richness estimation has been previously described and justified. The potential applications of these programs are discussed.
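The first-order jackknife of Burnham and Overton that underlies COMDYN's estimators is easy to state; a sketch, assuming detection counts across k samples:

```python
def jackknife_richness(capture_freqs, k):
    """First-order jackknife species-richness estimate.

    capture_freqs : dict species -> number of samples it was detected in
    k             : total number of samples (occasions or locations)

    S_hat = S_obs + f1 * (k - 1) / k, where f1 counts species seen once.
    """
    s_obs = len(capture_freqs)
    f1 = sum(1 for v in capture_freqs.values() if v == 1)
    return s_obs + f1 * (k - 1) / k

counts = {"sp%d" % i: c for i, c in enumerate([1, 1, 1, 2, 3, 5, 5, 4, 1, 2])}
print(f"{jackknife_richness(counts, k=5):.1f}")  # 10 observed + 4*(4/5) = 13.2
```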
Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias
2012-01-01
Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may be first of all handicapped by the ability to obtain unbiased tolerance estimates at a phenotypic level. In contrast to resistance, which can be inferred from appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify as it refers to the change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have raised the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and to ensure that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection. PMID:23412990
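A toy version of the confounding the authors demonstrate: when vigor co-varies with pathogen burden, the naive group regression slope is biased away from the true tolerance, and accounting for vigor removes the bias (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
vigor = rng.normal(100, 8, n)                  # performance if uninfected
tol = rng.normal(-0.8, 0.15, n)                # true tolerance slopes
# Resistance confounded with vigor: high-vigor hosts carry less pathogen.
burden = np.clip(rng.gamma(2.0, 2.0, n) + 0.3 * (100 - vigor), 0, None)
perf = vigor + tol * burden + rng.normal(0, 2, n)

naive = np.polyfit(burden, perf, 1)[0]             # ignores variation in vigor
adjusted = np.polyfit(burden, perf - vigor, 1)[0]  # vigor known and removed
print(f"naive={naive:.2f}  vigor-adjusted={adjusted:.2f}  true~-0.80")
```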
Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery
NASA Astrophysics Data System (ADS)
Woods, B. K.; Wei, L. H.; Connor, T. C.
2014-12-01
With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver estimates of damage caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or the routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide the resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it in conjunction with a land cover database using a multiprocessing pool and a just-in-time compiler (JIT). The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
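A toy version of steps 2-4 of the pipeline (hazard-to-exposure mapping, a damage function, and aggregation), with an invented vulnerability curve and block table:

```python
import numpy as np
import pandas as pd

# Hypothetical atomic blocks with peak wind (m/s) from a hazard layer.
blocks = pd.DataFrame({
    "block_id": range(6),
    "region":  ["A", "A", "A", "B", "B", "B"],
    "wind_ms": [18.0, 31.0, 44.0, 22.0, 38.0, 51.0],
    "exposure_usd": [2e6, 1e6, 3e6, 2e6, 2e6, 1e6],
})

def wind_damage_ratio(v, v0=25.0, v_half=60.0):
    """Illustrative S-shaped vulnerability curve: 0 below v0, -> 1 at high winds."""
    x = np.clip((v - v0) / (v_half - v0), 0, None)
    return x**2 / (1 + x**2)

blocks["loss_usd"] = blocks["exposure_usd"] * wind_damage_ratio(blocks["wind_ms"])
print(blocks.groupby("region")["loss_usd"].sum())  # user-geometry aggregation
```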
Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.
2014-12-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude-dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.
Seasonal variability in global eddy diffusion and the effect on neutral density
NASA Astrophysics Data System (ADS)
Pilinski, M. D.; Crowley, G.
2015-04-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.
Michael D. Erickson; Curt C. Hassler; Chris B. LeDoux
1991-01-01
Continuous time and motion study techniques were used to develop productivity and cost estimators for the skidding component of ground-based logging systems, operating on steep terrain using preplanned skid roads. Comparisons of productivity and costs were analyzed for an overland random access skidding method versus a skidding method utilizing a network of preplanned...
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C using moisture levels from % RH ≤ 0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times defined as F values at a given combination of three moisture and temperature conditions. The times required at 100 C for reductions of 99.99% of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (% RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and D values for % RH = 100 diverged, so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z(F) and Z(L) ranged from 32.1 to 58.3 C for % RH ≤ 0.07 and 100. A Z(D) = 30.0 was obtained for data observed at % RH ≤ 0.07.
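The D/Z-value arithmetic behind such thermal-death-time estimates is standard; a small sketch with illustrative values:

```python
def time_for_log_reduction(d_value_min, log_cycles):
    """Survivors follow N = N0 * 10**(-t/D); a 99.99% kill is 4 log cycles."""
    return d_value_min * log_cycles

def d_at_temperature(d_ref_min, t_ref_c, t_c, z_c):
    """Shift a D value across temperature: D(T) = D_ref * 10**((T_ref - T)/z)."""
    return d_ref_min * 10 ** ((t_ref_c - t_c) / z_c)

d100 = 30.0                       # hypothetical D value at 100 C, minutes
print(time_for_log_reduction(d100, 4))           # 120 min for 99.99% reduction
print(f"{d_at_temperature(d100, 100, 110, 30.0):.1f} min at 110 C (z = 30 C)")
```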
Real-time video analysis for retail stores
NASA Astrophysics Data System (ADS)
Hassan, Ehtesham; Maurya, Avinash K.
2015-03-01
With the advancement of video processing technologies, we can capture subtle human responses in a retail store environment that play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system built on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimates in a retail store require correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we define a computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimation. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
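The role-based classification step could look like the following sketch: fit a two-component Gaussian mixture to appearance features and flag the tighter cluster as staff (features here are synthetic stand-ins, not the paper's graded colour histogram):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Stand-in colour features: staff uniforms cluster tightly,
# customers are spread out.
staff = rng.normal([0.8, 0.1, 0.1], 0.03, size=(80, 3))
customers = rng.normal([0.4, 0.4, 0.3], 0.12, size=(240, 3))
X = np.vstack([staff, customers])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)
labels = gmm.predict(X)

# The tighter component (smaller covariance determinant) -> staff.
dets = [np.linalg.det(c) for c in gmm.covariances_]
staff_comp = int(np.argmin(dets))
print(f"flagged {np.sum(labels == staff_comp)} tracks as service providers")
```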
NASA Technical Reports Server (NTRS)
Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.
2010-01-01
The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4° x 4° at the equator), and the other based on standard Stokes coefficients, with each time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over postsolution smoothing. Finally, a 1 year time series based on global 2° equal-area mascons estimated every 10 days is presented.
Base Stability of Aminocyclopropeniums
2017-11-01
stability, a series of aminocyclopropeniums were synthesized and their base stability probed in situ using time-resolved proton nuclear magnetic resonance...tested for their utility in anion exchange membranes for alkaline fuel cells. A series of aminocyclopropeniums were synthesized and their base
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c x^k (1 - x)^m (a beta density), and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
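The linearity result is easy to check concretely: with a beta prior on the jump rate and binomially distributed jump counts, the MMSE estimate (the posterior mean) is linear in the observed count. A sketch:

```python
# With rate p ~ Beta(a, b) and binomial jump counts, observing s jumps in
# n trials gives posterior Beta(a + s, b + n - s), so the MMSE estimate
# E[p | data] = (a + s) / (a + b + n) -- linear in s.
def mmse_rate(a, b, jumps, trials):
    return (a + jumps) / (a + b + trials)

a, b = 2.0, 5.0
for s in range(0, 11, 2):                       # s jumps out of 10 trials
    print(s, f"{mmse_rate(a, b, s, 10):.3f}")   # increases linearly with s
```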
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The Mass Balance results were quite variable over time such that they appeared suspect with respect to the concept of groundwater flow as being gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to streamflow lag time of flow within a reach. The Pilot Point time series results were much less variable than the Mass Balance results and extreme values were effectively constrained. Less day-to-day variability, smaller magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow being gradual and slow. The Pilot Point method provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
Torres, Sergio N; Pezoa, Jorge E; Hayat, Majeed M
2003-10-10
What is to our knowledge a new scene-based algorithm for nonuniformity correction in infrared focal-plane array sensors has been developed. The technique is based on the inverse covariance form of the Kalman filter (KF), which has been reported previously and used in estimating the gain and bias of each detector in the array from scene data. The gain and the bias of each detector in the focal-plane array are assumed constant within a given sequence of frames, corresponding to a certain time and operational conditions, but they are allowed to randomly drift from one sequence to another following a discrete-time Gauss-Markov process. The inverse covariance form filter estimates the gain and the bias of each detector in the focal-plane array and optimally updates them as they drift in time. The estimation is performed with considerably higher computational efficiency than the equivalent KF. The ability of the algorithm in compensating for fixed-pattern noise in infrared imagery and in reducing the computational complexity is demonstrated by use of both simulated and real data.
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
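As a point of comparison with the order-statistic approach studied above, scipy's three-parameter lognormal fit estimates the location (shift) by maximum likelihood; a sketch on synthetic procedure times:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Synthetic surgical times: 20-minute setup floor plus a lognormal body.
true_loc = 20.0
times = true_loc + rng.lognormal(mean=4.0, sigma=0.5, size=400)

# 3-parameter ML fit; the paper argues a well-chosen order statistic can
# give a more robust starting estimate of the location than the median.
shape, loc, scale = stats.lognorm.fit(times)
print(f"estimated location: {loc:.1f} min (true {true_loc})")

# Goodness of fit of the shifted model (Kolmogorov-Smirnov).
d, p = stats.kstest(times, "lognorm", args=(shape, loc, scale))
print(f"KS D={d:.3f}, p={p:.2f}")
```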
Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea
2016-08-11
Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.
Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide
2018-03-13
Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same reduction factor 4, the net reduction factor 4 for the proposed method was significantly higher than the factor 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved, achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality for the proposed method were improved compared to the standard k-t SENSE method.
Barongo, Mike B; Ståhl, Karl; Bett, Bernard; Bishop, Richard P; Fèvre, Eric M; Aliro, Tony; Okoth, Edward; Masembe, Charles; Knobel, Darryn; Ssematimba, Amos
2015-01-01
African swine fever (ASF) is a highly contagious, lethal and economically devastating haemorrhagic disease of domestic pigs. Insights into the dynamics and scale of virus transmission can be obtained from estimates of the basic reproduction number (R0). We estimate R0 for ASF virus in a smallholder, free-range pig production system in Gulu, Uganda. The estimation was based on data collected from outbreaks that affected 43 villages (out of the 289 villages with an overall pig population of 26,570) between April 2010 and November 2011. A total of 211 outbreaks met the criteria for inclusion in the study. Three methods were used, specifically: (i) GIS-based identification of the nearest infectious neighbour based on the Euclidean distance between outbreaks, (ii) epidemic doubling time, and (iii) a compartmental susceptible-infectious (SI) model. For implementation of the SI model, three approaches were used, namely curve fitting (CF), a linear regression model (LRM) and the SI/N proportion. The R0 estimates from the nearest infectious neighbour and epidemic doubling time methods were 3.24 and 1.63 respectively. Estimates from the SI-based method were 1.58 for the CF approach, 1.90 for the LRM, and 1.77 for the SI/N proportion. Since all these values were above one, they predict the observed persistence of the virus in the population. We hypothesize that the observed variation in the estimates is a consequence of the data used. Higher-resolution and temporally better defined data would likely reduce this variation. This is the first estimate of R0 for ASFV in a free-range smallholder pig keeping system in sub-Saharan Africa and highlights the requirement for more efficient application of available disease control measures.
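Of the three methods, the doubling-time estimate is the most compact: for early exponential growth at rate r = ln2/td, an SI-type process gives R0 ≈ 1 + rT with T the infectious period. A sketch with illustrative inputs (not the paper's data):

```python
import math

def r0_from_doubling_time(t_double_days, infectious_period_days):
    """R0 ~ 1 + r*T for early exponential growth at rate r = ln2 / t_double."""
    r = math.log(2) / t_double_days
    return 1 + r * infectious_period_days

# e.g. cases doubling every 16 days, ~15-day infectious period -> R0 ~ 1.65
print(f"{r0_from_doubling_time(16, 15):.2f}")
```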
Validation of the work and health interview.
Stewart, Walter F; Ricci, Judith A; Leotta, Carol; Chee, Elsbeth
2004-01-01
Instruments that measure the impact of illness on work do not usually provide a measure that can be directly translated into lost hours or costs. We describe the validation of the Work and Health Interview (WHI), a questionnaire that provides a measure of lost productive time (LPT) from work absence and reduced performance at work. A sample (n = 67) of inbound phone call agents was recruited for the study. Validity of the WHI was assessed over a 2-week period in reference to workplace data (i.e. absence time, time away from call station and electronic continuous performance) and repeated electronic diary data (n = 48) obtained approximately eight times a day to estimate time not working (i.e. a component of reduced performance). The mean (median) missed work time estimate for any reason was 11 (8.0) and 12.9 (8.0) hours in a 2-week period from the WHI and workplace data, respectively, with a Pearson's (Spearman's) correlation of 0.84 (0.76). The diary-based mean (median) estimate of time not working while at work was 3.9 (2.8) hours compared with the WHI estimate of 5.7 (3.2) hours with a Pearson's (Spearman's) correlation of 0.19 (0.33). The 2-week estimate of total productive time from the diary was 67.2 hours compared with 67.8 hours from the WHI, with a Pearson's (Spearman's) correlation of 0.50 (0.46). At a population level, the WHI provides an accurate estimate of missed time from work and total productive time when compared with workplace and diary estimates. At an individual level, the WHI measure of total missed time, but not reduced performance time, is moderately accurate.
Online Cross-Validation-Based Ensemble Learning
Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark
2017-01-01
Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. PMID:28474419
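The core of online cross-validation is prequential: score each candidate on an incoming batch before training on it, then predict with the current loss minimizer. A minimal sketch (not the authors' implementation) using scikit-learn's partial_fit learners:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

class OnlineCVSelector:
    """Track prequential (test-then-train) losses for candidate online
    learners; predict with whichever currently has the smallest loss."""

    def __init__(self, learners):
        self.learners = learners
        self.losses = np.zeros(len(learners))
        self.n_batches = 0

    def update(self, X, y):
        if self.n_batches:                     # score before training on batch
            for i, m in enumerate(self.learners):
                self.losses[i] += np.mean((m.predict(X) - y) ** 2)
        for m in self.learners:
            m.partial_fit(X, y)
        self.n_batches += 1

    def predict(self, X):
        return self.learners[int(np.argmin(self.losses))].predict(X)

rng = np.random.default_rng(0)
sel = OnlineCVSelector([SGDRegressor(alpha=1e-4), SGDRegressor(alpha=1.0)])
beta = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
for _ in range(50):                            # stream of batches
    X = rng.normal(size=(32, 5))
    y = X @ beta + rng.normal(0, 0.1, 32)
    sel.update(X, y)
print("selected learner:", int(np.argmin(sel.losses)))
```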
Using Grain-Size Distribution Methods for Estimation of Air Permeability.
Wang, Tiejun; Huang, Yuanyang; Chen, Xunhong; Chen, Xi
2016-01-01
Knowledge of air permeability (ka) at dry conditions is critical for the use of air flow models in porous media; however, it is usually difficult and time consuming to measure ka at dry conditions. It is thus desirable to estimate ka at dry conditions from other readily obtainable properties. In this study, the feasibility of using information derived from grain-size distributions (GSDs) for estimating ka at dry conditions was examined. Fourteen GSD-based equations originally developed for estimating saturated hydraulic conductivity were tested using ka measured at dry conditions in both undisturbed and disturbed river sediment samples. On average, the estimated ka from all the equations, except for the method of Slichter, differed by less than ±4 times from the measured ka for both undisturbed and disturbed groups. In particular, for the two sediment groups, the results given by the methods of Terzaghi and Hazen-modified were comparable to the measured ka. In addition, two methods (e.g., Barr and Beyer) for the undisturbed samples and one method (e.g., Hazen-original) for the disturbed samples were also able to produce comparable ka estimates. Moreover, after adjusting the values of the coefficient C in the GSD-based equations, the estimation of ka was significantly improved, with the differences between the measured and estimated ka less than ±4% on average (except for the method of Barr). As demonstrated by this study, GSD-based equations may provide a promising and efficient way to estimate ka at dry conditions. © 2015, National Ground Water Association.
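For flavor, the Hazen equation is one of the GSD-based formulas of this kind; a sketch, with the usual caveats about the empirical coefficient, plus the standard conversion from conductivity to intrinsic permeability:

```python
def hazen_conductivity(d10_cm, c=100.0):
    """Hazen formula K = C * d10**2 (K in cm/s, d10 in cm, C ~ 100)."""
    return c * d10_cm ** 2

def intrinsic_permeability_cm2(k_cm_s, nu_cm2_s=0.01, g_cm_s2=981.0):
    """k = K * nu / g converts hydraulic conductivity to permeability."""
    return k_cm_s * nu_cm2_s / g_cm_s2

K = hazen_conductivity(0.02)                 # medium sand, d10 = 0.2 mm
print(f"K ~ {K:.3f} cm/s, k ~ {intrinsic_permeability_cm2(K):.1e} cm^2")
```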
Thomas, Kevin V; Amador, Arturo; Baz-Lomba, Jose Antonio; Reid, Malcolm
2017-10-03
Wastewater-based epidemiology is an established approach for quantifying community drug use and has recently been applied to estimate population exposure to contaminants such as pesticides and phthalate plasticizers. A major source of uncertainty in the population weighted biomarker loads generated is related to estimating the number of people present in a sewer catchment at the time of sample collection. Here, the population quantified from mobile device-based population activity patterns was used to provide dynamic population normalized loads of illicit drugs and pharmaceuticals during a known period of high net fluctuation in the catchment population. Mobile device-based population activity patterns have for the first time quantified the high degree of intraday, week, and month variability within a specific sewer catchment. Dynamic population normalization showed that per capita pharmaceutical use remained unchanged during the period when static normalization would have indicated an average reduction of up to 31%. Per capita illicit drug use increased significantly during the monitoring period, an observation that was only possible to measure using dynamic population normalization. The study quantitatively confirms previous assessments that population estimates can account for uncertainties of up to 55% in static normalized data. Mobile device-based population activity patterns allow for dynamic normalization that yields much improved temporal and spatial trend analysis.
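Dynamic normalization itself is a one-liner once day-specific population estimates exist; a sketch with invented daily loads and mobile-device counts:

```python
import pandas as pd

# Hypothetical daily data: biomarker load (mg/day) in 24-h composite samples
# and a mobile-device population estimate for the catchment on the same day.
df = pd.DataFrame({
    "day":        ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"],
    "load_mg":    [820, 790, 805, 880, 1040, 1350, 1180],
    "population": [41000, 40500, 40800, 43000, 52000, 68000, 59000],
})

# Static normalization assumes a fixed census population; dynamic uses the
# day-specific estimate, absorbing commuter- and event-driven fluctuations.
census = 41000
df["static_mg_per_1000"] = df["load_mg"] / census * 1000
df["dynamic_mg_per_1000"] = df["load_mg"] / df["population"] * 1000
print(df[["day", "static_mg_per_1000", "dynamic_mg_per_1000"]].round(1))
```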
Time providing care outside visits in a home-based primary care program
Pedowitz, Elizabeth J.; Ornstein, Katherine A.; Farber, Jeffrey; DeCherrie, Linda V.
2016-01-01
Background/Objectives Homebound elderly patients with chronic medical illnesses face multiple barriers to care. Primary care physicians (PCPs) devote a significant amount of time to care apart from actual office visits, but there is little quantification of such time by physicians who provide primary care in the home. This article assesses exactly how much time physicians in a large home-based primary care (HBPC) program spend providing care outside of home visits. Unreimbursed time, as well as patient- and provider-related factors that may contribute to that increased time, are considered. Design Mount Sinai Visiting Doctors (MSVD) providers filled out research forms for every interaction involving care provision outside of home visits over 3 weeks. Data collected included the length, mode, and nature of each interaction and with whom it took place. Setting/Participants MSVD is an academic home-visit program in Manhattan, NY. All PCPs in MSVD (n=14) agreed to participate. Measurements Time data were analyzed using a comprehensive estimate and conservative estimates to quantify unbillable time. Results Data on 1151 interactions for 537 patients were collected. An average of 8.2 hours/week was spent providing non-home-visit care per full-time provider. Using the most conservative estimates, 3.6 hours/week was estimated to be unreimbursed per full-time provider. No significant differences in interaction times were found between dementia and non-dementia patients, new and non-new patients, and primary-panel and covered patients. Conclusion Findings suggest that HBPC providers spend substantial time providing care outside home visits, much of which goes unrecognized in the current reimbursement system. These findings may help guide practice development and creation of new payment systems for HBPC and similar models of care. PMID:24802078
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
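The closing assumption lends itself to a short sketch: fit and evaluate a low-order polynomial of thrust. The data points below are invented placeholders, not the CEM's historical database:

```python
import numpy as np

# Hypothetical history: test-article thrust (klbf) vs. total labor hours per test.
thrust = np.array([10, 25, 50, 100, 200, 400, 650])
labor_hours = np.array([310, 420, 600, 950, 1700, 3400, 5600])

# Third- and fourth-order polynomial fits, as in the revised CEM labor-time model.
p3 = np.polyfit(thrust, labor_hours, 3)
p4 = np.polyfit(thrust, labor_hours, 4)

est3 = np.polyval(p3, 300)   # predicted labor hours for a 300-klbf test
est4 = np.polyval(p4, 300)
print(f"3rd-order: {est3:.0f} h, 4th-order: {est4:.0f} h")
```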
Real Time Data Management for Estimating Probabilities of Incidents and Near Misses
NASA Astrophysics Data System (ADS)
Stanitsas, P. D.; Stephanedes, Y. J.
2011-08-01
Advances in real-time data collection, data storage and computational systems have led to development of algorithms for transport administrators and engineers that improve traffic safety and reduce cost of road operations. Despite these advances, problems in effectively integrating real-time data acquisition, processing, modelling and road-use strategies at complex intersections and motorways remain. These are related to increasing system performance in identification, analysis, detection and prediction of traffic state in real time. This research develops dynamic models to estimate the probability of road incidents, such as crashes and conflicts, and incident-prone conditions based on real-time data. The models support integration of anticipatory information and fee-based road use strategies in traveller information and management. Development includes macroscopic/microscopic probabilistic models, neural networks, and vector autoregressions tested via machine vision at EU and US sites.
Pathfinder. Volume 9, Number 2, March/April 2011
2011-03-01
…vides audio, video, desktop sharing and chat." The platform offers a real-time, Web-based presentation tool to create information and general…
Using convolutional decoding to improve time delay and phase estimation in digital communications
Ormesher, Richard C [Albuquerque, NM]; Mason, John J [Albuquerque, NM]
2010-01-26
The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.
Hybrid Visible Light and Ultrasound-Based Sensor for Distance Estimation
Rabadan, Jose; Guerra, Victor; Rodríguez, Rafael; Rufo, Julio; Luna-Rivera, Martin; Perez-Jimenez, Rafael
2017-01-01
Distance estimation plays an important role in location-based services, which has become very popular in recent years. In this paper, a new short range cricket sensor-based approach is proposed for indoor location applications. This solution uses Time Difference of Arrival (TDoA) between an optical and an ultrasound signal which are transmitted simultaneously, to estimate the distance from the base station to the mobile receiver. The measurement of the TDoA at the mobile receiver endpoint is proportional to the distance. The use of optical and ultrasound signals instead of the conventional radio wave signal makes the proposed approach suitable for environments with high levels of electromagnetic interference or where the propagation of radio frequencies is entirely restricted. Furthermore, unlike classical cricket systems, a double-way measurement procedure is introduced, allowing both the base station and mobile node to perform distance estimation simultaneously. PMID:28208584
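A minimal sketch of the underlying range equation, assuming sound travels at roughly 343 m/s and treating the optical propagation delay as negligible at room scale:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumption)

def distance_from_tdoa(t_optical_s: float, t_ultrasound_s: float) -> float:
    """Range from the time difference of arrival of a simultaneous
    optical + ultrasound emission: TDoA * c_sound ~ distance, since the
    light arrives essentially instantly."""
    tdoa = t_ultrasound_s - t_optical_s
    return SPEED_OF_SOUND * tdoa

# A 14.6 ms lag corresponds to about 5 m.
print(distance_from_tdoa(0.0, 0.0146))
```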
Time-to-impact estimation in passive missile warning systems
NASA Astrophysics Data System (ADS)
Şahıngıl, Mehmet Cihan
2017-05-01
A missile warning system can detect the incoming missile threat(s) and automatically cue the other Electronic Attack (EA) systems in the suite, such as a Directed Infrared Counter Measure (DIRCM) system and/or a Counter Measure Dispensing System (CMDS). Most missile warning systems are currently based on passive sensor technology operating in either the Solar Blind Ultraviolet (SBUV) or Midwave Infrared (MWIR) bands, in which there is intensive emission from the exhaust plume of the threatening missile. Although passive missile warning systems have some clear advantages over pulse-Doppler radar (PDR) based active missile warning systems, they show poorer performance in terms of time-to-impact (TTI) estimation, which is critical for optimizing the countermeasures and also for "passive kill assessment". In this paper, we consider this problem, namely TTI estimation from passive measurements, and present a TTI estimation scheme which can be used in passive missile warning systems. Our problem formulation is based on the Extended Kalman Filter (EKF). The algorithm uses the area parameter of the threat plume, which is derived from the image frames.
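The abstract does not give the filter equations, so the following is only a toy EKF under an assumed observation model: apparent plume area scales as the inverse square of range, the closing speed is constant, and the area scale factor k is known. All numbers are invented.

```python
import numpy as np

# Toy EKF: state x = [range (m), range rate (m/s)]; measurement z = k / range^2.
k, dt = 4.0e6, 0.1
x = np.array([8000.0, -250.0])
P = np.diag([1.0e6, 1.0e4])
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1.0, 10.0])
R = 1.0e-6                                   # area measurement noise variance

def ekf_step(x, P, z_area):
    x = F @ x                                # predict (constant closing speed)
    P = F @ P @ F.T + Q
    r = x[0]
    h = k / r**2                             # predicted plume area
    H = np.array([[-2.0 * k / r**3, 0.0]])   # Jacobian of h w.r.t. the state
    S = float(H @ P @ H.T) + R
    K = (P @ H.T / S).ravel()                # Kalman gain
    x = x + K * (z_area - h)
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return x, P

for z in (0.0629, 0.0633, 0.0637):           # invented, slowly growing areas
    x, P = ekf_step(x, P, z)
print("TTI estimate (s):", -x[0] / x[1])     # range / closing speed
```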
Kim, Eunjoo; Tani, Kotaro; Kunishima, Naoaki; Kurihara, Osamu; Sakai, Kazuo; Akashi, Makoto
2016-11-01
Estimating the early internal doses to residents in the Fukushima Daiichi Nuclear Power Station accident is a difficult task because limited human/environmental measurement data are available. Hence, the feasibility of using atmospheric dispersion simulations created by the Worldwide version of System for Prediction of Environmental Emergency Dose Information 2nd Version (WSPEEDI-II) in the estimation was examined in the present study. This examination was done by comparing the internal doses evaluated based on the human measurements with those calculated using time series air concentration maps (¹³¹I and ¹³⁷Cs) generated by WSPEEDI-II. The results showed that the latter doses were several times higher than the former doses. However, this discrepancy could be minimised by taking into account personal behaviour data that will be available soon. This article also presents the development of a prototype system for estimating the internal dose based on the simulations.
Geochemical Evidence for Calcification from the Drake Passage Time-series
NASA Astrophysics Data System (ADS)
Munro, D. R.; Lovenduski, N. S.; Takahashi, T.; Stephens, B. B.; Newberger, T.; Dierssen, H. M.; Randolph, K. L.; Freeman, N. M.; Bushinsky, S. M.; Key, R. M.; Sarmiento, J. L.; Sweeney, C.
2016-12-01
Satellite imagery suggests high particulate inorganic carbon within a circumpolar region north of the Antarctic Polar Front (APF), but in situ evidence for calcification in this region is sparse. Given the geochemical relationship between calcification and total alkalinity (TA), seasonal changes in surface concentrations of potential alkalinity (PA), which accounts for changes in TA due to variability in salinity and nitrate, can be used as a means to evaluate satellite-based calcification algorithms. Here, we use surface carbonate system measurements collected from 2002 to 2016 for the Drake Passage Time-series (DPT) to quantify rates of calcification across the Antarctic Circumpolar Current. We also use vertical PA profiles collected during two cruises across the Drake Passage in March 2006 and September 2009 to estimate the calcium carbonate to organic carbon export ratio. We find geochemical evidence for calcification both north and south of the APF with the highest rates observed north of the APF. Calcification estimates from the DPT are compared to satellite-based estimates and estimates based on hydrographic data from other regions around the Southern Ocean.
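Schematically, the calcification rate falls out of the alkalinity budget. The sketch below uses a common salinity-normalized definition of potential alkalinity and invented values; check the paper for its exact normalization:

```python
def potential_alkalinity(ta_umol_kg, no3_umol_kg, salinity, s_ref=35.0):
    """Salinity-normalized potential alkalinity (a common formulation,
    assumed here; it removes salinity and nitrate effects from TA)."""
    return (ta_umol_kg + no3_umol_kg) * s_ref / salinity

# Calcification draws down alkalinity by 2 mol per mol CaCO3 precipitated,
# so a seasonal PA decrease of dPA implies ~dPA/2 of CaCO3 production.
pa_winter = potential_alkalinity(2290.0, 28.0, 33.9)
pa_summer = potential_alkalinity(2268.0, 22.0, 33.7)
caco3_umol_kg = (pa_winter - pa_summer) / 2.0
print(f"Inferred seasonal CaCO3 production: {caco3_umol_kg:.1f} umol/kg")
```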
Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.
2008-11-06
This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or it is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are ready for use, a filtering algorithm based on linear approximations of the real observations is proposed.
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chelouche, Doron; Pozo-Nuñez, Francisco; Zucker, Shay
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
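A minimal sketch of a von Neumann lag estimator of this kind follows. Standardizing both curves is a simplifying assumption; the optimized scheme in the paper treats flux scaling and sparse sampling more carefully.

```python
import numpy as np

def von_neumann(t_a, f_a, t_b, f_b, lag):
    za = (f_a - f_a.mean()) / f_a.std()       # standardize (simplification)
    zb = (f_b - f_b.mean()) / f_b.std()
    t = np.concatenate([t_a, t_b - lag])      # shift curve B by the trial lag
    f = np.concatenate([za, zb])[np.argsort(t)]
    return np.mean(np.diff(f) ** 2)           # mean-square successive difference

def estimate_lag(t_a, f_a, t_b, f_b, lags):
    scores = [von_neumann(t_a, f_a, t_b, f_b, lag) for lag in lags]
    return lags[int(np.argmin(scores))]       # most "regular" combined curve wins

# Irregularly sampled synthetic pair with a true delay of 12 days.
rng = np.random.default_rng(0)
t_a = np.sort(rng.uniform(0, 200, 90))
t_b = np.sort(rng.uniform(0, 200, 70))
signal = lambda t: np.sin(t / 15.0)
f_a = signal(t_a) + 0.05 * rng.standard_normal(t_a.size)
f_b = signal(t_b - 12.0) + 0.05 * rng.standard_normal(t_b.size)
print(estimate_lag(t_a, f_a, t_b, f_b, np.arange(-30.0, 30.5, 0.5)))  # ~12
```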
Bruise chromophore concentrations over time
NASA Astrophysics Data System (ADS)
Duckworth, Mark G.; Caspall, Jayme J.; Mappus, Rudolph L., IV; Kong, Linghua; Yi, Dingrong; Sprigle, Stephen H.
2008-03-01
During investigations of potential child and elder abuse, clinicians and forensic practitioners are often asked to offer opinions about the age of a bruise. A commonality between existing methods of bruise aging is analysis of bruise color or estimation of chromophore concentration. Relative chromophore concentration is an underlying factor that determines bruise color. We investigate a method of chromophore concentration estimation that can be employed in a handheld imaging spectrometer with a small number of wavelengths. The method, based on absorbance properties defined by the Beer-Lambert law, allows estimation of differential chromophore concentration between bruised and normal skin. Absorption coefficient data for each chromophore are required to make the estimation. Two different sources of these data are used in the analysis: coefficients generated using Independent Component Analysis and coefficients taken from published values. Differential concentration values over time, generated using both sources, show correlation to published models of bruise color change over time and total chromophore concentration over time.
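At a small number of wavelengths this reduces to a linear least-squares inversion of the Beer-Lambert law. The absorption coefficients and absorbance differences below are illustrative placeholders, not published values:

```python
import numpy as np

wavelengths = np.array([460.0, 540.0, 577.0, 650.0])     # nm (illustrative)
eps = np.array([                                          # columns: Hb, HbO2, bilirubin
    [0.35, 0.28, 0.90],
    [0.55, 0.58, 0.25],
    [0.60, 0.72, 0.10],
    [0.30, 0.09, 0.02],
])
delta_absorbance = np.array([0.52, 0.46, 0.49, 0.12])     # A_bruise - A_normal

# Solve delta_A = eps @ delta_c for the differential concentrations.
delta_c, *_ = np.linalg.lstsq(eps, delta_absorbance, rcond=None)
for name, dc in zip(["Hb", "HbO2", "bilirubin"], delta_c):
    print(f"d[{name}] = {dc:+.3f} (arbitrary units)")
```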
Real time lobster posture estimation for behavior research
NASA Astrophysics Data System (ADS)
Yan, Sheng; Alfredsen, Jo Arve
2017-02-01
In animal behavior research, the main task of observing the behavior of an animal is usually done manually. The measurement of the trajectory of an animal and its real-time posture description is often omitted due to the lack of automatic computer vision tools. Even though there are many publications on pose estimation, few methods are efficient enough to apply in real time or can be used without training a classifier on large numbers of samples. In this paper, we propose a novel strategy for real-time lobster posture estimation to overcome those difficulties. In our proposed algorithm, we use the Gaussian mixture model (GMM) for lobster segmentation. The posture estimation is then based on the distance transform and skeleton calculated from the segmentation. We tested the algorithm on a series of lobster videos with different sizes and lighting conditions. The results show that our proposed algorithm is efficient and robust under various conditions.
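A plausible OpenCV/scikit-image rendering of that pipeline is sketched below; the parameter values and the video filename are placeholders, and the paper's posture-tracing details are not reproduced:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

# GMM background model -> foreground mask -> distance transform + skeleton.
bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=16,
                                        detectShadows=False)

def posture_features(frame_bgr):
    mask = bg.apply(frame_bgr)                          # GMM segmentation
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((3, 3), np.uint8))  # remove speckle
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)  # local body "thickness"
    skel = skeletonize(mask > 0)                        # 1-px-wide centerline
    return mask, dist, skel

cap = cv2.VideoCapture("lobster.mp4")                   # hypothetical input
ok, frame = cap.read()
while ok:
    mask, dist, skel = posture_features(frame)
    ok, frame = cap.read()
```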
NASA Astrophysics Data System (ADS)
Zakharchenko, V. D.; Kovalenko, I. G.
2014-05-01
A new method for the line-of-sight velocity estimation of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of the fractional, one-half-order derivative of the Doppler signal. The algorithm suggested is much simpler and more economical than the classical one, and it appears preferable for use in orbital threat-response weapon systems. Application of fractional differentiation to rapid evaluation of the mean frequency of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for the real-time estimation is presented. Velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with the corresponding value for standard spectral processing.
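The key identity can be checked numerically: by Parseval's theorem, the energy of the half-order derivative divided by the signal energy equals the energy-weighted mean of |ω|, so the mean frequency is available without a spectral search. The sketch below uses FFT-based fractional differentiation (only as a numerical check; the real-time algorithm would avoid the FFT) with an invented test frequency:

```python
import numpy as np

def mean_frequency_hz(s, fs):
    """Energy-weighted mean frequency via a half-order derivative:
    ||D^{1/2} s||^2 / ||s||^2 = mean of |omega| over the energy spectrum."""
    n = s.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    S = np.fft.fft(s)
    half_deriv = np.fft.ifft((1j * omega) ** 0.5 * S)
    mean_abs_omega = np.sum(np.abs(half_deriv) ** 2) / np.sum(np.abs(s) ** 2)
    return mean_abs_omega / (2.0 * np.pi)

fs = 100_000.0
t = np.arange(4096) / fs
echo = np.cos(2 * np.pi * 12_500.0 * t)      # Doppler-shifted echo (toy)
print(mean_frequency_hz(echo, fs))           # ~12500 Hz
```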
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Forecasting seasonal outbreaks of influenza
Shaman, Jeffrey; Karspeck, Alicia
2012-01-01
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003–2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza. PMID:23184969
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
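Schematically, SALT is a variable-probability (PPS) scheme whose unbiasedness comes from a Hansen-Hurwitz-type estimator. The sketch below draws with replacement against rating-curve probabilities; all fluxes are invented, and the real method selects at list time against cumulative rating-curve totals rather than drawing a fixed-size sample:

```python
import numpy as np

rng = np.random.default_rng(1)

flux_rating = np.array([5., 8., 12., 30., 90., 400., 60., 9.])  # predicted flux per period
p = flux_rating / flux_rating.sum()                  # selection probabilities
true_flux = flux_rating * rng.lognormal(0, 0.3, p.size)  # actual (unknown) flux

n = 4
idx = rng.choice(p.size, size=n, p=p)                # PPS draws (with replacement)
yield_hat = np.mean(true_flux[idx] / p[idx])         # unbiased for the total yield
print(yield_hat, true_flux.sum())
```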
Effects of Blood-Alcohol Concentration (BAC) Feedback on BAC Estimates Over Time
ERIC Educational Resources Information Center
Bullers, Susan; Ennis, Melissa
2006-01-01
This study examines the effects of self-tested blood alcohol concentration (BAC) feedback, from personal hand-held breathalyzers, on the accuracy of BAC estimation. Using an e-mail prompted web-based questionnaire, 19 participants were asked to report both BAC estimates and subsequently measured BAC levels over the course of 27 days. Results from…
Spread-Spectrum Carrier Estimation With Unknown Doppler Shift
NASA Technical Reports Server (NTRS)
DeLeon, Phillip L.; Scaife, Bradley J.
1998-01-01
We present a method for the frequency estimation of a BPSK modulated, spread-spectrum carrier with unknown Doppler shift. The approach relies on a classic periodogram in conjunction with a spectral matched filter. Simulation results indicate accurate carrier estimation with processing gains near 40. A DSP-based prototype has been implemented for real-time carrier estimation for use in New Mexico State University's proposal for NASA's Demand Assignment Multiple Access service.
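The paper pairs a periodogram with a spectral matched filter; the simplified sketch below instead uses the standard squaring trick to strip the BPSK modulation before the periodogram, so it illustrates the estimation problem rather than the authors' exact method. All signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def carrier_estimate_hz(x, fs):
    """Coarse carrier estimate for a BPSK signal with unknown Doppler:
    squaring removes the +/-1 modulation, leaving a tone at twice the
    carrier that a periodogram picks up."""
    sq = x ** 2
    spec = np.abs(np.fft.rfft(sq - sq.mean())) ** 2
    freqs = np.fft.rfftfreq(sq.size, d=1.0 / fs)
    return freqs[np.argmax(spec)] / 2.0

fs = 1.0e6
t = np.arange(2 ** 16) / fs
chips = np.repeat(np.sign(rng.standard_normal(2 ** 12)), 16)  # spreading code
x = chips * np.cos(2 * np.pi * (50_000.0 + 310.0) * t)        # carrier + Doppler
print(carrier_estimate_hz(x, fs))                             # ~50310 Hz
```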
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
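For the time-invariant, identity-link case, the kernel-weighted estimating equation reduces to a weighted least-squares problem over all mismatched response/covariate time pairs. A one-subject, scalar-covariate sketch (the kernel and bandwidth choices are assumptions):

```python
import numpy as np

def kernel(u, h):
    # Epanechnikov kernel with bandwidth h.
    z = u / h
    return np.where(np.abs(z) <= 1, 0.75 * (1 - z ** 2) / h, 0.0)

def async_lm(resp_times, resp_vals, cov_times, cov_vals, h):
    """Kernel-weighted estimating equation for a time-invariant coefficient
    linear model with asynchronous observation times: solves
    sum_{j,k} K_h(t_j - s_k) x_k (y_j - x_k * b) = 0 for b."""
    XtX, Xty = 0.0, 0.0
    for tj, yj in zip(resp_times, resp_vals):
        for sk, xk in zip(cov_times, cov_vals):
            w = kernel(tj - sk, h)
            XtX += w * xk * xk
            Xty += w * xk * yj
    return Xty / XtX
```

In the paper's setting the sums additionally run over subjects and vector covariates, and time-varying coefficients replace the single scalar with a kernel-localized fit at each time point.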
Comprehensive seismic monitoring of the Cascadia megathrust with real-time GPS
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C. W.; Webb, F.
2013-12-01
We have developed a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone based on 1- and 5-second point position estimates computed within the ITRF08 reference frame. A Kalman filter stream editor that uses a geometry-free combination of phase and range observables to speed convergence while also producing independent estimation of carrier phase biases and ionosphere delay pre-cleans raw satellite measurements. These are then analyzed with GIPSY-OASIS using satellite clock and orbit corrections streamed continuously from the International GNSS Service (IGS) and the German Aerospace Center (DLR). The resulting RMS position scatter is less than 3 cm, and typical latencies are under 2 seconds. Currently 31 coastal Washington, Oregon, and northern California stations from the combined PANGA and PBO networks are analyzed. We are now ramping up to include all of the remaining 400+ stations currently operating throughout the Cascadia subduction zone, all of which are high-rate and telemetered in real-time to CWU. These receivers span the M9 megathrust, M7 crustal faults beneath population centers, several active Cascades volcanoes, and a host of other hazard sources. To use the point position streams for seismic monitoring, we have developed an inter-process client communication package that captures, buffers and re-broadcasts real-time positions and covariances to a variety of seismic estimation routines running on distributed hardware. An aggregator ingests, re-streams and can rebroadcast up to 24 hours of point-positions and resultant seismic estimates derived from the point positions to application clients distributed across the web. A suite of seismic monitoring applications has also been written, which includes position time series analysis, instantaneous displacement vectors, and peak ground displacement contouring and mapping. We have also implemented a continuous estimation of finite-fault slip along the Cascadia megathrust using a NIF-type approach. This currently operates on the terrestrial GPS data streams, but could readily be expanded to use real-time offshore geodetic measurements as well. The continuous slip distributions are used in turn to compute tsunami excitation and, when convolved with pre-computed, hydrodynamic Green functions calculated using the COMCOT tsunami modeling software, run-up estimates for the entire Cascadia coastal margin. Finally, a suite of data visualization tools has been written to allow interaction with the real-time position streams and seismic estimates based on them, including time series plotting, instantaneous offset vectors, peak ground deformation contouring, finite-fault inversions, and tsunami run-up. This suite is currently bundled within a single client written in JAVA, called 'GPS Cockpit', which is available for download.
Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin
2017-08-21
The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, receive antenna numbers on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna for the vertical bell labs layered space-time (V-Blast) and space-time block codes (STBC) are derived, respectively. With the help of channel estimator carrying out with least square (LS) algorithm, the zero-forcing criterion with ordered successive interference cancellation criterion (ZF-OSIC) equalizer of V-Blast scheme and Alamouti decoder of STBC scheme are adopted to mitigate the performance degradation induced by the atmospheric turbulence. The results show that the ABERs obtained by channel estimation have excellent agreement with those of turbulence phase screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with the increase of turbulence strengths. And both V-Blast and STBC schemes can significantly improve the system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performances of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for the diversity gain and STBC outperforms V-Blast in this system for data recovery. This work is beneficial to the OAM FSO system design.
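A minimal numpy sketch of the pilot-based LS channel estimate and the zero-forcing step follows. The dimensions, pilot design, and noise level are invented, and the OSIC ordering, Alamouti decoding, and OAM-specific turbulence statistics are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
nt, nr, npilot = 2, 4, 8       # transmit branches, receive antennas, pilot length

# Invented complex MIMO channel and known pilot block.
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
P = rng.standard_normal((nt, npilot)) + 1j * rng.standard_normal((nt, npilot))
W = 0.05 * (rng.standard_normal((nr, npilot)) + 1j * rng.standard_normal((nr, npilot)))
Y = H @ P + W                  # received pilots

# Least-squares channel estimate: H_hat = Y P^H (P P^H)^{-1}
H_hat = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

# Zero-forcing equalization of one BPSK data vector using the estimate.
x = np.sign(rng.standard_normal(nt)) + 0j
y = H @ x + 0.05 * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
x_hat = np.linalg.pinv(H_hat) @ y
print(np.sign(x_hat.real), x.real)
```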
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. Such spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
NASA Astrophysics Data System (ADS)
Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun
2018-05-01
For motion control of wheeled planetary rovers traversing on deformable terrain, real-time terrain parameter estimation is critical in modeling the wheel-terrain interaction and compensating the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online update. In the inner layer, sinkage exponent and internal frictional angle, which have higher sensitivity than that of the other terrain parameters to wheel-terrain interaction forces, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed with nominal values. The inner layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve estimation accuracy of the result from the inner layer, an outer layer based on recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the history data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable for flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.
Latorre, Jorge; Llorens, Roberto; Colomer, Carolina; Alcañiz, Mariano
2018-04-27
Different studies have analyzed the potential of the off-the-shelf Microsoft Kinect, in its different versions, to estimate spatiotemporal gait parameters as a portable markerless low-cost alternative to laboratory grade systems. However, variability in populations, measures, and methodologies prevents accurate comparison of the results. The objective of this study was to determine and compare the reliability of the existing Kinect-based methods to estimate spatiotemporal gait parameters in healthy and post-stroke adults. Forty-five healthy individuals and thirty-eight stroke survivors participated in this study. Participants walked five meters at a comfortable speed and their spatiotemporal gait parameters were estimated from the data retrieved by a Kinect v2, using the most common methods in the literature, and by visual inspection of the videotaped performance. Errors between both estimations were computed. For both healthy and post-stroke participants, highest accuracy was obtained when using the speed of the ankles to estimate gait speed (3.6-5.5 cm/s), stride length (2.5-5.5 cm), and stride time (about 45 ms), and when using the distance between the sacrum and the ankles and toes to estimate double support time (about 65 ms) and swing time (60-90 ms). Although the accuracy of these methods is limited, these measures could occasionally complement traditional tools.
NASA Astrophysics Data System (ADS)
Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.
2011-10-01
Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.
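The simplest of the three inversions (the SAINV2-style method) has a closed form: with a number concentration, a PM mass concentration, and an assumed GSD, the Hatch-Choate relations give the count median diameter and hence the surface area. The density and GSD values below are assumptions, and spherical particles are assumed throughout:

```python
import numpy as np

def surface_area_um2_cm3(n_cm3, pm_ug_m3, gsd=2.0, rho_kg_m3=1500.0):
    """Reconstruct a lognormal size distribution from number and mass
    concentrations with an assumed GSD, then report its surface area."""
    ln2 = np.log(gsd) ** 2
    n_m3 = n_cm3 * 1e6
    mass = pm_ug_m3 * 1e-9                     # kg/m^3
    # mass = rho * (pi/6) * N * CMD^3 * exp(4.5 ln^2 gsd)  ->  solve for CMD
    cmd = (6.0 * mass / (np.pi * rho_kg_m3 * n_m3 * np.exp(4.5 * ln2))) ** (1.0 / 3.0)
    s_m2_m3 = np.pi * n_m3 * cmd ** 2 * np.exp(2.0 * ln2)
    return s_m2_m3 * 1e6                       # um^2/cm^3

print(surface_area_um2_cm3(n_cm3=5e4, pm_ug_m3=40.0))  # ~1e3 um^2/cm^3
```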
Estimating the time for dissolution of spent fuel exposed to unlimited water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leider, H.R.; Nguyen, S.N.; Stout, R.B.
1991-12-01
The release of radionuclides from spent fuel cannot be precisely predicted at this point because a satisfactory dissolution model based on specific chemical processes is not yet available. However, preliminary results on the dissolution rate of UO₂ and spent fuel as a function of temperature and water composition have recently been reported. This information, together with data on fragment size distribution of spent fuel, are used to estimate the dissolution response of spent fuel in excess flowing water within the framework of a simple model. In this model, the reaction/dissolution front advances linearly with time and geometry is preserved. This also estimates the dissolution rate of the bulk of the fission products and higher actinides, which are uniformly distributed in the UO₂ matrix and are presumed to dissolve congruently. We have used a fuel fragment distribution actually observed to calculate the time for total dissolution of spent fuel. A worst-case estimate was also made using the initial (maximum) rate of dissolution to predict the total dissolution time. The time for total dissolution of centimeter size particles is estimated to be 5.5 × 10⁴ years at 25 °C.
Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi
2018-02-01
The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control objective is to maintain the positions of follower helicopters in formation under external interferences. The helicopter model is divided into a second order outer-loop subsystem and a second order inner-loop subsystem based on multiple-time scale features. Using the radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by an adaptive law. Next, based on FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and the multiple-time scale principle ensure the realization of the control goal in finite time. The effectiveness of the proposed FMNNDO and controllers are then verified by numerical simulations.
Park, Juneyoung; Abdel-Aty, Mohamed; Lee, Jaeyoung
2016-09-01
Although many researchers have estimated the crash modification factors (CMFs) for specific treatments (or countermeasures), there is a lack of prior studies that have explored the variation of CMFs. Thus, the main objectives of this study are: (a) to estimate CMFs for the installation of different types of roadside barriers, and (b) to determine the changes of safety effects for different crash types, severities, and conditions. Two observational before-after analyses (i.e. empirical Bayes (EB) and full Bayes (FB) approaches) were utilized in this study to estimate CMFs. To consider the variation of safety effects based on different vehicle, driver, weather, and time of day information, the crashes were categorized based on vehicle size (passenger and heavy), driver age (young, middle, and old), weather condition (normal and rain), and time difference (day time and night time). The results show that the addition of roadside barriers is safety effective in reducing severe crashes for all types and run-off roadway (ROR) crashes. On the other hand, it was found that roadside barriers tend to increase all types of crashes for all severities. The results indicate that the treatment might increase the total number of crashes but it might be helpful in reducing injury and severe crashes. In this study, the variation of CMFs was determined for ROR crashes based on the different vehicle, driver, weather, and time information. Based on the findings from this study, the variation of CMFs can enhance the reliability of CMFs for different roadway conditions in the decision-making process. Also, it is recommended to identify the safety effects of specific treatments for different crash types and severity levels with consideration of the different vehicle, driver, weather, and time of day information.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection.
The impact of the rate prior on Bayesian estimation of divergence times with multiple loci.
Dos Reis, Mario; Zhu, Tianqi; Yang, Ziheng
2014-07-01
Bayesian methods provide a powerful way to estimate species divergence times by combining information from molecular sequences with information from the fossil record. With the explosive increase of genomic data, divergence time estimation increasingly uses data of multiple loci (genes or site partitions). Widely used computer programs to estimate divergence times use independent and identically distributed (i.i.d.) priors on the substitution rates for different loci. The i.i.d. prior is problematic. As the number of loci (L) increases, the prior variance of the average rate across all loci goes to zero at the rate 1/L. As a consequence, the rate prior dominates posterior time estimates when many loci are analyzed, and if the rate prior is misspecified, the estimated divergence times will converge to wrong values with very narrow credibility intervals. Here we develop a new prior on the locus rates based on the Dirichlet distribution that corrects the problematic behavior of the i.i.d. prior. We use computer simulation and real data analysis to highlight the differences between the old and new priors. For a dataset for six primate species, we show that with the old i.i.d. prior, if the prior rate is too high (or too low), the estimated divergence times are too young (or too old), outside the bounds imposed by the fossil calibrations. In contrast, with the new Dirichlet prior, posterior time estimates are insensitive to the rate prior and are compatible with the fossil calibrations. We re-analyzed a phylogenomic data set of 36 mammal species and show that using many fossil calibrations can alleviate the adverse impact of a misspecified rate prior to some extent. We recommend the use of the new Dirichlet prior in Bayesian divergence time estimation. [Bayesian inference, divergence time, relaxed clock, rate prior, partition analysis.]
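The problematic 1/L behavior and its correction are easy to verify by simulation. In the sketch below (schematic, not the paper's exact gamma-Dirichlet prior), the i.i.d. prior variance of the mean rate shrinks with L, while a Dirichlet construction that draws the average rate once and partitions it across loci keeps that variance fixed:

```python
import numpy as np

rng = np.random.default_rng(3)
for L in (2, 10, 100):
    # i.i.d. prior: each locus rate ~ Gamma(2, scale=0.5); the prior variance
    # of the mean rate across loci shrinks like 1/L.
    iid = rng.gamma(2.0, 0.5, size=(20000, L)).mean(axis=1)
    # Dirichlet-style prior: draw the average rate once, then partition it
    # across loci with Dirichlet weights; the mean across loci equals mu.
    mu = rng.gamma(2.0, 0.5, size=20000)
    w = rng.dirichlet(np.full(L, 2.0), size=20000)
    dirich = (mu[:, None] * L * w).mean(axis=1)
    print(L, iid.var().round(4), dirich.var().round(4))
```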
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.
2013-01-01
Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303
Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan
2014-03-01
An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, and physical activity and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that are used to explicitly quantify the time-varying patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have higher fit, and significantly better temporal gain and J index and thus are superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs since the Kalman filter estimates the state variable of the glucose dynamics based on the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuejun; Tang, Qiuhong; Liu, Xingcai
Real-time monitoring and predicting drought development several months in advance is of critical importance for drought risk adaptation and mitigation. In this paper, we present a drought monitoring and seasonal forecasting framework based on the Variable Infiltration Capacity (VIC) hydrologic model over Southwest China (SW). The satellite precipitation data are used to force the VIC model for near real-time estimates of land surface hydrologic conditions. As initialized with satellite-aided monitoring, the climate model-based forecast (CFSv2_VIC) and ensemble streamflow prediction (ESP)-based forecast (ESP_VIC) are both performed and evaluated through their ability to reproduce the evolution of the 2009/2010 severe drought over SW. The results show that the satellite-aided monitoring is able to provide reasonable estimates of forecast initial conditions (ICs) in a real-time manner. Both CFSv2_VIC and ESP_VIC exhibit comparable performance against the observation-based estimates for the first month, whereas the predictive skill largely drops beyond 1 month. Compared to ESP_VIC, CFSv2_VIC shows better performance as indicated by the smaller ensemble range. This study highlights the value of this operational framework in generating near real-time ICs and giving a reliable prediction 1 month ahead, which has great implications for drought risk assessment, preparation and relief.
A Unified Estimation Framework for State-Related Changes in Effective Brain Connectivity.
Samdin, S Balqis; Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain
2017-04-01
This paper addresses the critical problem of estimating time-evolving effective brain connectivity. Current approaches based on sliding window analysis or time-varying coefficient models do not simultaneously capture both slow and abrupt changes in causal interactions between different brain regions. To overcome these limitations, we develop a unified framework based on a switching vector autoregressive (SVAR) model. Here, the dynamic connectivity regimes are uniquely characterized by distinct vector autoregressive (VAR) processes and allowed to switch between quasi-stationary brain states. The state evolution and the associated directed dependencies are defined by a Markov process and the SVAR parameters. We develop a three-stage estimation algorithm for the SVAR model: 1) feature extraction using time-varying VAR (TV-VAR) coefficients, 2) preliminary regime identification via clustering of the TV-VAR coefficients, 3) refined regime segmentation by Kalman smoothing and parameter estimation via expectation-maximization algorithm under a state-space formulation, using initial estimates from the previous two stages. The proposed framework is adaptive to state-related changes and gives reliable estimates of effective connectivity. Simulation results show that our method provides accurate regime change-point detection and connectivity estimates. In real applications to brain signals, the approach was able to capture directed connectivity state changes in functional magnetic resonance imaging data linked with changes in stimulus conditions, and in epileptic electroencephalograms, differentiating ictal from nonictal periods. The proposed framework accurately identifies state-dependent changes in brain network and provides estimates of connectivity strength and directionality. The proposed approach is useful in neuroscience studies that investigate the dynamics of underlying brain states.
Role of a plausible nuisance contributor in the declining obesity-mortality risks over time.
Mehta, Tapan; Pajewski, Nicholas M; Keith, Scott W; Fontaine, Kevin; Allison, David B
2016-12-15
Recent analyses of epidemiological data including the National Health and Nutrition Examination Survey (NHANES) have suggested that the harmful effects of obesity may have decreased over calendar time. The shifting BMI distribution over time coupled with the application of fixed broad BMI categories in these analyses could be a plausible "nuisance contributor" to this observed change in the obesity-associated mortality over calendar time. To evaluate the extent to which observed temporal changes in the obesity-mortality association may be due to a shifting population distribution for body mass index (BMI), coupled with analyses based on static, broad BMI categories. Simulations were conducted using data from NHANES I and III linked with mortality data. Data from NHANES I were used to fit a "true" model treating BMI as a continuous variable. Coefficients estimated from this model were used to simulate mortality for participants in NHANES III. Hence, the population-level association between BMI and mortality in NHANES III was fixed to be identical to the association estimated in NHANES I. Hazard ratios (HRs) for obesity categories based on BMI for NHANES III with simulated mortality data were compared to the corresponding estimated HRs from NHANES I. Change in hazard ratios for simulated data in NHANES III compared to observed estimates from NHANES I. On average, hazard ratios for NHANES III based on simulated mortality data were 29.3% lower than the estimates from NHANES I using observed mortality follow-up. This reduction accounted for roughly three-fourths of the apparent decrease in the obesity-mortality association observed in a previous analysis of these data. Some of the apparent diminution of the association between obesity and mortality may be an artifact of treating BMI as a categorical variable.
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data.
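A schematic of the counterfactual construction and the resampling idea follows. Re-censoring is omitted, all data are invented, and ψ is held fixed across resamples for brevity, whereas a full bootstrap would re-estimate it in each resample:

```python
import numpy as np

rng = np.random.default_rng(4)

def counterfactual_time(t_total, t_on_treatment, psi):
    """RPSFT-style counterfactual survival time: time off treatment passes
    at rate 1, time on treatment is rescaled by exp(psi)."""
    t_off = t_total - t_on_treatment
    return t_off + np.exp(psi) * t_on_treatment

# Hypothetical control-arm switchers and a psi estimated elsewhere.
t_total = rng.exponential(24.0, size=200)           # observed months
t_on = t_total * rng.uniform(0.0, 0.5, size=200)    # months after switching
psi_hat = -0.3

# Bootstrap the mean counterfactual survival to propagate uncertainty,
# rather than treating the adjusted times as fixed.
boots = []
for _ in range(2000):
    idx = rng.integers(0, t_total.size, t_total.size)
    boots.append(counterfactual_time(t_total[idx], t_on[idx], psi_hat).mean())
print(np.percentile(boots, [2.5, 97.5]))
```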
W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang
2014-01-01
The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...
Knowledge base about earthquakes as a tool to minimize strong events consequences
NASA Astrophysics Data System (ADS)
Frolova, Nina; Bonnin, Jean; Larionov, Valery; Ugarov, Alexander; Kijko, Andrzej
2017-04-01
The paper describes the structure and content of a knowledge base on the physical and socio-economic consequences of damaging earthquakes, which may be used to calibrate near-real-time loss assessment systems based on simulation models for shaking intensity, damage to buildings and casualty estimates. Such calibration compensates for some of the factors that influence the reliability of expected damage and loss assessments in "emergency" mode. The knowledge base contains descriptions of the consequences of past earthquakes in the area under study, as well as the distribution of the built environment and population at the time of event occurrence. Computer simulation of the events recorded in the knowledge base allows determination of sets of regional calibration coefficients, including the rating of seismological surveys, peculiarities of shaking-intensity attenuation, and changes in building stock and population distribution, so as to minimize the error of loss estimates for damaging earthquakes in "emergency" mode. References: 1. Larionov, V., Frolova, N.: Peculiarities of seismic vulnerability estimations. In: Natural Hazards in Russia, volume 6: Natural Risks Assessment and Management, Publishing House "Kruk", Moscow, 120-131, 2003. 2. Frolova, N., Larionov, V., Bonnin, J.: Data bases used in worldwide systems for earthquake loss estimation in emergency mode: Wenchuan earthquake. In: Proc. TIEMS 2010 Conference, Beijing, China, 2010. 3. Frolova, N.I., Larionov, V.I., Bonnin, J., Sushchev, S.P., Ugarov, A.N., Kozlov, M.A.: Loss caused by earthquakes: rapid estimates. Natural Hazards, vol. 84, DOI 10.1007/s11069-016-2653.
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
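The abstract's MSSB and SNLS machinery is not reproduced here, but the underlying task — fitting the kinetic parameters of a nonlinear ODE model to sampled viral-load data — can be illustrated with a minimal trajectory-matching sketch. Everything below (the target-cell-limited model form, parameter values, sampling times, noise level) is assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Basic target-cell-limited HIV model; lam, d, k, delta, N, c are the kinetic
# parameters (cell proliferation/death rates, infection rate, burst size, ...).
def rhs(t, y, lam, d, k, delta, N, c):
    T, I, V = y
    return [lam - d*T - k*T*V,
            k*T*V - delta*I,
            N*delta*I - c*V]

def simulate(theta, t_obs, y0):
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y0, args=tuple(theta),
                    t_eval=t_obs, rtol=1e-8, atol=1e-8)
    return sol.y[2]  # viral load component

# Synthetic "clinical" data: log10 viral load with measurement noise.
rng = np.random.default_rng(0)
true = [100.0, 0.1, 2e-4, 0.5, 100.0, 3.0]
t_obs = np.linspace(0, 30, 16)
y0 = [1000.0, 0.0, 1e-3]
data = np.log10(simulate(true, t_obs, y0)) + 0.1*rng.standard_normal(t_obs.size)

# Least-squares trajectory matching on the log scale, all parameters free.
resid = lambda th: np.log10(simulate(th, t_obs, y0)) - data
fit = least_squares(resid, x0=[80.0, 0.2, 1e-4, 0.3, 80.0, 2.0],
                    bounds=(1e-8, np.inf))
print(fit.x)
```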
Li, Zhiwei; Zhao, Rong; Hu, Jun; Wen, Lianxing; Feng, Guangcai; Zhang, Zeyu; Wang, Qijie
2015-01-01
This paper presents a novel method to estimate active layer thickness (ALT) over permafrost based on InSAR (Interferometric Synthetic Aperture Radar) observations and a heat transfer model of soils. The time lags between the periodic InSAR-observed surface deformation over permafrost and the meteorologically recorded temperatures are assumed to be the time intervals required for the temperature maximum to diffuse from the ground surface downward to the bottom of the active layer. By exploiting these time lags and the one-dimensional heat transfer model of soils, we estimate the ALTs. Using the frozen soil region in the southern Qinghai-Tibet Plateau (QTP) as an example, we provide a conceptual demonstration of the estimation of InSAR pixel-wise ALTs. In the case study, the ALTs range from 1.02 to 3.14 m, with an average of 1.95 m. The results are compatible with the sparse ALT observations/estimations obtained by traditional methods, but with extraordinarily high spatial resolution at the pixel level (~40 m). The presented method is simple and can potentially be used for deriving high-resolution ALTs in other remote areas similar to the QTP, where only sparse observations are available now. PMID:26480892
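A minimal sketch of the depth-from-time-lag idea, using the standard one-dimensional periodic heat-conduction solution rather than the authors' exact formulation; the soil diffusivity value and the two-month lag below are assumptions:

```python
import numpy as np

# For a sinusoidal annual surface temperature wave, the phase at depth z lags
# the surface by z/d radians, where d = sqrt(2*kappa/omega) is the damping
# depth.  A measured time lag dt therefore maps to
#   z = dt * omega * d = dt * sqrt(2 * kappa * omega).
kappa = 1.0e-6                        # soil thermal diffusivity, m^2/s (assumed)
omega = 2*np.pi / (365.25*86400.0)    # annual angular frequency, rad/s

def alt_from_lag(lag_days):
    dt = lag_days * 86400.0
    return dt * np.sqrt(2.0 * kappa * omega)

# Example: a ~2 month lag between peak temperature and peak subsidence.
print(f"ALT ~ {alt_from_lag(60):.2f} m")   # ~3.3 m, in the abstract's range
```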
The ARM Best Estimate Station-based Surface (ARMBESTNS) Data set
Qi, Tang; Xie, Shaocheng
2015-08-06
The ARM Best Estimate Station-based Surface (ARMBESTNS) data set merges together key surface measurements from the Southern Great Plains (SGP) sites. It is a twin data product of the ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set. Unlike the 2DGRID data set, the STNS data are reported at the original site locations and show the original information, except for the interpolation over time. Therefore, users have the flexibility to process the data with the approach more suitable for their applications.
Position Estimation for Switched Reluctance Motor Based on the Single Threshold Angle
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Pang; Yu, Yue
2017-05-01
This paper presents a position estimation model for switched reluctance motors based on a single threshold angle. In view of the relationship between inductance and rotor position, the position is estimated by comparing the real-time dynamic flux linkage with the flux linkage at the threshold-angle position (7.5° threshold angle, 12/8 SRM). The sensorless model is built in Matlab/Simulink; simulations are carried out under both steady-state and transient conditions, verifying the validity and feasibility of the method.
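As a rough illustration of flux-linkage-based sensorless operation (a generic scheme, not necessarily the paper's exact model), the phase flux linkage can be integrated from terminal voltage and current and compared against a stored reference curve at the threshold angle; all numbers below are toy values:

```python
import numpy as np

# Flux linkage from terminal quantities: psi(t) = integral of (v - R*i) dt.
# When psi crosses the stored flux-linkage-vs-current curve sampled at the
# threshold angle (7.5 deg in the abstract), the rotor is known to be at that
# angle, so position can be re-synchronized once per stroke.
R = 0.5                            # phase resistance, ohm (assumed)
dt = 1e-5                          # sampling period, s
v = np.full(2000, 24.0)            # phase voltage samples (toy data)
i = np.linspace(0.0, 8.0, 2000)    # phase current samples (toy data)

psi = np.cumsum((v - R*i) * dt)    # running rectangular integration

# Hypothetical reference curve psi_ref(i) at the 7.5 deg threshold angle,
# e.g. obtained from magnetization measurements.
psi_ref = lambda cur: 0.02 + 0.006*cur

crossing = np.argmax(psi >= psi_ref(i))   # first sample where psi crosses
print(f"threshold angle reached at sample {crossing}, t = {crossing*dt*1e3:.2f} ms")
```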
Chan, Aaron C.; Srinivasan, Vivek J.
2013-01-01
In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator’s performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we also derive the Cramer Rao lower bound (CRLB), and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
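For reference, the widely used Kasai (lag-one autocorrelation) estimator discussed above is essentially a one-liner; the sketch below runs it on synthetic data. The paper's MLE and CRLB derivations are not reproduced here:

```python
import numpy as np

def kasai_doppler(x, fs):
    """Lag-one autocorrelation (Kasai) Doppler frequency estimate.
    x: complex baseband samples along time; fs: sampling (A-line) rate."""
    r1 = np.mean(x[1:] * np.conj(x[:-1]))   # lag-one autocorrelation
    return fs * np.angle(r1) / (2*np.pi)    # rad/sample -> Hz

# Toy test: complex exponential at 1.2 kHz in white Gaussian noise.
rng = np.random.default_rng(1)
fs, f0, n = 50e3, 1.2e3, 256
t = np.arange(n)/fs
x = np.exp(2j*np.pi*f0*t) + 0.3*(rng.standard_normal(n)+1j*rng.standard_normal(n))
print(f"estimate: {kasai_doppler(x, fs):.1f} Hz (true {f0:.0f} Hz)")
```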
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
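A generic Green-Kubo evaluation — integrating the ensemble-averaged flux autocorrelation function across replicas — can be sketched as follows. The on-the-fly stationarity tests and error bounds that are this paper's contribution are not included, and the physical prefactor (e.g., V/(k_B T^2) for thermal conductivity) is left to the caller:

```python
import numpy as np

def acf(x, maxlag):
    """Autocorrelation <x(0)x(t)> of a (mean-removed) stationary series."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x)-k] * x[k:]) for k in range(maxlag)])

def green_kubo(flux_replicas, dt, maxlag, prefactor=1.0):
    """Transport coefficient as prefactor * time integral of the
    ensemble-averaged flux ACF (trapezoidal rule).
    flux_replicas: (n_replicas, n_steps) array of flux samples."""
    c = np.mean([acf(f, maxlag) for f in flux_replicas], axis=0)
    return prefactor * np.trapz(c, dx=dt)

# Toy data: an Ornstein-Uhlenbeck-like flux with exponential ACF of time
# constant tau, so the ACF integral should approach tau.
rng = np.random.default_rng(2)
n, reps, dt, tau = 20000, 8, 0.01, 0.5
a = np.exp(-dt/tau)
flux = np.empty((reps, n))
for r in range(reps):
    f = 0.0
    for i in range(n):
        f = a*f + np.sqrt(1 - a*a)*rng.standard_normal()
        flux[r, i] = f
print(green_kubo(flux, dt, maxlag=500))   # expect ~0.5
```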
Instrumental variables estimates of peer effects in social networks.
An, Weihua
2015-03-01
Estimating peer effects with observational data is very difficult because of contextual confounding, peer selection, simultaneity bias, and measurement error, among other problems. In this paper, I show that instrumental variables (IVs) can help to address these problems in order to provide causal estimates of peer effects. Based on data collected from over 4000 students in six middle schools in China, I use the IV methods to estimate peer effects on smoking. My design-based IV approach differs from previous ones in that it helps to construct potentially strong IVs and to directly test possible violation of exogeneity of the IVs. I show that measurement error in smoking can lead to both underestimated and imprecise estimates of peer effects. Based on a refined measure of smoking, I find consistent evidence for peer effects on smoking. If a student's best friend smoked within the past 30 days, the student was about one fifth (as indicated by the OLS estimate) or 40 percentage points (as indicated by the IV estimate) more likely to smoke in the same time period. The findings are robust to a variety of robustness checks. I also show that sharing cigarettes may be a mechanism for peer effects on smoking. A 10% increase in the number of cigarettes smoked by a student's best friend is associated with about a 4% increase in the number of cigarettes smoked by the student in the same time period. Copyright © 2014 Elsevier Inc. All rights reserved.
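The study's design-based instruments are not reconstructed here, but the two-stage least squares mechanics underlying any IV estimate can be sketched with synthetic data (true peer effect 0.4, with confounding that biases OLS upward):

```python
import numpy as np

def two_sls(y, X, Z):
    """Two-stage least squares: instrument endogenous columns of X with Z.
    X and Z include a constant column; returns second-stage coefficients."""
    # First stage: project X onto the instrument space.
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ gamma
    # Second stage: regress y on the fitted X.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta

# Toy data: peer smoking (x) is endogenous; z is a valid instrument.
rng = np.random.default_rng(3)
n = 4000
z = rng.standard_normal(n)
u = rng.standard_normal(n)                    # unobserved confounder
x = 0.8*z + 0.6*u + rng.standard_normal(n)
y = 0.4*x + 0.6*u + rng.standard_normal(n)    # true effect 0.4

const = np.ones(n)
X = np.column_stack([const, x])
Z = np.column_stack([const, z])
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0][1])   # biased upward
print("2SLS:", two_sls(y, X, Z)[1])                       # ~0.4
```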
Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-01-01
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents a complete 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
Estimating patient time costs associated with colorectal cancer care.
Yabroff, K Robin; Warren, Joan L; Knopf, Kevin; Davis, William W; Brown, Martin L
2005-07-01
Nonmedical costs of care, such as patient time associated with travel to, waiting for, and seeking medical care, are rarely measured systematically with population-based data. The purpose of this study was to estimate patient time costs associated with colorectal cancer care. We identified categories of key medical services for colorectal cancer care and then estimated patient time associated with each service category using data from national surveys. To estimate average service frequencies for each service category, we used a nested case control design and SEER-Medicare data. Estimates were calculated by phase of care for cases and controls, using data from 1995 to 1998. Average service frequencies were then combined with estimates of patient time for each category of service, and the value of patient time assigned. Net patient time costs were calculated for each service category, summarized by phase of care, and compared with previously reported net direct costs of colorectal cancer care. Net patient time costs for the 3 phases of colorectal cancer care averaged $4592 (95% confidence interval [CI], $4427-$4757) over the 12 months of the initial phase, $2788 (95% CI, $2614-$2963) over the 12 months of the terminal phase, and $25 (95% CI, $23-$26) per month in the continuing phase of care. Hospitalizations accounted for more than two thirds of these estimates. Patient time costs were 19.3% of direct medical costs in the initial phase, 15.8% in the continuing phase, and 36.8% in the terminal phase of care. Patient time costs are an important component of the costs of colorectal cancer care. Application of this method to other tumor sites and inclusion of other components of the costs of medical care will be important in delineating the economic burden of cancer in the United States.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by, e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound.
Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
How can streamflow and climate-landscape data be used to estimate baseflow mean response time?
NASA Astrophysics Data System (ADS)
Zhang, Runrun; Chen, Xi; Zhang, Zhicai; Soulsby, Chris; Gao, Man
2018-02-01
Mean response time (MRT) is a metric describing the propagation of catchment hydraulic behavior that reflects both hydro-climatic conditions and catchment characteristics. To provide a comprehensive understanding of catchment response over longer time scales for hydraulic processes, the MRT function for baseflow generation was derived using an instantaneous unit hydrograph (IUH) model that describes the subsurface response to effective rainfall inputs. IUH parameters were estimated based on the "match test" between the autocorrelation functions (ACFs) derived from the filtered baseflow time series and from the IUH parameters, under the GLUE framework. Regionalization of MRT was conducted using these estimates and hydroclimate-landscape indices in 22 sub-basins of the Jinghe River Basin (JRB) in the Loess Plateau of northwest China. Results indicate there is strong equifinality in the determination of the best parameter sets, but the median values of the MRT estimates are relatively stable in the acceptable range of the parameters. MRTs vary markedly over the studied sub-basins, ranging from tens of days to more than a year. Climate, topography and geomorphology were identified as three first-order controls on recharge-baseflow response processes. Human activities involving the cultivation of permanent crops may lengthen the baseflow MRT and hence increase the dynamic storage. Cross validation suggests the model can be used to estimate MRTs in ungauged catchments in similar regions throughout the Loess Plateau. The proposed method provides a systematic approach for MRT estimation and regionalization in terms of hydroclimate and catchment characteristics, which supports sustainable water resources utilization and ecological protection in the Loess Plateau.
Scheurer, Eva; Ith, Michael; Dietrich, Daniel; Kreis, Roland; Hüsler, Jürg; Dirnhofer, Richard; Boesch, Chris
2005-05-01
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. Copyright 2004 John Wiley & Sons, Ltd.
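The pooling step described above — combining metabolite-specific PMI predictions weighted by their inverse variances — is simple enough to sketch directly; the PMI values and variances below are hypothetical:

```python
import numpy as np

def pool_inverse_variance(estimates, variances):
    """Combine independent estimates weighted by their inverse variances.
    Returns the pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return pooled, 1.0 / np.sum(w)

# Hypothetical PMIs (hours) predicted from five metabolite concentrations,
# each with the variance of its inverted time-course function.
pmi = np.array([118.0, 131.0, 122.0, 140.0, 126.0])
var = np.array([ 36.0,  64.0,  25.0, 144.0,  49.0])
est, v = pool_inverse_variance(pmi, var)
print(f"pooled PMI = {est:.1f} h, 95% CI +/- {1.96*np.sqrt(v):.1f} h")
```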
Real-Time Personalized Monitoring to Estimate Occupational Heat Stress in Ambient Assisted Working.
Pancardo, Pablo; Acosta, Francisco D; Hernández-Nolasco, José Adán; Wister, Miguel A; López-de-Ipiña, Diego
2015-07-13
Ambient Assisted Working (AAW) is a discipline aiming to provide comfort and safety in the workplace through customization and technology. Workers' comfort may be compromised in many labor situations, including those that depend on environmental conditions; extremely hot weather, for example, conduces to heat stress. Occupational heat stress (OHS) occurs when a worker performs sustained physical activity in a hot environment. OHS can produce strain on the body, which leads to discomfort and eventually to heat illness and even death. Related ISO standards contain methods to estimate OHS and to ensure the safety and health of workers, but they are subjective, impersonal, performed a posteriori and even invasive. This paper focuses on the design and development of real-time personalized monitoring for a more effective and objective estimation of OHS, taking into account the individual user profile and fusing data from environmental and unobtrusive body sensors. Formulas employed in this work were taken from different domains and joined in the method that we propose. It is based on calculations that enable continuous surveillance of physical activity performance in a comfortable and healthy manner. In this proposal, we found that OHS can be estimated in a way that is objective, personalized, in situ, in real time, just in time and unobtrusive. This enables timely notice for workers to make decisions based on objective information to control OHS.
GPS-based Microenvironment Tracker (MicroTrac) Model to ...
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared to 24 h diary data from 7 participants on workdays and 2 participants on nonworkdays, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time-location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies.
Laforest, Richard; Karimi, Morvarid; Moerlein, Stephen M; Xu, Jinbin; Flores, Hubert P; Bognar, Christopher; Li, Aixiao; Mach, Robert H; Perlmutter, Joel S; Tu, Zhude
2016-01-01
[18F]FluorTriopride ([18F]FTP) is a dopamine D3-receptor-preferring radioligand with potential for the investigation of neuropsychiatric disorders including Parkinson disease, dystonia and schizophrenia. Here we estimate human radiation dosimetry for [18F]FTP based on the ex vivo biodistribution in rodents and in vivo distribution in nonhuman primates. Biodistribution data were generated using male and female Sprague-Dawley rats injected with ~370 kBq of [18F]FTP and euthanized at 5, 30, 60, 120, and 240 min. Organs of interest were dissected, weighed and assayed for radioactivity content. PET imaging studies were performed in two male and one female Macaca fascicularis administered 143-190 MBq of [18F]FTP and scanned whole-body in sequential sections. Organ residence times were calculated from organ time-activity curves (TACs) created from regions of interest. OLINDA/EXM 1.1 was used to estimate human radiation dosimetry based on scaled organ residence times. In the rodent, the highest absorbed radiation dose was to the upper large intestine (0.32-0.49 mGy/MBq), with an effective dose of 0.07 mSv/MBq in males and 0.1 mSv/MBq in females. For the nonhuman primate, however, the gallbladder wall was the critical organ (1.81 mGy/MBq), and the effective dose was 0.02 mSv/MBq. The species discrepancy in dosimetry estimates for [18F]FTP based on rat and primate data can be attributed to the slower transit of tracer through the hepatobiliary tract of the primate compared to the rat, which lacks a gallbladder. Our findings demonstrate that the nonhuman primate is the more appropriate model for estimating human absorbed radiation dosimetry when hepatobiliary excretion plays a major role in radiotracer elimination.
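The residence-time step can be sketched generically: integrate an organ's fraction-of-injected-activity curve over the measured interval and add a decay-only tail beyond the last sample (a common assumption, not necessarily the authors' exact procedure). The TAC values below are hypothetical:

```python
import numpy as np

T_HALF_F18 = 109.77 / 60.0           # F-18 half-life, hours
LAMBDA = np.log(2) / T_HALF_F18

def residence_time(t_h, frac_ia):
    """Residence time (h) for one organ: integral of the fraction of injected
    activity over time, with the tail beyond the last sample assumed to clear
    by physical decay only."""
    area = np.trapz(frac_ia, t_h)    # measured part of the TAC
    tail = frac_ia[-1] / LAMBDA      # analytic decay-only tail
    return area + tail

# Hypothetical gallbladder-wall TAC from ROI analysis (fractions of injected
# activity at 0.1, 0.5, 1, 2, 4 h post injection).
t = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
f = np.array([0.002, 0.015, 0.022, 0.018, 0.006])
print(f"residence time = {residence_time(t, f):.4f} h")  # input to OLINDA-style dosimetry
```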
FISM 2.0: Improved Spectral Range, Resolution, and Accuracy
NASA Technical Reports Server (NTRS)
Chamberlin, Phillip C.
2012-01-01
The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60-second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar-cycle and solar-rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimates in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm resolution, allowing FISM2 to be improved to similar 0.1 nm spectral bins. FISM2 will also have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here are initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome for FISM2 to accurately model solar variations on time scales of seconds to decades.
Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe
2013-02-01
The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
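A minimal sketch of fitting a log-normal to such intervals with scipy (censoring, which the study's parametric survival models can handle, is ignored here; the data are synthetic, generated with the reported median built in):

```python
import numpy as np
from scipy import stats

# Hypothetical aplasia-to-IA intervals in days for 53 patients.
rng = np.random.default_rng(4)
intervals = stats.lognorm.rvs(s=0.35, scale=14.6, size=53, random_state=rng)

# Fit a log-normal with the location fixed at zero.
shape, loc, scale = stats.lognorm.fit(intervals, floc=0)
median = scale                 # for a log-normal, median = exp(mu) = scale
lo, hi = stats.lognorm.interval(0.95, shape, loc=0, scale=scale)
print(f"median incubation ~ {median:.1f} d; central 95% of cases in {lo:.1f}-{hi:.1f} d")
```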
Time-driven activity-based costing.
Kaplan, Robert S; Anderson, Steven R
2004-11-01
In the classroom, activity-based costing (ABC) looks like a great way to manage a company's limited resources. But executives who have tried to implement ABC in their organizations on any significant scale have often abandoned the attempt in the face of rising costs and employee irritation. They should try again, because a new approach sidesteps the difficulties associated with large-scale ABC implementation. In the revised model, managers estimate the resource demands imposed by each transaction, product, or customer, rather than relying on time-consuming and costly employee surveys. This method is simpler since it requires, for each group of resources, estimates of only two parameters: how much it costs per time unit to supply resources to the business's activities (the total overhead expenditure of a department divided by the total number of minutes of employee time available) and how much time it takes to carry out one unit of each kind of activity (as estimated or observed by the manager). This approach also overcomes a serious technical problem associated with employee surveys: the fact that, when asked to estimate time spent on activities, employees invariably report percentages that add up to 100. Under the new system, managers take into account time that is idle or unused. Armed with the data, managers then construct time equations, a new feature that enables the model to reflect the complexity of real-world operations by showing how specific order, customer, and activity characteristics cause processing times to vary. This Tool Kit uses concrete examples to demonstrate how managers can obtain meaningful cost and profitability information, quickly and inexpensively. Rather than endlessly updating and maintaining ABC data,they can now spend their time addressing the deficiencies the model reveals: inefficient processes, unprofitable products and customers, and excess capacity.
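The two estimates the method requires — a capacity cost rate and unit times — combine as below; the figures and the time equation are invented for illustration:

```python
# Time-driven ABC in miniature: one capacity cost rate per resource group
# plus unit times per activity, with a "time equation" capturing how order
# characteristics drive processing time.
dept_cost_per_quarter = 560_000.0    # total overhead, $ (toy figure)
minutes_available = 700_000.0        # practical capacity, minutes
cost_per_minute = dept_cost_per_quarter / minutes_available  # capacity cost rate

def order_minutes(n_lines, new_customer, expedited):
    """Time equation: base time plus increments driven by order traits."""
    return 5.0 + 2.0*n_lines + (15.0 if new_customer else 0.0) \
               + (10.0 if expedited else 0.0)

def order_cost(n_lines, new_customer=False, expedited=False):
    return order_minutes(n_lines, new_customer, expedited) * cost_per_minute

print(f"cost/min = ${cost_per_minute:.2f}")
print(f"routine 3-line order:   ${order_cost(3):.2f}")
print(f"expedited new customer: ${order_cost(3, True, True):.2f}")
```

Note how unused capacity falls out automatically: activity costs are priced at the capacity rate, so minutes not consumed by any time equation show up as the cost of idle capacity rather than being spread over products.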
Scaling in Free-Swimming Fish and Implications for Measuring Size-at-Time in the Wild
Broell, Franziska; Taggart, Christopher T.
2015-01-01
This study was motivated by the need to measure size-at-age, and thus growth rate, in fish in the wild. We postulated that this could be achieved using accelerometer tags based first on early isometric scaling models that hypothesize that similar animals should move at the same speed with a stroke frequency that scales with length^-1, and second on observations that the speed of primarily air-breathing free-swimming animals, presumably swimming ‘efficiently’, is independent of size, confirming that stroke frequency scales as length^-1. However, such scaling relations between size and swimming parameters for fish remain mostly theoretical. Based on free-swimming saithe and sturgeon tagged with accelerometers, we introduce a species-specific scaling relationship between dominant tail beat frequency (TBF) and fork length. Dominant TBF was proportional to length^-1 (r^2 = 0.73, n = 40), and estimated swimming speed within species was independent of length. Similar scaling relations accrued in relation to body mass^-0.29. We demonstrate that the dominant TBF can be used to estimate size-at-time and that accelerometer tags with onboard processing may be able to provide size-at-time estimates among free-swimming fish and thus the estimation of growth rate (change in size-at-time) in the wild. PMID:26673777
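Extracting the dominant TBF from an accelerometer record is essentially a spectral-peak search; a sketch follows, with the species-specific calibration constant in TBF = a/L assumed rather than taken from the paper:

```python
import numpy as np

def dominant_tbf(accel, fs, fmin=0.5, fmax=10.0):
    """Dominant tail-beat frequency (Hz) from one accelerometer axis,
    taken as the power-spectrum peak inside a plausible TBF band."""
    a = accel - np.mean(accel)
    freqs = np.fft.rfftfreq(a.size, d=1.0/fs)
    power = np.abs(np.fft.rfft(a))**2
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(power[band])]

# Toy signal: 2.4 Hz tail beat plus noise, sampled at 30 Hz for one minute.
rng = np.random.default_rng(5)
fs, f_tb = 30.0, 2.4
t = np.arange(0, 60, 1/fs)
acc = np.sin(2*np.pi*f_tb*t) + 0.5*rng.standard_normal(t.size)

tbf = dominant_tbf(acc, fs)
a_cal = 1.2   # species-specific constant in TBF = a/L (hypothetical calibration)
print(f"TBF = {tbf:.2f} Hz -> fork length ~ {a_cal/tbf:.2f} m")
```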
Estimating the age-specific duration of herpes zoster vaccine protection: a matter of model choice?
Bilcke, Joke; Ogunjimi, Benson; Hulstaert, Frank; Van Damme, Pierre; Hens, Niel; Beutels, Philippe
2012-04-05
The estimation of herpes zoster (HZ) vaccine efficacy by time since vaccination and age at vaccination is crucial to assess the effectiveness and cost-effectiveness of HZ vaccination. Published estimates for the duration of protection from the vaccine diverge substantially, although based on data from the same trial for a follow-up period of 5 years. Different models were used to obtain these estimates, but it is unclear which of these models is most appropriate (if any). Only one study estimated vaccine efficacy by age at vaccination and time since vaccination combined. Recently, data became available from the same trial for a follow-up period of 7 years. We aim to elaborate on estimating HZ vaccine efficacy (1) by estimating it as a function of time since vaccination and age at vaccination, (2) by comparing the fits of a range of models, and (3) by fitting these models to data for follow-up periods of 5 and 7 years. Although the models' fits to the data are very comparable, they differ substantially in how they estimate vaccine efficacy changes as a function of time since vaccination and age at vaccination. An accurate estimation of HZ vaccine efficacy by time since vaccination and age at vaccination is hampered by the lack of insight into the biological processes underlying HZ vaccine protection, and by the fact that such data are currently not available in sufficient detail. Uncertainty about the choice of model to estimate this important parameter should be acknowledged in cost-effectiveness analyses. Copyright © 2011 Elsevier Ltd. All rights reserved.
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
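A minimal sketch of the kind of simulation involved: a one-compartment IV bolus model with log-normal inter-animal variability, a destructive design in which each animal contributes a single sample, and a naive pooled fit for the typical parameters. All values below are illustrative assumptions:

```python
import numpy as np

# One-compartment IV bolus: C(t) = (Dose/V) * exp(-(CL/V) * t).
rng = np.random.default_rng(6)
dose = 10.0                        # mg
CL_pop, V_pop = 1.0, 5.0           # L/h, L (typical values)
omega_cl, omega_v = 0.25, 0.20     # inter-animal SDs on the log scale
sigma = 0.10                       # proportional residual error

def simulate_design(times, n_per_time):
    """Destructive design: each animal is sampled once at its assigned time."""
    t = np.repeat(times, n_per_time)
    cl = CL_pop * np.exp(omega_cl * rng.standard_normal(t.size))
    v  = V_pop  * np.exp(omega_v  * rng.standard_normal(t.size))
    c  = dose/v * np.exp(-cl/v * t)
    return t, c * (1 + sigma * rng.standard_normal(t.size))

# One possible three-time-point arrangement (the design choice under study).
t_obs, c_obs = simulate_design(np.array([0.5, 4.0, 12.0]), n_per_time=6)

# Naive pooled fit of log C = log(Dose/V) - (CL/V) t for the typical values.
slope, intercept = np.polyfit(t_obs, np.log(c_obs), 1)
V_hat = dose / np.exp(intercept)
CL_hat = -slope * V_hat
print(f"CL ~ {CL_hat:.2f} L/h, V ~ {V_hat:.2f} L")
```

Repeating this over many simulated studies and design variants gives the percent prediction errors and coverage statistics the abstract describes.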
2015-03-13
From MetroII to Metronomy: Designing a Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems. Cited work: A. Lee, "A Programming Model for Time-Synchronized Distributed Real-Time Systems," in Proceedings of the Real Time and Embedded Technology and Applications Symposium, 2007, pp. 259-268.
Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels
NASA Astrophysics Data System (ADS)
Fusco, Tilde; Petrella, Angelo; Tanda, Mario
2009-12-01
The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed form solutions a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.
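The paper's joint AML estimator is not reproduced here, but the classical correlate-identical-halves idea that underlies many preamble-based CFO estimators can be sketched in a few lines; the preamble, noise level, and normalization are assumptions:

```python
import numpy as np

def cfo_from_repeated_preamble(rx, n):
    """Estimate the CFO eps (in cycles per half-preamble of n samples) from a
    preamble made of two identical length-n halves: the CFO rotates the
    second half relative to the first by a fixed phase 2*pi*eps, so
    rx[k+n] ~ rx[k] * exp(2j*pi*eps).  Valid for |eps| < 0.5."""
    r = np.sum(np.conj(rx[:n]) * rx[n:2*n])
    return np.angle(r) / (2*np.pi)

# Toy check: known preamble, CFO of 0.12, additive noise.
rng = np.random.default_rng(7)
n = 64
half = np.exp(2j*np.pi*rng.random(n))        # arbitrary known half-preamble
tx = np.concatenate([half, half])
eps = 0.12
rx = tx * np.exp(2j*np.pi*eps*np.arange(2*n)/n)
rx += 0.05*(rng.standard_normal(2*n) + 1j*rng.standard_normal(2*n))
print(f"estimated CFO: {cfo_from_repeated_preamble(rx, n):.3f} (true {eps})")
```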
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time-consuming and computationally demanding. Here, we introduce a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
New agreement measures based on survival processes
Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.
2013-01-01
The need to assess agreement arises in many scenarios in the biomedical sciences when measurements are taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes that can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617
Two cloud-based cues for estimating scene structure and camera calibration.
Jacobs, Nathan; Abrams, Austin; Pless, Robert
2013-10-01
We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
Mechanisms of recharge in a fractured porous rock aquifer in a semi-arid region
NASA Astrophysics Data System (ADS)
Manna, Ferdinando; Walton, Kenneth M.; Cherry, John A.; Parker, Beth L.
2017-12-01
Eleven porewater profiles in rock core from an upland exposed sandstone vadose zone in southern California, with thickness varying between 10 and 62 m, were analyzed for chloride (Cl) concentration to examine recharge mechanisms, estimate travel times in the vadose zone, assess spatial and temporal variability of recharge, and determine effects of land use changes on recharge. As a function of their location and the local terrain, the profiles were classified into four groups reflecting the range of site characteristics. Century- to millennium-average recharge varied from 4 to 23 mm y^-1, corresponding to <1-5% of the average annual precipitation (451 mm over the 1878-2016 period). Based on the different average Cl concentrations in the vadose zone and in groundwater, the contribution of diffuse flow (estimated at 80%) and preferential flow (20%) to the total recharge was quantified. This model of dual porosity recharge was tested by simulating transient Cl transport along a physically based narrow column using a discrete fracture-matrix numerical model. Using a new approach based on partitioning both water and Cl between matrix and fracture flow, porewater was dated and vertical displacement rates estimated to range in the sandstone matrix from 3 to 19 cm y^-1. Moreover, the temporal variability of recharge was estimated and, along each profile, past recharge rates calculated based on the sequence of Cl concentrations in the vadose zone. Recharge rates increased at specific times coincident with historical changes in land use. The consistency between the timing of land use modifications and changes in Cl concentration and the match between observed and simulated Cl concentration values in the vadose zone provide confidence in porewater age estimates, travel times, recharge estimates, and reconstruction of recharge histories. This study represents an advancement of the application of the chloride mass balance method to simultaneously determine recharge mechanisms and reconstruct location-specific recharge histories in fractured porous rock aquifers. The proposed approach can be applied worldwide at sites with similar climatic and geologic characteristics.
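The core chloride-mass-balance arithmetic is compact: at steady state, chloride deposited by precipitation is carried downward by recharge, so R = P * Cl_precip / Cl_porewater. The precipitation chloride concentration and porewater values below are assumed for illustration (only the 451 mm precipitation figure comes from the abstract):

```python
# Chloride mass balance recharge estimate.
P = 451.0            # mean annual precipitation, mm/yr (from the abstract)
cl_precip = 0.4      # Cl in precipitation + dry deposition, mg/L (assumed)
cl_pw = [8.0, 20.0, 45.0]   # porewater Cl in three hypothetical profiles, mg/L

for cl in cl_pw:
    R = P * cl_precip / cl
    print(f"Cl_pw = {cl:5.1f} mg/L -> recharge ~ {R:4.1f} mm/yr "
          f"({100*R/P:.1f}% of precipitation)")
```

With these assumed values the estimates span roughly 4-23 mm/yr, i.e. the same order as the profile-to-profile range reported above; higher porewater chloride implies lower recharge.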
CrowdWater - Can people observe what models need?
NASA Astrophysics Data System (ADS)
van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.
2017-12-01
CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between the estimated and test data is shown for obesity treatment with the weight-loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Robust range estimation with a monocular camera for vision-based forward collision warning system.
Park, Ki-Yeong; Hwang, Sun-Young
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.
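Once a virtual horizon row is available, range follows from flat-ground pinhole geometry; a sketch with assumed intrinsics and camera height (the paper's run-time horizon estimation itself is not reproduced):

```python
# Flat-ground pinhole geometry: a point on the road imaged dv pixels below
# the horizon row lies at range Z = f * h_cam / dv.
f_px = 800.0       # focal length in pixels (assumed intrinsic)
h_cam = 1.3        # camera height above the road, m (assumed)

def range_from_horizon(v_bottom, v_horizon):
    """Range to the road contact point at image row v_bottom, given the
    estimated virtual-horizon row v_horizon (rows increase downward)."""
    dv = v_bottom - v_horizon
    if dv <= 0:
        raise ValueError("target bottom must lie below the horizon row")
    return f_px * h_cam / dv

# A vehicle whose tire-road contact is at row 420 while the estimated
# virtual horizon sits at row 380 -> 40 px below the horizon.
print(f"range ~ {range_from_horizon(420.0, 380.0):.1f} m")   # 26 m
```

The same relation shows why estimating the horizon at run-time matters: a pitch change that shifts the horizon by a few pixels shifts dv by the same amount and perturbs the range estimate accordingly.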
Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System
2014-01-01
We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344
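A rough sketch of how a virtual horizon can be recovered from detected vehicles under a flat-road pinhole model, in the spirit of the method above. The focal length, camera height, and assumed vehicle width are illustrative placeholders, not the paper's calibration; in practice the horizon estimate would be aggregated over many detections.

```python
# Minimal sketch of monocular ranging via a virtual horizon under a
# flat-road pinhole model. Focal length, camera height, and the assumed
# vehicle width are illustrative values, not the paper's.
def horizon_from_vehicle(y_bottom_px, width_px, focal_px,
                         cam_height_m=1.2, vehicle_width_m=1.8):
    """Infer the virtual horizon row from one detected vehicle.

    Range from apparent width: Z = f * W / w. On a flat road the
    vehicle's ground-contact row then satisfies y_bottom - y_h = f*H/Z.
    """
    z = focal_px * vehicle_width_m / width_px
    return y_bottom_px - focal_px * cam_height_m / z

def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m=1.2):
    """Range to a vehicle whose ground contact is at y_bottom_px."""
    return focal_px * cam_height_m / (y_bottom_px - y_horizon_px)

# One detection at image row 400 with apparent width 90 px, f = 800 px:
y_h = horizon_from_vehicle(400.0, 90.0, 800.0)
print(range_from_horizon(400.0, y_h, 800.0))  # recovers Z = f*W/w = 16 m
```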
State Estimation for Tensegrity Robots
NASA Technical Reports Server (NTRS)
Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas
2016-01-01
Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
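A compact sketch of the ranging part of such an estimator, assuming the filterpy library: a UKF with a constant-velocity state tracks position from time-of-flight ranges to fixed base stations. The station layout, noise levels, and motion model are invented for illustration; the actual estimator also fuses inertial and actuator-state measurements.

```python
# Minimal sketch: UKF position tracking from time-of-flight ranges to
# fixed base stations, assuming the filterpy library. Station layout,
# noise levels, and the constant-velocity model are invented.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

STATIONS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # ranging anchors
DT = 0.1  # update period [s]

def fx(x, dt):
    # Constant-velocity motion model, state = [px, py, vx, vy].
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def hx(x):
    # Measurement model: predicted range to each base station.
    return np.linalg.norm(STATIONS - x[:2], axis=1)

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=3, dt=DT, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.5, 0.5, 0.0, 0.0])    # initial state guess
ukf.R = np.eye(3) * 0.05**2               # UWB ranging noise
ukf.Q = np.eye(4) * 1e-3                  # process noise

for z in (np.array([1.5, 9.0, 9.0]), np.array([1.5, 9.0, 9.0])):  # fake ranges
    ukf.predict()
    ukf.update(z)
print(ukf.x[:2])  # estimated position, roughly (1.06, 1.06)
```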
RESULTS OF COMPUTATIONS MADE FOR DASA-USNRDL FALLOUT SYMPOSIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Read, R.; Wagner, L.; Moorehead, E.
1962-11-01
The regression techniques introduced by the Civil Defense Research Project for estimating fallout particle deposition coordinates, their standard ellipses, and isointensity contours have been applied to some of the homework problems assigned for the DASA-USNRDL Fallout Symposium. The results are reported and the estimates are contrasted with estimates based on the assumption that winds are invariant with time. (auth).
Measuring Forest Area Loss Over Time Using FIA Plots and Satellite Imagery
Michael L. Hoppus; Andrew J. Lister
2005-01-01
How accurately can FIA plots, scattered at 1 per 6,000 acres, identify often rare forest land loss, estimated at less than 1 percent per year in the Northeast? Here we explore this question mathematically, empirically, and by comparing FIA plot estimates of forest change with satellite image based maps of forest loss. The mathematical probability of exactly estimating...
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
Estimation of community-level influenza-associated illness in a low resource rural setting in India.
Saha, Siddhartha; Gupta, Vivek; Dawood, Fatimah S; Broor, Shobha; Lafond, Kathryn E; Chadha, Mandeep S; Rai, Sanjay K; Krishnan, Anand
2018-01-01
To estimate rates of community-level influenza-like-illness (ILI) and influenza-associated ILI in rural north India. During 2011, we conducted household-based healthcare utilization surveys (HUS) for any acute medical illness (AMI) in the preceding 14 days among residents of 28 villages of Ballabgarh, in north India. Concurrently, we conducted clinic-based surveillance (CBS) in the area for AMI episodes with illness onset ≤3 days and collected nasal and throat swabs for influenza virus testing using real-time polymerase chain reaction. Retrospectively, we applied the ILI case definition (measured/reported fever and cough) to HUS and CBS data. We attributed 14 days of risk-time per person surveyed in HUS and estimated the community ILI rate by dividing the number of ILI cases in HUS by total risk-time. We used CBS data on influenza positivity and applied it to HUS-based community ILI rates by age, month, and clinic type, to estimate the community influenza-associated ILI rates. The HUS of 69,369 residents during the year generated risk-time of 3,945 person-years (p-y) and identified 150 (5%, 95% CI: 4-6) ILI episodes (38 ILI episodes/1,000 p-y; 95% CI 32-44). Among 1,372 ILI cases enrolled from clinics, 126 (9%; 95% CI 8-11) had laboratory-confirmed influenza (A (H3N2) = 72; B = 54). After adjusting for age, month, and clinic type, the overall influenza-associated ILI rate was 4.8/1,000 p-y; rates were highest among children <5 years (13; 95% CI: 4-29) and persons ≥60 years (11; 95% CI: 2-30). We present a novel way to use HUS and CBS data to generate estimates of the community burden of influenza. Although the confidence intervals overlapped considerably, the higher point estimates for burden among young children and older adults show the utility of exploring the value of influenza vaccination among target groups.
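The rate arithmetic described above is straightforward; a minimal sketch follows using the abstract's own totals. It collapses the age/month/clinic stratification, which is why the crude product falls below the adjusted 4.8/1,000 p-y estimate.

```python
# Minimal sketch of the rate arithmetic above, using the abstract's own
# totals and collapsing the age/month/clinic stratification (which is why
# the crude product is below the adjusted 4.8/1,000 p-y figure).
ili_episodes = 150
risk_time_py = 3945                        # person-years from the HUS
ili_rate = ili_episodes / risk_time_py     # ~0.038 episodes per p-y

flu_positive, ili_enrolled = 126, 1372     # from the clinic-based surveillance
positivity = flu_positive / ili_enrolled   # ~0.092

print(round(1000 * ili_rate))                   # ~38 ILI per 1,000 p-y
print(round(1000 * ili_rate * positivity, 1))   # ~3.5 crude flu-ILI per 1,000 p-y
```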
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.
2014-01-01
Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10¹⁹ Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
NASA Astrophysics Data System (ADS)
Sawant, S. A.; Chakraborty, M.; Suradhaniwar, S.; Adinarayana, J.; Durbha, S. S.
2016-06-01
Satellite based earth observation (EO) platforms have proved capable of spatio-temporally monitoring changes on the earth's surface. Long term satellite missions have provided a huge repository of optical remote sensing datasets, and the United States Geological Survey (USGS) Landsat program is one of the oldest sources of optical EO datasets. This historical and near real time EO archive is a rich source of information for understanding the seasonal changes in horticultural crops. Citrus (Mandarin / Nagpur Orange) is one of the major horticultural crops cultivated in central India. Erratic behaviour of rainfall and dependency on groundwater for irrigation have a wide impact on the citrus crop yield. Also, wide variations are reported in temperature and relative humidity, causing early fruit onset and an increase in crop water requirement. Therefore, there is a need to study the crop growth stages and crop evapotranspiration at spatio-temporal scale for managing these scarce resources. In this study, an attempt has been made to understand the citrus crop growth stages using Normalized Difference Vegetation Index (NDVI) time series data obtained from the Landsat archives (http://earthexplorer.usgs.gov/). A total of 388 Landsat 4, 5, 7 and 8 scenes (from 1990 to Aug. 2015) for Worldwide Reference System (WRS) 2, path 145 and row 45, were selected to understand seasonal variations in citrus crop growth. Considering the Landsat 30 meter spatial resolution, only orchards larger than 2 hectares were selected to obtain homogeneous crop-cover pixels. To account for the change in wavelength bandwidth (radiometric resolution) across Landsat sensors (i.e. 4, 5, 7 and 8), NDVI was selected to obtain a continuous, sensor independent time series. The obtained crop growth stage information was used to estimate the citrus basal crop coefficient (Kcb). Satellite based Kcb estimates were used with weather parameters observed by a proximal agrometeorological sensing system for crop ET estimation. The results show that time series EO based crop growth stage estimates provide better information about geographically separated citrus orchards. Attempts are being made to estimate regional variations in citrus crop water requirement for effective irrigation planning. In future, high resolution Sentinel 2 observations from the European Space Agency (ESA) will be used to fill the time gaps and to get a better understanding of citrus crop canopy parameters.
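A minimal sketch of the two computational steps implied above: NDVI from red/NIR reflectance, and a linear NDVI-to-Kcb relation of the kind commonly used with FAO-56. The coefficients and reflectance values are illustrative assumptions, not the study's fitted relation.

```python
# Minimal sketch: NDVI from Landsat red/NIR reflectance, and a generic
# linear NDVI-to-basal-crop-coefficient (Kcb) relation of the kind used
# with FAO-56. Coefficients are illustrative, not the study's.
import numpy as np

def ndvi(nir, red):
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def kcb_from_ndvi(v, a=1.36, b=-0.03):
    # Generic linear form Kcb = a*NDVI + b, clipped to a plausible range.
    return np.clip(a * v + b, 0.15, 1.15)

v = ndvi([0.45, 0.30], [0.08, 0.12])   # two synthetic observations
print(v, kcb_from_ndvi(v))
```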
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observation by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
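A minimal sketch of "resampling by shifts" as the name suggests it works: subsample the continuous record at the satellite revisit interval once per possible phase, and take the rms deviation of the subsample means from the continuous-time mean. The synthetic series stands in for the merged radar product.

```python
# Minimal sketch of "resampling by shifts": for a revisit interval of h
# time steps, average the rain-rate series sampled every h steps, once
# per possible phase shift, and take the rms deviation from the
# continuous-time mean. A synthetic series stands in for the radar data.
import numpy as np

def rms_sampling_error(rain, h):
    full_mean = rain.mean()
    sub_means = np.array([rain[s::h].mean() for s in range(h)])
    return np.sqrt(np.mean((sub_means - full_mean) ** 2))

rng = np.random.default_rng(0)
rain = np.maximum(0.0, rng.normal(0.5, 1.0, size=96 * 30))  # 15-min steps, 30 days
print(rms_sampling_error(rain, h=12))  # 3-hourly sampling (12 x 15 min)
```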
Capel, P.D.; Larson, S.J.
1995-01-01
Minimizing the loss of target organic chemicals from environmental water samples between the time of sample collection and isolation is important to the integrity of an investigation. During this sample holding time, there is a potential for analyte loss through volatilization from the water to the headspace, sorption to the walls and cap of the sample bottle, and transformation through biotic and/or abiotic reactions. This paper presents a chemodynamic-based, generalized approach to estimating the most probable loss processes for individual target organic chemicals. The basic premise is that the investigator must know which loss process(es) are important for a particular analyte, based on its chemodynamic properties, when choosing the appropriate method(s) to prevent loss.
State of Charge estimation of lithium ion battery based on extended Kalman filtering algorithm
NASA Astrophysics Data System (ADS)
Yang, Fan; Feng, Yiming; Pan, Binbiao; Wan, Renzhuo; Wang, Jun
2017-08-01
Accurate estimation of state-of-charge (SOC) for lithium ion batteries is crucial for real-time diagnosis and prognosis in green energy vehicles. In this paper, a state space model of the battery based on the Thevenin model is adopted. A strategy for estimating SOC based on the extended Kalman filter is presented, combined with ampere-hour counting (AH) and open circuit voltage (OCV) methods. The comparison between simulation and experiments indicates that the model's performance matches the behavior of the lithium ion battery well. The extended Kalman filter algorithm maintains good accuracy and is less dependent on its initial value over the full SOC range, which proves it suitable for online SOC estimation.
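A minimal sketch of one EKF step on a first-order Thevenin model, combining coulomb counting (AH) in the prediction with an OCV-based voltage update. The OCV curve, RC parameters, and noise covariances are illustrative placeholders, not values fitted in the paper.

```python
# Minimal sketch of one extended-Kalman-filter step on a first-order
# Thevenin battery model, with coulomb counting in the prediction and an
# OCV-based voltage update. All parameters are illustrative placeholders.
import numpy as np

DT = 1.0                         # sample time [s]
Q_AS = 2.0 * 3600                # cell capacity [A*s] (2 Ah)
R0, R1, C1 = 0.05, 0.03, 2000.0  # ohmic resistance, RC branch
A1 = np.exp(-DT / (R1 * C1))     # RC discretization factor

def ocv(soc):   return 3.4 + 0.8 * soc   # linearized OCV(SOC) placeholder
def d_ocv(soc): return 0.8               # its slope dOCV/dSOC

F = np.diag([1.0, A1])                   # state Jacobian, x = [SOC, V_RC]
Q = np.diag([1e-8, 1e-6])                # process noise
R = 1e-3                                 # voltage measurement noise

def ekf_step(x, P, current, v_meas):
    # Predict: coulomb counting for SOC, relaxation for the RC voltage.
    x = np.array([x[0] - current * DT / Q_AS,
                  A1 * x[1] + R1 * (1.0 - A1) * current])
    P = F @ P @ F.T + Q
    # Update against terminal voltage v = OCV(SOC) - V_RC - R0*i.
    H = np.array([[d_ocv(x[0]), -1.0]])
    y = v_meas - (ocv(x[0]) - x[1] - R0 * current)   # innovation
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T) / S                                # Kalman gain (2x1)
    x = x + K.ravel() * y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.9, 0.0]), np.diag([0.1, 0.01])
x, P = ekf_step(x, P, current=1.0, v_meas=4.0)       # one discharge sample
print(x[0])  # corrected SOC estimate
```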
2017-09-01
The target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling. The discrete-time position ...
Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique
2016-01-01
Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response is required. For this purpose, dielectric properties of intact clay rock samples experimental determined in the frequency range from 1 MHz to 10 GHz were used as input data in 3-D numerical frequency domain finite element field calculations to model the one port broadband frequency or time domain transfer function for a three rods based sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in time domain with classical travel time analysis in combination with an empirical model according to Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM) to estimate the volumetric water content. The mixture equation considering the appropriate porosity of the investigated material provide a practical and efficient approach for water content estimation based on classical travel time analysis with the onset-method. The inflection method is not recommended for water content estimation in electrical dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe clay rock coupling. PMID:27096865
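The classical travel-time chain referred to above is compact enough to sketch: an onset-picked two-way travel time along rods of length L gives an apparent permittivity, which Topp's empirical polynomial maps to volumetric water content. The probe length and travel time below are illustrative.

```python
# Minimal sketch of classical travel-time analysis with the Topp equation:
# the two-way travel time along a probe of length L gives an apparent
# permittivity, which Topp's polynomial converts to volumetric water
# content. Probe length and travel time are illustrative values.
C0 = 2.998e8  # speed of light in vacuum [m/s]

def apparent_permittivity(t_twoway_s, probe_len_m):
    return (C0 * t_twoway_s / (2.0 * probe_len_m)) ** 2

def topp_water_content(k_app):
    # Topp et al. (1980): theta = -5.3e-2 + 2.92e-2*K - 5.5e-4*K^2 + 4.3e-6*K^3
    return -5.3e-2 + 2.92e-2 * k_app - 5.5e-4 * k_app**2 + 4.3e-6 * k_app**3

t = 2.0e-9                                      # 2 ns two-way travel time
k = apparent_permittivity(t, probe_len_m=0.1)   # 10 cm rods
print(k, topp_water_content(k))                 # K ~ 9.0, theta ~ 0.17
```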
Time-series analyses of air pollution and mortality in the United States: a subsampling approach.
Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie
2013-01-01
Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from ≤ 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
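A minimal sketch of a subsampling confidence procedure of the general kind described, assuming for illustration that the pooled estimate is a simple mean of synthetic city-specific coefficients rather than a full time-series likelihood.

```python
# Minimal sketch of a subsampling confidence procedure: re-estimate the
# pooled effect on random city subsets of size b and rescale the spread
# by sqrt(b/n). The per-city coefficients are synthetic stand-ins for
# city-specific time-series regression estimates, and the pooled
# estimator is a plain mean rather than a joint likelihood.
import numpy as np

rng = np.random.default_rng(0)
city_beta = rng.normal(0.5, 0.3, size=108)  # city-level % mortality increase

theta_n = city_beta.mean()                  # pooled (common) effect estimate
n, b = city_beta.size, 54                   # subsample size b < n

reps = np.array([rng.choice(city_beta, size=b, replace=False).mean()
                 for _ in range(2000)])
half = np.sqrt(b / n) * np.quantile(np.abs(reps - theta_n), 0.95)
print(theta_n, (theta_n - half, theta_n + half))  # estimate and 95% bounds
```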
Fast human pose estimation using 3D Zernike descriptors
NASA Astrophysics Data System (ADS)
Berjón, Daniel; Morán, Francisco
2012-03-01
Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such a process is not a fine pose estimate, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
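The on-line stage reduces to a nearest-neighbour lookup; a minimal sketch follows with random vectors standing in for 3D Zernike descriptors and joint-angle poses, and a brute-force search standing in for whatever index the real system uses.

```python
# Minimal sketch of the on-line lookup stage: given a descriptor from the
# input volume, return the k most likely poses by nearest-neighbour search
# in the precomputed database. Random vectors stand in for 3D Zernike
# descriptors and poses; a real system might use an ANN index instead.
import numpy as np

rng = np.random.default_rng(1)
db_desc = rng.normal(size=(100_000, 32)).astype(np.float32)  # descriptors
db_pose = rng.normal(size=(100_000, 20)).astype(np.float32)  # pose vectors

def rough_poses(query, k=5):
    d2 = ((db_desc - query) ** 2).sum(axis=1)   # squared L2 distances
    idx = np.argpartition(d2, k)[:k]            # k nearest, unordered
    return db_pose[idx[np.argsort(d2[idx])]]    # ordered best-first

candidates = rough_poses(rng.normal(size=32).astype(np.float32))
print(candidates.shape)  # (5, 20): five candidate poses
```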
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.
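Once smoothed hazard estimates are available on a common grid, locating the first crossing point reduces to a sign-change search; a minimal sketch with synthetic hazards standing in for the kernel estimates:

```python
# Minimal sketch of locating the first crossing point of two hazard
# curves: evaluate the (here synthetic) smoothed hazards on a common grid
# and find the first sign change of their difference, refined by linear
# interpolation within the bracketing interval.
import numpy as np

t = np.linspace(0.01, 5.0, 500)
h1 = 0.5 * np.ones_like(t)        # constant hazard
h2 = 0.2 + 0.2 * t                # increasing hazard, crosses h1 at t = 1.5

def first_crossing(t, ha, hb):
    d = ha - hb
    s = np.where(np.diff(np.sign(d)) != 0)[0]
    if s.size == 0:
        return None               # no crossing on this grid
    i = s[0]
    return t[i] + (t[i + 1] - t[i]) * d[i] / (d[i] - d[i + 1])

print(first_crossing(t, h1, h2))  # ~1.5
```

In the paper's setting, confidence intervals would then come from the variability of this crossing estimate, e.g. over resampled or simulated datasets.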
Gone to the Beach — Using GIS to infer how people value ...
Estimating the non-market value of beaches for saltwater recreation is complex. An individual’s preference for a beach depends on their perception of beach characteristics. When choosing one beach over another, an individual balances these personal preferences with any additional costs including travel time and/or fees to access the beach. This trade-off can be used to infer how people value different beach characteristics; especially when beaches are free to the public, beach value estimates rely heavily on accurate travel times. A current case study focused on public access on Cape Cod, MA will be used to demonstrate how travel costs can be used to determine the service area of different beaches, and model expected use of those beaches based on demographics. We will describe several of the transportation networks and route services available and compare a few based on their ability to meet our specific requirements of scale and seasonal travel time accuracy. We are currently developing a recreational demand model, based on visitation data and beach characteristics, that will allow decision makers to predict the benefits of different levels of water quality improvement. An important part of that model is the time required for potential recreation participants to get to different beaches. This presentation will describe different ways to estimate travel times and the advantages/disadvantages for our particular application. It will go on to outline how freely a
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with a high signal-to-noise ratio in real time. This approach has also been tested on patient data of the prostate region, and the results are encouraging.
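A minimal sketch of the displacement measure described above for one pixel's time sequence, using scipy.signal.welch; the synthetic noisy oscillation stands in for correlation-based displacement estimates.

```python
# Minimal sketch of the displacement measure above for one pixel: the
# square root of the average band power of its displacement sequence,
# computed from a Welch periodogram. A synthetic noisy oscillation stands
# in for correlation-based displacement estimates.
import numpy as np
from scipy.signal import welch

fs = 100.0                                  # sequence sampling rate [Hz]
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
disp = 2.0 * np.sin(2 * np.pi * 8.0 * t) + 0.3 * rng.normal(size=t.size)

f, psd = welch(disp, fs=fs, nperseg=256)    # power spectral density
band = (f >= 0.0) & (f <= 20.0)             # excitation band 0-20 Hz
power = np.sum(psd[band]) * (f[1] - f[0])   # integrated band power
print(np.sqrt(power))  # ~ rms displacement, ~1.4 for the 2-unit 8 Hz tone
```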
Dexter, Franklin; Ledolter, Johannes; Wachtel, Ruth E
2005-05-01
We considered the allocation of operating room (OR) time at facilities where the strategic decision had been made to increase the number of ORs. Allocation occurs in two stages: a long-term tactical stage followed by short-term operational stage. Tactical decisions, approximately 1 yr in advance, determine what specialized equipment and expertise will be needed. Tactical decisions are based on estimates of future OR workload for each subspecialty or surgeon. We show that groups of surgeons can be excluded from consideration at this tactical stage (e.g., surgeons who need intensive care beds or those with below average contribution margins per OR hour). Lower and upper limits are estimated for the future demand of OR time by the remaining surgeons. Thus, initial OR allocations can be accomplished with only partial information on future OR workload. Once the new ORs open, operational decision-making based on OR efficiency is used to fill the OR time and adjust staffing. Surgeons who were not allocated additional time at the tactical stage are provided increased OR time through operational adjustments based on their actual workload. In a case study from a tertiary hospital, future demand estimates were needed for only 15% of surgeons, illustrating the practicality of these methods for use in tactical OR allocation decisions.
Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús
2016-01-01
The main purpose of the present meta-analysis was to examine the criterion-related validity of distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the highest criterion-related validity among the distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As with any physical fitness field test, evaluators must be aware that the performance score of a walk/run field test is simply an estimate and not a direct measure of cardiorespiratory fitness.
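A minimal sketch of the bare-bones Hunter-Schmidt pooling step: a sample-size-weighted mean correlation, with sampling-error variance subtracted from the observed variance. The correlations and sample sizes are invented; the full approach also corrects for artifacts such as measurement error.

```python
# Minimal sketch of bare-bones Hunter-Schmidt pooling: sample-size
# weighted mean correlation, and residual variance after removing the
# expected sampling-error variance. Inputs are invented study values.
import numpy as np

r = np.array([0.75, 0.82, 0.70, 0.85])   # study correlations (invented)
n = np.array([40, 120, 60, 90])          # study sample sizes (invented)

r_bar = np.sum(n * r) / np.sum(n)                     # weighted mean r
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)    # observed variance
var_err = (1 - r_bar**2) ** 2 / (n.mean() - 1)        # sampling-error variance
var_rho = max(0.0, var_obs - var_err)                 # residual variance
print(r_bar, var_rho)
```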
Ma, Chi; Varghese, Tomy
2012-04-01
Accurate cardiac deformation analysis for cardiac displacement and strain imaging over time requires a Lagrangian description of the deformation of myocardial tissue structures. Failure to couple the estimated displacement and strain information with the correct myocardial tissue structures will lead to erroneous results in the displacement and strain distribution over time. Lagrangian-based tracking in this paper divides the tissue structure into a fixed number of pixels whose deformation is tracked over the cardiac cycle. An algorithm that utilizes a polar grid generated between the estimated endocardial and epicardial contours for cardiac short axis images is proposed to ensure a Lagrangian description of the pixels. Displacement estimates from consecutive radiofrequency frames were then mapped onto the polar grid to obtain a distribution of the actual displacement that is mapped to the polar grid over time. A finite element based canine heart model coupled with an ultrasound simulation program was used to verify this approach. Segmental analysis of the accumulated displacement and strain over a cardiac cycle demonstrates excellent agreement between the ideal result obtained directly from the finite element model and our Lagrangian approach to strain estimation. Traditional Eulerian based estimation results, on the other hand, show significant deviation from the ideal result. An in vivo comparison of the displacement and strain estimated using parasternal short axis views is also presented. Lagrangian displacement tracking using a polar grid provides accurate tracking of myocardial deformation, demonstrated using both finite element and in vivo radiofrequency data acquired on a volunteer. In addition to the cardiac application, this approach can also be utilized for transverse scans of arteries, where a polar grid can be generated between the contours delineating the outer and inner wall of the vessels from the blood flowing through the vessel.
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing fairer use in management applications and better integration of GRACE-based measurements with observations from other sources.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
2015-07-15
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
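A minimal sketch of three-point rainflow counting on a junction temperature history, of the kind that feeds semiconductor lifetime models. This is the plain simplified algorithm (residual half-cycles ignored); the time-dependent equivalent-temperature extension discussed above is not implemented here.

```python
# Minimal sketch of simplified three-point rainflow counting on a
# temperature history: extract turning points, then count a cycle
# whenever the latest range exceeds the previous one. Residual
# half-cycles are ignored; temperatures are illustrative.
import numpy as np

def turning_points(x):
    d = np.diff(x)
    keep = np.where(d[:-1] * d[1:] < 0)[0] + 1   # local extrema
    return np.concatenate(([x[0]], np.asarray(x)[keep], [x[-1]]))

def rainflow_ranges(series):
    stack, cycles = [], []
    for p in turning_points(series):
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            mean = 0.5 * (stack[-2] + stack[-3])
            cycles.append((y, mean))      # (delta_T, mean_T) of one cycle
            del stack[-3:-1]              # remove the counted pair
    return cycles

temps = [40, 80, 50, 90, 45, 70, 40]      # junction temperatures [C]
print(rainflow_ranges(temps))             # e.g. (30, 65.0), (25, 57.5), ...
```

Each (delta_T, mean_T) pair would then be fed into a thermal cycling lifetime model such as a Coffin-Manson-type relation.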
On-Board Real-Time State and Fault Identification for Rovers
NASA Technical Reports Server (NTRS)
Washington, Richard
2000-01-01
For extended autonomous operation, rovers must identify potential faults to determine whether execution needs to be halted. At the same time, rovers present particular challenges for state estimation techniques: they are subject to environmental influences that affect sensor readings during normal and anomalous operation, and the sensors fluctuate rapidly both because of noise and because of the dynamics of the rover's interaction with its environment. This paper presents MAKSI, an on-board method for state estimation and fault diagnosis that is particularly appropriate for rovers. The method is based on a combination of continuous state estimation, using Kalman filters, and discrete state estimation, using a Markov-model representation.
NASA Astrophysics Data System (ADS)
Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids offer the best performance because they resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud. This phenomenon is known as the cocktail party effect. Earlier studies explained that binaural hearing makes an important contribution to the cocktail party effect. In this study, separation of two sound sources is therefore performed on the binaural input from two microphone sensors, based on both binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), using a binary mask. To estimate the ITD, a cross-correlation method is used, in which the ITD is represented as the time delay of the peak shift at each time-frequency unit. The binary mask is estimated from the pattern of ITD and ILD relative to the target strength, computed statistically using probability density estimation. The sound source separation performs well, with a percent-correct word intelligibility score of 86% and an SNR of 3 dB.
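The ITD step can be sketched directly: the interaural delay is the lag of the cross-correlation peak between the two channels. A minimal broadband example, with the per-time-frequency-unit processing and the ILD/binary-mask stage omitted:

```python
# Minimal sketch of ITD estimation by cross-correlation: the interaural
# delay is the lag of the cross-correlation peak between the channels.
# Broadband version only; the per-time-frequency-unit processing and the
# ILD/binary-mask stage are omitted. Signals are synthetic.
import numpy as np

fs = 16000                                   # sample rate [Hz]
rng = np.random.default_rng(0)
src = rng.normal(size=4000)
delay = 5                                    # right channel lags 5 samples
left = src
right = np.concatenate((np.zeros(delay), src[:-delay]))

xc = np.correlate(right, left, mode="full")
lag = int(np.argmax(xc)) - (len(left) - 1)   # peak lag in samples
print(lag, lag / fs)  # 5 samples ~ 312.5 us: source closer to the left ear
```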
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
NASA Astrophysics Data System (ADS)
Zhou, Tao; Luo, Yiqi
2008-09-01
Ecosystem carbon (C) uptake is determined largely by C residence times and increases in net primary production (NPP). Therefore, evaluation of C uptake at a regional scale requires knowledge of the spatial patterns of both residence times and NPP increases. In this study, we first applied an inverse modeling method to estimate spatial patterns of C residence times in the conterminous United States. Then we combined the spatial patterns of estimated residence times with a NPP change trend to assess the spatial patterns of regional C uptake in the United States. The inverse analysis was done using a genetic algorithm and was based on 12 observed data sets of C pools and fluxes. Residence times were estimated by minimizing the total deviation between modeled and observed values. Our results showed that the estimated C residence times were highly heterogeneous over the conterminous United States, with most of the regions having values between 15 and 65 years, and an average C residence time of 46 years. The estimated C uptake for the whole conterminous United States was 0.15 Pg C a⁻¹. Large portions of the C taken up were stored in soil for grassland and cropland (47-70%) but in plant pools for forests and woodlands (73-82%). The proportion of C uptake in soil was found to be determined primarily by C residence times and to be independent of the magnitude of the NPP increase. Therefore, accurate estimation of the spatial patterns of C residence times is crucial for the evaluation of terrestrial ecosystem C uptake.
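A minimal sketch of the relation underlying this assessment, on one reading of the abstract: with storage near quasi-steady state, carbon uptake per unit area scales roughly as residence time times the NPP trend. The numbers are illustrative, not the study's gridded estimates.

```python
# Minimal sketch, assuming quasi-steady state: storage ~ NPP * residence
# time, so a trend in NPP implies an uptake of roughly tau * dNPP/dt per
# unit area. Values are illustrative, not the study's gridded estimates.
import numpy as np

tau_years = np.array([15.0, 46.0, 65.0])   # residence times [a]
dnpp = np.array([2.0, 1.5, 1.0]) * 1e-3    # NPP trend [kg C m^-2 a^-2]
uptake = tau_years * dnpp                  # uptake [kg C m^-2 a^-1]
print(uptake)
```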